[
{
"msg_contents": "Hi.\n\nI have a \"message queue\" table, that contains in the order of 1-10m\n\"messages\". It is implemented using TheSchwartz:\nhttp://search.cpan.org/~bradfitz/TheSchwartz-1.07/lib/TheSchwartz.pm\n\nSo when a \"worker\" picks the next job it goes into the \"job\" table an\nselect the top X highest priority messages with the \"funcid\" that it can\nwork on. The table looks like this:\ndb=# \\d workqueue.job\n Table \"workqueue.job\"\n Column | Type | Modifiers\n\n---------------+----------+---------------------------------------------------------------\n jobid | integer | not null default\nnextval('workqueue.job_jobid_seq'::regclass)\n funcid | integer | not null\n arg | bytea |\n uniqkey | text |\n insert_time | integer |\n run_after | integer | not null\n grabbed_until | integer | not null\n priority | smallint |\n coalesce | text |\nIndexes:\n \"job_funcid_key\" UNIQUE, btree (funcid, uniqkey)\n \"funcid_coalesce_priority\" btree (funcid, \"coalesce\", priority)\n \"funcid_prority_idx2\" btree (funcid, priority)\n \"job_jobid_idx\" btree (jobid)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 1000;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2008.53 rows=1000 width=6) (actual\ntime=0.077..765.169 rows=1000 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.074..763.664\nrows=1000 loops=1)\n Index Cond: (funcid = 3)\n Total runtime: 766.104 ms\n(4 rows)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 50;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..100.43 rows=50 width=6) (actual time=0.037..505.765\nrows=50 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.035..505.690\nrows=50 loops=1)\n Index Cond: (funcid = 3)\n Total runtime: 505.959 ms\n(4 rows)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..20.09 rows=10 width=6) (actual time=0.056..0.653\nrows=10 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959152.95 rows=3962674 width=6) (actual time=0.054..0.640\nrows=10 loops=1)\n Index Cond: (funcid = 3)\n Total runtime: 0.687 ms\n(4 rows)\n\nSo what I see is that \"top 10\" takes < 1ms, top 50 takes over 500 times\nmore, and top 1000 only 1.5 times more than top 50.\n\nWhat can the reason be for the huge drop between limit 10 and limit 50 be?\n\n-- \nJesper\n",
"msg_date": "Fri, 01 Jan 2010 12:48:43 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Message queue table - strange performance drop with changing limit\n\tsize."
},
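A couple of quick checks would add useful context to the question above (a rough sketch only; the schema-qualified names come from the \d output shown, and pg_relation_size/pg_size_pretty assume PostgreSQL 8.1 or later):

    -- How many rows really carry funcid = 3, and how large the table and the
    -- index being scanned are on disk.  A heap much larger than RAM makes the
    -- caching-threshold explanation given later in the thread more likely.
    SELECT count(*) FROM workqueue.job WHERE funcid = 3;

    SELECT pg_size_pretty(pg_relation_size('workqueue.job'))                 AS table_size,
           pg_size_pretty(pg_relation_size('workqueue.funcid_prority_idx2')) AS index_size;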
{
"msg_contents": "Jesper --\n\nI apologize for top-quoting -- a challenged reader. \n\nThis doesn't directly address your question, but I can't help but notice that the estimates for rows is _wildly_ off the actual number in each and every query. How often / recently have you run ANALYZE on this table ?\n\nAre the timing results consistent over several runs ? It is possible that caching effects are entering into the time results.\n\nGreg Williamson\n\n\n\n----- Original Message ----\nFrom: Jesper Krogh <[email protected]>\nTo: [email protected]\nSent: Fri, January 1, 2010 3:48:43 AM\nSubject: [PERFORM] Message queue table - strange performance drop with changing limit size. \n\nHi.\n\nI have a \"message queue\" table, that contains in the order of 1-10m\n\"messages\". It is implemented using TheSchwartz:\nhttp://search.cpan.org/~bradfitz/TheSchwartz-1.07/lib/TheSchwartz.pm\n\nSo when a \"worker\" picks the next job it goes into the \"job\" table an\nselect the top X highest priority messages with the \"funcid\" that it can\nwork on. The table looks like this:\ndb=# \\d workqueue.job\n Table \"workqueue.job\"\n Column | Type | Modifiers\n\n---------------+----------+---------------------------------------------------------------\njobid | integer | not null default\nnextval('workqueue.job_jobid_seq'::regclass)\nfuncid | integer | not null\narg | bytea |\nuniqkey | text |\ninsert_time | integer |\nrun_after | integer | not null\ngrabbed_until | integer | not null\npriority | smallint |\ncoalesce | text |\nIndexes:\n \"job_funcid_key\" UNIQUE, btree (funcid, uniqkey)\n \"funcid_coalesce_priority\" btree (funcid, \"coalesce\", priority)\n \"funcid_prority_idx2\" btree (funcid, priority)\n \"job_jobid_idx\" btree (jobid)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 1000;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..2008.53 rows=1000 width=6) (actual\ntime=0.077..765.169 rows=1000 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.074..763.664\nrows=1000 loops=1)\n Index Cond: (funcid = 3)\nTotal runtime: 766.104 ms\n(4 rows)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 50;\n QUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..100.43 rows=50 width=6) (actual time=0.037..505.765\nrows=50 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959150.95 rows=3962674 width=6) (actual time=0.035..505.690\nrows=50 loops=1)\n Index Cond: (funcid = 3)\nTotal runtime: 505.959 ms\n(4 rows)\n\nefam=# explain ANALYZe select jobid from workqueue.job where job.funcid\nin (3) order by priority asc limit 10;\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..20.09 rows=10 width=6) (actual time=0.056..0.653\nrows=10 loops=1)\n -> Index Scan using funcid_prority_idx2 on job\n(cost=0.00..7959152.95 rows=3962674 width=6) (actual time=0.054..0.640\nrows=10 loops=1)\n Index Cond: (funcid = 3)\nTotal runtime: 0.687 ms\n(4 rows)\n\nSo what I see is that \"top 10\" takes < 1ms, top 50 takes over 500 times\nmore, and top 1000 only 1.5 
times more than top 50.\n\nWhat can the reason be for the huge drop between limit 10 and limit 50 be?\n\n-- \nJesper\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \n",
"msg_date": "Fri, 1 Jan 2010 06:53:52 -0800 (PST)",
"msg_from": "Greg Williamson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Message queue table - strange performance drop with changing\n\tlimit size."
},
{
"msg_contents": "Greg Williamson wrote:\n> Jesper --\n> \n> I apologize for top-quoting -- a challenged reader.\n> \n> This doesn't directly address your question, but I can't help but\n> notice that the estimates for rows is _wildly_ off the actual number\n> in each and every query. How often / recently have you run ANALYZE on\n> this table ?\n\nIt is actually rather accurate, what you see in the explain analyze is\nthe \"limit\" number getting in.. where the inner \"rows\" estiemate is for\nthe where clause+filter.\n\n> Are the timing results consistent over several runs ? It is possible\n> that caching effects are entering into the time results.\n\nYes, they are very consistent. It have subsequently found out that it\ndepends on the amount of \"workers\" doing it in parallel. I seem to top\nat around 12 processes.\n\nI think I need to rewrite the message-queue stuff in a way that can take\nadvantage of some stored procedures instead. Currenly it picks out the\n\"top X\" randomize it in the client picks one and tries to \"grab\" it. ..\nand over again if it fails. When the select top X begins to consume\nsignifcant time it self the process bites itself and gradually gets worse.\n\nThe workload for the individual jobs are \"small\". ~1-2s.\n\n-- \nJesper\n",
"msg_date": "Fri, 01 Jan 2010 16:04:53 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Message queue table - strange performance drop with\n\tchanging limit size."
},
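A minimal sketch of the single-statement "grab" Jesper says he wants to move toward. This is not TheSchwartz's actual SQL; the one-hour lease and the use of grabbed_until as a Unix-epoch timestamp are assumptions, and RETURNING needs 8.2 or later:

    -- Claim the highest-priority available job for funcid 3 in one statement.
    -- Under READ COMMITTED, a second worker racing for the same row re-checks
    -- the WHERE clause after the first worker commits, updates zero rows, and
    -- simply retries.
    UPDATE workqueue.job
       SET grabbed_until = extract(epoch FROM now())::int + 3600   -- assumed one-hour lease
     WHERE jobid = (SELECT jobid
                      FROM workqueue.job
                     WHERE funcid = 3
                       AND grabbed_until < extract(epoch FROM now())::int
                     ORDER BY priority
                     LIMIT 1)
       AND grabbed_until < extract(epoch FROM now())::int
    RETURNING jobid;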
{
"msg_contents": "Jesper Krogh <[email protected]> writes:\n> I have a \"message queue\" table, that contains in the order of 1-10m\n> \"messages\". It is implemented using TheSchwartz:\n> http://search.cpan.org/~bradfitz/TheSchwartz-1.07/lib/TheSchwartz.pm\n\nOne way to approach queueing efficiently with PostgreSQL is to rely on\nPGQ. New upcoming 3.0 version (alpha1 has been released) contains the\nbasics for having cooperative consumers, stable version (2.1.10) only\nallows multiple consumers to all do the same work (think replication).\n\n http://wiki.postgresql.org/wiki/Skytools\n http://wiki.postgresql.org/wiki/PGQ_Tutorial\n\nRegards,\n-- \ndim\n",
"msg_date": "Sat, 02 Jan 2010 13:25:14 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Message queue table - strange performance drop with changing\n\tlimit size."
},
{
"msg_contents": "Jesper Krogh wrote:\n> So what I see is that \"top 10\" takes < 1ms, top 50 takes over 500 times\n> more, and top 1000 only 1.5 times more than top 50.\n> What can the reason be for the huge drop between limit 10 and limit 50 be?\n> \n\nNormally this means you're hitting much higher performing cached \nbehavior with the smaller amount that's degrading significantly once \nyou've crossed some threshold. L1 and L2 CPUs caches vs. regular RAM, \nshared_buffers vs. OS cache changes, and cached in RAM vs. read from \ndisk are three transition spots where you can hit a large drop in \nperformance just by crossing some boundary, going from \"just fits\" to \n\"doesn't fit and data thrashes around\". Larger data sets do not take a \nlinearly larger amount of time to run queries against--they sure can \ndegrade order of magnitude faster than that.\n\n> Indexes:\n> \"job_funcid_key\" UNIQUE, btree (funcid, uniqkey)\n> \"funcid_coalesce_priority\" btree (funcid, \"coalesce\", priority)\n> \"funcid_prority_idx2\" btree (funcid, priority)\n> \"job_jobid_idx\" btree (jobid)\n> \n\nThere may very well be an underlying design issue here though. Indexes \nare far from free to maintain. You've got a fair number of them with a \nlot of redundant information, which is adding a significant amount of \noverhead for questionable gains. If you added those just from the \ntheory of \"those are the fields combinations I search via\", you really \nneed to benchmarking that design decision--it's rarely that easy to \nfigure out what works best. For example, if on average there are a \nsmall number of things attached to each funcid, the middle two indexes \nhere are questionable--it may be more efficient to the system as a whole \nto just grab them all rather than pay the overhead to maintain all these \nindexes. This is particularly true if you're deleting or updating \nentries ito remove them from this queue, which is going to add a lot of \nVACUUM-related cleanup here as well.\n\nIn your situation, I might try dropping both funcid_coalesce_priority \nand then funcid_prority_idx2 and watching what happens to your \nperformance and plans, just to learn more about whether they're really \nneeded. A look at the various pg_stat_* view to help determine what \nphysical I/O and index use is actually going on might be useful too.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sun, 03 Jan 2010 14:20:22 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Message queue table - strange performance drop with\n\tchanging limit size."
}
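To make the pg_stat_* suggestion concrete, one such check might look like this (standard view and column names; it assumes the statistics collector is enabled):

    -- How often each index on workqueue.job is actually used.  Indexes that
    -- still show near-zero idx_scan after a representative period are the
    -- natural candidates for the drop-and-measure experiment described above.
    SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
      FROM pg_stat_user_indexes
     WHERE schemaname = 'workqueue'
       AND relname = 'job'
     ORDER BY idx_scan;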
]
[
{
"msg_contents": "Hi all,\n\n I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL\nv8.1.11 a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is\next3 on LVM with 32MB extents. It's about the only real resource-hungry\nVM on the server.\n\n It slows down over time and I can't seem to find a way to get the\nperformance to return without doing a dump and reload of the database.\nI've tried manually running 'VACUUM FULL' and restarting the postgresql\ndaemon without success.\n\nFor example, here is an actual query before the dump and again after the\ndump (sorry for the large query):\n\n-=] Before the dump/reload [=-\nserver@iwt=> EXPLAIN ANALYZE SELECT lor_order_type, lor_order_date,\nlor_order_time, lor_isp_agent_id, lor_last_modified_date,\nlor_isp_order_number, lor_instr_for_bell_rep, lor_type_of_service,\nlor_local_voice_provider, lor_dry_loop_instr, lor_termination_location,\nlor_req_line_speed, lor_server_from, lor_rate_band,\nlor_related_order_nums, lor_related_order_types, lor_activation_date,\nlor_cust_first_name, lor_cust_last_name, lor_req_activation_date,\nlor_street_number, lor_street_number_suffix, lor_street_name,\nlor_street_type, lor_street_direction, lor_location_type_1,\nlor_location_number_1, lor_location_type_2, lor_location_number_2,\nlor_postal_code, lor_municipality, lor_province, lor_customer_group,\nlor_daytime_phone, lor_daytime_phone_ext, lod_value AS circuit_number\nFROM line_owner_report LEFT JOIN line_owner_data ON (lor_id=lod_lo_id\nAND lod_variable='ISPCircuitNumber1') WHERE lor_lo_id=514;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=2115.43..112756.81 rows=8198 width=1152) (actual\ntime=1463.311..1463.380 rows=1 loops=1)\n Hash Cond: (\"outer\".lor_id = \"inner\".lod_lo_id)\n -> Seq Scan on line_owner_report (cost=0.00..108509.85 rows=8198\nwidth=1124) (actual time=1462.810..1462.872 rows=1 loops=1)\n Filter: (lor_lo_id = 514)\n -> Hash (cost=2112.85..2112.85 rows=1033 width=36) (actual\ntime=0.421..0.421 rows=5 loops=1)\n -> Bitmap Heap Scan on line_owner_data (cost=9.61..2112.85\nrows=1033 width=36) (actual time=0.274..0.378 rows=5 loops=1)\n Recheck Cond: (lod_variable = 'ISPCircuitNumber1'::text)\n -> Bitmap Index Scan on lod_variable_index\n(cost=0.00..9.61 rows=1033 width=0) (actual time=0.218..0.218 rows=5\nloops=1)\n Index Cond: (lod_variable = 'ISPCircuitNumber1'::text)\n Total runtime: 1463.679 ms\n(10 rows)\n\n-=] After the dump/reload [=-\nserver@iwt=> EXPLAIN ANALYZE SELECT lor_order_type, lor_order_date,\nlor_order_time, lor_isp_agent_id, lor_last_modified_date,\nlor_isp_order_number, lor_instr_for_bell_rep, lor_type_of_service,\nlor_local_voice_provider, lor_dry_loop_instr, lor_termination_location,\nlor_req_line_speed, lor_server_from, lor_rate_band,\nlor_related_order_nums, lor_related_order_types, lor_activation_date,\nlor_cust_first_name, lor_cust_last_name, lor_req_activation_date,\nlor_street_number, lor_street_number_suffix, lor_street_name,\nlor_street_type, lor_street_direction, lor_location_type_1,\nlor_location_number_1, lor_location_type_2, lor_location_number_2,\nlor_postal_code, lor_municipality, lor_province, lor_customer_group,\nlor_daytime_phone, lor_daytime_phone_ext, lod_value AS circuit_number\nFROM line_owner_report LEFT JOIN line_owner_data ON (lor_id=lod_lo_id\nAND lod_variable='ISPCircuitNumber1') WHERE lor_lo_id=514;\n 
QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=10.84..182.57 rows=5 width=1152) (actual\ntime=1.980..2.083 rows=1 loops=1)\n -> Seq Scan on line_owner_report (cost=0.00..70.05 rows=5\nwidth=1124) (actual time=1.388..1.485 rows=1 loops=1)\n Filter: (lor_lo_id = 514)\n -> Bitmap Heap Scan on line_owner_data (cost=10.84..22.47 rows=3\nwidth=36) (actual time=0.562..0.562 rows=0 loops=1)\n Recheck Cond: ((\"outer\".lor_id = line_owner_data.lod_lo_id)\nAND (line_owner_data.lod_variable = 'ISPCircuitNumber1'::text))\n -> BitmapAnd (cost=10.84..10.84 rows=3 width=0) (actual\ntime=0.552..0.552 rows=0 loops=1)\n -> Bitmap Index Scan on lod_id_index (cost=0.00..4.80\nrows=514 width=0) (actual time=0.250..0.250 rows=126 loops=1)\n Index Cond: (\"outer\".lor_id =\nline_owner_data.lod_lo_id)\n -> Bitmap Index Scan on lod_variable_index\n(cost=0.00..5.80 rows=514 width=0) (actual time=0.262..0.262 rows=5 loops=1)\n Index Cond: (lod_variable = 'ISPCircuitNumber1'::text)\n Total runtime: 2.576 ms\n(11 rows)\n\n Any idea on what might be causing the slowdown? Is it likely\nfilesystem related or am I missing for maintenance step?\n\nThanks!\n\nMadi\n\n",
"msg_date": "Mon, 04 Jan 2010 14:10:26 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB is slow until DB is reloaded"
},
{
"msg_contents": "On 04/01/2010 7:10 PM, Madison Kelly wrote:\n> Hi all,\n>\n> I've got a fairly small DB (~850MB when pg_dump'ed) running on PgSQL\n> v8.1.11 a CentOS 5.3 x86_64 Xen-based virtual machine. The filesystem is\n> ext3 on LVM with 32MB extents. It's about the only real resource-hungry\n> VM on the server.\n>\n> It slows down over time and I can't seem to find a way to get the\n> performance to return without doing a dump and reload of the database.\n> I've tried manually running 'VACUUM FULL' and restarting the postgresql\n> daemon without success.\n>\n> For example, here is an actual query before the dump and again after the\n> dump (sorry for the large query):\n>\n> -=] Before the dump/reload [=-\n> server@iwt=> EXPLAIN ANALYZE SELECT lor_order_type, lor_order_date,\n> lor_order_time, lor_isp_agent_id, lor_last_modified_date,\n> lor_isp_order_number, lor_instr_for_bell_rep, lor_type_of_service,\n> lor_local_voice_provider, lor_dry_loop_instr, lor_termination_location,\n> lor_req_line_speed, lor_server_from, lor_rate_band,\n> lor_related_order_nums, lor_related_order_types, lor_activation_date,\n> lor_cust_first_name, lor_cust_last_name, lor_req_activation_date,\n> lor_street_number, lor_street_number_suffix, lor_street_name,\n> lor_street_type, lor_street_direction, lor_location_type_1,\n> lor_location_number_1, lor_location_type_2, lor_location_number_2,\n> lor_postal_code, lor_municipality, lor_province, lor_customer_group,\n> lor_daytime_phone, lor_daytime_phone_ext, lod_value AS circuit_number\n> FROM line_owner_report LEFT JOIN line_owner_data ON (lor_id=lod_lo_id\n> AND lod_variable='ISPCircuitNumber1') WHERE lor_lo_id=514;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------ \n>\n> Hash Left Join (cost=2115.43..112756.81 rows=8198 width=1152) (actual\n> time=1463.311..1463.380 rows=1 loops=1)\n> Hash Cond: (\"outer\".lor_id = \"inner\".lod_lo_id)\n> -> Seq Scan on line_owner_report (cost=0.00..108509.85 rows=8198\n> width=1124) (actual time=1462.810..1462.872 rows=1 loops=1)\n> Filter: (lor_lo_id = 514)\n> -> Hash (cost=2112.85..2112.85 rows=1033 width=36) (actual\n> time=0.421..0.421 rows=5 loops=1)\n> -> Bitmap Heap Scan on line_owner_data (cost=9.61..2112.85\n> rows=1033 width=36) (actual time=0.274..0.378 rows=5 loops=1)\n> Recheck Cond: (lod_variable = 'ISPCircuitNumber1'::text)\n> -> Bitmap Index Scan on lod_variable_index\n> (cost=0.00..9.61 rows=1033 width=0) (actual time=0.218..0.218 rows=5\n> loops=1)\n> Index Cond: (lod_variable = \n> 'ISPCircuitNumber1'::text)\n> Total runtime: 1463.679 ms\n> (10 rows)\n>\n> -=] After the dump/reload [=-\n> server@iwt=> EXPLAIN ANALYZE SELECT lor_order_type, lor_order_date,\n> lor_order_time, lor_isp_agent_id, lor_last_modified_date,\n> lor_isp_order_number, lor_instr_for_bell_rep, lor_type_of_service,\n> lor_local_voice_provider, lor_dry_loop_instr, lor_termination_location,\n> lor_req_line_speed, lor_server_from, lor_rate_band,\n> lor_related_order_nums, lor_related_order_types, lor_activation_date,\n> lor_cust_first_name, lor_cust_last_name, lor_req_activation_date,\n> lor_street_number, lor_street_number_suffix, lor_street_name,\n> lor_street_type, lor_street_direction, lor_location_type_1,\n> lor_location_number_1, lor_location_type_2, lor_location_number_2,\n> lor_postal_code, lor_municipality, lor_province, lor_customer_group,\n> lor_daytime_phone, lor_daytime_phone_ext, lod_value AS circuit_number\n> FROM 
line_owner_report LEFT JOIN line_owner_data ON (lor_id=lod_lo_id\n> AND lod_variable='ISPCircuitNumber1') WHERE lor_lo_id=514;\n> QUERY\n> PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------- \n>\n> Nested Loop Left Join (cost=10.84..182.57 rows=5 width=1152) (actual\n> time=1.980..2.083 rows=1 loops=1)\n> -> Seq Scan on line_owner_report (cost=0.00..70.05 rows=5\n> width=1124) (actual time=1.388..1.485 rows=1 loops=1)\n> Filter: (lor_lo_id = 514)\n> -> Bitmap Heap Scan on line_owner_data (cost=10.84..22.47 rows=3\n> width=36) (actual time=0.562..0.562 rows=0 loops=1)\n> Recheck Cond: ((\"outer\".lor_id = line_owner_data.lod_lo_id)\n> AND (line_owner_data.lod_variable = 'ISPCircuitNumber1'::text))\n> -> BitmapAnd (cost=10.84..10.84 rows=3 width=0) (actual\n> time=0.552..0.552 rows=0 loops=1)\n> -> Bitmap Index Scan on lod_id_index (cost=0.00..4.80\n> rows=514 width=0) (actual time=0.250..0.250 rows=126 loops=1)\n> Index Cond: (\"outer\".lor_id =\n> line_owner_data.lod_lo_id)\n> -> Bitmap Index Scan on lod_variable_index\n> (cost=0.00..5.80 rows=514 width=0) (actual time=0.262..0.262 rows=5 \n> loops=1)\n> Index Cond: (lod_variable = \n> 'ISPCircuitNumber1'::text)\n> Total runtime: 2.576 ms\n> (11 rows)\n>\n> Any idea on what might be causing the slowdown? Is it likely\n> filesystem related or am I missing for maintenance step?\n\nYou'll notice that in the first query your row estimates are off by \nseveral orders of magnitude, where in the second query they are much \nmore accurate.\n\nI'm guessing that the data is changing a fair bit over time \n(inserts/updates/deletes) and you are not ANALYZEing regularly (and \nprobably not VACUUMing regularly either).\n\nTry regular VACUUM ANALYZE DATABASE rather than VACUUM FULL or a \ndump/reload.\n\nCheers,\nGary.\n\n\n",
"msg_date": "Mon, 04 Jan 2010 19:25:15 +0000",
"msg_from": "Gary Doades <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
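One way to see the stale statistics Gary is describing (table and column names are taken from the plans above; pg_stats and ANALYZE work like this on 8.1):

    -- What the planner currently believes about lor_lo_id.  Out-of-date
    -- entries here are what produce the rows=8198 estimate against an
    -- actual single row.
    SELECT attname, n_distinct, most_common_vals, most_common_freqs
      FROM pg_stats
     WHERE tablename = 'line_owner_report'
       AND attname = 'lor_lo_id';

    -- Refresh the statistics for just this table.
    ANALYZE VERBOSE line_owner_report;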
{
"msg_contents": "Madison Kelly wrote:\n> Hi all,\n>\n> I've got a fairly small DB...\n>\n> It slows down over time and I can't seem to find a way to get the\n> performance to return without doing a dump and reload of the database...\n\nSome questions:\n\nIs autovacuum running? This is the most likely suspect. If not, things \nwill bloat and you won't be getting appropriate \"analyze\" runs. Speaking \nof which, what happens if you just run \"analyze\"?\n\nAnd as long as you are dumping and reloading anyway, how about version \nupgrading for bug reduction, performance improvement, and cool new features.\n\nCheers,\nSteve\n",
"msg_date": "Mon, 04 Jan 2010 11:31:33 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Steve Crawford wrote:\n> Madison Kelly wrote:\n>> Hi all,\n>>\n>> I've got a fairly small DB...\n>>\n>> It slows down over time and I can't seem to find a way to get the\n>> performance to return without doing a dump and reload of the database...\n> \n> Some questions:\n> \n> Is autovacuum running? This is the most likely suspect. If not, things \n> will bloat and you won't be getting appropriate \"analyze\" runs. Speaking \n> of which, what happens if you just run \"analyze\"?\n> \n> And as long as you are dumping and reloading anyway, how about version \n> upgrading for bug reduction, performance improvement, and cool new \n> features.\n> \n> Cheers,\n> Steve\n> \n\nYup, I even tried manually running 'VACUUM FULL' and it didn't help. As \nfor upgrading;\n\na) I am trying to find a way around the dump/reload. I am doing it as a \n\"last resort\" only.\nb) I want to keep the version in CentOS' repo.\n\nI'd not tried simply updating the stats via ANALYZE... I'll keep an eye \non performance and if it starts to slip again, I will run ANALYZE and \nsee if that helps. If there is a way to run ANALYZE against a query that \nI am missing, please let me know.\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 15:30:46 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "\n> Yup, I even tried manually running 'VACUUM FULL' and it didn't help. As \n> for upgrading;\n\nVACUUM FULL is usually considered a bad idea. What you probably want to \ndo instead is CLUSTER, followed by ANALYZE.\n\nBasically, VACUUM makes the indexes smaller (but doesn't reclaim much \nspace from the tables themselves). VACUUM FULL reclaims space from the \ntables, but bloats the indexes.\n\n> a) I am trying to find a way around the dump/reload. I am doing it as a \n> \"last resort\" only.\n> b) I want to keep the version in CentOS' repo.\n> \n\nPostgres is pretty easy to build from source. It's nicely \nself-contained, and won't bite you with dependency hell. So don't be too \nwary of compiling it.\n\nRichard\n",
"msg_date": "Mon, 04 Jan 2010 20:39:13 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
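If the CLUSTER route is tried on this 8.1 install, the pre-8.3 syntax puts the index name first; the index name below is made up purely for illustration:

    -- CLUSTER rewrites the table in the order of the chosen index and rebuilds
    -- its indexes; ANALYZE then refreshes the planner statistics.
    CLUSTER line_owner_report_pkey ON line_owner_report;   -- hypothetical index name
    ANALYZE line_owner_report;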
{
"msg_contents": "\n\nOn 04/01/2010 8:30 PM, Madison Kelly wrote:\n> Steve Crawford wrote:\n>> Madison Kelly wrote:\n>>> Hi all,\n>>>\n>>> I've got a fairly small DB...\n>>>\n>>> It slows down over time and I can't seem to find a way to get the\n>>> performance to return without doing a dump and reload of the \n>>> database...\n>>\n>> Some questions:\n>>\n>> Is autovacuum running? This is the most likely suspect. If not, \n>> things will bloat and you won't be getting appropriate \"analyze\" \n>> runs. Speaking of which, what happens if you just run \"analyze\"?\n>>\n>> And as long as you are dumping and reloading anyway, how about \n>> version upgrading for bug reduction, performance improvement, and \n>> cool new features.\n>>\n>> Cheers,\n>> Steve\n>>\n>\n> Yup, I even tried manually running 'VACUUM FULL' and it didn't help. \n> As for upgrading;\n>\nVACUUM FULL is not the same as VACUUM ANALYZE FULL. You shouldn't need \nthe FULL option amyway.\n> a) I am trying to find a way around the dump/reload. I am doing it as \n> a \"last resort\" only.\n> b) I want to keep the version in CentOS' repo.\n>\n> I'd not tried simply updating the stats via ANALYZE... I'll keep an \n> eye on performance and if it starts to slip again, I will run ANALYZE \n> and see if that helps. If there is a way to run ANALYZE against a \n> query that I am missing, please let me know.\n>\n From your queries it definitely looks like its your stats that are the \nproblem. When the stats get well out of date the planner is choosing a \nhash join because it thinks thousands of rows are involved where as only \na few are actually involved. Thats why, with better stats, the second \nquery is using a loop join over very few rows and running much quicker.\n\nTherefore it's ANALYZE you need to run as well as regular VACUUMing. \nThere should be no need to VACUUM FULL at all as long as you VACUUM and \nANALYZE regularly. Once a day may be enough, but you don't say how long \nit takes your database to become \"slow\".\n\nYou can VACUUM either the whole database (often easiest) or individual \ntables if you know in more detail what the problem is and that only \ncertain tables need it.\n\nSetting up autovacuum may well be sufficient.\n\nCheers,\nGary.\n\n\n\n\n\n\n> Madi\n>\n",
"msg_date": "Mon, 04 Jan 2010 20:40:02 +0000",
"msg_from": "Gary Doades <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Gary Doades wrote:\n> From your queries it definitely looks like its your stats that are the \n> problem. When the stats get well out of date the planner is choosing a \n> hash join because it thinks thousands of rows are involved where as only \n> a few are actually involved. Thats why, with better stats, the second \n> query is using a loop join over very few rows and running much quicker.\n> \n> Therefore it's ANALYZE you need to run as well as regular VACUUMing. \n> There should be no need to VACUUM FULL at all as long as you VACUUM and \n> ANALYZE regularly. Once a day may be enough, but you don't say how long \n> it takes your database to become \"slow\".\n> \n> You can VACUUM either the whole database (often easiest) or individual \n> tables if you know in more detail what the problem is and that only \n> certain tables need it.\n> \n> Setting up autovacuum may well be sufficient.\n> \n> Cheers,\n> Gary.\n\nThat explains things, thank you!\n\nFor the record; It was taking a few months for the performance to become \nintolerable. I've added CLUSTER -> ANALYZE -> VACUUM to my nightly \nroutine and dropped the VACUUM FULL call. I'll see how this works.\n\nCheers!\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 15:53:41 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "On Mon, 2010-01-04 at 15:53 -0500, Madison Kelly wrote:\n> Gary Doades wrote:\n> > From your queries it definitely looks like its your stats that are the \n> > problem. When the stats get well out of date the planner is choosing a \n> > hash join because it thinks thousands of rows are involved where as only \n> > a few are actually involved. Thats why, with better stats, the second \n> > query is using a loop join over very few rows and running much quicker.\n> > \n> > Therefore it's ANALYZE you need to run as well as regular VACUUMing. \n> > There should be no need to VACUUM FULL at all as long as you VACUUM and \n> > ANALYZE regularly. Once a day may be enough, but you don't say how long \n> > it takes your database to become \"slow\".\n> > \n> > You can VACUUM either the whole database (often easiest) or individual \n> > tables if you know in more detail what the problem is and that only \n> > certain tables need it.\n> > \n> > Setting up autovacuum may well be sufficient.\n> > \n> > Cheers,\n> > Gary.\n> \n> That explains things, thank you!\n> \n> For the record; It was taking a few months for the performance to become \n> intolerable. I've added CLUSTER -> ANALYZE -> VACUUM to my nightly \n> routine and dropped the VACUUM FULL call. I'll see how this works.\n\nI think you are going down the wrong route here - you should be looking\nat preventative maintenance instead of fixing it after its broken.\n\nEnsure that autovacuum is running for the database (assuming that you\nare on a relatively modern version of PG), and possibly tune it to be\nmore aggressive (we can help).\n\nThis will ensure that the condition never comes up.\n\nps - if you do go with the route specify, no need to VACUUM after the\nCLUSTER. CLUSTER gets rid of the dead tuples - nothing for VACUUM to\ndo.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Mon, 04 Jan 2010 16:00:56 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Madison Kelly <[email protected]> wrote:\n \n> I've added CLUSTER -> ANALYZE -> VACUUM to my nightly \n> routine and dropped the VACUUM FULL call.\n \nThe CLUSTER is probably not going to make much difference once\nyou've eliminated bloat, unless your queries do a lot of searches in\nthe sequence of the index used. Be sure to run VACUUM ANALYZE as\none statement, not two separate steps.\n \n-Kevin\n",
"msg_date": "Mon, 04 Jan 2010 15:03:48 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Madison Kelly <[email protected]> wrote:\n> \n>> I've added CLUSTER -> ANALYZE -> VACUUM to my nightly \n>> routine and dropped the VACUUM FULL call.\n> \n> The CLUSTER is probably not going to make much difference once\n> you've eliminated bloat, unless your queries do a lot of searches in\n> the sequence of the index used. Be sure to run VACUUM ANALYZE as\n> one statement, not two separate steps.\n> \n> -Kevin\n\nAh, noted and updated, thank you.\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 16:28:25 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Brad Nicholson wrote:\n> I think you are going down the wrong route here - you should be looking\n> at preventative maintenance instead of fixing it after its broken.\n> \n> Ensure that autovacuum is running for the database (assuming that you\n> are on a relatively modern version of PG), and possibly tune it to be\n> more aggressive (we can help).\n> \n> This will ensure that the condition never comes up.\n> \n> ps - if you do go with the route specify, no need to VACUUM after the\n> CLUSTER. CLUSTER gets rid of the dead tuples - nothing for VACUUM to\n> do.\n> \n\n I wanted to get ahead of the problem, hence my question here. :) I've \nset this to run at night ('iwt' being the DB in question):\n\nsu postgres -c \"psql iwt -c \\\"VACUUM ANALYZE VERBOSE\\\"\n\n I will keep an eye on the output for a little while (it appends to a \nlog) and see what it says. Also, I read that CLUSTER can mess up back \nups as it makes tables look empty while running. If the above doesn't \nseem to help, I will swap out the VACUUM and run a CLUSTER before the \nANALYZE and see how that works.\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 16:33:07 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Madison Kelly wrote:\n> Steve Crawford wrote:\n>> Madison Kelly wrote:\n>>> Hi all,\n>>>\n>>> I've got a fairly small DB...\n>>>\n>>> It slows down over time and I can't seem to find a way to get the\n>>> performance to return without doing a dump and reload of the \n>>> database...\n>>\n> Yup, I even tried manually running 'VACUUM FULL' and it didn't help.\nThat's because VACUUM reclaims space (er, actually marks space that is \navailable for reuse) while ANALYZE refreshes the statistics that the \nplanner uses.\n\n> As for upgrading;\n>\n> a) I am trying to find a way around the dump/reload. I am doing it as \n> a \"last resort\" only.\nAgreed - it is the last resort. But since you were doing it I was just \nsuggesting that you could combine with a upgrade and get more benefits.\n> b) I want to keep the version in CentOS' repo.\nDepends on reasoning. If you absolutely require a fully vanilla \nparticular version of CentOS for some reason then fine. But telling \nCentOS to use the PostgreSQL Development Group pre-built releases for \nCentOS is a very easy one-time process (it's what I do on my CentOS \nmachines). From memory (but read to end for warnings):\n\nDownload the setup rpm:\nwget http://yum.pgsqlrpms.org/reporpms/8.4/pgdg-centos-8.4-1.noarch.rpm\n\nInstall it:\nrpm -i pgdg-centos-8.4-1.noarch.rpm\n\nNote: This does not install PostgreSQL - it just updates your repository \nlist to add the repository containing PostgreSQL binaries. Now make sure \nthat you get your updates from PostgreSQL, not CentOS:\n\nEdit /etc/yum.repos.d/CentOS-Base.repo and add \"exclude=postgresql*\" to \nthe [base] and [updates] sections.\n\nNow you can use \"yum\" as normal and you will get PostgreSQL 8.4 and \nupdates thereto rather than using 8.1.\n\nBUT!! I have only done this on new installs. I have not tried it on an \nalready running machine. As always, test first on a dev machine and do \nyour pre-update dump using the new version of the pg_dump utilities, not \nthe old ones.\n\nCheers,\nSteve\n\n>\n>\n> I'd not tried simply updating the stats via ANALYZE... I'll keep an \n> eye on performance and if it starts to slip again, I will run ANALYZE \n> and see if that helps. If there is a way to run ANALYZE against a \n> query that I am missing, please let me know.\nIf you stick with 8.1x, you may want to edit postgresql.conf and change \ndefault_statistics_target to 100 if it is still at the previous default \nof 10. 100 is the new default setting as testing indicates that it tends \nto yield better query plans with minimal additional overhead.\n\nCheers,\nSteve\n\n",
"msg_date": "Mon, 04 Jan 2010 13:43:02 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Madison Kelly wrote:\n>\n> I wanted to get ahead of the problem, hence my question here. :) \n> I've set this to run at night ('iwt' being the DB in question):\n>\n> su postgres -c \"psql iwt -c \\\"VACUUM ANALYZE VERBOSE\\\" \n\nAnd why not the vacuumdb command?:\n\nsu postgres -c \"vacuumdb --analyze --verbose iwt\"\n\n\nBut this is duct-tape and bailing-wire. You REALLY need to make sure \nthat autovacuum is running - you are likely to have much better results \nwith less pain.\n\nCheers,\nSteve\n\n\n\n",
"msg_date": "Mon, 04 Jan 2010 13:53:29 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Steve Crawford wrote:\n> Madison Kelly wrote:\n>> Steve Crawford wrote:\n>>> Madison Kelly wrote:\n>>>> Hi all,\n>>>>\n>>>> I've got a fairly small DB...\n>>>>\n>>>> It slows down over time and I can't seem to find a way to get the\n>>>> performance to return without doing a dump and reload of the \n>>>> database...\n>>>\n>> Yup, I even tried manually running 'VACUUM FULL' and it didn't help.\n> That's because VACUUM reclaims space (er, actually marks space that is \n> available for reuse) while ANALYZE refreshes the statistics that the \n> planner uses.\n> \n>> As for upgrading;\n>>\n>> a) I am trying to find a way around the dump/reload. I am doing it as \n>> a \"last resort\" only.\n> Agreed - it is the last resort. But since you were doing it I was just \n> suggesting that you could combine with a upgrade and get more benefits.\n>> b) I want to keep the version in CentOS' repo.\n> Depends on reasoning. If you absolutely require a fully vanilla \n> particular version of CentOS for some reason then fine. But telling \n> CentOS to use the PostgreSQL Development Group pre-built releases for \n> CentOS is a very easy one-time process (it's what I do on my CentOS \n> machines). From memory (but read to end for warnings):\n> \n> Download the setup rpm:\n> wget http://yum.pgsqlrpms.org/reporpms/8.4/pgdg-centos-8.4-1.noarch.rpm\n> \n> Install it:\n> rpm -i pgdg-centos-8.4-1.noarch.rpm\n> \n> Note: This does not install PostgreSQL - it just updates your repository \n> list to add the repository containing PostgreSQL binaries. Now make sure \n> that you get your updates from PostgreSQL, not CentOS:\n> \n> Edit /etc/yum.repos.d/CentOS-Base.repo and add \"exclude=postgresql*\" to \n> the [base] and [updates] sections.\n> \n> Now you can use \"yum\" as normal and you will get PostgreSQL 8.4 and \n> updates thereto rather than using 8.1.\n> \n> BUT!! I have only done this on new installs. I have not tried it on an \n> already running machine. As always, test first on a dev machine and do \n> your pre-update dump using the new version of the pg_dump utilities, not \n> the old ones.\n> \n> Cheers,\n> Steve\n> \n>>\n>>\n>> I'd not tried simply updating the stats via ANALYZE... I'll keep an \n>> eye on performance and if it starts to slip again, I will run ANALYZE \n>> and see if that helps. If there is a way to run ANALYZE against a \n>> query that I am missing, please let me know.\n> If you stick with 8.1x, you may want to edit postgresql.conf and change \n> default_statistics_target to 100 if it is still at the previous default \n> of 10. 100 is the new default setting as testing indicates that it tends \n> to yield better query plans with minimal additional overhead.\n> \n> Cheers,\n> Steve\n\nI think for now, I will stick with 8.1, but I will certainly try out \nyour repo edit above on a test machine and see how that works out. I am \nalways reticent to change something as fundamental as postgres without \n\"good reason\". I guess I am a fan of \"if it ain't broke...\". :)\n\nAs for the edit to postgresql.conf, I've made the change. Thanks for the \ndetailed input on that.\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 16:54:19 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Steve Crawford wrote:\n> Madison Kelly wrote:\n>>\n>> I wanted to get ahead of the problem, hence my question here. :) \n>> I've set this to run at night ('iwt' being the DB in question):\n>>\n>> su postgres -c \"psql iwt -c \\\"VACUUM ANALYZE VERBOSE\\\" \n> \n> And why not the vacuumdb command?:\n> \n> su postgres -c \"vacuumdb --analyze --verbose iwt\"\n> \n> \n> But this is duct-tape and bailing-wire. You REALLY need to make sure \n> that autovacuum is running - you are likely to have much better results \n> with less pain.\n> \n> Cheers,\n> Steve\n\nAs for why '-c ...', I guess it was just a matter of which command came \nto mind first. :) Is there a particular benefit to using the 'vacuumdb' \nwrapper?\n\nAs for autovacuum, I assumed (yes, I know) that all v8.x releases \nenabled it by default. How would I confirm that it's running or not?\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 16:57:12 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "On Mon, Jan 4, 2010 at 2:57 PM, Madison Kelly <[email protected]> wrote:\n> As for autovacuum, I assumed (yes, I know) that all v8.x releases enabled it\n> by default. How would I confirm that it's running or not?\n\nI believe it's not enabled by default in 8.1-land, and is as of 8.2\nand later. Whether it's running or not, try \"SELECT * FROM\npg_autovacuum;\". If that returns the null set, it's not doing\nanything, as it hasn't been told it has anything to do.\n\nIME, however, if you really want to benefit from the autovacuum\ndaemon, you probably do want to be on something more recent than 8.1.\n(And, yes, this is a bit of the pot calling the kettle black: I have a\nmixed set of 8.1 and 8.3 hosts. Autovacuum is only running on the\nlatter, while the former are queued for an upgrade.)\n\nrls\n\n-- \n:wq\n",
"msg_date": "Mon, 4 Jan 2010 15:06:34 -0700",
"msg_from": "Rosser Schwarz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
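On 8.1 a quick way to confirm from psql whether autovacuum can run at all, alongside the pg_autovacuum check above (these three settings all exist in 8.1, and autovacuum depends on the row-level statistics collector):

    SHOW autovacuum;              -- must be on
    SHOW stats_start_collector;   -- must be on for autovacuum to have data
    SHOW stats_row_level;         -- must be on for autovacuum to have data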
{
"msg_contents": "Madison Kelly wrote:\n> I think for now, I will stick with 8.1, but I will certainly try out \n> your repo edit above on a test machine and see how that works out. I \n> am always reticent to change something as fundamental as postgres \n> without \"good reason\". I guess I am a fan of \"if it ain't broke...\". :)\n\nPostgreSQL has many fundamental limitations that cannot be resolved no \nmatter what you do in 8.1 that are fixed in later versions. The default \nbehavior for the problem you're having has been massively improved by \nupdates made in 8.2, 8.3, and 8.4. 8.1 can certainly be considered \nbroken in regards to its lack of good and automatic VACUUM and ANALYZE \nbehavior, and you're just seeing the first round of issues in that \narea. Every minute you spend applying temporary fixes to the \nfundamental issues is time you could be better spending toward upgrading \ninstead.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 04 Jan 2010 17:13:55 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "On Mon, Jan 4, 2010 at 3:13 PM, Greg Smith <[email protected]> wrote:\n> Madison Kelly wrote:\n>>\n>> I think for now, I will stick with 8.1, but I will certainly try out your\n>> repo edit above on a test machine and see how that works out. I am always\n>> reticent to change something as fundamental as postgres without \"good\n>> reason\". I guess I am a fan of \"if it ain't broke...\". :)\n>\n> PostgreSQL has many fundamental limitations that cannot be resolved no\n> matter what you do in 8.1 that are fixed in later versions. The default\n> behavior for the problem you're having has been massively improved by\n> updates made in 8.2, 8.3, and 8.4. 8.1 can certainly be considered broken\n> in regards to its lack of good and automatic VACUUM and ANALYZE behavior,\n> and you're just seeing the first round of issues in that area. Every minute\n> you spend applying temporary fixes to the fundamental issues is time you\n> could be better spending toward upgrading instead.\n\nAlso, the HOT updates in 8.3 made a compelling case for us to update,\nand if the OP is suffering from table bloat, HOT might help a lot.\n",
"msg_date": "Mon, 4 Jan 2010 15:51:37 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Rosser Schwarz wrote:\n> On Mon, Jan 4, 2010 at 2:57 PM, Madison Kelly <[email protected]> wrote:\n>> As for autovacuum, I assumed (yes, I know) that all v8.x releases enabled it\n>> by default. How would I confirm that it's running or not?\n> \n> I believe it's not enabled by default in 8.1-land, and is as of 8.2\n> and later. Whether it's running or not, try \"SELECT * FROM\n> pg_autovacuum;\". If that returns the null set, it's not doing\n> anything, as it hasn't been told it has anything to do.\n> \n> IME, however, if you really want to benefit from the autovacuum\n> daemon, you probably do want to be on something more recent than 8.1.\n> (And, yes, this is a bit of the pot calling the kettle black: I have a\n> mixed set of 8.1 and 8.3 hosts. Autovacuum is only running on the\n> latter, while the former are queued for an upgrade.)\n> \n> rls\n\nYou are right, autovacuum is not running after all. From your comment, I \nam wondering if you'd recommend I turn it on or not? If so, given that I \ndoubt I will upgrade any time soon, how would I enable it? I suppose I \ncould google that, but google rarely shares gotcha's. :)\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 19:34:31 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Mon, Jan 4, 2010 at 3:13 PM, Greg Smith <[email protected]> wrote:\n>> Madison Kelly wrote:\n>>> I think for now, I will stick with 8.1, but I will certainly try out your\n>>> repo edit above on a test machine and see how that works out. I am always\n>>> reticent to change something as fundamental as postgres without \"good\n>>> reason\". I guess I am a fan of \"if it ain't broke...\". :)\n>> PostgreSQL has many fundamental limitations that cannot be resolved no\n>> matter what you do in 8.1 that are fixed in later versions. The default\n>> behavior for the problem you're having has been massively improved by\n>> updates made in 8.2, 8.3, and 8.4. 8.1 can certainly be considered broken\n>> in regards to its lack of good and automatic VACUUM and ANALYZE behavior,\n>> and you're just seeing the first round of issues in that area. Every minute\n>> you spend applying temporary fixes to the fundamental issues is time you\n>> could be better spending toward upgrading instead.\n> \n> Also, the HOT updates in 8.3 made a compelling case for us to update,\n> and if the OP is suffering from table bloat, HOT might help a lot.\n> \n\nThese are certainly compelling reasons for me to try upgrading... I will \ntry a test upgrade on a devel server tomorrow using Steve's repo edits.\n\nMadi\n",
"msg_date": "Mon, 04 Jan 2010 19:36:03 -0500",
"msg_from": "Madison Kelly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Madison Kelly wrote:\n>\n> You are right, autovacuum is not running after all. From your comment, \n> I am wondering if you'd recommend I turn it on or not?...\n>\n>\nI see you are considering an upgrade but FWIW on your 8.1 instance, my \nremaining 8.1 server has been running for years with it on. Read up on \nit at:\nhttp://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n\nBasically you need to turn on some stats stuff so autovacuum can \ndetermine when to run (in postgresql.conf):\nstats_start_collector = on\nstats_row_level = on\n\nAnd you need to enable autovacuum (in postgresql.conf):\nautovacuum = on\nautovacuum_naptime = 300 # time between autovacuum runs, \nin secs\n\nThen you can tune it if you need to but at least it will be looking for \nthings that are vacuumworthy every 5 minutes.\n\nCheers,\nSteve\n\n",
"msg_date": "Mon, 04 Jan 2010 17:10:47 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "+Madison Kelly wrote:\n> You are right, autovacuum is not running after all. From your comment, I \n> am wondering if you'd recommend I turn it on or not? If so, given that I \n> doubt I will upgrade any time soon, how would I enable it? I suppose I \n> could google that, but google rarely shares gotcha's. :)\n\nMost of the pain of a Postgres upgrade is the dump/reload step. But you've already had to do that several times, so why the hesitation to upgrade? Upgrading Postgres to the latest release, even if you have to do it from the source code, takes almost no time at all compared to the time you've already burned trying to solve this problem. Do the upgrade, you won't regret it.\n\nCraig\n",
"msg_date": "Mon, 04 Jan 2010 20:02:10 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "On Mon, 2010-01-04 at 20:02 -0800, Craig James wrote:\n> +Madison Kelly wrote:\n> > You are right, autovacuum is not running after all. From your comment, I \n> > am wondering if you'd recommend I turn it on or not? If so, given that I \n> > doubt I will upgrade any time soon, how would I enable it? I suppose I \n> > could google that, but google rarely shares gotcha's. :)\n>\n> Most of the pain of a Postgres upgrade is the dump/reload step. But you've already had to do that several times, so why the hesitation to upgrade? Upgrading Postgres to the latest release, even if you have to do it from the source code, takes almost no time at all compared to the time you've already burned trying to solve this problem. \n\nActually, the biggest pain going beyond 8.2 is the change to implicit\ncasting. \n\n> Do the upgrade, you won't regret it.\n\nAgree.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Tue, 05 Jan 2010 08:17:38 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "CLUSTER also does *nothing at all* to a table unless you have chosen an index to CLUSTER on. Its not as simple as switching from VACUUM or VACUUM FULL to CLUSTER.\n\nDoes CLUSTER also REINDEX? I seem to recall reducing the size of my indexes by REINDEXing after a CLUSTER, but it was a while ago and I could have been mistaken.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Kevin Grittner\nSent: Monday, January 04, 2010 1:04 PM\nTo: Madison Kelly; Gary Doades\nCc: [email protected]\nSubject: Re: [PERFORM] DB is slow until DB is reloaded\n\nMadison Kelly <[email protected]> wrote:\n \n> I've added CLUSTER -> ANALYZE -> VACUUM to my nightly \n> routine and dropped the VACUUM FULL call.\n \nThe CLUSTER is probably not going to make much difference once\nyou've eliminated bloat, unless your queries do a lot of searches in\nthe sequence of the index used. Be sure to run VACUUM ANALYZE as\none statement, not two separate steps.\n \n-Kevin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 5 Jan 2010 17:04:32 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
},
{
"msg_contents": "Scott Carey wrote:\n> CLUSTER also does *nothing at all* to a table unless you have chosen an index to CLUSTER on. Its not as simple as switching from VACUUM or VACUUM FULL to CLUSTER.\n> \n> Does CLUSTER also REINDEX? I seem to recall reducing the size of my indexes by REINDEXing after a CLUSTER, but it was a while ago and I could have been mistaken.\n\nAFAIK CLUSTER builds a new copy of the table, and new indexes for it,\nthen swaps them into the old table and index's place.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 06 Jan 2010 09:16:47 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB is slow until DB is reloaded"
}
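One way to check Craig's description empirically is to compare index sizes before and after a CLUSTER (pg_relation_size and pg_size_pretty are available from 8.1 on; the table name is the one from this thread):

    -- Size of every index on line_owner_report, largest first.
    SELECT c.relname AS index_name,
           pg_size_pretty(pg_relation_size(c.oid)) AS index_size
      FROM pg_class c
      JOIN pg_index i ON i.indexrelid = c.oid
     WHERE i.indrelid = 'line_owner_report'::regclass
     ORDER BY pg_relation_size(c.oid) DESC;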
]
[
{
"msg_contents": "The query shown below [select count(distinct...] seems to be looping \n(99-101% CPU as shown by top for 11+ hours). This using postgres 8.3.5 \non a dual quad core machine (Intel(R) Xeon(R) CPU X5460 @ 3.16GHz) with \n32G RAM. Can I provide any other info to help investigate this issue? Or \nany thoughts on how to prevent/avoid it?\n\nThanks,\nBrian\n\ntop - 11:03:39 up 91 days, 22:39, 2 users, load average: 3.73, 2.14, 1.42\nTasks: 135 total, 3 running, 132 sleeping, 0 stopped, 0 zombie\nCpu(s): 27.3% us, 7.7% sy, 0.0% ni, 54.0% id, 11.0% wa, 0.0% hi, 0.0% si\nMem: 33264272k total, 33247780k used, 16492k free, 17232k buffers\nSwap: 4088532k total, 334264k used, 3754268k free, 26760304k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n24585 postgres 17 0 572m 494m 484m R 99 1.5 646:13.63 postmaster\n\ncemdb=# select procpid,query_start,current_query from pg_stat_activity;\n procpid | query_start | \n current_query\n---------+-------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n 13210 | 2010-01-04 10:54:04.490107-08 | <IDLE>\n 24496 | 2010-01-04 10:56:14.982997-08 | <IDLE>\n 30636 | 2010-01-04 10:54:04.488569-08 | <IDLE>\n 5309 | 2010-01-04 10:56:22.850159-08 | select \nprocpid,query_start,current_query from pg_stat_activity;\n 30637 | 2010-01-04 10:54:04.490152-08 | <IDLE>\n 24500 | 2010-01-04 10:56:14.98354-08 | <IDLE>\n 13213 | 2010-01-04 10:54:04.491743-08 | <IDLE>\n 13214 | 2010-01-04 10:56:04.197887-08 | <IDLE>\n 24499 | 2010-01-04 10:56:14.981709-08 | <IDLE>\n 24502 | 2010-01-04 10:56:14.985027-08 | <IDLE>\n 13217 | 2010-01-04 10:54:04.487039-08 | <IDLE>\n 24504 | 2010-01-04 10:56:14.984631-08 | <IDLE>\n 24505 | 2010-01-04 10:56:14.985248-08 | <IDLE>\n 27938 | 2010-01-04 10:54:04.485782-08 | <IDLE>\n 1104 | 2010-01-04 10:54:04.489896-08 | <IDLE>\n 27941 | 2010-01-04 10:54:04.491925-08 | <IDLE>\n 24585 | 2010-01-04 00:16:52.764899-08 | select count(distinct \nb.ts_id) from ts_stats_transetgroup_user_weekly b, \nts_stats_transet_user_interval c, ts_transetgroup_transets_map m where \nb.ts_transet_group_id = m.ts_transet_group_id and \nm.ts_transet_incarnation_id = c.ts_transet_incarnation_id and \nc.ts_user_incarnation_id = b.ts_user_incarnation_id and \nc.ts_interval_start_time >= $1 and c.ts_interval_start_time < $2 and \nb.ts_interval_start_time >= $3 and b.ts_interval_start_time < $4\n 24586 | 2010-01-04 10:56:14.986201-08 | <IDLE>\n 13224 | 2010-01-04 10:54:04.487762-08 | <IDLE>\n 24591 | 2010-01-04 10:56:14.98333-08 | <IDLE>\n 24592 | 2010-01-04 10:56:14.98157-08 | <IDLE>\n(21 rows)\n\ncemdb=# select \nlocktype,database,relation,virtualtransaction,mode,granted from pg_locks \nwhere pid=24585;\n locktype | database | relation | virtualtransaction | mode \n | granted\n------------+----------+----------+--------------------+-----------------+---------\n relation | 74755 | 76064 | 23/1037332 | \nAccessShareLock | t\n virtualxid | | | 23/1037332 | ExclusiveLock \n | t\n relation | 74755 | 75245 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76062 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76188 | 
23/1037332 | \nAccessShareLock | t\n relation | 74755 | 74822 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76187 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76186 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76189 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76057 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 75846 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76063 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76058 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76185 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 75874 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76061 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76191 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76059 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76363 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76364 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76192 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76362 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76190 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 75920 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 75343 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76193 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76060 | 23/1037332 | \nAccessShareLock | t\n relation | 74755 | 76065 | 23/1037332 | \nAccessShareLock | t\n(28 rows)\n\ncemdb=> explain analyze select count(distinct b.ts_id) from \nts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c, \nts_transetgroup_transets_map m where b.ts_transet_group_id = \nm.ts_transet_group_id and m.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and c.ts_user_incarnation_id = \nb.ts_user_incarnation_id and c.ts_interval_start_time >= '2010-01-03 \n00:00' and c.ts_interval_start_time < '2010-01-03 08:00' and \nb.ts_interval_start_time >= '2009-12-28 00:00' and \nb.ts_interval_start_time < '2010-01-04 00:00';\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1726281.28..1726281.29 rows=1 width=8) (actual \ntime=16690.541..16690.542 rows=1 loops=1)\n -> Hash Join (cost=43363.87..1725837.91 rows=354697 width=8) \n(actual time=1343.960..14467.012 rows=1600000 loops=1)\n Hash Cond: ((b.ts_transet_group_id = m.ts_transet_group_id) \nAND (c.ts_transet_incarnation_id = m.ts_transet_incarnation_id))\n -> Hash Join (cost=43362.03..1717697.04 rows=1697479 \nwidth=24) (actual time=1343.778..11432.270 rows=1600000 loops=1)\n Hash Cond: (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id)\n -> Bitmap Heap Scan on ts_stats_transet_user_interval c \n (cost=19014.73..1666918.08 rows=844436 width=16) (actual \ntime=950.097..8125.102 rows=800000 loops=1)\n Recheck Cond: ((ts_interval_start_time >= \n'2010-01-03 00:00:00-08'::timestamp with time zone) AND \n(ts_interval_start_time < '2010-01-03 08:00:00-08'::timestamp with time \nzone))\n -> Bitmap Index Scan on \nts_stats_transet_user_interval_starttime (cost=0.00..18909.17 \nrows=844436 width=0) (actual time=930.036..930.036 rows=800000 loops=1)\n Index Cond: ((ts_interval_start_time >= \n'2010-01-03 00:00:00-08'::timestamp with time zone) AND \n(ts_interval_start_time < '2010-01-03 08:00:00-08'::timestamp with time \nzone))\n -> Hash 
(cost=23787.37..23787.37 rows=89590 width=24) \n(actual time=393.570..393.570 rows=89758 loops=1)\n -> Seq Scan on ts_stats_transetgroup_user_weekly \nb (cost=0.00..23787.37 rows=89590 width=24) (actual time=0.040..295.414 \nrows=89758 loops=1)\n Filter: ((ts_interval_start_time >= \n'2009-12-28 00:00:00-08'::timestamp with time zone) AND \n(ts_interval_start_time < '2010-01-04 00:00:00-08'::timestamp with time \nzone))\n -> Hash (cost=1.33..1.33 rows=67 width=16) (actual \ntime=0.156..0.156 rows=67 loops=1)\n -> Seq Scan on ts_transetgroup_transets_map m \n(cost=0.00..1.33 rows=67 width=16) (actual time=0.022..0.080 rows=67 \nloops=1)\n Total runtime: 16691.964 ms\n(15 rows)\n",
"msg_date": "Mon, 04 Jan 2010 11:41:30 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "query looping?"
},
{
"msg_contents": "On Mon, Jan 4, 2010 at 2:41 PM, Brian Cox <[email protected]> wrote:\n> The query shown below [select count(distinct...] seems to be looping\n> (99-101% CPU as shown by top for 11+ hours). This using postgres 8.3.5 on a\n> dual quad core machine (Intel(R) Xeon(R) CPU X5460 @ 3.16GHz) with 32G RAM.\n> Can I provide any other info to help investigate this issue? Or any thoughts\n> on how to prevent/avoid it?\n\nYou posted an EXPLAIN ANALYZE showing almost the same query taking 17\nseconds, but a crucial difference is that the EXPLAIN ANALYZE shows\nthe query with the parameters actually in the query, whereas the query\nthat's actually running for a long time uses bound parameters.\nPostgreSQL won't take the particular values into account in planning\nthat version, which can sometimes lead to a markedly inferior plan.\n\nWhat do you get if you do this?\n\nPREPARE foo AS <the query, with the $x entries still in there>\nEXPLAIN EXECUTE foo(<the values>);\n\n...Robert\n",
"msg_date": "Mon, 4 Jan 2010 16:53:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query looping?"
}
] |
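The exchange above turns on the difference between a plan built for literal values and the generic plan a prepared statement gets. A minimal SQL sketch of that comparison, assuming a hypothetical events(start_time) table rather than the poster's schema:

-- Plan chosen when the planner can see the literal values:
EXPLAIN ANALYZE
SELECT count(*) FROM events
 WHERE start_time >= '2010-01-03 00:00' AND start_time < '2010-01-03 08:00';

-- Plan chosen for bound parameters, which is what the slow application
-- query uses; the planner cannot use value-specific statistics here:
PREPARE diag(timestamptz, timestamptz) AS
 SELECT count(*) FROM events WHERE start_time >= $1 AND start_time < $2;

EXPLAIN EXECUTE diag('2010-01-03 00:00', '2010-01-03 08:00');

DEALLOCATE diag;

If the two plans differ badly, one common workaround is to interpolate the values into the SQL instead of binding them, so each execution is planned with the actual parameters.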
[
{
"msg_contents": "hi,\n\njust a small question: is it normal that PostgreSQL 8.4.1 always uses\nsequential scanning on any table when there is a condition having the\nconstant \"current_user\"? Of course there is a btree index set on that table,\nbut the DBMS just doesn't want to utilize it. When I replace current_user to\nany string, the planner uses the index normally.\n\nI can demonstrate it with the following simple query:\n\nSELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE\npsz.felhasznalo_id = current_user;\n\nExplain analyze:\n\n\"Seq Scan on projektszervezet psz (cost=0.00..255.07 rows=42 width=9)\"\n\" Filter: ((felhasznalo_id)::name = \"current_user\"())\"\n\nMetadata:\n\nCREATE TABLE \"projekt\".\"projektszervezet\" (\n CONSTRAINT \"projektszervezet_pkey\" PRIMARY KEY(\"kotesszam\",\n\"felhasznalo_id\"),\n CONSTRAINT \"projektszervezet_fk_felhasznalo\" FOREIGN KEY\n(\"felhasznalo_id\")\n REFERENCES \"felhasznalo\".\"felhasznalo\"(\"felhasznalo_id\")\n ON DELETE RESTRICT\n ON UPDATE RESTRICT\n NOT DEFERRABLE,\n CONSTRAINT \"projektszervezet_fk_projekt\" FOREIGN KEY (\"kotesszam\")\n REFERENCES \"projekt\".\"projekt\"(\"kotesszam\")\n ON DELETE CASCADE\n ON UPDATE CASCADE\n NOT DEFERRABLE,\n CONSTRAINT \"projektszervezet_fk_szerep\" FOREIGN KEY (\"szerep_id\")\n REFERENCES \"felhasznalo\".\"szerep\"(\"szerep_id\")\n ON DELETE RESTRICT\n ON UPDATE RESTRICT\n NOT DEFERRABLE\n) INHERITS (\"projekt\".\"projektszervezet_sablon\")\nWITH OIDS;\n\nCREATE INDEX \"projektszervezet_idx_felhasznalo_id\" ON\n\"projekt\".\"projektszervezet\"\n USING btree (\"felhasznalo_id\");\n\nCREATE INDEX \"projektszervezet_idx_kotesszam\" ON\n\"projekt\".\"projektszervezet\"\n USING btree (\"kotesszam\");\n\nCREATE TRIGGER \"projektszervezet_archivalas\" BEFORE UPDATE OR DELETE ON\n\"projekt\".\"projektszervezet\" FOR EACH ROW EXECUTE PROCEDURE\n\"public\".\"projektszervezet_archivalas_trigger\"();\n\nCREATE TRIGGER \"projektszervezet_idopecset\" BEFORE UPDATE ON\n\"projekt\".\"projektszervezet\" FOR EACH ROW EXECUTE PROCEDURE\n\"public\".\"idopecset_trigger\"();\n\nCREATE TRIGGER \"projektszervezet_naplozas\" BEFORE INSERT OR UPDATE OR DELETE\nON \"projekt\".\"projektszervezet\" FOR EACH ROW EXECUTE PROCEDURE\n\"public\".\"projektszervezet_naplozas_trigger\"();\n\nInherited table:\n\nCREATE TABLE \"projekt\".\"projektszervezet_sablon\" (\n \"kotesszam\" VARCHAR(10) NOT NULL,\n \"felhasznalo_id\" VARCHAR NOT NULL,\n \"szerep_id\" VARCHAR(3),\n \"felvivo\" VARCHAR DEFAULT \"current_user\"() NOT NULL,\n \"felvitel_idopont\" TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,\n \"modosito\" VARCHAR,\n \"modositas_idopont\" TIMESTAMP WITHOUT TIME ZONE,\n \"elso_felvivo\" VARCHAR DEFAULT \"current_user\"() NOT NULL,\n \"elso_felvitel_idopont\" TIMESTAMP(0) WITHOUT TIME ZONE DEFAULT now() NOT\nNULL\n) WITH OIDS;\n\nCREATE TRIGGER \"projektszervezet_idopecset\" BEFORE UPDATE ON\n\"projekt\".\"projektszervezet_sablon\" FOR EACH ROW EXECUTE PROCEDURE\n\"public\".\"idopecset_trigger\"();\n\n\nThanks!\nBalazs\n\n",
"msg_date": "Mon, 4 Jan 2010 22:47:37 +0100",
"msg_from": "=?iso-8859-2?Q?Keresztury_Bal=E1zs?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "forced sequential scan when condition has current_user"
},
{
"msg_contents": "2010/1/4 Keresztury Balázs <[email protected]>:\n> just a small question: is it normal that PostgreSQL 8.4.1 always uses\n> sequential scanning on any table when there is a condition having the\n> constant \"current_user\"? Of course there is a btree index set on that table,\n> but the DBMS just doesn't want to utilize it. When I replace current_user to\n> any string, the planner uses the index normally.\n>\n> I can demonstrate it with the following simple query:\n>\n> SELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE\n> psz.felhasznalo_id = current_user;\n>\n> Explain analyze:\n>\n> \"Seq Scan on projektszervezet psz (cost=0.00..255.07 rows=42 width=9)\"\n> \" Filter: ((felhasznalo_id)::name = \"current_user\"())\"\n\nYou've only got 42 rows in that table - PostgreSQL probably thinks a\nsequential scan will be faster. It might even be right. The thing\nis, PostgreSQL doesn't know at planning time what the value of\ncurrent_user() will be, so the plan can't depend on that; the planner\njust takes its best shot. But if you provide a particular value in\nthe query then it will look at the stats and see what seems to make\nthe most sense for that particular value. So using one of the more\ncommonly-occuring value in the table might produce a sequential scan,\nwhile a less common value might lead to an index scan.\n\n...Robert\n",
"msg_date": "Mon, 4 Jan 2010 16:59:25 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "Actually table itself has ~8000 records. I don't know why does it report 42\nrows, since there is not even a matching row in the table for this specific\ncondition.. But as we all know, the universal answer for every question is\n42 ;) Autovacuum is on, and I also did some vacuuming before I started to\nplay with this query.\n\nI could implement a function into my application to replace current_user to\nthe actual username, but it just doesn't worth it. By the way, replacing\ncurrent_user to a text constant reduces cost from 255->72, so there is a\nsignificant difference. Don't you think this is actually a bug, not a\nfeature?\n\nbalazs\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Monday, January 04, 2010 10:59 PM\nTo: Keresztury Balázs\nCc: [email protected]\nSubject: Re: [PERFORM] forced sequential scan when condition has\ncurrent_user\n\n2010/1/4 Keresztury Balázs <[email protected]>:\n> just a small question: is it normal that PostgreSQL 8.4.1 always uses\n> sequential scanning on any table when there is a condition having the\n> constant \"current_user\"? Of course there is a btree index set on that\ntable,\n> but the DBMS just doesn't want to utilize it. When I replace current_user\nto\n> any string, the planner uses the index normally.\n>\n> I can demonstrate it with the following simple query:\n>\n> SELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE\n> psz.felhasznalo_id = current_user;\n>\n> Explain analyze:\n>\n> \"Seq Scan on projektszervezet psz (cost=0.00..255.07 rows=42 width=9)\"\n> \" Filter: ((felhasznalo_id)::name = \"current_user\"())\"\n\nYou've only got 42 rows in that table - PostgreSQL probably thinks a\nsequential scan will be faster. It might even be right. The thing\nis, PostgreSQL doesn't know at planning time what the value of\ncurrent_user() will be, so the plan can't depend on that; the planner\njust takes its best shot. But if you provide a particular value in\nthe query then it will look at the stats and see what seems to make\nthe most sense for that particular value. So using one of the more\ncommonly-occuring value in the table might produce a sequential scan,\nwhile a less common value might lead to an index scan.\n\n...Robert\n\n",
"msg_date": "Mon, 4 Jan 2010 23:49:24 +0100",
"msg_from": "=?iso-8859-2?Q?Keresztury_Bal=E1zs?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "\nOn Jan 4, 2010, at 1:59 PM, Robert Haas wrote:\n\n> The thing is, PostgreSQL doesn't know at planning time what the value of\n> current_user() will be, so the plan can't depend on that; the planner\n> just takes its best shot. \n\ncurrent_user() is a stable function and the manual is explicit that the result of stable function can be used in an index scan:\n\n\"A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement. This category allows the optimizer to optimize multiple calls of the function to a single call. In particular, it is safe to use an expression containing such a function in an index scan condition. (Since an index scan will evaluate the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an index scan condition.)\"\n\npostgres=# select provolatile from pg_proc where proname = 'current_user';\n provolatile \n-------------\n s\n\nSo, I think the OP's question is still valid.\n\nErik Jones, Database Administrator\nEngine Yard\nSupport, Scalability, Reliability\n866.518.9273 x 260\nLocation: US/Pacific\nIRC: mage2k\n\n\n\n\n\n",
"msg_date": "Mon, 4 Jan 2010 15:13:43 -0800",
"msg_from": "Erik Jones <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "2010/1/4 Erik Jones <[email protected]>:\n> On Jan 4, 2010, at 1:59 PM, Robert Haas wrote:\n>> The thing is, PostgreSQL doesn't know at planning time what the value of\n>> current_user() will be, so the plan can't depend on that; the planner\n>> just takes its best shot.\n>\n> current_user() is a stable function and the manual is explicit that the result of stable function can be used in an index scan:\n\nThat's true, but what I said is also true. It CAN be used in an index\nscan, and on a sufficiently large table it WILL be used in an index\nscan (I tried it). But the planner doesn't automatically use an index\njust because there is one; it tries to gauge whether that's the right\nstrategy. Unfortunately, in cases where it is comparing to a function\nrather than a constant, its estimates are not always terribly\naccurate.\n\nOne thing I notice is that the OP has not included any information on\nhow fast the seqscan or index-scan actually is. If the seqscan is\nslower than the index-scan, then the OP might want to consider\nadjusting the page cost parameters - EXPLAIN ANALYZE output for both\nplans (perhaps obtained by temporarily setting enable_seqscan to\nfalse) would be helpful in understanding what is happening.\n\n...Robert\n",
"msg_date": "Mon, 4 Jan 2010 21:09:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "Erik Jones wrote:\n> On Jan 4, 2010, at 1:59 PM, Robert Haas wrote:\n> \n>> The thing is, PostgreSQL doesn't know at planning time what the value of\n>> current_user() will be, so the plan can't depend on that; the planner\n>> just takes its best shot. \n> \n> current_user() is a stable function and the manual is explicit that the result of stable function can be used in an index scan:\n\nYes ... but the planner doesn't know the value current_user will return,\nso it can't use its statistics on the frequency with which a\n_particular_ value occurs to make decisions. It has to come up with the\nbest generic plan for any value that current_user might return. It's as\nif current_user were a query parameter that won't be resolved until\nEXECUTE time.\n\nArguably, in this particular case the planner *could* know what value\ncurrent_user will return, but adding such special cases to the planner\nwithout a really good reason seems undesirable.\n\n--\nCraig Ringer\n",
"msg_date": "Tue, 05 Jan 2010 10:14:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> Erik Jones wrote:\n>> current_user() is a stable function and the manual is explicit that the result of stable function can be used in an index scan:\n\n> Yes ... but the planner doesn't know the value current_user will return,\n\nI think it's got nothing to do with that and everything to do with the\nfact that he's comparing to a varchar rather than text column ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jan 2010 00:15:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user "
},
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Keresztury Balázs\n> \n> hi,\n> \n> just a small question: is it normal that PostgreSQL 8.4.1 \n> always uses sequential scanning on any table when there is a \n> condition having the constant \"current_user\"? Of course there \n> is a btree index set on that table, but the DBMS just doesn't \n> want to utilize it. When I replace current_user to any \n> string, the planner uses the index normally.\n> \n> I can demonstrate it with the following simple query:\n> \n> SELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE \n> psz.felhasznalo_id = current_user;\n> \n\nProbably you are comparing different types. Try explicitly casting\ncurrent_user to text:\n\nSELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE \npsz.felhasznalo_id = current_user::text\n\n\n\n",
"msg_date": "Tue, 5 Jan 2010 11:15:38 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: forced sequential scan when condition has current_user"
},
{
"msg_contents": "You are right along with the others, the seq scan was only forced because of\nthe varchar-text comparision.. Using the cast solves the problem.\n\nThanks for the answers everyone!\n\nBalazs\n\n\n\n-----Original Message-----\nFrom: Fernando Hevia [mailto:[email protected]] \nSent: Tuesday, January 05, 2010 3:16 PM\nTo: 'Keresztury Balázs'; [email protected]\nSubject: RE: [PERFORM] forced sequential scan when condition has\ncurrent_user\n\n \n\n> -----Mensaje original-----\n> De: Keresztury Balázs\n> \n> hi,\n> \n> just a small question: is it normal that PostgreSQL 8.4.1 \n> always uses sequential scanning on any table when there is a \n> condition having the constant \"current_user\"? Of course there \n> is a btree index set on that table, but the DBMS just doesn't \n> want to utilize it. When I replace current_user to any \n> string, the planner uses the index normally.\n> \n> I can demonstrate it with the following simple query:\n> \n> SELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE \n> psz.felhasznalo_id = current_user;\n> \n\nProbably you are comparing different types. Try explicitly casting\ncurrent_user to text:\n\nSELECT psz.kotesszam FROM projekt.projektszervezet psz WHERE \npsz.felhasznalo_id = current_user::text\n\n\n\n",
"msg_date": "Tue, 5 Jan 2010 18:38:23 +0100",
"msg_from": "=?iso-8859-2?Q?Keresztury_Bal=E1zs?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: forced sequential scan when condition has current_user"
}
] |
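The root cause in this thread is a cross-type comparison: felhasznalo_id is varchar, current_user returns type name, and the resulting ((felhasznalo_id)::name = "current_user"()) filter cannot use the varchar index. A short sketch of the before/after check, assuming a simplified stand-in for the poster's table:

CREATE TABLE projektszervezet_demo (
    kotesszam      varchar(10) NOT NULL,
    felhasznalo_id varchar     NOT NULL
);
CREATE INDEX projektszervezet_demo_felhasznalo_idx
    ON projektszervezet_demo (felhasznalo_id);

-- The column gets cast to name, so the btree index on the varchar
-- column is not usable for this condition:
EXPLAIN SELECT kotesszam FROM projektszervezet_demo
 WHERE felhasznalo_id = current_user;

-- Casting the function result keeps the comparison in varchar/text
-- territory, so an index scan becomes possible (given enough rows and
-- up-to-date statistics):
EXPLAIN SELECT kotesszam FROM projektszervezet_demo
 WHERE felhasznalo_id = current_user::text;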
[
{
"msg_contents": "On 01/04/2010 04:53 PM, Robert Haas [[email protected]] wrote:\n> PREPARE foo AS <the query, with the $x entries still in there>\n> EXPLAIN EXECUTE foo(<the values>);\n\nThanks for the response. Results below. Brian\n\ncemdb=> prepare foo as select count(distinct b.ts_id) from \nts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c, \nts_transetgroup_transets_map m where b.ts_transet_group_id = \nm.ts_transet_group_id and m.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and c.ts_user_incarnation_id = \nb.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and \nc.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and \nb.ts_interval_start_time < $4;\nPREPARE\n\ncemdb=> explain execute foo('2010-01-03 00:00','2010-01-03 \n08:00','2009-12-28 00:00','2010-01-04 00:00');\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=325382.51..325382.51 rows=1 width=8)\n -> Hash Join (cost=3486.00..325382.00 rows=406 width=8)\n Hash Cond: ((b.ts_transet_group_id = m.ts_transet_group_id) \nAND (c.ts_transet_incarnation_id = m.ts_transet_incarnation_id))\n -> Hash Join (cost=3484.17..325370.84 rows=1944 width=24)\n Hash Cond: (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id)\n -> Bitmap Heap Scan on ts_stats_transet_user_interval c \n (cost=2177.34..322486.61 rows=96473 width=16)\n Recheck Cond: ((ts_interval_start_time >= $1) AND \n(ts_interval_start_time < $2))\n -> Bitmap Index Scan on \nts_stats_transet_user_interval_starttime (cost=0.00..2165.28 rows=96473 \nwidth=0)\n Index Cond: ((ts_interval_start_time >= $1) \nAND (ts_interval_start_time < $2))\n -> Hash (cost=1301.21..1301.21 rows=898 width=24)\n -> Index Scan using \nts_stats_transetgroup_user_weekly_starttimeindex on \nts_stats_transetgroup_user_weekly b (cost=0.00..1301.21 rows=898 width=24)\n Index Cond: ((ts_interval_start_time >= $3) \nAND (ts_interval_start_time < $4))\n -> Hash (cost=1.33..1.33 rows=67 width=16)\n -> Seq Scan on ts_transetgroup_transets_map m \n(cost=0.00..1.33 rows=67 width=16)\n(14 rows)\n",
"msg_date": "Mon, 04 Jan 2010 14:24:47 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query looping?"
},
{
"msg_contents": "On Mon, Jan 4, 2010 at 5:24 PM, Brian Cox <[email protected]> wrote:\n> On 01/04/2010 04:53 PM, Robert Haas [[email protected]] wrote:\n>>\n>> PREPARE foo AS <the query, with the $x entries still in there>\n>> EXPLAIN EXECUTE foo(<the values>);\n>\n> Thanks for the response. Results below. Brian\n>\n> cemdb=> prepare foo as select count(distinct b.ts_id) from\n> ts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c,\n> ts_transetgroup_transets_map m where b.ts_transet_group_id =\n> m.ts_transet_group_id and m.ts_transet_incarnation_id =\n> c.ts_transet_incarnation_id and c.ts_user_incarnation_id =\n> b.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and\n> c.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and\n> b.ts_interval_start_time < $4;\n> PREPARE\n>\n> cemdb=> explain execute foo('2010-01-03 00:00','2010-01-03\n> 08:00','2009-12-28 00:00','2010-01-04 00:00');\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=325382.51..325382.51 rows=1 width=8)\n> -> Hash Join (cost=3486.00..325382.00 rows=406 width=8)\n> Hash Cond: ((b.ts_transet_group_id = m.ts_transet_group_id) AND\n> (c.ts_transet_incarnation_id = m.ts_transet_incarnation_id))\n> -> Hash Join (cost=3484.17..325370.84 rows=1944 width=24)\n> Hash Cond: (c.ts_user_incarnation_id =\n> b.ts_user_incarnation_id)\n> -> Bitmap Heap Scan on ts_stats_transet_user_interval c\n> (cost=2177.34..322486.61 rows=96473 width=16)\n> Recheck Cond: ((ts_interval_start_time >= $1) AND\n> (ts_interval_start_time < $2))\n> -> Bitmap Index Scan on\n> ts_stats_transet_user_interval_starttime (cost=0.00..2165.28 rows=96473\n> width=0)\n> Index Cond: ((ts_interval_start_time >= $1) AND\n> (ts_interval_start_time < $2))\n> -> Hash (cost=1301.21..1301.21 rows=898 width=24)\n> -> Index Scan using\n> ts_stats_transetgroup_user_weekly_starttimeindex on\n> ts_stats_transetgroup_user_weekly b (cost=0.00..1301.21 rows=898 width=24)\n> Index Cond: ((ts_interval_start_time >= $3) AND\n> (ts_interval_start_time < $4))\n> -> Hash (cost=1.33..1.33 rows=67 width=16)\n> -> Seq Scan on ts_transetgroup_transets_map m\n> (cost=0.00..1.33 rows=67 width=16)\n> (14 rows)\n\nHmm. Looks like the same plan.\n\nIt's not obvious to me what is wrong. Maybe it would make sense to\nstart by checking the row count estimates for the different rows in\nthis plan. For example:\n\nSELECT SUM(1) FROM ts_stats_transetgroup_user_weekly b WHERE\nts_interval_start_time > [value] AND ts_interval_start_time < [value];\n\n...and similarly for the bitmap index scan.\n\n...Robert\n",
"msg_date": "Tue, 5 Jan 2010 10:03:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query looping?"
}
] |
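A concrete form of the check Robert Haas suggests above: compare the planner's row estimates against actual counts for each range predicate. These statements use the table and column names from the thread and the same date windows; count(*) is equivalent to the suggested SUM(1):

-- Weekly table window (estimated 898 rows in the prepared plan,
-- 89590 with literal values):
SELECT count(*)
  FROM ts_stats_transetgroup_user_weekly
 WHERE ts_interval_start_time >= '2009-12-28 00:00'
   AND ts_interval_start_time <  '2010-01-04 00:00';

-- Interval table window (estimated 96473 rows in the prepared plan,
-- 844436 with literal values, 800000 actual in the earlier EXPLAIN ANALYZE):
SELECT count(*)
  FROM ts_stats_transet_user_interval
 WHERE ts_interval_start_time >= '2010-01-03 00:00'
   AND ts_interval_start_time <  '2010-01-03 08:00';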
[
{
"msg_contents": "Hi everybody,\n\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n\nThe web application server which runs Apache 1.3/PHP2.9 has an intermittent\nproblem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\nThe long request happens at approximate rate 1:100.\n\nI turned on logs on postgres server side, and there is\nnothing suspicious for me there. When a connection request comes, it is\nbeing served without any delay.\n\nCould anyone point me to the direction in which I should investigate this\nproblem further?\nThank you in advance!\n\n\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\nThe database & web servers are in the 2 local subnets.\n\n\nDmitri.\n\nHi everybody,\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.The web application server which runs Apache 1.3/PHP2.9 has an intermittent problem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.The long request happens at approximate rate 1:100.\nI turned on logs on postgres server side, and there is nothing suspicious for me there. When a connection request comes, it is being served without any delay. \nCould anyone point me to the direction in which I should investigate this problem further?Thank you in advance!\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.The database & web servers are in the 2 local subnets. \nDmitri.",
"msg_date": "Tue, 5 Jan 2010 13:12:53 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Dmitri Girski wrote:\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an \n> intermittent problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n\nFirst thing to check for intermittent multi-second delays is whether a \ncheckpoint is happening at that time. See \nhttp://wiki.postgresql.org/wiki/Logging_Checkpoints for an intro, you'd \nwant to see if the checkpoints are around the same time as the delays \neach time. The default configuration makes checkpoints happen all the \ntime if there's any significant write traffic on your database.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nDmitri Girski wrote:\n\nI am running a PostgreSQL\nserver 8.3.5 with a pretty much standard config.\n\n\nThe web application server\nwhich runs Apache 1.3/PHP2.9 has an intermittent problem:\npg_connect takes exactly\n3.0 seconds. The usual connection time is 0.0045.\nThe long request happens at\napproximate rate 1:100.\n\n\nFirst thing to check for intermittent multi-second delays is whether a\ncheckpoint is happening at that time. See\nhttp://wiki.postgresql.org/wiki/Logging_Checkpoints for an intro, you'd\nwant to see if the checkpoints are around the same time as the delays\neach time. The default configuration makes checkpoints happen all the\ntime if there's any significant write traffic on your database.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Mon, 04 Jan 2010 22:01:54 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
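A small sketch of how the checkpoint check suggested above can be wired up on 8.3; the settings are assumptions about a typical setup and belong in postgresql.conf, followed by a reload, while SHOW only confirms what the running server uses:

-- In postgresql.conf:
--   log_checkpoints    = on
--   log_connections    = on
--   log_disconnections = on

-- From any session, verify the active values:
SHOW log_checkpoints;
SHOW log_connections;
SHOW log_disconnections;

With these on, checkpoint log lines can be matched by timestamp against the 3-second connection attempts recorded on the web server.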
{
"msg_contents": "Dmitri Girski <[email protected]> writes:\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n\n> The web application server which runs Apache 1.3/PHP2.9 has an intermittent\n> problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n\nSounds a lot like a dropped-packets problem. The exact timing would be\nexplained if that is the retransmit timeout in your client-side TCP\nstack. If that's what it is, you need some network engineers, not us\ndatabase geeks ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Jan 2010 00:14:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds "
},
{
"msg_contents": "On 1/4/2010 8:12 PM, Dmitri Girski wrote:\n> Hi everybody,\n>\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n>\n> I turned on logs on postgres server side, and there is\n> nothing suspicious for me there. When a connection request comes, it is\n> being served without any delay.\n>\n> Could anyone point me to the direction in which I should investigate\n> this problem further?\n> Thank you in advance!\n>\n>\n> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> The database & web servers are in the 2 local subnets.\n>\n>\n> Dmitri.\n>\n\nHow do you have the connect string? With an IP or a name? Maybe its a \nDNS lookup timeout? You could switch to IP or drop the name in the \nhosts file and see if that makes a difference.\n\n-Andy\n",
"msg_date": "Tue, 05 Jan 2010 09:03:12 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Delays that are almost exactly 3 seconds over a network are almost always\nsome sort of network configuration issue.\n\nInside a datacenter, mis-configured load balancers or routers can cause low\nlevel network issues that result in intermittent network delays of exactly 3\nseconds (a loop in a routing network?).\nDNS timeouts are often 3 seconds.\n\nNot sure if any of the above is it, but this sounds like a network\nconfiguration problem to me.\n\nOn 1/4/10 6:12 PM, \"Dmitri Girski\" <[email protected]> wrote:\n\n> Hi everybody,\n> \n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n> \n> The web application server which runs Apache 1.3/PHP2.9 has an intermittent\n> problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n> \n> I turned on logs on postgres server side, and there is nothing suspicious for\n> me there. When a connection request comes, it is being served without any\n> delay. \n> \n> Could anyone point me to the direction in which I should investigate this\n> problem further?\n> Thank you in advance!\n> \n> \n> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> The database & web servers are in the 2 local subnets. \n> \n> \n> Dmitri.\n> \n> \n\n",
"msg_date": "Tue, 5 Jan 2010 14:32:12 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Hi Andy,\n\nI tried 2 connections strings:\n- server name (DB1), which is listed in all machines hosts files.\n- ip address.\n\nThere is no difference in both methods, still I have 5-7 pg_connects which\nlast around 3 seconds.\n\nCheers,\nDmitri.\n\nOn Wed, Jan 6, 2010 at 2:03 AM, Andy Colson <[email protected]> wrote:\n\n> On 1/4/2010 8:12 PM, Dmitri Girski wrote:\n>\n>> Hi everybody,\n>>\n>> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>>\n>> The web application server which runs Apache 1.3/PHP2.9 has an\n>> intermittent problem:\n>> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n>> The long request happens at approximate rate 1:100.\n>>\n>> I turned on logs on postgres server side, and there is\n>> nothing suspicious for me there. When a connection request comes, it is\n>> being served without any delay.\n>>\n>> Could anyone point me to the direction in which I should investigate\n>> this problem further?\n>> Thank you in advance!\n>>\n>>\n>> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n>> The database & web servers are in the 2 local subnets.\n>>\n>>\n>> Dmitri.\n>>\n>>\n> How do you have the connect string? With an IP or a name? Maybe its a DNS\n> lookup timeout? You could switch to IP or drop the name in the hosts file\n> and see if that makes a difference.\n>\n> -Andy\n>\n\n\n\n-- \n@Gmail\n\nHi Andy,I tried 2 connections strings:- server name (DB1), which is listed in all machines hosts files.\n- ip address.There is no difference in both methods, still I have 5-7 pg_connects which last around 3 seconds.\nCheers,Dmitri.On Wed, Jan 6, 2010 at 2:03 AM, Andy Colson <[email protected]> wrote:\nOn 1/4/2010 8:12 PM, Dmitri Girski wrote:\n\nHi everybody,\n\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n\nThe web application server which runs Apache 1.3/PHP2.9 has an\nintermittent problem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\nThe long request happens at approximate rate 1:100.\n\nI turned on logs on postgres server side, and there is\nnothing suspicious for me there. When a connection request comes, it is\nbeing served without any delay.\n\nCould anyone point me to the direction in which I should investigate\nthis problem further?\nThank you in advance!\n\n\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\nThe database & web servers are in the 2 local subnets.\n\n\nDmitri.\n\n\n\nHow do you have the connect string? With an IP or a name? Maybe its a DNS lookup timeout? You could switch to IP or drop the name in the hosts file and see if that makes a difference.\n\n-Andy\n-- @Gmail",
"msg_date": "Wed, 6 Jan 2010 14:25:00 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Hi Scott,\n\nThank you pointers, I've spoken to the network guy, he will help to monitor\nconnections on the firewall.\nOn the other hand, if I use ip addresses this should not attract any\npossible issues with DNS, right?\n\nThanks!\n\nDmitri.\n\nOn Wed, Jan 6, 2010 at 9:32 AM, Scott Carey <[email protected]> wrote:\n\n> Delays that are almost exactly 3 seconds over a network are almost always\n> some sort of network configuration issue.\n>\n> Inside a datacenter, mis-configured load balancers or routers can cause low\n> level network issues that result in intermittent network delays of exactly\n> 3\n> seconds (a loop in a routing network?).\n> DNS timeouts are often 3 seconds.\n>\n> Not sure if any of the above is it, but this sounds like a network\n> configuration problem to me.\n>\n> On 1/4/10 6:12 PM, \"Dmitri Girski\" <[email protected]> wrote:\n>\n> > Hi everybody,\n> >\n> > I am running a PostgreSQL server 8.3.5 with a pretty much standard\n> config.\n> >\n> > The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent\n> > problem:\n> > pg_connect takes exactly 3.0 seconds. The usual connection time is\n> 0.0045.\n> > The long request happens at approximate rate 1:100.\n> >\n> > I turned on logs on postgres server side, and there is\n> nothing suspicious for\n> > me there. When a connection request comes, it is being served without any\n> > delay.\n> >\n> > Could anyone point me to the direction in which I should investigate this\n> > problem further?\n> > Thank you in advance!\n> >\n> >\n> > PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> > The database & web servers are in the 2 local subnets.\n> >\n> >\n> > Dmitri.\n> >\n> >\n>\n>\n\n\n-- \n@Gmail\n\nHi Scott,Thank you pointers, I've spoken to the network guy, he will help to monitor connections on the firewall. \nOn the other hand, if I use ip addresses this should not attract any possible issues with DNS, right?Thanks!\nDmitri.On Wed, Jan 6, 2010 at 9:32 AM, Scott Carey <[email protected]> wrote:\nDelays that are almost exactly 3 seconds over a network are almost always\nsome sort of network configuration issue.\n\nInside a datacenter, mis-configured load balancers or routers can cause low\nlevel network issues that result in intermittent network delays of exactly 3\nseconds (a loop in a routing network?).\nDNS timeouts are often 3 seconds.\n\nNot sure if any of the above is it, but this sounds like a network\nconfiguration problem to me.\n\nOn 1/4/10 6:12 PM, \"Dmitri Girski\" <[email protected]> wrote:\n\n> Hi everybody,\n>\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an intermittent\n> problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n>\n> I turned on logs on postgres server side, and there is nothing suspicious for\n> me there. When a connection request comes, it is being served without any\n> delay. \n>\n> Could anyone point me to the direction in which I should investigate this\n> problem further?\n> Thank you in advance!\n>\n>\n> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> The database & web servers are in the 2 local subnets. \n>\n>\n> Dmitri.\n>\n>\n\n-- @Gmail",
"msg_date": "Wed, 6 Jan 2010 14:28:17 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Thank you for reply , Andy!\n\nI tried both cases: server name which is listed in hosts file and ip address\n( 192.168.2.2) - no difference so far.\n\nCheers,\nDmitri.\n\nOn Wed, Jan 6, 2010 at 2:03 AM, Andy Colson <[email protected]> wrote:\n\n> On 1/4/2010 8:12 PM, Dmitri Girski wrote:\n>\n>> Hi everybody,\n>>\n>> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>>\n>> The web application server which runs Apache 1.3/PHP2.9 has an\n>> intermittent problem:\n>> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n>> The long request happens at approximate rate 1:100.\n>>\n>> I turned on logs on postgres server side, and there is\n>> nothing suspicious for me there. When a connection request comes, it is\n>> being served without any delay.\n>>\n>> Could anyone point me to the direction in which I should investigate\n>> this problem further?\n>> Thank you in advance!\n>>\n>>\n>> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n>> The database & web servers are in the 2 local subnets.\n>>\n>>\n>> Dmitri.\n>>\n>>\n> How do you have the connect string? With an IP or a name? Maybe its a DNS\n> lookup timeout? You could switch to IP or drop the name in the hosts file\n> and see if that makes a difference.\n>\n> -Andy\n>\n\n\n\n-- \n@Gmail\n\nThank you for reply , Andy!I tried both cases: server name which is listed in hosts file and ip address ( 192.168.2.2) - no difference so far.\nCheers,Dmitri.On Wed, Jan 6, 2010 at 2:03 AM, Andy Colson <[email protected]> wrote:\nOn 1/4/2010 8:12 PM, Dmitri Girski wrote:\n\nHi everybody,\n\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n\nThe web application server which runs Apache 1.3/PHP2.9 has an\nintermittent problem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\nThe long request happens at approximate rate 1:100.\n\nI turned on logs on postgres server side, and there is\nnothing suspicious for me there. When a connection request comes, it is\nbeing served without any delay.\n\nCould anyone point me to the direction in which I should investigate\nthis problem further?\nThank you in advance!\n\n\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\nThe database & web servers are in the 2 local subnets.\n\n\nDmitri.\n\n\n\nHow do you have the connect string? With an IP or a name? Maybe its a DNS lookup timeout? You could switch to IP or drop the name in the hosts file and see if that makes a difference.\n\n-Andy\n-- @Gmail",
"msg_date": "Wed, 6 Jan 2010 14:29:28 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Hi Tom,\n\nThe timing is around 3.0 seconds\nTime=3.0037\nTime=3.4038\nTime=3.0038\nTime=3.004\nTime=3.2037\nTime=3.0039\nTime=3.0034\nTime=3.0034\nTime=3.2039\nTime=3.0044\nTime=3.8044\nTime=3.2034\n\nI don't think that it could relate to DNS problem as I tried 2 methods which\ndoes not use name resolution ( hosts file & ip address)\nI will definitely seek the help from network geeks and I will check all TCP\nstack settings.\n\nThank you!\n\nCheers,\nDmitri.\n\n\nOn Tue, Jan 5, 2010 at 4:14 PM, Tom Lane <[email protected]> wrote:\n\n> Dmitri Girski <[email protected]> writes:\n> > I am running a PostgreSQL server 8.3.5 with a pretty much standard\n> config.\n>\n> > The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent\n> > problem:\n> > pg_connect takes exactly 3.0 seconds. The usual connection time is\n> 0.0045.\n> > The long request happens at approximate rate 1:100.\n>\n> Sounds a lot like a dropped-packets problem. The exact timing would be\n> explained if that is the retransmit timeout in your client-side TCP\n> stack. If that's what it is, you need some network engineers, not us\n> database geeks ...\n>\n> regards, tom lane\n>\n\n\n\n-- \n@Gmail\n\nHi Tom,The timing is around 3.0 seconds\nTime=3.0037 Time=3.4038 Time=3.0038 Time=3.004 Time=3.2037 \nTime=3.0039 Time=3.0034 Time=3.0034 Time=3.2039 Time=3.0044 \nTime=3.8044 Time=3.2034 I don't think that it could relate to DNS problem as I tried 2 methods which does not use name resolution ( hosts file & ip address) \nI will definitely seek the help from network geeks and I will check all TCP stack settings. Thank you!Cheers,Dmitri.\n\nOn Tue, Jan 5, 2010 at 4:14 PM, Tom Lane <[email protected]> wrote:\nDmitri Girski <[email protected]> writes:\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n\n> The web application server which runs Apache 1.3/PHP2.9 has an intermittent\n> problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n\nSounds a lot like a dropped-packets problem. The exact timing would be\nexplained if that is the retransmit timeout in your client-side TCP\nstack. If that's what it is, you need some network engineers, not us\ndatabase geeks ...\n\n regards, tom lane\n-- @Gmail",
"msg_date": "Wed, 6 Jan 2010 14:49:27 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Hi Greg,\n\nThank you for idea, reading about checkpints & tuning was very useful.\n\nI had a checkpoints logging turned on. I studied a couple of days logs and I\nthere is no clear dependency on checkpoint write. Sometimes it is within a\nvicinity of 3 seconds CONNECT, sometimes well off it.\n\nAlso the postgres log file does not show any long operations, which inclines\nme to think that this is a network connectivity/apache/php issue rather than\npostgres.\n\n\nHere is the excerpts from logs:\n\nLog file from WWW server\n===========================================================\n[06-01-10 14:58:16] UserId=15 Time=3.0032 Req=DB CONNECT\n===========================================================\n\nLog file from DB server:\n===========================================================\n[2010-01-06 14:58:13 EST] idleLOG: 00000: disconnection: session time:\n0:00:00.027 user=pri_user database=data host=192.168.1.10 port=50087\n[2010-01-06 14:58:13 EST] idleLOCATION: log_disconnections, postgres.c:3982\n\n[2010-01-06 14:58:18 EST] /usr/lib64/postgresql-8.3/bin/postgresLOG: 00000:\nconnection received: host=192.168.1.10 port=52425\n[2010-01-06 14:58:18 EST] /usr/lib64/postgresql-8.3/bin/postgresLOCATION:\n BackendInitialize, postmaster.c:3027\n[2010-01-06 14:58:18 EST] authenticationLOG: 00000: connection authorized:\nuser=pri_user database=data\n[2010-01-06 14:58:18 EST] authenticationLOCATION: BackendInitialize,\npostmaster.c:3097\n\n[2010-01-06 14:58:18 EST] idleLOG: 00000: statement: SELECT\n\"fIsLoggedIn\"(15)\n[2010-01-06 14:58:18 EST] idleLOCATION: exec_simple_query, postgres.c:845\n[2010-01-06 14:58:18 EST] SELECTLOG: 00000: duration: 39.233 ms\n[2010-01-06 14:58:18 EST] SELECTLOCATION: exec_simple_query,\npostgres.c:1056\n[2010-01-06 14:58:18 EST] idleLOG: 00000: statement: START TRANSACTION\n[2010-01-06 14:58:18 EST] idleLOCATION: exec_simple_query, postgres.c:845\n[2010-01-06 14:58:18 EST] START TRANSACTIONLOG: 00000: duration: 0.050 ms\n===========================================================\n\nCheers,\nDmitri.\n\n\nOn Tue, Jan 5, 2010 at 2:01 PM, Greg Smith <[email protected]> wrote:\n\n> Dmitri Girski wrote:\n>\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n>\n>\n> First thing to check for intermittent multi-second delays is whether a\n> checkpoint is happening at that time. See\n> http://wiki.postgresql.org/wiki/Logging_Checkpoints for an intro, you'd\n> want to see if the checkpoints are around the same time as the delays each\n> time. The default configuration makes checkpoints happen all the time if\n> there's any significant write traffic on your database.\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and [email protected] www.2ndQuadrant.com\n>\n>\n\n\n-- \n@Gmail\n\nHi Greg,Thank you for idea, reading about checkpints & tuning was very useful.\nI had a checkpoints logging turned on. I studied a couple of days logs and I there is no clear dependency on checkpoint write. 
Sometimes it is within a vicinity of 3 seconds CONNECT, sometimes well off it.\nAlso the postgres log file does not show any long operations, which inclines me to think that this is a network connectivity/apache/php issue rather than postgres.\nHere is the excerpts from logs:\nLog file from WWW server\n\n===========================================================[06-01-10 14:58:16] UserId=15 Time=3.0032 Req=DB CONNECT===========================================================\nLog file from DB server:===========================================================\n[2010-01-06 14:58:13 EST] idleLOG: 00000: disconnection: session time: 0:00:00.027 user=pri_user database=data host=192.168.1.10 port=50087\n[2010-01-06 14:58:13 EST] idleLOCATION: log_disconnections, postgres.c:3982[2010-01-06 14:58:18 EST] /usr/lib64/postgresql-8.3/bin/postgresLOG: 00000: connection received: host=192.168.1.10 port=52425\n[2010-01-06 14:58:18 EST] /usr/lib64/postgresql-8.3/bin/postgresLOCATION: BackendInitialize, postmaster.c:3027[2010-01-06 14:58:18 EST] authenticationLOG: 00000: connection authorized: user=pri_user database=data\n[2010-01-06 14:58:18 EST] authenticationLOCATION: BackendInitialize, postmaster.c:3097[2010-01-06 14:58:18 EST] idleLOG: 00000: statement: SELECT \"fIsLoggedIn\"(15)\n[2010-01-06 14:58:18 EST] idleLOCATION: exec_simple_query, postgres.c:845[2010-01-06 14:58:18 EST] SELECTLOG: 00000: duration: 39.233 ms[2010-01-06 14:58:18 EST] SELECTLOCATION: exec_simple_query, postgres.c:1056\n[2010-01-06 14:58:18 EST] idleLOG: 00000: statement: START TRANSACTION[2010-01-06 14:58:18 EST] idleLOCATION: exec_simple_query, postgres.c:845[2010-01-06 14:58:18 EST] START TRANSACTIONLOG: 00000: duration: 0.050 ms\n===========================================================Cheers,\nDmitri.On Tue, Jan 5, 2010 at 2:01 PM, Greg Smith <[email protected]> wrote:\n\n\nDmitri Girski wrote:\n\nI am running a PostgreSQL\nserver 8.3.5 with a pretty much standard config.\n\n\nThe web application server\nwhich runs Apache 1.3/PHP2.9 has an intermittent problem:\npg_connect takes exactly\n3.0 seconds. The usual connection time is 0.0045.\nThe long request happens at\napproximate rate 1:100.\n\n\nFirst thing to check for intermittent multi-second delays is whether a\ncheckpoint is happening at that time. See\nhttp://wiki.postgresql.org/wiki/Logging_Checkpoints for an intro, you'd\nwant to see if the checkpoints are around the same time as the delays\neach time. The default configuration makes checkpoints happen all the\ntime if there's any significant write traffic on your database.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n-- @Gmail",
"msg_date": "Wed, 6 Jan 2010 15:26:30 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "On Tue, Jan 5, 2010 at 8:49 PM, Dmitri Girski <[email protected]> wrote:\n> Hi Tom,\n> The timing is around 3.0 seconds\n> Time=3.0037\n> Time=3.4038\n> Time=3.0038\n> Time=3.004\n> Time=3.2037\n> Time=3.0039\n> Time=3.0034\n> Time=3.0034\n> Time=3.2039\n> Time=3.0044\n> Time=3.8044\n> Time=3.2034\n>\n> I don't think that it could relate to DNS problem as I tried 2 methods which\n> does not use name resolution ( hosts file & ip address)\n> I will definitely seek the help from network geeks and I will check all TCP\n> stack settings.\n\nHave you looked at the various logs inn /var/log on both machines?\n",
"msg_date": "Tue, 5 Jan 2010 21:39:40 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Dmitri Girski wrote:\n> Hi Andy,\n> \n> I tried 2 connections strings:\n> - server name (DB1), which is listed in all machines hosts files.\n> - ip address.\n> \n> There is no difference in both methods, still I have 5-7 pg_connects which\n> last around 3 seconds.\n\nDon't rule out reverse DNS issues (such as a negative cache entry\nexpiring and being re-checked), packet loss, nss issues on the server\nrelated to NIS, LDAP etc user directories, and so on.\n\nWireshark is your friend.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 06 Jan 2010 12:50:47 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "On Wed, 6 Jan 2010, Dmitri Girski wrote:\n> On the other hand, if I use ip addresses this should not attract any possible issues with\n> DNS, right?\n\nNot true. It is likely that the server program you are connecting to will \nperform a reverse DNS lookup to work out who the client is, for logging or \nauthentication purposes.\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n",
"msg_date": "Wed, 6 Jan 2010 10:45:20 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "On Tue, Jan 5, 2010 at 11:50 PM, Craig Ringer\n<[email protected]> wrote:\n> Wireshark is your friend.\n\n+1. I think if you put a packet sniffer on the interface you are\nconnecting from it will become clear what the problem is in short\norder.\n\n...Robert\n",
"msg_date": "Wed, 6 Jan 2010 11:18:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "The fact that the delays are clustered at (3 + 0.2 n) seconds, rather than a\ndistributed range, strongly indicates a timeout and not (directly) a\nresource issue.\n\n3 seconds is too fast for a timeout on almost any DNS operation, unless it\nhas been modified, so I'd suspect it's the TCP layer, e.g. perhaps the SYN\npacket goes awol and it has to retry.\n\nI'd second the vote for investigation with a packet sniffing tool\n(Wireshark, tcpdump, etc)\n\nCheers\nDave\n\nOn Mon, Jan 4, 2010 at 8:12 PM, Dmitri Girski <[email protected]> wrote:\n\n> Hi everybody,\n>\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n>\n> I turned on logs on postgres server side, and there is\n> nothing suspicious for me there. When a connection request comes, it is\n> being served without any delay.\n>\n> Could anyone point me to the direction in which I should investigate this\n> problem further?\n> Thank you in advance!\n>\n>\n> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> The database & web servers are in the 2 local subnets.\n>\n>\n> Dmitri.\n>\n>\n\nThe fact that the delays are clustered at (3 + 0.2 n) seconds, rather than a distributed range, strongly indicates a timeout and not (directly) a resource issue.3 seconds is too fast for a timeout on almost any DNS operation, unless it has been modified, so I'd suspect it's the TCP layer, e.g. perhaps the SYN packet goes awol and it has to retry.\nI'd second the vote for investigation with a packet sniffing tool (Wireshark, tcpdump, etc)CheersDaveOn Mon, Jan 4, 2010 at 8:12 PM, Dmitri Girski <[email protected]> wrote:\nHi everybody,\n\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.The web application server which runs Apache 1.3/PHP2.9 has an intermittent problem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.The long request happens at approximate rate 1:100.\nI turned on logs on postgres server side, and there is nothing suspicious for me there. When a connection request comes, it is being served without any delay. \nCould anyone point me to the direction in which I should investigate this problem further?Thank you in advance!\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.The database & web servers are in the 2 local subnets. \nDmitri.",
"msg_date": "Wed, 6 Jan 2010 11:04:03 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Dave Crooke wrote:\n> The fact that the delays are clustered at (3 + 0.2 n) seconds, rather \n> than a distributed range, strongly indicates a timeout and not \n> (directly) a resource issue.\n> \n> 3 seconds is too fast for a timeout on almost any DNS operation, unless \n> it has been modified, so I'd suspect it's the TCP layer, e.g. perhaps \n> the SYN packet goes awol and it has to retry.\n> \n> I'd second the vote for investigation with a packet sniffing tool \n> (Wireshark, tcpdump, etc)\n\nIf you have a PC (Windows), pingplotter is a remarkable and simple tool to use that quickly identifies problems, and gives results that are convincing when you show them to your network admin. Wireshark and tcpdump have a pretty steep learning curve and are overkill if your problem is simple.\n\nCraig\n",
"msg_date": "Wed, 06 Jan 2010 10:31:12 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Hi everybody,\n\nMany thanks to everyone replied, I think we are on the right way.\nI've used tcpdump to generate the logs and there are a lot of dropped\npackets due to the bad checksum. Network guy is currently looking at the\nproblem and most likely this is hardware issue.\n\nCheers,\nDmitri.\n\nOn Tue, Jan 5, 2010 at 1:12 PM, Dmitri Girski <[email protected]> wrote:\n\n> Hi everybody,\n>\n> I am running a PostgreSQL server 8.3.5 with a pretty much standard config.\n>\n> The web application server which runs Apache 1.3/PHP2.9 has an\n> intermittent problem:\n> pg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.\n> The long request happens at approximate rate 1:100.\n>\n> I turned on logs on postgres server side, and there is\n> nothing suspicious for me there. When a connection request comes, it is\n> being served without any delay.\n>\n> Could anyone point me to the direction in which I should investigate this\n> problem further?\n> Thank you in advance!\n>\n>\n> PS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.\n> The database & web servers are in the 2 local subnets.\n>\n>\n> Dmitri.\n>\n>\n\n\n-- \n@Gmail\n\nHi everybody,Many thanks to everyone replied, I think we are on the right way. \nI've used tcpdump to generate the logs and there are a lot of dropped packets due to the bad checksum. Network guy is currently looking at the problem and most likely this is hardware issue.\nCheers,Dmitri.On Tue, Jan 5, 2010 at 1:12 PM, Dmitri Girski <[email protected]> wrote:\nHi everybody,\n\nI am running a PostgreSQL server 8.3.5 with a pretty much standard config.The web application server which runs Apache 1.3/PHP2.9 has an intermittent problem:\npg_connect takes exactly 3.0 seconds. The usual connection time is 0.0045.The long request happens at approximate rate 1:100.\nI turned on logs on postgres server side, and there is nothing suspicious for me there. When a connection request comes, it is being served without any delay. \nCould anyone point me to the direction in which I should investigate this problem further?Thank you in advance!\nPS The hardware is: Dell SC1435/4Gb/2x2.0GHz/Gentoo Linux.The database & web servers are in the 2 local subnets. \nDmitri.\n-- @Gmail",
"msg_date": "Thu, 7 Jan 2010 13:44:08 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "On Wed, Jan 6, 2010 at 7:44 PM, Dmitri Girski <[email protected]> wrote:\n> Hi everybody,\n> Many thanks to everyone replied, I think we are on the right way.\n> I've used tcpdump to generate the logs and there are a lot of dropped\n> packets due to the bad checksum. Network guy is currently looking at the\n> problem and most likely this is hardware issue.\n\n95% of these problems are a bad NIC or a bad cable. Since cables are\neasy to change I'd try those first, then NICs. Since lots of servers\nhave dual nics that's a pretty easy change too.\n\n Every now and then it's a bad switch / router.\n",
"msg_date": "Wed, 6 Jan 2010 20:34:56 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "On 7/01/2010 10:44 AM, Dmitri Girski wrote:\n> Hi everybody,\n>\n> Many thanks to everyone replied, I think we are on the right way.\n> I've used tcpdump to generate the logs and there are a lot of dropped\n> packets due to the bad checksum. Network guy is currently looking at the\n> problem and most likely this is hardware issue.\n\nHang on a sec. You need to ignore bad checksums on *outbound* packets, \nbecause many (most?) Ethernet drivers implement some level of TCP \noffloading, and this will result in packet sniffers seeing invalid \nchecksums for transmitted packets - the checksums haven't been generated \nby the NIC yet.\n\nUnless you know for sure that your NIC doesn't do TSO, ignore bad \nchecksums on outbound packets from the local interface.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 07 Jan 2010 15:40:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Oops, I meant to mention this too .... virtually all GigE and/or server\nclass NICs do TCP checksum offload.\n\nDimitri - it's unlikely that you have a hardware issue on the NIC, it's more\nlikely to be a cable problem or network congestion. What you want to look\nfor in the tcpdump capture is things like SYN retries.\n\nA good way to test for cable issues is to use a ping flood with a large\npacket size.\n\nCheers\nDave\n\nHang on a sec. You need to ignore bad checksums on *outbound* packets,\n> because many (most?) Ethernet drivers implement some level of TCP\n> offloading, and this will result in packet sniffers seeing invalid checksums\n> for transmitted packets - the checksums haven't been generated by the NIC\n> yet.\n>\n> Unless you know for sure that your NIC doesn't do TSO, ignore bad checksums\n> on outbound packets from the local interface.\n>\n> --\n> Craig Ringer\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOops, I meant to mention this too .... virtually all GigE and/or server class NICs do TCP checksum offload.Dimitri - it's unlikely that you have a hardware issue on the NIC, it's more likely to be a cable problem or network congestion. What you want to look for in the tcpdump capture is things like SYN retries.\nA good way to test for cable issues is to use a ping flood with a large packet size.CheersDave\nHang on a sec. You need to ignore bad checksums on *outbound* packets, because many (most?) Ethernet drivers implement some level of TCP offloading, and this will result in packet sniffers seeing invalid checksums for transmitted packets - the checksums haven't been generated by the NIC yet.\n\nUnless you know for sure that your NIC doesn't do TSO, ignore bad checksums on outbound packets from the local interface.\n\n--\nCraig Ringer\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 7 Jan 2010 14:13:08 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
},
{
"msg_contents": "Thanks for advice, Dave!\n\nThis saga ended in an unexpected way: the firewall died.\nSince the replacement firewall installed I have not seen any 3 seconds\nconnects. Well, there was no real load so far, but I will keep checking.\n\n\nThanks to everyone replied, it was very helpful.\n\nCheers,\nDmitri.\n\n\n\n\nOn Fri, Jan 8, 2010 at 7:13 AM, Dave Crooke <[email protected]> wrote:\n\n> Oops, I meant to mention this too .... virtually all GigE and/or server\n> class NICs do TCP checksum offload.\n>\n> Dimitri - it's unlikely that you have a hardware issue on the NIC, it's\n> more likely to be a cable problem or network congestion. What you want to\n> look for in the tcpdump capture is things like SYN retries.\n>\n> A good way to test for cable issues is to use a ping flood with a large\n> packet size.\n>\n> Cheers\n> Dave\n>\n> Hang on a sec. You need to ignore bad checksums on *outbound* packets,\n>> because many (most?) Ethernet drivers implement some level of TCP\n>> offloading, and this will result in packet sniffers seeing invalid checksums\n>> for transmitted packets - the checksums haven't been generated by the NIC\n>> yet.\n>>\n>> Unless you know for sure that your NIC doesn't do TSO, ignore bad\n>> checksums on outbound packets from the local interface.\n>>\n>> --\n>> Craig Ringer\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n>> )\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n\n\n-- \n@Gmail\n\nThanks for advice, Dave!This saga ended in an unexpected way: the firewall died. \nSince the replacement firewall installed I have not seen any 3 seconds connects. Well, there was no real load so far, but I will keep checking.Thanks to everyone replied, it was very helpful.\nCheers,Dmitri.On Fri, Jan 8, 2010 at 7:13 AM, Dave Crooke <[email protected]> wrote:\nOops, I meant to mention this too .... virtually all GigE and/or server class NICs do TCP checksum offload.Dimitri - it's unlikely that you have a hardware issue on the NIC, it's more likely to be a cable problem or network congestion. What you want to look for in the tcpdump capture is things like SYN retries.\nA good way to test for cable issues is to use a ping flood with a large packet size.CheersDave\n\nHang on a sec. You need to ignore bad checksums on *outbound* packets, because many (most?) Ethernet drivers implement some level of TCP offloading, and this will result in packet sniffers seeing invalid checksums for transmitted packets - the checksums haven't been generated by the NIC yet.\n\nUnless you know for sure that your NIC doesn't do TSO, ignore bad checksums on outbound packets from the local interface.\n\n--\nCraig Ringer\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- @Gmail",
"msg_date": "Sun, 10 Jan 2010 03:36:50 +1100",
"msg_from": "Dmitri Girski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_connect takes 3.0 seconds"
}
] |
[
{
"msg_contents": "> SELECT SUM(1) FROM ts_stats_transetgroup_user_weekly b WHERE\n> ts_interval_start_time > [value] AND ts_interval_start_time < [value];\n>\n> ...and similarly for the bitmap index scan.\ncemdb=> SELECT SUM(1) FROM ts_stats_transetgroup_user_weekly b WHERE \nts_interval_start_time >= '2009-12-28' AND ts_interval_start_time < \n'2010-01-04';\n sum\n-------\n 89758\n(1 row)\n\ncemdb=> select sum(1) from ts_stats_transet_user_interval where \nts_interval_start_time >= '2009-01-03' and ts_interval_start_time < \n'2009-01-03 08:00';\n sum\n-----\n\n(1 row)\n\ncemdb=> select sum(1) from ts_stats_transet_user_interval where \nts_interval_start_time >= '2010-01-03' and ts_interval_start_time < \n'2010-01-03 08:00';\n sum\n--------\n 800000\n(1 row)\n\nthe estimates in the 1st query plan are OK (since they are the \"same\"). \nThe 2nd, however, look to be too low. FYI, this query finally completed, \nso it wasn't looping but the query plan is very poor:\n\n[24585-cemdb-admin-2010-01-05 10:54:49.511 PST]LOG: duration: \n124676746.863 ms execute <unnamed>: select count(distinct b.ts_id) from \nts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c, \nts_transetgroup_transets_map m where b.ts_transet_group_id = \nm.ts_transet_group_id and m.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and c.ts_user_incarnation_id = \nb.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and \nc.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and \nb.ts_interval_start_time < $4\n[24585-cemdb-admin-2010-01-05 10:54:49.511 PST]DETAIL: parameters: $1 = \n'2010-01-03 00:00:00-08', $2 = '2010-01-03 08:00:00-08', $3 = \n'2010-01-01 00:00:00-08', $4 = '2010-01-04 00:00:00-08'\n\ncompare to:\n\n[root@rdl64xeoserv01 log]# time PGPASSWORD=**** psql -U admin -d cemdb \n-c \"select count(distinct b.ts_id) from \nts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c, \nts_transetgroup_transets_map m where b.ts_transet_group_id = \nm.ts_transet_group_id and m.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and c.ts_user_incarnation_id = \nb.ts_user_incarnation_id and c.ts_interval_start_time >= '2010-01-03 \n00:00' and c.ts_interval_start_time < '2010-01-03 08:00' and \nb.ts_interval_start_time >= '2009-12-28 00:00' and \nb.ts_interval_start_time < '2010-01-04 00:00'\"\n count\n-------\n 89758\n(1 row)\n\n\nreal 0m3.804s\nuser 0m0.001s\nsys 0m0.003s\n\nso why the former ~40,000 times slower?\n\nThanks,\nBrian\n",
"msg_date": "Tue, 05 Jan 2010 13:05:58 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query looping?"
}
] |
[
{
"msg_contents": "also compare:\n\n[4258-cemdb-admin-2010-01-05 13:11:42.913 PST]LOG: duration: 6401.314 \nms statement: execute foo('2010-01-03 00:00','2010-01-03 \n08:00','2009-12-28 00:00','2010-01-04 00:00');\n[4258-cemdb-admin-2010-01-05 13:11:42.913 PST]DETAIL: prepare: prepare \nfoo as select count(distinct b.ts_id) from \nts_stats_transetgroup_user_weekly b, ts_stats_transet_user_interval c, \nts_transetgroup_transets_map m where b.ts_transet_group_id = \nm.ts_transet_group_id and m.ts_transet_incarnation_id = \nc.ts_transet_incarnation_id and c.ts_user_incarnation_id = \nb.ts_user_incarnation_id and c.ts_interval_start_time >= $1 and \nc.ts_interval_start_time < $2 and b.ts_interval_start_time >= $3 and \nb.ts_interval_start_time < $4;\n\nstill the original query is ~20,000 times slower. Here's the explain foo \noutput for the execute above:\n\ncemdb=> explain execute foo('2010-01-03 00:00','2010-01-03 \n08:00','2009-12-28 00:00','2010-01-04 00:00'); \n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=347318.10..347318.11 rows=1 width=8)\n -> Hash Join (cost=3836.14..347317.41 rows=549 width=8)\n Hash Cond: ((b.ts_transet_group_id = m.ts_transet_group_id) \nAND (c.ts_transet_incarnation_id = m.ts_transet_incarnation_id))\n -> Hash Join (cost=3834.30..347302.98 rows=2628 width=24)\n Hash Cond: (c.ts_user_incarnation_id = \nb.ts_user_incarnation_id)\n -> Bitmap Heap Scan on ts_stats_transet_user_interval c \n (cost=2199.30..343132.02 rows=103500 width=16)\n Recheck Cond: ((ts_interval_start_time >= $1) AND \n(ts_interval_start_time < $2))\n -> Bitmap Index Scan on \nts_stats_transet_user_interval_starttime (cost=0.00..2186.36 \nrows=103500 width=0)\n Index Cond: ((ts_interval_start_time >= $1) \nAND (ts_interval_start_time < $2))\n -> Hash (cost=1627.99..1627.99 rows=1122 width=24)\n -> Index Scan using \nts_stats_transetgroup_user_weekly_starttimeindex on \nts_stats_transetgroup_user_weekly b (cost=0.00..1627.99 rows=1122 width=24)\n Index Cond: ((ts_interval_start_time >= $3) \nAND (ts_interval_start_time < $4))\n -> Hash (cost=1.33..1.33 rows=67 width=16)\n -> Seq Scan on ts_transetgroup_transets_map m \n(cost=0.00..1.33 rows=67 width=16)\n(14 rows)\n\ncomparing this to the 1st explain foo output shows some minor \ndifferences in row estimates -- but nothing, I assume, that could \nexplain the huge time difference. Of course, the 1st plan may not (and \nprobably? wasn't) the plan that was used to take 124M ms.\n\nAny thoughts on how to avoid this?\n\nThanks,\nBrian\n\n",
"msg_date": "Tue, 05 Jan 2010 13:33:08 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query looping?"
},
{
"msg_contents": "On Tue, Jan 5, 2010 at 4:33 PM, Brian Cox <[email protected]> wrote:\n> comparing this to the 1st explain foo output shows some minor differences in\n> row estimates -- but nothing, I assume, that could explain the huge time\n> difference. Of course, the 1st plan may not (and probably? wasn't) the plan\n> that was used to take 124M ms.\n>\n> Any thoughts on how to avoid this?\n\nThe incorrect row estimates can only foul up the plan; they can't\ndirectly make anything slow. Comparing the two plans line by line,\nthe only difference I see is the fast plan has:\n\n-> Seq Scan on ts_stats_transetgroup_user_weekly b\n(cost=0.00..23787.37 rows=89590 width=24) (actual time=0.040..295.414\nrows=89758 loops=1)\n Filter: ((ts_interval_start_time >= '2009-12-28\n00:00:00-08'::timestamp with time zone) AND (ts_interval_start_time <\n'2010-01-04 00:00:00-08'::timestamp with time zone))\n\n...while the slow one has:\n\nIndex Scan using ts_stats_transetgroup_user_weekly_starttimeindex on\nts_stats_transetgroup_user_weekly b (cost=0.00..1301.21 rows=898\nwidth=24)\n Index Cond: ((ts_interval_start_time >=\n$3) AND (ts_interval_start_time < $4))\n\nSo it looks like using that index to fetch the data is a LOT slower\nthan just scanning the whole table. In terms of fixing this problem,\nI have two ideas:\n\n- If you don't have any queries where this index makes things faster,\nthen you can just drop the index.\n\n- If you have other queries where this index helps (even though it is\nhurting this one), then you're going to have to find a way to execute\nthe query without using bound parameters - i.e. with the actual values\nin there instead of $1 through $4. That will allow the planner to see\nthat the index scan is a loser because it will see that there are a\nlot of rows in the specified range of ts_interval_start_times.\n\n...Robert\n",
"msg_date": "Tue, 5 Jan 2010 20:34:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query looping?"
}
] |
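A note on the fix discussed in this thread: on 8.3 a statement prepared with $1..$4 is planned once, generically, so the planner cannot see how wide the ts_interval_start_time ranges really are. One way to keep the convenience of a callable routine while still letting the planner see real values is Robert's suggestion of not using bound parameters, done here via dynamic SQL. This is an untested sketch; the function name is made up, the query text is the one from the thread, and quote_literal() is used to interpolate the timestamps as literals so each call gets planned against the actual values.

CREATE OR REPLACE FUNCTION count_distinct_ts(c_from timestamptz, c_to timestamptz,
                                             b_from timestamptz, b_to timestamptz)
RETURNS bigint AS $$
DECLARE
    result bigint;
BEGIN
    -- Build the statement with literal timestamps instead of $1..$4 placeholders,
    -- so the planner can estimate the selectivity of both date ranges.
    EXECUTE 'select count(distinct b.ts_id) from ts_stats_transetgroup_user_weekly b, '
         || 'ts_stats_transet_user_interval c, ts_transetgroup_transets_map m '
         || 'where b.ts_transet_group_id = m.ts_transet_group_id '
         || 'and m.ts_transet_incarnation_id = c.ts_transet_incarnation_id '
         || 'and c.ts_user_incarnation_id = b.ts_user_incarnation_id '
         || 'and c.ts_interval_start_time >= ' || quote_literal(c_from::text)
         || ' and c.ts_interval_start_time < ' || quote_literal(c_to::text)
         || ' and b.ts_interval_start_time >= ' || quote_literal(b_from::text)
         || ' and b.ts_interval_start_time < ' || quote_literal(b_to::text)
    INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- Example call with the values from the thread:
-- SELECT count_distinct_ts('2010-01-03 00:00','2010-01-03 08:00','2009-12-28 00:00','2010-01-04 00:00');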
[
{
"msg_contents": "Hi.\n\nI have a table that consists of somewhere in the magnitude of 100.000.000\nrows and all rows are of this tuples\n\n(id1,id2,evalue);\n\nThen I'd like to speed up a query like this:\n\nexplain analyze select id from table where id1 = 2067 or id2 = 2067 order\nby evalue asc limit 100;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1423.28..1423.28 rows=100 width=12) (actual\ntime=2.565..2.567 rows=100 loops=1)\n -> Sort (cost=1423.28..1424.54 rows=505 width=12) (actual\ntime=2.560..2.560 rows=100 loops=1)\n Sort Key: evalue\n Sort Method: top-N heapsort Memory: 25kB\n -> Bitmap Heap Scan on table (cost=16.58..1420.75 rows=505\nwidth=12) (actual time=0.709..1.752 rows=450 loops=1)\n Recheck Cond: ((id1 = 2067) OR (id2 = 2067))\n -> BitmapOr (cost=16.58..16.58 rows=506 width=0) (actual\ntime=0.676..0.676 rows=0 loops=1)\n -> Bitmap Index Scan on id1_evalue_idx\n(cost=0.00..11.44 rows=423 width=0) (actual\ntime=0.599..0.599 rows=450 loops=1)\n Index Cond: (id1_id = 2067)\n -> Bitmap Index Scan on id2_evalue_idx\n(cost=0.00..4.89 rows=83 width=0) (actual\ntime=0.070..0.070 rows=1 loops=1)\n Index Cond: (id2_id = 2067)\n Total runtime: 2.642 ms\n(12 rows)\n\n\nWhat I had expected was to see the \"Bitmap Index Scan on id1_evalue_idx\"\nto chop it off at a \"limit 1\". The inner sets are on average 3.000 for\nboth id1 and id2 and a typical limit would be 100, so if I could convince\npostgresql to not fetch all of them then I would reduce the set retrieved\nby around 60. The dataset is quite large so the random query is not very\nlikely to be hitting the same part of the dataset again, so there is going\nto be a fair amount of going to disk.,\n\nI would also mean that using it in a for loop in a stored-procedure in\nplpgsql it would not get any benefit from the CURSOR effect?\n\nI actually tried to stuff id1,id2 into an array and do a GIST index on the\narray,evalue hoping that it directly would satisfy this query.. it used\nthe GIST index fetch the rows the post-sorting and limit on the set.\n\nWhat it boils down to is more or less:\n\nDoes a \"bitmap index scan\" support ordering and limit ?\nDoes a \"multicolummn gist index\" support ordering and limit ?\n\nHave I missed anything that can hugely speed up fetching these (typically\n100 rows) from the database.\n\n-- \nJesper\n\n\n",
"msg_date": "Wed, 06 Jan 2010 20:10:34 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Digesting explain analyze"
},
{
"msg_contents": "Jesper Krogh wrote:\n> I have a table that consists of somewhere in the magnitude of 100.000.000\n> rows and all rows are of this tuples\n> \n> (id1,id2,evalue);\n> \n> Then I'd like to speed up a query like this:\n> \n> explain analyze select id from table where id1 = 2067 or id2 = 2067 order\n> by evalue asc limit 100;\n> \n> ...The inner sets are on average 3.000 for\n> both id1 and id2 and a typical limit would be 100, so if I could convince\n> postgresql to not fetch all of them then I would reduce the set retrieved\n> by around 60. The dataset is quite large so the random query is not very\n> likely to be hitting the same part of the dataset again, so there is going\n> to be a fair amount of going to disk.,\n\nIf disk seeks are killing you a kinda crazy idea would be to\nduplicate the table - clustering one by (id1) and\nthe other one by an index on (id2) and unioning the\nresults of each.\n\nSince each of these duplicates of the table will be clustered\nby the column you're querying it on, it should just take one\nseek in each table.\n\nThen your query could be something like\n\n select * from (\n select * from t1 where id1=2067 order by evalue limit 100\n union\n select * from t2 where id2=2067 order by evalue limit 100\n ) as foo order by evalue limit 100;\n\nHmm.. and I wonder if putting evalue into the criteria to cluster\nthe tables too (i.e. cluster on id1,evalue) if you could make it\nso the limit finds the right 100 evalues first for each table....\n\n\n\n\n\n",
"msg_date": "Wed, 06 Jan 2010 12:03:21 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Digesting explain analyze"
},
{
"msg_contents": "On Wed, Jan 6, 2010 at 2:10 PM, Jesper Krogh <[email protected]> wrote:\n> Hi.\n>\n> I have a table that consists of somewhere in the magnitude of 100.000.000\n> rows and all rows are of this tuples\n>\n> (id1,id2,evalue);\n>\n> Then I'd like to speed up a query like this:\n>\n> explain analyze select id from table where id1 = 2067 or id2 = 2067 order\n> by evalue asc limit 100;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1423.28..1423.28 rows=100 width=12) (actual\n> time=2.565..2.567 rows=100 loops=1)\n> -> Sort (cost=1423.28..1424.54 rows=505 width=12) (actual\n> time=2.560..2.560 rows=100 loops=1)\n> Sort Key: evalue\n> Sort Method: top-N heapsort Memory: 25kB\n> -> Bitmap Heap Scan on table (cost=16.58..1420.75 rows=505\n> width=12) (actual time=0.709..1.752 rows=450 loops=1)\n> Recheck Cond: ((id1 = 2067) OR (id2 = 2067))\n> -> BitmapOr (cost=16.58..16.58 rows=506 width=0) (actual\n> time=0.676..0.676 rows=0 loops=1)\n> -> Bitmap Index Scan on id1_evalue_idx\n> (cost=0.00..11.44 rows=423 width=0) (actual\n> time=0.599..0.599 rows=450 loops=1)\n> Index Cond: (id1_id = 2067)\n> -> Bitmap Index Scan on id2_evalue_idx\n> (cost=0.00..4.89 rows=83 width=0) (actual\n> time=0.070..0.070 rows=1 loops=1)\n> Index Cond: (id2_id = 2067)\n> Total runtime: 2.642 ms\n> (12 rows)\n>\n>\n> What I had expected was to see the \"Bitmap Index Scan on id1_evalue_idx\"\n> to chop it off at a \"limit 1\". The inner sets are on average 3.000 for\n> both id1 and id2 and a typical limit would be 100, so if I could convince\n> postgresql to not fetch all of them then I would reduce the set retrieved\n> by around 60. The dataset is quite large so the random query is not very\n> likely to be hitting the same part of the dataset again, so there is going\n> to be a fair amount of going to disk.,\n>\n> I would also mean that using it in a for loop in a stored-procedure in\n> plpgsql it would not get any benefit from the CURSOR effect?\n>\n> I actually tried to stuff id1,id2 into an array and do a GIST index on the\n> array,evalue hoping that it directly would satisfy this query.. it used\n> the GIST index fetch the rows the post-sorting and limit on the set.\n>\n> What it boils down to is more or less:\n>\n> Does a \"bitmap index scan\" support ordering and limit ?\n> Does a \"multicolummn gist index\" support ordering and limit ?\n>\n> Have I missed anything that can hugely speed up fetching these (typically\n> 100 rows) from the database.\n\nBitmap index scans always return all the matching rows. It would be\nnice if they could fetch them in chunks for queries like this, but\nthey can't. I am not sure whether there's any way to make GIST do\nwhat you want.\n\nYou might try something like this (untested):\n\nSELECT * FROM (\n select id from table where id1 = 2067 order by evalue asc limit 100\n union all\n select id from table where id2 = 2067 order by evalue asc limit 100\n) x ORDER BY evalue LIMIT 100\n\nIf you have an index by (id1, evalue) and by (id2, evalue) then I\nwould think this would be pretty quick, as it should do two index\nscans (not bitmap index scans) to fetch 100 rows for each, then append\nthe results, sort them, and then limit again.\n\n...Robert\n",
"msg_date": "Wed, 6 Jan 2010 20:58:24 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Digesting explain analyze"
},
{
"msg_contents": "Ron Mayer wrote:\n>> ...The inner sets are on average 3.000 for\n>> both id1 and id2 and a typical limit would be 100, so if I could convince\n>> postgresql to not fetch all of them then I would reduce the set retrieved\n>> by around 60. The dataset is quite large so the random query is not very\n>> likely to be hitting the same part of the dataset again, so there is going\n>> to be a fair amount of going to disk.,\n> \n> If disk seeks are killing you a kinda crazy idea would be to\n> duplicate the table - clustering one by (id1) and\n> the other one by an index on (id2) and unioning the\n> results of each.\n\nThat's doubling the disk space needs for the table. Is there any odds\nthat this would benefit when the intitial table significantly exceeds\navailable memory by itself?\n\n> Since each of these duplicates of the table will be clustered\n> by the column you're querying it on, it should just take one\n> seek in each table.\n> \n> Then your query could be something like\n> \n> select * from (\n> select * from t1 where id1=2067 order by evalue limit 100\n> union\n> select * from t2 where id2=2067 order by evalue limit 100\n> ) as foo order by evalue limit 100;\n\nThis is actually what I ended up with as the best performing query, just\nstill on a single table, because without duplication I can add index and\noptimize this one by (id1,evalue) and (id2,evalue). It is still getting\nkilled quite a lot by disk IO. So I guess I'm up to:\n\n1) By better disk (I need to get an estimate how large it actually is\ngoing to get).\n2) Stick with one table, but make sure to have enough activity to get a\nlarge part of the index in the OS-cache anyway. (and add more memory if\nnessesary).\n\nThe data is seeing a fair amount of growth (doubles in a couple of years\n) so it is fairly hard to maintain clustering on them .. I would suspect.\n\nIs it possible to get PG to tell me, how many rows that fits in a\ndisk-page. All columns are sitting in \"plain\" storage according to \\d+\non the table.\n\n> Hmm.. and I wonder if putting evalue into the criteria to cluster\n> the tables too (i.e. cluster on id1,evalue) if you could make it\n> so the limit finds the right 100 evalues first for each table....\n\nI didnt cluster it, since clustering \"locks everything\".\n\n-- \nJesper\n",
"msg_date": "Thu, 07 Jan 2010 07:10:28 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Digesting explain analyze"
},
{
"msg_contents": "Jesper Krogh wrote:\n> Is it possible to get PG to tell me, how many rows that fits in a\n> disk-page. All columns are sitting in \"plain\" storage according to \\d+\n> on the table.\n> \nselect relname,round(reltuples / relpages) as \"avg_rows_per_page\" from \npg_class where relpages > 0;\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 07 Jan 2010 01:39:31 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Digesting explain analyze"
},
{
"msg_contents": "Jesper,\n\nthe whole idea of bitmap index scan is to optimize heap access, so it ruins\nany ordering, returned by index. That's why our new KNNGist, which returned\nordered index tuples doesn't supports bitmap index scan (note, this is only\nfor knn search).\n\nOleg\n\nOn Wed, 6 Jan 2010, Robert Haas wrote:\n\n> On Wed, Jan 6, 2010 at 2:10 PM, Jesper Krogh <[email protected]> wrote:\n> > Hi.\n> >\n> > I have a table that consists of somewhere in the magnitude of 100.000.000\n> > rows and all rows are of this tuples\n> >\n> > (id1,id2,evalue);\n> >\n> > Then I'd like to speed up a query like this:\n> >\n> > explain analyze select id from table where id1 =3D 2067 or id2 =3D 2067 o=\n> rder\n> > by evalue asc limit 100;\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =\n> =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0QUERY PLAN\n> > -------------------------------------------------------------------------=\n> ----------------------------------------------------------------------\n> > =A0Limit =A0(cost=3D1423.28..1423.28 rows=3D100 width=3D12) (actual\n> > time=3D2.565..2.567 rows=3D100 loops=3D1)\n> > =A0 -> =A0Sort =A0(cost=3D1423.28..1424.54 rows=3D505 width=3D12) (actual\n> > time=3D2.560..2.560 rows=3D100 loops=3D1)\n> > =A0 =A0 =A0 =A0 Sort Key: evalue\n> > =A0 =A0 =A0 =A0 Sort Method: =A0top-N heapsort =A0Memory: 25kB\n> > =A0 =A0 =A0 =A0 -> =A0Bitmap Heap Scan on table =A0(cost=3D16.58..1420.75=\n> rows=3D505\n> > width=3D12) (actual time=3D0.709..1.752 rows=3D450 loops=3D1)\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 Recheck Cond: ((id1 =3D 2067) OR (id2 =3D 206=\n> 7))\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 -> =A0BitmapOr =A0(cost=3D16.58..16.58 rows=\n> =3D506 width=3D0) (actual\n> > time=3D0.676..0.676 rows=3D0 loops=3D1)\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 -> =A0Bitmap Index Scan on id1_ev=\n> alue_idx\n> > (cost=3D0.00..11.44 rows=3D423 width=3D0) (actual\n> > time=3D0.599..0.599 rows=3D450 loops=3D1)\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Index Cond: (id1_id =\n> =3D 2067)\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 -> =A0Bitmap Index Scan on id2_ev=\n> alue_idx\n> > (cost=3D0.00..4.89 rows=3D83 width=3D0) (actual\n> > time=3D0.070..0.070 rows=3D1 loops=3D1)\n> > =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Index Cond: (id2_id =\n> =3D 2067)\n> > =A0Total runtime: 2.642 ms\n> > (12 rows)\n> >\n> >\n> > What I had expected was to see the \"Bitmap Index Scan on id1_evalue_idx\"\n> > to chop it off at a \"limit 1\". The inner sets are on average 3.000 for\n> > both id1 and id2 and a typical limit would be 100, so if I could convince\n> > postgresql to not fetch all of them then I would reduce the set retrieved\n> > by around 60. The dataset is quite large so the random query is not very\n> > likely to be hitting the same part of the dataset again, so there is going\n> > to be a fair amount of going to disk.,\n> >\n> > I would also mean that using it in a for loop in a stored-procedure in\n> > plpgsql it would not get any benefit from the CURSOR effect?\n> >\n> > I actually tried to stuff id1,id2 into an array and do a GIST index on the\n> > array,evalue hoping that it directly would satisfy this query.. 
it used\n> > the GIST index fetch the rows the post-sorting and limit on the set.\n> >\n> > What it boils down to is more or less:\n> >\n> > Does a \"bitmap index scan\" support ordering and limit ?\n> > Does a \"multicolummn gist index\" support ordering and limit ?\n> >\n> > Have I missed anything that can hugely speed up fetching these (typically\n> > 100 rows) from the database.\n> \n> Bitmap index scans always return all the matching rows. It would be\n> nice if they could fetch them in chunks for queries like this, but\n> they can't. I am not sure whether there's any way to make GIST do\n> what you want.\n> \n> You might try something like this (untested):\n> \n> SELECT * FROM (\n> select id from table where id1 =3D 2067 order by evalue asc limit 100\n> union all\n> select id from table where id2 =3D 2067 order by evalue asc limit 100\n> ) x ORDER BY evalue LIMIT 100\n> \n> If you have an index by (id1, evalue) and by (id2, evalue) then I\n> would think this would be pretty quick, as it should do two index\n> scans (not bitmap index scans) to fetch 100 rows for each, then append\n> the results, sort them, and then limit again.\n> \n> ...Robert\n> \n> --=20\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n",
"msg_date": "Thu, 7 Jan 2010 11:24:39 +0300 (MSK)",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Digesting explain analyze"
},
{
"msg_contents": "On Thu, 7 Jan 2010, Jesper Krogh wrote:\n>> If disk seeks are killing you a kinda crazy idea would be to\n>> duplicate the table - clustering one by (id1) and\n>> the other one by an index on (id2) and unioning the\n>> results of each.\n>\n> That's doubling the disk space needs for the table. Is there any odds\n> that this would benefit when the intitial table significantly exceeds\n> available memory by itself?\n\nIf the table already greatly exceeds the available RAM, then doubling the \namount of data won't make a massive difference to performance. You're \ngoing to disc for everything anyway.\n\n>> Since each of these duplicates of the table will be clustered\n>> by the column you're querying it on, it should just take one\n>> seek in each table.\n>>\n>> Then your query could be something like\n>>\n>> select * from (\n>> select * from t1 where id1=2067 order by evalue limit 100\n>> union\n>> select * from t2 where id2=2067 order by evalue limit 100\n>> ) as foo order by evalue limit 100;\n>\n> This is actually what I ended up with as the best performing query, just\n> still on a single table, because without duplication I can add index and\n> optimize this one by (id1,evalue) and (id2,evalue). It is still getting\n> killed quite a lot by disk IO. So I guess I'm up to:\n\nYou're kind of missing the point. The crucial step in the above suggestion \nis to cluster the table on the index. This will mean that all the rows \nthat are fetched together are located together on disc, and you will no \nlonger be killed by disc IO.\n\n> 1) By better disk (I need to get an estimate how large it actually is\n> going to get).\n\nUnless you cluster, you are still going to be limited by the rate at which \nthe discs can seek. Postgres 8.4 has some improvements here for bitmap \nindex scans if you have a RAID array, and set the effective_concurrency \nsetting correctly.\n\n> 2) Stick with one table, but make sure to have enough activity to get a\n> large part of the index in the OS-cache anyway. (and add more memory if\n> nessesary).\n\nIn order to win here, you will need to make memory at least as big as the \ncommonly-accessed parts of the database. This could get expensive.\n\n> I didnt cluster it, since clustering \"locks everything\".\n\nYou can also test out the hypothesis by copying the table instead:\n\nCREATE NEW TABLE test1 AS SELECT * FROM table1 ORDER BY id1;\n\nThen create an index on id1, and test against that table. The copy will \nbecome out of date quickly, but it will allow you to see whether the \nperformance benefit is worth it. It will also tell you how long a cluster \nwill actually take, without actually locking anything.\n\nMatthew\n\n-- \n In the beginning was the word, and the word was unsigned,\n and the main() {} was without form and void...\n",
"msg_date": "Thu, 7 Jan 2010 11:23:54 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Digesting explain analyze"
}
] |
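Following up on Robert's and Ron's suggestions in this thread, a rough sketch of the two-index UNION ALL approach (untested; "t" and the index names are placeholders, and evalue is added to the subqueries so the outer ORDER BY has a column to sort on). With composite indexes on (id1, evalue) and (id2, evalue), each parenthesised arm can be answered by an ordered index scan that stops after 100 rows, so the outer sort only has to merge at most 200 rows.

CREATE INDEX t_id1_evalue_idx ON t (id1, evalue);
CREATE INDEX t_id2_evalue_idx ON t (id2, evalue);

SELECT id FROM (
    (SELECT id, evalue FROM t WHERE id1 = 2067 ORDER BY evalue ASC LIMIT 100)
    UNION ALL
    (SELECT id, evalue FROM t WHERE id2 = 2067 ORDER BY evalue ASC LIMIT 100)
) x
ORDER BY evalue ASC
LIMIT 100;

-- Matthew's way of testing whether clustering would pay off, without locking the
-- live table: copy the data in the desired physical order and compare timings.
CREATE TABLE t_by_id1 AS SELECT * FROM t ORDER BY id1, evalue;
CREATE INDEX t_by_id1_idx ON t_by_id1 (id1, evalue);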
[
{
"msg_contents": "Hello,\n\nI am complete noob to Postgres and to this list, and I hope this will be the\nappropriate list for this question.\n\nI'm hoping the inheritance feature will be a nice alternative method for me\nto implement categories in particular database of products I need to keep\nupdated. I suppose in MySQL I would probably do this by creating, for\nexample, one table for the products, and then a table(s) for categories, and\nthen I'd be faced with a choice between using an adjacency list or nested\nset paradigm for, say, breadcrumb links in my private web app.\n\nOn the other hand, in Postgres what I'd like to do it just create an empty\nroot \"product\" table, then create, for example, a \"spirts\" table that\ninherits from products, and \"rums\" table that inherits from spirits, and\nthen \"aged rum\", \"flavored rum\", et al, which inherit from rums.\n\nIn this scenario, my idea was to have all my fields in \"products\" and to not\nadd any additional fields in the child tables. Also, only the lowest level\nof child tables in any given branch of products would actually contain data\n/ rows.\n\nAssuming this is a good design, what I'm wondering is how inheritance is\nactually implemented deep down inside Postgres, if it's anything at all like\nJOINS (say, in the case of merely doing:\nSELECT * FROM \"flavored_rum\" (the lowest level in a given branch)\nor\nSELECT * FROM \"spirits\" (the root level, or some intermediate level in a\ngiven branch)\n\nI'm wondering if there's any performance penalty here, analogous to the\npenalty of JOINs in a regular RDBMS (versus an ORDBMS).\n\nIf anyone can offer in any insight as too how inheritance is actually\nexecuted (compared to JOINs especially), I'd be most grateful.\n\nThank you,\nDG\n\nHello,I am complete noob to Postgres and to this list, and I hope this will be the appropriate list for this question.I'm hoping the inheritance feature will be a nice alternative method for me to implement categories in particular database of products I need to keep updated. I suppose in MySQL I would probably do this by creating, for example, one table for the products, and then a table(s) for categories, and then I'd be faced with a choice between using an adjacency list or nested set paradigm for, say, breadcrumb links in my private web app.\nOn the other hand, in Postgres what I'd like to do it just create an empty root \"product\" table, then create, for example, a \"spirts\" table that inherits from products, and \"rums\" table that inherits from spirits, and then \"aged rum\", \"flavored rum\", et al, which inherit from rums.\nIn this scenario, my idea was to have all my fields in \"products\" and to not add any additional fields in the child tables. Also, only the lowest level of child tables in any given branch of products would actually contain data / rows.\nAssuming this is a good design, what I'm wondering is how inheritance is actually implemented deep down inside Postgres, if it's anything at all like JOINS (say, in the case of merely doing:\nSELECT * FROM \"flavored_rum\" (the lowest level in a given branch)orSELECT * FROM \"spirits\" (the root level, or some intermediate level in a given branch)\nI'm wondering if there's any performance penalty here, analogous to the penalty of JOINs in a regular RDBMS (versus an ORDBMS).If anyone can offer in any insight as too how inheritance is actually executed (compared to JOINs especially), I'd be most grateful.\nThank you,DG",
"msg_date": "Wed, 6 Jan 2010 18:53:56 -0500",
"msg_from": "Zintrigue <[email protected]>",
"msg_from_op": true,
"msg_subject": "noob inheritance question"
},
{
"msg_contents": "On Wed, Jan 6, 2010 at 3:53 PM, Zintrigue <[email protected]> wrote:\n> I'm wondering if there's any performance penalty here, analogous to the\n> penalty of JOINs in a regular RDBMS (versus an ORDBMS).\n> If anyone can offer in any insight as too how inheritance is actually\n> executed (compared to JOINs especially), I'd be most grateful.\n\nPostgreSQL inheritance is just a sugar coated form of horizontal table\npartitioning. So it suffers from all of the problems associated with\nselection on UNION ALL queries.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n",
"msg_date": "Wed, 6 Jan 2010 16:00:11 -0800",
"msg_from": "Richard Broersma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: noob inheritance question"
},
{
"msg_contents": "Zintrigue wrote:\n\n> I'm hoping the inheritance feature will be a nice alternative method for \n> me to implement categories in particular database of products I need to \n> keep updated. I suppose in MySQL I would probably do this by creating, \n> for example, one table for the products, and then a table(s) for \n> categories, and then I'd be faced with a choice between using an \n> adjacency list or nested set paradigm for, say, breadcrumb links in my \n> private web app.\n> \n> On the other hand, in Postgres what I'd like to do it just create an \n> empty root \"product\" table, then create, for example, a \"spirts\" table \n> that inherits from products, and \"rums\" table that inherits from \n> spirits, and then \"aged rum\", \"flavored rum\", et al, which inherit from \n> rums.\n> \n> In this scenario, my idea was to have all my fields in \"products\" and to \n> not add any additional fields in the child tables. Also, only the lowest \n> level of child tables in any given branch of products would actually \n> contain data / rows.\n> \n> Assuming this is a good design,\n\nMay I venture to stop you there. This sounds like you are doing it\nThe Hard Way.\n\nIn particular, each time you add a new category, you're going to have to \nadd a new database table, and your schema is going to get to be \nhorrible. Inserts aren't going to be much fun either.\n\nRather than adding multiple child tables, may I suggest some other way \nof tracking which item is a subset of the other.\n You could do it by having 2 columns:\n id, parent_id (each integer and indexed)\nor you could do it by having 2 columns:\n id, list (id is integer, list is eg \"1,3,5,13\")\n(where the list is a comma-separated list, or an array, and holds the \nfull path)\n\n\nDepending on scale, you may be able to choose a simple algorithm instead \nof hunting for the most efficient one.\n\n\nBest wishes,\n\nRichard\n\n\nP.S. This is the performance mailing list - you may find one of the \nother lists better suited to your questions.\n",
"msg_date": "Thu, 07 Jan 2010 00:13:45 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: noob inheritance question"
},
{
"msg_contents": "Inheritance would only make sense if each of your categories had more\ncolumns. Say if you had a \"wines\" category and only they had a year column.\n Its probably not worth it for one or two columns but if you've got a big\ncrazy heterogeneous tree of stuff then its probably appropriate.\n\nI'm with Richard in that it sounds like the right way to solve your problem\nis to have a \"categories\" table and a \"products\" table. Let the categories\ntable have a reference to the parent. I suppose just like they do in the\nfirst section of\nhttp://dev.mysql.com/tech-resources/articles/hierarchical-data.html . The\nother sections on the page just seem like overkill to me.\n\nOn Wed, Jan 6, 2010 at 7:13 PM, Richard Neill <[email protected]> wrote:\n\n> Zintrigue wrote:\n>\n> I'm hoping the inheritance feature will be a nice alternative method for\n>> me to implement categories in particular database of products I need to keep\n>> updated. I suppose in MySQL I would probably do this by creating, for\n>> example, one table for the products, and then a table(s) for categories, and\n>> then I'd be faced with a choice between using an adjacency list or nested\n>> set paradigm for, say, breadcrumb links in my private web app.\n>>\n>> On the other hand, in Postgres what I'd like to do it just create an empty\n>> root \"product\" table, then create, for example, a \"spirts\" table that\n>> inherits from products, and \"rums\" table that inherits from spirits, and\n>> then \"aged rum\", \"flavored rum\", et al, which inherit from rums.\n>>\n>> In this scenario, my idea was to have all my fields in \"products\" and to\n>> not add any additional fields in the child tables. Also, only the lowest\n>> level of child tables in any given branch of products would actually contain\n>> data / rows.\n>>\n>> Assuming this is a good design,\n>>\n>\n> May I venture to stop you there. This sounds like you are doing it\n> The Hard Way.\n>\n> In particular, each time you add a new category, you're going to have to\n> add a new database table, and your schema is going to get to be horrible.\n> Inserts aren't going to be much fun either.\n>\n> Rather than adding multiple child tables, may I suggest some other way of\n> tracking which item is a subset of the other.\n> You could do it by having 2 columns:\n> id, parent_id (each integer and indexed)\n> or you could do it by having 2 columns:\n> id, list (id is integer, list is eg \"1,3,5,13\")\n> (where the list is a comma-separated list, or an array, and holds the full\n> path)\n>\n>\n> Depending on scale, you may be able to choose a simple algorithm instead of\n> hunting for the most efficient one.\n>\n>\n> Best wishes,\n>\n> Richard\n>\n>\n> P.S. This is the performance mailing list - you may find one of the other\n> lists better suited to your questions.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nInheritance would only make sense if each of your categories had more columns. Say if you had a \"wines\" category and only they had a year column. Its probably not worth it for one or two columns but if you've got a big crazy heterogeneous tree of stuff then its probably appropriate.\nI'm with Richard in that it sounds like the right way to solve your problem is to have a \"categories\" table and a \"products\" table. Let the categories table have a reference to the parent. 
I suppose just like they do in the first section of http://dev.mysql.com/tech-resources/articles/hierarchical-data.html . The other sections on the page just seem like overkill to me.\nOn Wed, Jan 6, 2010 at 7:13 PM, Richard Neill <[email protected]> wrote:\nZintrigue wrote:\n\n\nI'm hoping the inheritance feature will be a nice alternative method for me to implement categories in particular database of products I need to keep updated. I suppose in MySQL I would probably do this by creating, for example, one table for the products, and then a table(s) for categories, and then I'd be faced with a choice between using an adjacency list or nested set paradigm for, say, breadcrumb links in my private web app.\n\nOn the other hand, in Postgres what I'd like to do it just create an empty root \"product\" table, then create, for example, a \"spirts\" table that inherits from products, and \"rums\" table that inherits from spirits, and then \"aged rum\", \"flavored rum\", et al, which inherit from rums.\n\nIn this scenario, my idea was to have all my fields in \"products\" and to not add any additional fields in the child tables. Also, only the lowest level of child tables in any given branch of products would actually contain data / rows.\n\nAssuming this is a good design,\n\n\nMay I venture to stop you there. This sounds like you are doing it\nThe Hard Way.\n\nIn particular, each time you add a new category, you're going to have to add a new database table, and your schema is going to get to be horrible. Inserts aren't going to be much fun either.\n\nRather than adding multiple child tables, may I suggest some other way of tracking which item is a subset of the other.\n You could do it by having 2 columns:\n id, parent_id (each integer and indexed)\nor you could do it by having 2 columns:\n id, list (id is integer, list is eg \"1,3,5,13\")\n(where the list is a comma-separated list, or an array, and holds the full path)\n\n\nDepending on scale, you may be able to choose a simple algorithm instead of hunting for the most efficient one.\n\n\nBest wishes,\n\nRichard\n\n\nP.S. This is the performance mailing list - you may find one of the other lists better suited to your questions.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 6 Jan 2010 23:00:42 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: noob inheritance question"
},
{
"msg_contents": "On Wed, Jan 6, 2010 at 6:53 PM, Zintrigue <[email protected]> wrote:\n> I'm wondering if there's any performance penalty here\n\nThere definitely is. Your design sounds pretty painful to me...\nadding a column referencing a side-table will be much nicer.\n\n> If anyone can offer in any insight as too how inheritance is actually\n> executed (compared to JOINs especially), I'd be most grateful.\n\nYou can look at the query plans for your queries using EXPLAIN.\nInheritance is really just UNION ALL under the covers - it's meant for\npartitioning, not the sort of thing you're trying to do here, so your\nquery plans will probably not be too good with this design.\n\n...Robert\n",
"msg_date": "Thu, 7 Jan 2010 15:19:57 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: noob inheritance question"
}
] |
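For reference, a minimal sketch of the adjacency-list alternative Richard and Nikolas describe above (all table, column and id values here are made up): one categories table that references itself, one products table that references categories, and a recursive query for the breadcrumb trail. WITH RECURSIVE needs PostgreSQL 8.4 or later; on older versions the walk can be done in application code or a PL/pgSQL loop.

CREATE TABLE categories (
    id        serial PRIMARY KEY,
    parent_id integer REFERENCES categories (id),   -- NULL for top-level categories
    name      text NOT NULL
);

CREATE TABLE products (
    id          serial PRIMARY KEY,
    category_id integer NOT NULL REFERENCES categories (id),
    name        text NOT NULL
    -- ... all the other product fields live here, in one table ...
);

-- Breadcrumb for one product: walk up the parent_id chain from its category.
WITH RECURSIVE crumbs AS (
    SELECT c.id, c.parent_id, c.name, 1 AS depth
    FROM categories c
    JOIN products p ON p.category_id = c.id
    WHERE p.id = 42                          -- example product id
  UNION ALL
    SELECT c.id, c.parent_id, c.name, crumbs.depth + 1
    FROM categories c
    JOIN crumbs ON crumbs.parent_id = c.id
)
SELECT name FROM crumbs ORDER BY depth DESC;  -- root category first, leaf last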
[
{
"msg_contents": "Hi,\nI am going to test this out but would be good to know anyways. A large\ntable is joined to a tiny table (8 rows) on a text field. Should I be\njoining on an int field eg: recid intead of name? Is the performance\naffected in either case?\nThanks .\n\nHi,I am going to test this out but would be good to know anyways. A largetable is joined to a tiny table (8 rows) on a text field. Should I bejoining on an int field eg: recid intead of name? Is the performance\naffected in either case?Thanks .",
"msg_date": "Wed, 6 Jan 2010 23:21:47 -0500",
"msg_from": "Radhika S <[email protected]>",
"msg_from_op": true,
"msg_subject": "Joining on text field VS int"
},
{
"msg_contents": "Joining via a tinyint or something will make your large table smaller which\nis nice. Smaller tables = faster tables.\n\nOn Wed, Jan 6, 2010 at 11:21 PM, Radhika S <[email protected]> wrote:\n\n> Hi,\n> I am going to test this out but would be good to know anyways. A large\n> table is joined to a tiny table (8 rows) on a text field. Should I be\n> joining on an int field eg: recid intead of name? Is the performance\n> affected in either case?\n> Thanks .\n>\n\nJoining via a tinyint or something will make your large table smaller which is nice. Smaller tables = faster tables.On Wed, Jan 6, 2010 at 11:21 PM, Radhika S <[email protected]> wrote:\nHi,I am going to test this out but would be good to know anyways. A large\n\ntable is joined to a tiny table (8 rows) on a text field. Should I bejoining on an int field eg: recid intead of name? Is the performance\naffected in either case?Thanks .",
"msg_date": "Thu, 7 Jan 2010 13:52:40 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joining on text field VS int"
},
{
"msg_contents": "Hi all,\nI want to compare the two arrys in sql, The below example is wrong.\nBut i want to know how to do that in postgres 8.2.5\n\nSELECT 'x' WHERE string_to_array('the,quick,ram,fox', ',') any (string_to_array('the,quick,lazy ,fox', ','))\nRegards,\nRam\n\n\n\n\n\n\n\nHi all,\nI want to compare the two arrys in sql, The below \nexample is wrong.\nBut i want to know how to do that in postgres \n8.2.5\n \nSELECT 'x' WHERE \nstring_to_array('the,quick,ram,fox', ',') any \n(string_to_array('the,quick,lazy ,fox', ','))Regards,\nRam",
"msg_date": "Fri, 8 Jan 2010 10:30:50 +0530",
"msg_from": "\"ramasubramanian\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Array comparison"
},
{
"msg_contents": "Hi\n\nwell it's pretty simple\n\nif you want to see if there are elements in common then instead of \"any\" use\n\"&&\"\nif you want to see if they are equal just use \" = \" that will five you true\nor false\n\n\n\n you can check array functions in here\nhttp://www.postgresql.org/docs/8.2/static/functions-array.html\n\n\nOn Fri, Jan 8, 2010 at 5:00 AM, ramasubramanian <\[email protected]> wrote:\n\n> Hi all,\n> I want to compare the two arrys in sql, The below example is wrong.\n> But i want to know how to do that in postgres 8.2.5\n>\n> SELECT 'x' WHERE string_to_array('the,quick,ram,fox', ',') any\n> (string_to_array('the,quick,lazy ,fox', ','))\n> Regards,\n> Ram\n>\n>\n>\n>\n\nHiwell it's pretty simpleif you want to see if there are elements in common then instead of \"any\" use \"&&\"if you want to see if they are equal just use \" = \" that will five you true or false \n you can check array functions in here http://www.postgresql.org/docs/8.2/static/functions-array.html\nOn Fri, Jan 8, 2010 at 5:00 AM, ramasubramanian <[email protected]> wrote:\n\nHi all,\nI want to compare the two arrys in sql, The below \nexample is wrong.\nBut i want to know how to do that in postgres \n8.2.5\n \nSELECT 'x' WHERE \nstring_to_array('the,quick,ram,fox', ',') any \n(string_to_array('the,quick,lazy ,fox', ','))Regards,\nRam",
"msg_date": "Fri, 8 Jan 2010 16:06:44 +0000",
"msg_from": "Rui Carvalho <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array comparison"
},
{
"msg_contents": "On Fri, Jan 8, 2010 at 11:06 AM, Rui Carvalho <[email protected]> wrote:\n> Hi\n>\n> well it's pretty simple\n>\n> if you want to see if there are elements in common then instead of \"any\" use\n> \"&&\"\n> if you want to see if they are equal just use \" = \" that will five you true\n> or false\n\nyou also have the option of expanding the array into a set and using\nstandard query techniques. w/8.4, you can use the built in\nunnest()...for earlier versions you have to create it yourself\n(trivially done):\n\nhttp://archives.postgresql.org/pgsql-patches/2008-03/msg00308.php\n\nmerlin\n",
"msg_date": "Fri, 8 Jan 2010 11:36:04 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Array comparison"
}
] |
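To round off the array sub-thread, a few concrete statements (untested, using only the operators and functions mentioned above). On 8.2.5 the overlap operator && answers "do these arrays share any element", = tests element-by-element equality, and Merlin's unnest() route turns the arrays back into rows; unnest() is built in only from 8.4, so a commonly used hand-rolled substitute for 8.2/8.3 is sketched as well.

-- true: the two arrays share 'the', 'quick' and 'fox'
SELECT string_to_array('the,quick,ram,fox', ',') && string_to_array('the,quick,lazy,fox', ',');

-- false: the arrays are not equal
SELECT string_to_array('the,quick,ram,fox', ',') = string_to_array('the,quick,lazy,fox', ',');

-- A commonly used substitute for unnest() on pre-8.4 servers:
CREATE OR REPLACE FUNCTION unnest(anyarray) RETURNS SETOF anyelement AS $$
    SELECT $1[i] FROM generate_series(array_lower($1, 1), array_upper($1, 1)) AS i;
$$ LANGUAGE sql IMMUTABLE;

-- Set-based comparison: list the elements the two arrays have in common.
SELECT a.x
FROM unnest(string_to_array('the,quick,ram,fox', ',')) AS a(x)
JOIN unnest(string_to_array('the,quick,lazy,fox', ',')) AS b(x) USING (x);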
[
{
"msg_contents": "Our DB has an audit table which is 500M rows and growing. (FYI the objects \nbeing audited are grouped semantically, not individual field values).\n\nRecently we wanted to add a new feature and we altered the table to add a \nnew column. We are backfilling this varchar(255) column by writing a TCL \nscript to page through the rows (where every update is a UPDATE ... WHERE id \n >= x AND id < x+10 and a commit is performed after every 1000 updates \nstatement, i.e. every 10000 rows.)\n\nWe have 10 columns, six of which are indexed. Rough calculations suggest \nthat this will take two to three weeks to complete on an 8-core CPU with \nmore than enough memory.\n\nAs a ballpark estimate - is this sort of performance for an 500M updates \nwhat one would expect of PG given the table structure (detailed below) or \nshould I dig deeper to look for performance issues?\n\nAs always, thanks!\n\nCarlo\n\nTable/index structure:\n\nCREATE TABLE mdx_core.audit_impt\n(\n audit_impt_id serial NOT NULL,\n impt_session integer,\n impt_version character varying(255),\n impt_name character varying(255),\n impt_id integer,\n target_table character varying(255),\n target_id integer,\n target_op character varying(10),\n note text,\n source_table character varying(255),\n CONSTRAINT audit_impt_pkey PRIMARY KEY (audit_impt_id)\n)\n\nCREATE INDEX audit_impt_impt_id_idx\n ON mdx_core.audit_impt\n USING btree\n (impt_id);\nCREATE INDEX audit_impt_impt_name\n ON mdx_core.audit_impt\n USING btree\n (impt_name, impt_version);\nCREATE INDEX audit_impt_session_idx\n ON mdx_core.audit_impt\n USING btree\n (impt_session);\nCREATE INDEX audit_impt_source_table\n ON mdx_core.audit_impt\n USING btree\n (source_table);\nCREATE INDEX audit_impt_target_id_idx\n ON mdx_core.audit_impt\n USING btree\n (target_id, audit_impt_id);\nCREATE INDEX audit_impt_target_table_idx\n ON mdx_core.audit_impt\n USING btree\n (target_table, target_id, audit_impt_id);\n\n\n\n",
"msg_date": "Thu, 7 Jan 2010 02:17:22 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 12:17 AM, Carlo Stonebanks\n<[email protected]> wrote:\n> Our DB has an audit table which is 500M rows and growing. (FYI the objects\n> being audited are grouped semantically, not individual field values).\n>\n> Recently we wanted to add a new feature and we altered the table to add a\n> new column. We are backfilling this varchar(255) column by writing a TCL\n> script to page through the rows (where every update is a UPDATE ... WHERE id\n>>= x AND id < x+10 and a commit is performed after every 1000 updates\n> statement, i.e. every 10000 rows.)\n>\n> We have 10 columns, six of which are indexed. Rough calculations suggest\n> that this will take two to three weeks to complete on an 8-core CPU with\n> more than enough memory.\n>\n> As a ballpark estimate - is this sort of performance for an 500M updates\n> what one would expect of PG given the table structure (detailed below) or\n> should I dig deeper to look for performance issues?\n\nGot an explain analyze of the delete query?\n",
"msg_date": "Thu, 7 Jan 2010 00:49:30 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> Our DB has an audit table which is 500M rows and growing. (FYI the\n> objects being audited are grouped semantically, not individual field\n> values).\n> \n> Recently we wanted to add a new feature and we altered the table to add\n> a new column. We are backfilling this varchar(255) column by writing a\n> TCL script to page through the rows (where every update is a UPDATE ...\n> WHERE id >= x AND id < x+10 and a commit is performed after every 1000\n> updates statement, i.e. every 10000 rows.)\n> \n> We have 10 columns, six of which are indexed. Rough calculations suggest\n> that this will take two to three weeks to complete on an 8-core CPU with\n> more than enough memory.\n> \n> As a ballpark estimate - is this sort of performance for an 500M updates\n> what one would expect of PG given the table structure (detailed below)\n> or should I dig deeper to look for performance issues?\n> \n> As always, thanks!\n> \n> Carlo\n> \n\nIf it is possible to lock this audit table exclusively (may be during\noff peak hours) I would look into\n- create new_audit_table as select col1, col2, col3 ... col9,\n'new_col_value' from old_audit_table;\n- create all indexes\n- drop old_audit_table\n- rename new_audit_table to old_audit_table\n\nThat is probably the fasted method you can do, even if you have to join\nthe \"new_col_value\" from an extra helper-table with the correspondig id.\nRemeber, databases are born to join.\n\nYou could also try to just update the whole table in one go, it is\nprobably faster than you expect.\n\njust a thought\nLeo\n",
"msg_date": "Thu, 07 Jan 2010 14:14:52 +0100",
"msg_from": "Leo Mannhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
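A rough SQL sketch of the swap-in approach Leo describes, using the table and column names from the original post; the backfill expression is an assumption (it appears later in the thread), and the serial default, primary key and indexes all have to be recreated by hand:

CREATE TABLE mdx_core.audit_impt_new AS
SELECT audit_impt_id, impt_session, impt_version, impt_name, impt_id,
       target_table, target_id, target_op, note,
       'mdx_import.' || impt_name AS source_table   -- assumed backfill value
FROM mdx_core.audit_impt;

-- recreate the primary key and the six indexes on audit_impt_new here,
-- and re-attach the audit_impt_id sequence default

DROP TABLE mdx_core.audit_impt;
ALTER TABLE mdx_core.audit_impt_new RENAME TO audit_impt;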
{
"msg_contents": "Leo Mannhart <[email protected]> wrote:\n \n> You could also try to just update the whole table in one go, it is\n> probably faster than you expect.\n \nThat would, of course, bloat the table and indexes horribly. One\nadvantage of the incremental approach is that there is a chance for\nautovacuum or scheduled vacuums to make space available for re-use\nby subsequent updates.\n \n-Kevin\n",
"msg_date": "Thu, 07 Jan 2010 09:18:54 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Leo Mannhart <[email protected]> wrote:\n> \n>> You could also try to just update the whole table in one go, it is\n>> probably faster than you expect.\n> \n> That would, of course, bloat the table and indexes horribly. One\n> advantage of the incremental approach is that there is a chance for\n> autovacuum or scheduled vacuums to make space available for re-use\n> by subsequent updates.\n> \n> -Kevin\n> \n\nouch...\nthanks for correcting this.\n... and forgive an old man coming from Oracle ;)\n\nLeo\n",
"msg_date": "Thu, 07 Jan 2010 16:47:10 +0100",
"msg_from": "Leo Mannhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "I would suggest:\n1. turn off autovacuum\n1a. ewentually tune db for better performace for this kind of operation\n(cant not help here)\n2. restart database\n3. drop all indexes\n4. update\n5. vacuum full table\n6. create indexes\n7. turn on autovacuum\n\nLudwik\n\n\n2010/1/7 Leo Mannhart <[email protected]>\n\n> Kevin Grittner wrote:\n> > Leo Mannhart <[email protected]> wrote:\n> >\n> >> You could also try to just update the whole table in one go, it is\n> >> probably faster than you expect.\n> >\n> > That would, of course, bloat the table and indexes horribly. One\n> > advantage of the incremental approach is that there is a chance for\n> > autovacuum or scheduled vacuums to make space available for re-use\n> > by subsequent updates.\n> >\n> > -Kevin\n> >\n>\n> ouch...\n> thanks for correcting this.\n> ... and forgive an old man coming from Oracle ;)\n>\n> Leo\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nLudwik Dyląg\n\nI would suggest:1. turn off autovacuum1a. ewentually tune db for better performace for this kind of operation (cant not help here)2. restart database3. drop all indexes4. update\n5. vacuum full table6. create indexes7. turn on autovacuumLudwik2010/1/7 Leo Mannhart <[email protected]>\nKevin Grittner wrote:\n> Leo Mannhart <[email protected]> wrote:\n>\n>> You could also try to just update the whole table in one go, it is\n>> probably faster than you expect.\n>\n> That would, of course, bloat the table and indexes horribly. One\n> advantage of the incremental approach is that there is a chance for\n> autovacuum or scheduled vacuums to make space available for re-use\n> by subsequent updates.\n>\n> -Kevin\n>\n\nouch...\nthanks for correcting this.\n... and forgive an old man coming from Oracle ;)\n\nLeo\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Ludwik Dyląg",
"msg_date": "Thu, 7 Jan 2010 17:38:26 +0100",
"msg_from": "Ludwik Dylag <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
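In SQL terms, steps 3-6 of Ludwik's list might look roughly like this (a sketch only; the WHERE clause and the full set of index rebuilds are assumptions, and VACUUM FULL holds an exclusive lock for a long time on a table this size):

-- 3. drop the secondary indexes so the update does not have to maintain them
DROP INDEX mdx_core.audit_impt_impt_name;
DROP INDEX mdx_core.audit_impt_session_idx;
-- ... and the remaining non-pkey indexes ...

-- 4. one pass over the table (assumed backfill expression)
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.' || impt_name
WHERE source_table IS NULL;

-- 5. reclaim the dead row versions left behind by the update
VACUUM FULL mdx_core.audit_impt;

-- 6. rebuild the indexes
CREATE INDEX audit_impt_impt_name ON mdx_core.audit_impt (impt_name, impt_version);
CREATE INDEX audit_impt_session_idx ON mdx_core.audit_impt (impt_session);
-- ... recreate the rest ...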
{
"msg_contents": "> Got an explain analyze of the delete query?\n\nUPDATE mdx_core.audit_impt\nSET source_table = 'mdx_import.'||impt_name\nWHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010\nAND coalesce(source_table, '') = ''\n\nIndex Scan using audit_impt_pkey on audit_impt (cost=0.00..92.63 rows=1 \nwidth=608) (actual time=0.081..0.244 rows=10 loops=1)\n Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <= \n319400010))\n Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)\nTotal runtime: 372.141 ms\n\nHard to tell how reliable these numbers are, because the caches are likely \nspun up for the WHERE clause - in particular, SELECT queries have been run \nto test whether the rows actually qualify for the update.\n\nThe coalesce may be slowing things down slightly, but is a necessary evil. \n\n",
"msg_date": "Thu, 7 Jan 2010 12:49:54 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE----- \nHash: RIPEMD160 \n\n\n> If it is possible to lock this audit table exclusively (may be during\n> off peak hours) I would look into\n> - create new_audit_table as select col1, col2, col3 ... col9,\n> 'new_col_value' from old_audit_table;\n> - create all indexes\n> - drop old_audit_table\n> - rename new_audit_table to old_audit_table\n\nThis is a good approach, but you don't necessarily have to exclusively\nlock the table. Only allowing reads would be enough, or you could\ninstall a trigger to keep track of which rows were updated. Then\nthe process becomes:\n\n1. Create trigger on the old table to store changed pks\n2. Create new_audit_table as select col1, col2, col3 ... col9,\n 'new_col_value' from old_audit_table;\n3. Create all indexes on the new table\n4. Stop your app from writing to the old table\n5. COPY over the rows that have changed\n6. Rename the old table to something else (for safety)\n7. Rename the new table to the real name\n8. Drop the old table when all is good\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation http://www.endpoint.com/\nPGP Key: 0x14964AC8 201001071253\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAktGIC0ACgkQvJuQZxSWSshEAQCfRT3PsQyWCOBXGW1XRAB814df\npJUAoMuAJoOKho39opoHq/d1J9NprGlH\n=htaE\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 7 Jan 2010 17:56:16 -0000",
"msg_from": "\"Greg Sabino Mullane\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
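A minimal sketch of step 1, the change-tracking trigger; all object names here are hypothetical, and the copy/rename steps follow the list above:

CREATE TABLE mdx_core.audit_impt_changed (audit_impt_id integer PRIMARY KEY);

CREATE OR REPLACE FUNCTION mdx_core.audit_impt_track() RETURNS trigger AS $$
BEGIN
    -- remember the pk of every row written while the new table is being built
    INSERT INTO mdx_core.audit_impt_changed (audit_impt_id)
    SELECT NEW.audit_impt_id
    WHERE NOT EXISTS (SELECT 1 FROM mdx_core.audit_impt_changed c
                      WHERE c.audit_impt_id = NEW.audit_impt_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_impt_track_changes
    AFTER INSERT OR UPDATE ON mdx_core.audit_impt
    FOR EACH ROW EXECUTE PROCEDURE mdx_core.audit_impt_track();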
{
"msg_contents": "> If it is possible to lock this audit table exclusively (may be during\n> off peak hours) I would look into\n> - create new_audit_table as select col1, col2, col3 ... col9,\n> 'new_col_value' from old_audit_table;\n> - create all indexes\n> - drop old_audit_table\n> - rename new_audit_table to old_audit_table\n>\n> That is probably the fasted method you can do, even if you have to join\n> the \"new_col_value\" from an extra helper-table with the correspondig id.\n> Remeber, databases are born to join.\n>\n\nThis has all been done before - the production team was crippled while they \nwaited for this and the SECOND the table was available again, they jumped on \nit - even though it meant recreating the bare minimum of the indexes.\n\n> You could also try to just update the whole table in one go, it is\n> probably faster than you expect.\n\nPossibly, but with such a large table you have no idea of the progress, you \ncannot interrupt it without rolling back everything. Worse, you have \napplications stalling and users wanting to know what is going on - is the OS \nand the DB/MVCC trashing while it does internal maintenance? Have you \nreached some sort of deadlock condition that you can't see because the \nserver status is not helpful with so many uncommitted pending updates?\n\nAnd of course, there is the file bloat. \n\n",
"msg_date": "Thu, 7 Jan 2010 12:56:37 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "> every update is a UPDATE ... WHERE id\n>>= x AND id < x+10 and a commit is performed after every 1000 updates\n> statement, i.e. every 10000 rows.\n\nWhat is the rationale behind this? How about doing 10k rows in 1\nupdate, and committing every time?\n\nYou could try making the condition on the ctid column, to not have to\nuse the index on ID, and process the rows in physical order. First\nmake sure that newly inserted production data has the correct value in\nthe new column, and add 'where new_column is null' to the conditions.\nBut I have never tried this, use at Your own risk.\n\nGreetings\nMarcin Mank\n",
"msg_date": "Thu, 7 Jan 2010 22:05:14 +0100",
"msg_from": "marcin mank <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
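What marcin suggests, sketched with placeholder id bounds - one larger statement per transaction instead of many ten-row statements (the ctid variant is not shown here):

BEGIN;
UPDATE mdx_core.audit_impt
SET source_table = 'mdx_import.' || impt_name
WHERE audit_impt_id >= 319400001
  AND audit_impt_id <  319410001      -- 10,000 ids in a single statement
  AND source_table IS NULL;           -- skip rows already backfilled
COMMIT;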
{
"msg_contents": "> What is the rationale behind this? How about doing 10k rows in 1\n> update, and committing every time?\n\nWhen we did 10K updates, the application would sometimes appear to have \nfrozen, and we were concerned that there was a deadlock condition because of \nthe number of locked rows. While we may have the patience to sit around and \nwait five minutes to see if the update would continue, we couldn't risk \nhaving other applications appear frozen if that was the case. In fact, there \nis no reason for any user or application to write to the same records we are \nwriting to - but the audit table is read frequently. We are not explicitly \nlocking anything, or writing any additional code to arbitrate the lcoking \nmodel anywhere -it's all default UPDATE and SELECT syntax.\n\nDoing the updates in smaller chunks resolved these apparent freezes - or, \nmore specifically, when the application DID freeze, it didn't do it for more \nthan 30 seconds. In all likelyhood, this is the OS and the DB thrashing.\n\nWe have since modified the updates to process 1000 rows at a time with a \ncommit every 10 pages. Just this morning, though, the IS manager asked me to \nstop the backfill because of the load affect on other processes.\n\n> You could try making the condition on the ctid column, to not have to\n> use the index on ID, and process the rows in physical order.\n\nAn interesting idea, if I can confirm that the performance problem is \nbecause of the WHERE clause, not the UPDATE.\n\n>'where new_column is null' to the conditions.\n\nAlready being done, albeit with a coalesce(val, '') = '' - it's quite \npossible that this is hurting the WHERE clause; the EXPLAIN shows the table \nusing the pkey and then filtering on the COALESCE as one would expect.\n\nCarlo \n\n",
"msg_date": "Thu, 7 Jan 2010 16:48:19 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> An interesting idea, if I can confirm that the performance problem\n> is because of the WHERE clause, not the UPDATE.\n \nIf you could show EXPLAIN ANALYZE output for one iteration, with\nrelated queries and maybe more info on the environment, it would\ntake most of the guesswork out of things.\n \n-Kevin\n",
"msg_date": "Thu, 07 Jan 2010 18:18:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 2:48 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Doing the updates in smaller chunks resolved these apparent freezes - or,\n> more specifically, when the application DID freeze, it didn't do it for more\n> than 30 seconds. In all likelyhood, this is the OS and the DB thrashing.\n\nIt might well be checkpoints. Have you tried cranking up checkpoint\nsegments to something like 100 or more and seeing how it behaves then?\n",
"msg_date": "Thu, 7 Jan 2010 18:40:14 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Already done in an earlier post, Kevin - I have included it again below. As \nyou can see, it's pretty well wqhat you would expect, index scan plus a \nfilter.\n\nOne note: updates where no rows qualify run appreciably faster than the ones \nthat do. That is, the update itself appears to be consuming a good deal of \nthe processing time. This may be due to the 6 indexes.\n\nUPDATE mdx_core.audit_impt\nSET source_table = 'mdx_import.'||impt_name\nWHERE audit_impt_id >= 319400001 AND audit_impt_id <= 319400010\nAND coalesce(source_table, '') = ''\n\nIndex Scan using audit_impt_pkey on audit_impt (cost=0.00..92.63 rows=1\nwidth=608) (actual time=0.081..0.244 rows=10 loops=1)\n Index Cond: ((audit_impt_id >= 319400001) AND (audit_impt_id <=\n319400010))\n Filter: ((COALESCE(source_table, ''::character varying))::text = ''::text)\nTotal runtime: 372.141 ms\n\n\n\n\n\"\"Kevin Grittner\"\" <[email protected]> wrote in message \nnews:[email protected]...\n> \"Carlo Stonebanks\" <[email protected]> wrote:\n>\n>> An interesting idea, if I can confirm that the performance problem\n>> is because of the WHERE clause, not the UPDATE.\n>\n> If you could show EXPLAIN ANALYZE output for one iteration, with\n> related queries and maybe more info on the environment, it would\n> take most of the guesswork out of things.\n>\n> -Kevin\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Fri, 8 Jan 2010 01:02:25 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "> It might well be checkpoints. Have you tried cranking up checkpoint\n> segments to something like 100 or more and seeing how it behaves then?\n\nNo I haven't, althugh it certainly make sense - watching the process run, \nyou get this sense that the system occaisionally pauses to take a deep, long \nbreath before returning to work frantically ;D\n\nCheckpoint_segments are currently set to 64. The DB is large and is on a \nconstant state of receiving single-row updates as multiple ETL and \nrefinement processes run continuously.\n\nWould you expect going to 100 or more to make an appreciable difference, or \nshould I be more aggressive?\n> \n\n",
"msg_date": "Fri, 8 Jan 2010 01:14:58 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 11:14 PM, Carlo Stonebanks\n<[email protected]> wrote:\n>> It might well be checkpoints. Have you tried cranking up checkpoint\n>> segments to something like 100 or more and seeing how it behaves then?\n>\n> No I haven't, althugh it certainly make sense - watching the process run,\n> you get this sense that the system occaisionally pauses to take a deep, long\n> breath before returning to work frantically ;D\n>\n> Checkpoint_segments are currently set to 64. The DB is large and is on a\n> constant state of receiving single-row updates as multiple ETL and\n> refinement processes run continuously.\n>\n> Would you expect going to 100 or more to make an appreciable difference, or\n> should I be more aggressive?\n\nIf you're already at 64 then not much. Probably wouldn't hurt to\ncrank it up more and delay the checkpoints as much as possible during\nthese updates. 64 segments is already 1024M. If you're changing a\nlot more data than that in a single update / insert then cranking them\nup more might help.\n\n What you might need to do is to change your completion target to\nsomething closer to 100% since it's likely that most of the updates /\ninserts are not happening to the same rows over and over, but to\ndifferent rows for each one, the closer you can get to 100% completed\nbefore the next checkpoint the better. This will cause some more IO\nto happen, but will even it out more (hopefully) so that you don't get\ncheckpoint spikes. Look into the checkpoint logging options so you\ncan monitor how they're affecting system performance.\n",
"msg_date": "Thu, 7 Jan 2010 23:21:31 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
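In postgresql.conf terms, the checkpoint suggestions above might look like this; the numbers are illustrative and need to be validated against the actual write load:

checkpoint_segments = 128             # more WAL between checkpoints
checkpoint_completion_target = 0.9    # spread checkpoint writes over the interval
log_checkpoints = on                  # so checkpoint timing/impact can be monitored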
{
"msg_contents": "At 08:17 07/01/2010, Carlo Stonebanks wrote:\n>Our DB has an audit table which is 500M rows and growing. (FYI the objects being audited are grouped semantically, not individual field values).\n>\n>Recently we wanted to add a new feature and we altered the table to add a new column. We are backfilling this varchar(255) column by writing a TCL script to page through the rows (where every update is a UPDATE ... WHERE id >= x AND id < x+10 and a commit is performed after every 1000 updates statement, i.e. every 10000 rows.)\n>\n>We have 10 columns, six of which are indexed. Rough calculations suggest that this will take two to three weeks to complete on an 8-core CPU with more than enough memory.\n>\n>As a ballpark estimate - is this sort of performance for an 500M updates what one would expect of PG given the table structure (detailed below) or should I dig deeper to look for performance issues?\n>\n>As always, thanks!\n>\n>Carlo\n\nYou can dump the table with pg_dumpand get a file like this (only a few columns, not all of them). Note that the last line, has the data in TSV (Tab Separate Format) plus a LF.\n\nSET statement_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = off;\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET escape_string_warning = off;\n\nSET search_path = public, pg_catalog;\n\nSET default_tablespace = '';\n\nSET default_with_oids = false;\n\nCREATE TABLE mdx_core.audit_impt\n(\n audit_impt_id serial NOT NULL,\n impt_session integer,\n impt_version character varying(255),\n}\n\nCOPY mdx_core.audit_impt (audit_impt_id, impt_session, impt_version) FROM stdin;\n1 1 tateti\n2 32 coll\n\nYou can add a new column hacking it, just adding the new column to the schema, the name in the copy statement and a tabulator+data at the end of the line (before the LF). Note that the table name is different from the original.\n\nSET statement_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = off;\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET escape_string_warning = off;\nSET search_path = public, pg_catalog;\nSET default_tablespace = '';\nSET default_with_oids = false;\nCREATE TABLE mdx_core.audit_impt2\n(\n audit_impt_id serial NOT NULL,\n impt_session integer,\n impt_version character varying(255),\n source_table character varying(255) \n}\nCOPY mdx_core.audit_impt2 (audit_impt_id, impt_session, impt_version, source_table) FROM stdin;\n1 1 tateti tentown\n1 32 coll krof\n\n\nAfter this, add indexes, constraints as usual.\n\nHTH\n\n--------------------------------\nEduardo Morrás González\nDept. I+D+i e-Crime Vigilancia Digital\nS21sec Labs\nTlf: +34 902 222 521\nMóvil: +34 555 555 555 \nwww.s21sec.com, blog.s21sec.com \n\n\nSalvo que se indique lo contrario, esta información es CONFIDENCIAL y\ncontiene datos de carácter personal que han de ser tratados conforme a la\nlegislación vigente en materia de protección de datos. Si usted no es\ndestinatario original de este mensaje, le comunicamos que no está autorizado\na revisar, reenviar, distribuir, copiar o imprimir la información en él\ncontenida y le rogamos que proceda a borrarlo de sus sistemas.\n\nKontrakoa adierazi ezean, posta elektroniko honen barruan doana ISILPEKO\ninformazioa da eta izaera pertsonaleko datuak dituenez, indarrean dagoen\ndatu pertsonalak babesteko legediaren arabera tratatu beharrekoa. 
Posta\nhonen hartzaile ez zaren kasuan, jakinarazten dizugu baimenik ez duzula\nbertan dagoen informazioa aztertu, igorri, banatu, kopiatu edo inprimatzeko.\nHortaz, erregutzen dizugu posta hau zure sistemetatik berehala ezabatzea. \n\nAntes de imprimir este mensaje valora si verdaderamente es necesario. De\nesta forma contribuimos a la preservación del Medio Ambiente. \n\n",
"msg_date": "Fri, 08 Jan 2010 12:19:38 +0100",
"msg_from": "Eduardo Morras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> Already done in an earlier post\n \nPerhaps I misunderstood; I thought that post mentioned that the plan\nwas one statement in an iteration, and that the cache would have\nbeen primed by a previous query checking whether there were any rows\nto update. If that was the case, it might be worthwhile to look at\nthe entire flow of an iteration.\n \nAlso, if you ever responded with version and configuration\ninformation, I missed it. The solution to parts of what you\ndescribe would be different in different versions. In particular,\nyou might be able to solve checkpoint-related lockup issues and then\nimprove performance by using bigger batches. Right now I would be\nguessing at what might work for you.\n \n-Kevin\n",
"msg_date": "Fri, 08 Jan 2010 08:11:19 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "> crank it up more and delay the checkpoints as much as possible during\n> these updates. 64 segments is already 1024M.\n\nWe have 425M rows, total table size is 78GB, so we can imagine a worst case \nUPDATE write is less than 200 bytes * number of rows specified in the update \n(is that logic correct?).\n\nInerestingly, the total index size is 148GB, twice that of the table, which \nmay be an indication of where the performance bottleneck is.\n\n",
"msg_date": "Fri, 8 Jan 2010 12:06:14 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Carlo Stonebanks <[email protected]> wrote:\n\n> Inerestingly, the total index size is 148GB, twice that of the table, \n> which may be an indication of where the performance bottleneck is.\n\nMaybe a sign for massive index-bloat?\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Fri, 8 Jan 2010 18:21:47 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": ">I thought that post mentioned that the plan\n> was one statement in an iteration, and that the cache would have\n> been primed by a previous query checking whether there were any rows\n> to update. If that was the case, it might be worthwhile to look at\n> the entire flow of an iteration.\n\nThis is the only SQL query in the code in question - the rest of the code \nmanages the looping and commit. The code was copied to PgAdminIII and values \nwritten in for the WHERE clause. In order for me to validate that rows would \nhave been updated, I had to run a SELECT with the same WHERE clause in \nPgAdminIII first to see how many rows would have qualified. But this was for \ntesting purposes only. The SELECT statement does not exist in the code. The \nvast majority of the rows that will be processed will be updated as this is \na backfill to synch the old rows with the values being filled into new \ncolumns now being inserted.\n\n> Also, if you ever responded with version and configuration\n> information, I missed it.\n\nThis is hosted on a new server the client set up so I am waiting for the \nexact OS and hardware config. PG Version is PostgreSQL 8.3.6, compiled by \nVisual C++ build 1400, OS appears to be Windows 2003 x64 Server.\n\nMore than anything, I am more concerned with the long-term use of the \nsystem. This particular challenge with the 500M row update is one thing, but \nI am concerned about the exceptional effort required to do this. Is it \nREALLY this exceptional to want to update 500M rows of data in this day and \nage? Or is the fact that we are considering dumping and restoring and \ndropping indexes, etc to do all an early warning that we don't have a \nsolution that is scaled to the problem?\n\nConfig data follows (I am assuming commented values which I did not include \nare defaulted).\n\nCarlo\n\nautovacuum = on\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_analyze_threshold = 250\nautovacuum_naptime = 1min\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_vacuum_threshold = 500\nbgwriter_lru_maxpages = 100\ncheckpoint_segments = 64\ncheckpoint_warning = 290\ndatestyle = 'iso, mdy'\ndefault_text_search_config = 'pg_catalog.english'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nlog_destination = 'stderr'\nlog_line_prefix = '%t '\nlogging_collector = on\nmaintenance_work_mem = 16MB\nmax_connections = 200\nmax_fsm_pages = 204800\nmax_locks_per_transaction = 128\nport = 5432\nshared_buffers = 500MB\nvacuum_cost_delay = 100\nwork_mem = 512MB\n\n",
"msg_date": "Fri, 8 Jan 2010 12:38:46 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Fri, Jan 08, 2010 at 12:38:46PM -0500, Carlo Stonebanks wrote:\n>> I thought that post mentioned that the plan\n>> was one statement in an iteration, and that the cache would have\n>> been primed by a previous query checking whether there were any rows\n>> to update. If that was the case, it might be worthwhile to look at\n>> the entire flow of an iteration.\n>\n> This is the only SQL query in the code in question - the rest of the code \n> manages the looping and commit. The code was copied to PgAdminIII and \n> values written in for the WHERE clause. In order for me to validate that \n> rows would have been updated, I had to run a SELECT with the same WHERE \n> clause in PgAdminIII first to see how many rows would have qualified. But \n> this was for testing purposes only. The SELECT statement does not exist in \n> the code. The vast majority of the rows that will be processed will be \n> updated as this is a backfill to synch the old rows with the values being \n> filled into new columns now being inserted.\n>\n>> Also, if you ever responded with version and configuration\n>> information, I missed it.\n>\n> This is hosted on a new server the client set up so I am waiting for the \n> exact OS and hardware config. PG Version is PostgreSQL 8.3.6, compiled by \n> Visual C++ build 1400, OS appears to be Windows 2003 x64 Server.\n>\n> More than anything, I am more concerned with the long-term use of the \n> system. This particular challenge with the 500M row update is one thing, \n> but I am concerned about the exceptional effort required to do this. Is it \n> REALLY this exceptional to want to update 500M rows of data in this day and \n> age? Or is the fact that we are considering dumping and restoring and \n> dropping indexes, etc to do all an early warning that we don't have a \n> solution that is scaled to the problem?\n>\n> Config data follows (I am assuming commented values which I did not include \n> are defaulted).\n>\n> Carlo\n>\n\nHi Carlo,\n\nIt all boils down to resource management and constraints. For small\nproblems relative to the amount of system resources available, little\neffort is needed to have satisfactory performance. As the problems\nconsume more and more of the total resource capacity, you will need\nto know more and more in depth about the workings of every piece of\nthe system to wring the most possible performance out of the system.\nSome of the discussions on this topic have covered a smattering of\ndetails that will become increasingly important as your system scales\nand determine whether or not it will scale. Many times the need for\nupdates on such a massive scale do point to normalization problems.\n\nMy two cents.\n\nCheers,\nKen\n",
"msg_date": "Fri, 8 Jan 2010 12:08:36 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> This is hosted on a new server the client set up so I am waiting for \n> the exact OS and hardware config. PG Version is PostgreSQL 8.3.6, \n> compiled by Visual C++ build 1400, OS appears to be Windows 2003 x64 \n> Server.\n>\n> More than anything, I am more concerned with the long-term use of the \n> system. This particular challenge with the 500M row update is one \n> thing, but I am concerned about the exceptional effort required to do \n> this. Is it REALLY this exceptional to want to update 500M rows of \n> data in this day and age? Or is the fact that we are considering \n> dumping and restoring and dropping indexes, etc to do all an early \n> warning that we don't have a solution that is scaled to the problem?\n\nIt's certainly not common or easy to handle. If someone told me I had \nto make that task well perform well and the tools at hand were anything \nother than a largish UNIX-ish server with a properly designed disk \nsubsystem, I'd tell them it's unlikely to work well. An UPDATE is the \nmost intensive single operation you can do in PostgreSQL; the full \nlifecycle of executing it requires:\n\n-Creating a whole new row\n-Updating all the indexes to point to the new row\n-Marking the original row dead\n-Pruning the original row out of the database once it's no longer \nvisible anywhere (VACUUM)\n\nAs a rule, if you can't fit a large chunk of the indexes involved in \nshared_buffers on your system, this is probably going to have terrible \nperformance. And you're on a platform where that's very hard to do, \neven if there's a lot of RAM around. It sounds like your system spends \nall its time swapping index blocks in and out of the database buffer \ncache here. That suggests there's a design problem here either with \nthose indexes (they're too big) or with the provisioned \nhardware/software combination (needs more RAM, faster disks, or a \nplatform where large amounts of RAM work better).\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 08 Jan 2010 13:12:00 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
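One way to see how the audit table and its indexes compare with shared_buffers is a catalog query along these lines (a sketch; the schema and name pattern are taken from earlier posts):

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'mdx_core'
  AND c.relname LIKE 'audit_impt%'
ORDER BY pg_relation_size(c.oid) DESC;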
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n> In order for me to validate that rows would have been updated, I\n> had to run a SELECT with the same WHERE clause in PgAdminIII first\n> to see how many rows would have qualified. But this was for\n> testing purposes only. The SELECT statement does not exist in the\n> code.\n \nOK, I did misunderstand your earlier post. Got it now, I think.\n \n> This is hosted on a new server the client set up so I am waiting\n> for the exact OS and hardware config. PG Version is PostgreSQL\n> 8.3.6, compiled by Visual C++ build 1400, OS appears to be Windows\n> 2003 x64 Server.\n \nThat might provide more clues, when you get it.\n \n> bgwriter_lru_maxpages = 100\n \nWith the large database size and the apparent checkpoint-related\ndelays, I would make that more aggressive. Getting dirty pages to\nthe OS cache reduces how much physical I/O needs to happen during\ncheckpoint. We use this on our large databases:\n \nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4.0\n \nBoosting your checkpoint_completion_target along with or instead of\na more aggressive background writer might also help.\n \n> max_fsm_pages = 204800\n \nThis looks suspiciously low for the size of your database. If you\ndo a VACUUM VERBOSE (for the database), what do the last few lines\nshow?\n \n> work_mem = 512MB\n \nThat's OK only if you are sure you don't have a lot of connections\nrequesting that much RAM at one time, or you could drive yourself\ninto swapping.\n \nI hope this helps.\n \n-Kevin\n",
"msg_date": "Fri, 08 Jan 2010 12:12:40 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
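To see where these settings currently stand on the server (and compare them with what Kevin suggests), something like this works from psql:

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('bgwriter_lru_maxpages', 'bgwriter_lru_multiplier',
               'checkpoint_completion_target', 'max_fsm_pages', 'work_mem');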
{
"msg_contents": "\n>> crank it up more and delay the checkpoints as much as possible during\n>> these updates. 64 segments is already 1024M.\n>\n> We have 425M rows, total table size is 78GB, so we can imagine a worst \n> case UPDATE write is less than 200 bytes * number of rows specified in \n> the update (is that logic correct?).\n\nThere is also the WAL : all these updates need to be logged, which doubles \nthe UPDATE write throughput. Perhaps you're WAL-bound (every 16MB segment \nneeds fsyncing), and tuning of fsync= and wal_buffers, or a faster WAL \ndisk could help ? (I don't remember your config).\n\n> Inerestingly, the total index size is 148GB, twice that of the table, \n> which may be an indication of where the performance bottleneck is.\n\nIndex updates can create random I/O (suppose you have a btree on a rather \nrandom column)...\n\n",
"msg_date": "Sat, 09 Jan 2010 09:15:11 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
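If the WAL does turn out to be the bottleneck, the relevant postgresql.conf knobs look like this; the wal_buffers value is illustrative only, and fsync should stay on unless losing the most recent transactions in a crash is acceptable:

wal_buffers = 8MB     # the 8.3 default is only 64kB
fsync = on            # leave on; moving pg_xlog to its own disk is the safer speedup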
{
"msg_contents": "My client just informed me that new hardware is available for our DB server.\n\n. Intel Core 2 Quads Quad\n. 48 GB RAM\n. 4 Disk RAID drive (RAID level TBD)\n\nI have put the ugly details of what we do with our DB below, as well as the \npostgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB \nwith very large tables and the server is always busy serving a constant \nstream of single-row UPDATEs and INSERTs from parallel automated processes.\n\nThere are less than 10 users, as the server is devoted to the KB production \nsystem.\n\nMy questions:\n\n1) Which RAID level would you recommend\n2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n3) If we were to port to a *NIX flavour, which would you recommend? (which \nsupport trouble-free PG builds/makes please!)\n4) Is this the right PG version for our needs?\n\nThanks,\n\nCarlo\n\nThe details of our use:\n\n. The DB hosts is a data warehouse and a knowledgebase (KB) tracking the \nprofessional information of 1.3M individuals.\n. The KB tables related to these 130M individuals are naturally also large\n. The DB is in a perpetual state of serving TCL-scripted Extract, Transform \nand Load (ETL) processes\n. These ETL processes typically run 10 at-a-time (i.e. in parallel)\n. We would like to run more, but the server appears to be the bottleneck\n. The ETL write processes are 99% single row UPDATEs or INSERTs.\n. There are few, if any DELETEs\n. The ETL source data are \"import tables\"\n. The import tables are permanently kept in the data warehouse so that we \ncan trace the original source of any information.\n. There are 6000+ and counting\n. The import tables number from dozens to hundreds of thousands of rows. \nThey rarely require more than a pkey index.\n. Linking the KB to the source import date requires an \"audit table\" of 500M \nrows, and counting.\n. The size of the audit table makes it very difficult to manage, especially \nif we need to modify the design.\n. Because we query the audit table different ways to audit the ETL processes \ndecisions, almost every column in the audit table is indexed.\n. The maximum number of physical users is 10 and these users RARELY perform \nany kind of write\n. By contrast, the 10+ ETL processes are writing constantly\n. We find that internal stats drift, for whatever reason, causing row seq \nscans instead of index scans.\n. So far, we have never seen a situation where a seq scan has improved \nperformance, which I would attribute to the size of the tables\n. We believe our requirements are exceptional, and we would benefit \nimmensely from setting up the PG planner to always favour index-oriented \ndecisions - which seems to contradict everything that PG advice suggests as \nbest practice.\n\nCurrent non-default conf settings are:\n\nautovacuum = on\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_analyze_threshold = 250\nautovacuum_naptime = 1min\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_vacuum_threshold = 500\nbgwriter_lru_maxpages = 100\ncheckpoint_segments = 64\ncheckpoint_warning = 290\ndatestyle = 'iso, mdy'\ndefault_text_search_config = 'pg_catalog.english'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nlog_destination = 'stderr'\nlog_line_prefix = '%t '\nlogging_collector = on\nmaintenance_work_mem = 16MB\nmax_connections = 200\nmax_fsm_pages = 204800\nmax_locks_per_transaction = 128\nport = 5432\nshared_buffers = 500MB\nvacuum_cost_delay = 100\nwork_mem = 512MB\n\n\n",
"msg_date": "Thu, 14 Jan 2010 14:17:13 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "New server to improve performance on our large and busy DB - advice?"
},
{
"msg_contents": "Guys, I want to thank you for all of the advice - my client has just made a \nsurprise announcement that he would like to set start from scratch with a \nnew server, so I am afraid that all of this great advice has to be seen in \nthe context of whatever decision is made on that. I am out there, \nhat-in-hand, looking for advice under the PERFORM post: \"New server to \nimprove performance on our large and busy DB - advice?\"\n\nThanks again!\n\nCarlo\n\n\n\"Scott Marlowe\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Thu, Jan 7, 2010 at 2:48 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n>> Doing the updates in smaller chunks resolved these apparent freezes - or,\n>> more specifically, when the application DID freeze, it didn't do it for \n>> more\n>> than 30 seconds. In all likelyhood, this is the OS and the DB thrashing.\n>\n> It might well be checkpoints. Have you tried cranking up checkpoint\n> segments to something like 100 or more and seeing how it behaves then?\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Thu, 14 Jan 2010 14:20:30 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thu, 14 Jan 2010 14:17:13 -0500, \"Carlo Stonebanks\"\n<[email protected]> wrote:\n> My client just informed me that new hardware is available for our DB\n> server.\n> \n> . Intel Core 2 Quads Quad\n> . 48 GB RAM\n> . 4 Disk RAID drive (RAID level TBD)\n> \n> I have put the ugly details of what we do with our DB below, as well as\n> the \n> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\n\n> with very large tables and the server is always busy serving a constant \n> stream of single-row UPDATEs and INSERTs from parallel automated\nprocesses.\n> \n> There are less than 10 users, as the server is devoted to the KB\n> production \n> system.\n> \n> My questions:\n> \n> 1) Which RAID level would you recommend\n\n10\n\n> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n\nIf you have to run Windows... that works.\n\n> 3) If we were to port to a *NIX flavour, which would you recommend?\n(which \n> support trouble-free PG builds/makes please!)\n\nCommunity driven:\nDebian Stable\nCentOS 5\n\nCommercial:\nUbuntu LTS\nRHEL 5\n\n> 4) Is this the right PG version for our needs?\n\nYou want to run at least the latest stable 8.3 series which I believe is\n8.3.9.\nWith the imminent release of 8.5 (6 months), it may be time to move to\n8.4.2 instead.\n\n\nJoshua D. Drake\n\n\n> \n> Thanks,\n> \n> Carlo\n> \n> The details of our use:\n> \n> . The DB hosts is a data warehouse and a knowledgebase (KB) tracking the\n\n> professional information of 1.3M individuals.\n> . The KB tables related to these 130M individuals are naturally also\nlarge\n> . The DB is in a perpetual state of serving TCL-scripted Extract,\n> Transform \n> and Load (ETL) processes\n> . These ETL processes typically run 10 at-a-time (i.e. in parallel)\n> . We would like to run more, but the server appears to be the bottleneck\n> . The ETL write processes are 99% single row UPDATEs or INSERTs.\n> . There are few, if any DELETEs\n> . The ETL source data are \"import tables\"\n> . The import tables are permanently kept in the data warehouse so that\nwe \n> can trace the original source of any information.\n> . There are 6000+ and counting\n> . The import tables number from dozens to hundreds of thousands of rows.\n\n> They rarely require more than a pkey index.\n> . Linking the KB to the source import date requires an \"audit table\" of\n> 500M \n> rows, and counting.\n> . The size of the audit table makes it very difficult to manage,\n> especially \n> if we need to modify the design.\n> . Because we query the audit table different ways to audit the ETL\n> processes \n> decisions, almost every column in the audit table is indexed.\n> . The maximum number of physical users is 10 and these users RARELY\n> perform \n> any kind of write\n> . By contrast, the 10+ ETL processes are writing constantly\n> . We find that internal stats drift, for whatever reason, causing row\nseq \n> scans instead of index scans.\n> . So far, we have never seen a situation where a seq scan has improved \n> performance, which I would attribute to the size of the tables\n> . 
We believe our requirements are exceptional, and we would benefit \n> immensely from setting up the PG planner to always favour index-oriented\n\n> decisions - which seems to contradict everything that PG advice suggests\n> as \n> best practice.\n> \n> Current non-default conf settings are:\n> \n> autovacuum = on\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_analyze_threshold = 250\n> autovacuum_naptime = 1min\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_vacuum_threshold = 500\n> bgwriter_lru_maxpages = 100\n> checkpoint_segments = 64\n> checkpoint_warning = 290\n> datestyle = 'iso, mdy'\n> default_text_search_config = 'pg_catalog.english'\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> log_destination = 'stderr'\n> log_line_prefix = '%t '\n> logging_collector = on\n> maintenance_work_mem = 16MB\n> max_connections = 200\n> max_fsm_pages = 204800\n> max_locks_per_transaction = 128\n> port = 5432\n> shared_buffers = 500MB\n> vacuum_cost_delay = 100\n> work_mem = 512MB\n\n-- \nPostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n",
"msg_date": "Thu, 14 Jan 2010 11:54:09 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\t=?UTF-8?Q?advice=3F?="
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> Guys, I want to thank you for all of the advice - my client has just \n> made a surprise announcement that he would like to set start from \n> scratch with a new server, so I am afraid that all of this great advice \n> has to be seen in the context of whatever decision is made on that. I am \n> out there, hat-in-hand, looking for advice under the PERFORM post: \"New \n> server to improve performance on our large and busy DB - advice?\"\n\nYou might start this as a new topic with a relevant title, and reiterate your database requirements. Otherwise it will get submerged as just a footnote to your original question. It's really nice to be able to quickly find the new-equipment discussions.\n\nCraig\n",
"msg_date": "Thu, 14 Jan 2010 13:19:05 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> My client just informed me that new hardware is available for our DB server.\n>\n> . Intel Core 2 Quads Quad\n> . 48 GB RAM\n> . 4 Disk RAID drive (RAID level TBD)\n>\n> I have put the ugly details of what we do with our DB below, as well as the\n> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\n> with very large tables and the server is always busy serving a constant\n> stream of single-row UPDATEs and INSERTs from parallel automated processes.\n>\n> There are less than 10 users, as the server is devoted to the KB production\n> system.\n>\n> My questions:\n>\n> 1) Which RAID level would you recommend\n\nRAID-10 with a battery backed hardware caching controller.\n\n> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n\nThat's probably the most stable choice out there for Windows.\n\n> 3) If we were to port to a *NIX flavour, which would you recommend? (which\n> support trouble-free PG builds/makes please!)\n\nI'd parrot what Joshua Drake said here. Centos / RHEL / Debian / Ubuntu\n\n> 4) Is this the right PG version for our needs?\n\n8.3 is very stable. Update to the latest. 8.4 seems good, but I've\nhad, and still am having, problems with it crashing in production.\nNot often, maybe once every couple of months, but just enough that I'm\nnot ready to try and use it there yet. And I can't force the same\nfailure in testing, at least not yet.\n\n> The details of our use:\n>\n> . These ETL processes typically run 10 at-a-time (i.e. in parallel)\n> . We would like to run more, but the server appears to be the bottleneck\n> . The ETL write processes are 99% single row UPDATEs or INSERTs.\n\nCan you run the ETL processes in such a way that they can do many\ninserts and updates at once? That would certainly speed things up a\nbit.\n\n> . The size of the audit table makes it very difficult to manage, especially\n> if we need to modify the design.\n\nYou might want to look into partitioning / inheritance if that would help.\n\n> . Because we query the audit table different ways to audit the ETL processes\n> decisions, almost every column in the audit table is indexed.\n\nThis may or may not help. If you're querying it and the part in the\nwhere clause referencing this column isn't very selective, and index\nwon't be chosen anyway. If you've got multiple columns in your where\nclause, the more selective ones will use and index and the rest will\nget filtered out instead of using an index. Look in\npg_stat_user_indexes for indexes that don't get used and drop them\nunless, of course, they're unique indexes.\n\n> . The maximum number of physical users is 10 and these users RARELY perform\n> any kind of write\n> . By contrast, the 10+ ETL processes are writing constantly\n\nYou may be well served by having two servers, one to write to, and a\nslave that is used by the actual users. Our slony slaves have a much\neasier time writing out their data than our master database does.\n\n> . We find that internal stats drift, for whatever reason, causing row seq\n> scans instead of index scans.\n\nYeah, this is a known problem on heavily updated tables and recent\nentries. Cranking up autovacuum a bit can help, but often it requires\nspecial treatment, either by adjusting the autovac analyze threshold\nvalues for just those tables, or running manual analyzes every couple\nof minutes.\n\n> . 
So far, we have never seen a situation where a seq scan has improved\n> performance, which I would attribute to the size of the tables\n\nNot so much the size of the tables, as the size of the request. If\nyou were running aggregates across whole large tables, a seq scan\nwould definitely be the way to go. If you're asking for one row,\nindex scan should win. Somewhere between those two, when you get up\nto hitting some decent percentage of the rows, the switch from index\nscan to seq scan makes sense, and it's likely happening too early for\nyou. Look at random_page_cost and effective_cache_size for starters.\n\n> . We believe our requirements are exceptional, and we would benefit\n> immensely from setting up the PG planner to always favour index-oriented\n> decisions - which seems to contradict everything that PG advice suggests as\n> best practice.\n\nSee previous comment I made up there ^^^ It's not about always using\nindexes, it's about giving the planner the information it needs to\nmake the right choice.\n\n> Current non-default conf settings are:\n>\n> autovacuum = on\n> autovacuum_analyze_scale_factor = 0.1\n\nYou might wanna lower the analyze scale factor if you're having\nproblems with bad query plans on fresh data.\n\n> autovacuum_analyze_threshold = 250\n> autovacuum_naptime = 1min\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_vacuum_threshold = 500\n> bgwriter_lru_maxpages = 100\n> checkpoint_segments = 64\n> checkpoint_warning = 290\n> datestyle = 'iso, mdy'\n> default_text_search_config = 'pg_catalog.english'\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> log_destination = 'stderr'\n> log_line_prefix = '%t '\n> logging_collector = on\n> maintenance_work_mem = 16MB\n> max_connections = 200\n> max_fsm_pages = 204800\n\nThe default tends to be low. Run vacuum verbose to see if you're\noverrunning the max_fsm_pages settings or the max_fsm_relations.\n\n> max_locks_per_transaction = 128\n> port = 5432\n> shared_buffers = 500MB\n> vacuum_cost_delay = 100\n\nThat's REALLY REALLY high. You might want to look at something in the\n5 to 20 range.\n\n> work_mem = 512MB\n",
"msg_date": "Thu, 14 Jan 2010 14:43:29 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
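The unused-index check Scott mentions can be written as a query against pg_stat_user_indexes, for example (a sketch; unique and primary-key indexes should be kept regardless of their counts):

SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname, indexrelname;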
{
"msg_contents": "My bad - I thought I had, so it has been re-posted with a (v2) disclaimer in \nthe title... like THAT will stop the flaming! <g>\n\nThanks for your patience!\n\n\"Craig James\" <[email protected]> wrote in message \nnews:[email protected]...\n> Carlo Stonebanks wrote:\n>> Guys, I want to thank you for all of the advice - my client has just made \n>> a surprise announcement that he would like to set start from scratch with \n>> a new server, so I am afraid that all of this great advice has to be seen \n>> in the context of whatever decision is made on that. I am out there, \n>> hat-in-hand, looking for advice under the PERFORM post: \"New server to \n>> improve performance on our large and busy DB - advice?\"\n>\n> You might start this as a new topic with a relevant title, and reiterate \n> your database requirements. Otherwise it will get submerged as just a \n> footnote to your original question. It's really nice to be able to \n> quickly find the new-equipment discussions.\n>\n> Craig\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n",
"msg_date": "Thu, 14 Jan 2010 17:05:48 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n> 1) Which RAID level would you recommend\n\nIt looks like you stepped over a critical step, which is \"will the \nserver have a good performing RAID card?\". Your whole upgrade could \nunderperform if you make a bad mistake on that part. It's really \nimportant to nail that down, and to benchmark to prove you got what you \nexpected from your hardware vendor.\n\n> 3) If we were to port to a *NIX flavour, which would you recommend? \n> (which support trouble-free PG builds/makes please!)\n\nThe only platform I consider close to trouble free as far as the PG \nbuilds working without issues are RHEL/CentOS, due to the maturity of \nthe PGDG yum repository and how up to date it's kept. Every time I \nwander onto another platform I find the lag and care taken in packaging \nPostgreSQL to be at least a small step down from there.\n\n> 4) Is this the right PG version for our needs?\n\n8.4 removes the FSM, which takes away a common source for unexpected \nperformance issues when you overflow max_fsm_pages one day. If you're \ngoing to deploy 8.3, you need to be more careful to monitor the whole \nVACUUM process; it's easier to ignore in 8.4 and still get by OK. As \nfar as general code stability goes, I think it's a wash at this point. \nYou might discover a bug in 8.4 that causes a regression, but I think \nyou're just as likely to run into a situation that 8.3 handles badly \nthat's improved in 8.4. Hard to say which will work out better in a \nreally general way.\n\n> . We believe our requirements are exceptional, and we would benefit \n> immensely from setting up the PG planner to always favour \n> index-oriented decisions - which seems to contradict everything that \n> PG advice suggests as best practice.\n\nPretty much everyone thinks their requirements are exceptional. It's \nfunny how infrequently that's actually true. The techniques that favor \nindex-use aren't that unique: collect better stats, set basic \nparameters correctly, adjust random_page_cost, investigate plans that \ndon't do what you want to figure out why. It's easy to say there's \nsomething special about your data rather than follow fundamentals here; \nI'd urge you to avoid doing that. The odds that the real issue is that \nyou're feeding the optimizer bad data is more likely than most people \nthink, which brings us to:\n\n> Current non-default conf settings are:\n\nI don't see effective_cache_size listed there. If that's at the \ndefault, I wouldn't be surprised that you're seeing sequential scans \ninstead of indexed ones far too often.\n\n\n> max_connections = 200\n> work_mem = 512MB\n\nThis is a frightening combination by the way.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 00:17:15 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice?"
},
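On the proposed 48 GB machine, that might translate into settings of roughly this shape (illustrative assumptions only, to be checked against how much of the RAM is actually free for the OS cache):

effective_cache_size = 32GB   # planner hint only; allocates no memory
random_page_cost = 2.0        # assumed value for a cache-heavy, index-driven workload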
{
"msg_contents": "> Pretty much everyone thinks their requirements are exceptional. It's \n> funny how infrequently that's actually true. The techniques that favor \n> index-use aren't that unique: collect better stats, set basic parameters \n> correctly, adjust random_page_cost, investigate plans that don't do what \n> you want to figure out why. It's easy to say there's something special \n> about your data rather than follow fundamentals here; I'd urge you to \n> avoid doing that. The odds that the real issue is that you're feeding the \n> optimizer bad data is more likely than most people think, which brings us \n> to:\n\nI understand that. And the answer is usually to go and do and ANALYZE \nmanually (if it isn't this, it will be some dependency on a set-returning \nstored function we wrote before we could specify the rows and cost). My \nquestion is really - why do I need this constant intervention? When we \nrarely do aggregates, when our queries are (nearly) always single row \nqueries (and very rarely more than 50 rows) out of tables that have hundreds \nof thousands to millions of rows, what does it take to NOT have to \nintervene? WHich brings me to your next point:\n\n> I don't see effective_cache_size listed there. If that's at the default, \n> I wouldn't be surprised that you're seeing sequential scans instead of \n> indexed ones far too often.\n\nNice to know - I suspect someone has been messing around with stuff they \ndon't understand. I do know that after some screwing around they got the \nserver to the point that it wouldn't restart and tried to back out until it \nwould.\n\n>> max_connections = 200\n>> work_mem = 512MB\n\n> This is a frightening combination by the way.\n\nLooks like it's connected to the above issue. The real max connection value \nis 1/10th of that.\n\nThanks Greg!\n\nCarlo \n\n",
"msg_date": "Fri, 15 Jan 2010 02:12:22 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice?"
},
{
"msg_contents": "On Thu, Jan 14, 2010 at 8:17 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> . 48 GB RAM\n> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n\nThere is not a 64-bit windows build now - You would be limited to\nshared_buffers at about a gigabyte. Choose Linux\n\nGreetings\nMarcin Mańk\n",
"msg_date": "Fri, 15 Jan 2010 17:48:32 +0100",
"msg_from": "marcin mank <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
{
"msg_contents": "Hi Scott,\n\nSorry for the very late reply on this post, but I'd like to follow up. The \nreason that I took so long to reply was due to this suggestion:\n\n<<Run vacuum verbose to see if you're\noverrunning the max_fsm_pages settings or the max_fsm_relations.\n>>\n\nMy first thought was, does he mean against the entire DB? That would take a \nweek! But, since it was recommended, I decided to see what would happen. So, \nI just ran VACUUM VERBOSE. After five days, it was still vacuuming and the \nserver admin said they needed to bounce the server, which means the command \nnever completed (I kept the log of the progress so far, but don't know if \nthe values you needed would appear at the end. I confess I have no idea how \nto relate the INFO and DETAIL data coming back with regards to max_fsm_pages \nsettings or the max_fsm_relations.\n\nSo, now my questions are:\n\n1) Did you really mean you wanted VACUUM VERBOSE to run against the entire \nDB?\n2) Given my previous comments on the size of the DB (and my thinking that \nthis is an exceptionally large and busy DB) were you expecting it to take \nthis long?\n3) I took no exceptional measures before running it, I didn't stop the \nautomated import processes, I didn't turn off autovacuum. Would this have \naccounted for the time it is taking to THAT degree?\n4) Any other way to get max_fsm_pages settings and max_fsm_relations?\n\nCarlo \n\n",
"msg_date": "Tue, 19 Jan 2010 16:09:56 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice?"
},
{
"msg_contents": "On Tue, Jan 19, 2010 at 2:09 PM, Carlo Stonebanks\n<[email protected]> wrote:\n> Hi Scott,\n>\n> Sorry for the very late reply on this post, but I'd like to follow up. The\n> reason that I took so long to reply was due to this suggestion:\n>\n> <<Run vacuum verbose to see if you're\n> overrunning the max_fsm_pages settings or the max_fsm_relations.\n>>>\n>\n> My first thought was, does he mean against the entire DB? That would take a\n> week! But, since it was recommended, I decided to see what would happen. So,\n> I just ran VACUUM VERBOSE. After five days, it was still vacuuming and the\n> server admin said they needed to bounce the server, which means the command\n> never completed (I kept the log of the progress so far, but don't know if\n> the values you needed would appear at the end. I confess I have no idea how\n> to relate the INFO and DETAIL data coming back with regards to max_fsm_pages\n> settings or the max_fsm_relations.\n\nyeah, the values are at the end. Sounds like your vacuum settings are\ntoo non-aggresive. Generally this is the vacuum cost delay being too\nhigh.\n\n> So, now my questions are:\n>\n> 1) Did you really mean you wanted VACUUM VERBOSE to run against the entire\n> DB?\n\nYes. A whole db at least. However...\n\n> 2) Given my previous comments on the size of the DB (and my thinking that\n> this is an exceptionally large and busy DB) were you expecting it to take\n> this long?\n\nYes, I was figuring it would be a while. However...\n\n> 3) I took no exceptional measures before running it, I didn't stop the\n> automated import processes, I didn't turn off autovacuum. Would this have\n> accounted for the time it is taking to THAT degree?\n\nNah, not really. However...\n\n> 4) Any other way to get max_fsm_pages settings and max_fsm_relations?\n\nYes! You can run vacuum verbose against the regular old postgres\ndatabase (or just create one for testing with nothing in it) and\nyou'll still get the fsm usage numbers from that! So, no need to run\nit against the big db. However, if regular vacuum verbose couldn't\nfinish in a week, then you've likely got vacuum and autovacuum set to\nbe too timid in their operation, and may be getting pretty bloated as\nwe speak. Once the fsm gets too blown out of the water, it's quicker\nto dump and reload the whole DB than to try and fix it.\n",
"msg_date": "Tue, 19 Jan 2010 15:57:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
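A sketch of the shortcut Scott describes: the free space map totals are cluster-wide, so vacuuming any small database in the same cluster reports them without touching the big one (the psql invocation in the comment is just an example):

-- connect to the tiny built-in database rather than the production one, e.g.:
--   psql postgres
VACUUM VERBOSE;
-- the cluster-wide FSM summary is printed at the very end; when it overflows you
-- get a NOTICE like "number of page slots needed (...) exceeds max_fsm_pages (...)"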
{
"msg_contents": "> yeah, the values are at the end. Sounds like your vacuum settings are\n> too non-aggresive. Generally this is the vacuum cost delay being too\n> high.\n\nOf course, I have to ask: what's the down side?\n\n> Yes! You can run vacuum verbose against the regular old postgres\n> database (or just create one for testing with nothing in it) and\n> you'll still get the fsm usage numbers from that! So, no need to run\n> it against the big db. However, if regular vacuum verbose couldn't\n> finish in a week, then you've likely got vacuum and autovacuum set to\n> be too timid in their operation, and may be getting pretty bloated as\n> we speak. Once the fsm gets too blown out of the water, it's quicker\n> to dump and reload the whole DB than to try and fix it.\n\nMy client reports this is what they actualyl do on a monthly basis.\n\nAnd the numbers are in:\n\n>> NOTICE: number of page slots needed (4090224) exceeds max_fsm_pages \n>> (204800)\n>> HINT: Consider increasing the configuration parameter \"max_fsm_pages\" to \n>> a value over 4090224.\n\nGee, only off by a factor of 20. What happens if I go for this number (once \nagain, what's the down side)?\n\nCarlo \n\n",
"msg_date": "Wed, 20 Jan 2010 15:03:27 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice?"
},
{
"msg_contents": "\"Carlo Stonebanks\" <[email protected]> wrote:\n \n>> yeah, the values are at the end. Sounds like your vacuum\n>> settings are too non-aggresive. Generally this is the vacuum\n>> cost delay being too high.\n> \n> Of course, I have to ask: what's the down side?\n \nIf you make it too aggressive, it could impact throughput or\nresponse time. Odds are that the bloat from having it not\naggressive enough is currently having a worse impact.\n \n>> Once the fsm gets too blown out of the water, it's quicker\n>> to dump and reload the whole DB than to try and fix it.\n> \n> My client reports this is what they actualyl do on a monthly\n> basis.\n \nThe probably won't need to do that with proper configuration and\nvacuum policies.\n \n>>> NOTICE: number of page slots needed (4090224) exceeds\n>>> max_fsm_pages (204800)\n>>> HINT: Consider increasing the configuration parameter\n>>> \"max_fsm_pages\" to a value over 4090224.\n> \n> Gee, only off by a factor of 20. What happens if I go for this\n> number (once again, what's the down side)?\n \nIt costs six bytes of shared memory per entry.\n \nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM\n \n-Kevin\n",
"msg_date": "Wed, 20 Jan 2010 14:36:39 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large\n\tand busy DB - advice?"
},
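Putting the six-bytes-per-slot figure against the number the NOTICE reported, as a quick worked example:

-- 4,090,224 page slots at 6 bytes of shared memory each:
SELECT pg_size_pretty((4090224 * 6)::bigint);   -- about 23 MB
-- so raising max_fsm_pages to cover the reported need costs on the order of tens
-- of megabytes of shared memory, small next to the bloat an overflowing FSM causes.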
{
"msg_contents": "On Wed, Jan 20, 2010 at 3:03 PM, Carlo Stonebanks\n<[email protected]> wrote:\n>> Yes! You can run vacuum verbose against the regular old postgres\n>> database (or just create one for testing with nothing in it) and\n>> you'll still get the fsm usage numbers from that! So, no need to run\n>> it against the big db. However, if regular vacuum verbose couldn't\n>> finish in a week, then you've likely got vacuum and autovacuum set to\n>> be too timid in their operation, and may be getting pretty bloated as\n>> we speak. Once the fsm gets too blown out of the water, it's quicker\n>> to dump and reload the whole DB than to try and fix it.\n>\n> My client reports this is what they actualyl do on a monthly basis.\n\nSomething is deeply wrong with your client's vacuuming policies.\n\n...Robert\n",
"msg_date": "Wed, 20 Jan 2010 21:03:33 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
{
"msg_contents": "Scott Marlowe escribi�:\n> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks\n> <[email protected]> wrote:\n\n> > 4) Is this the right PG version for our needs?\n> \n> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've\n> had, and still am having, problems with it crashing in production.\n> Not often, maybe once every couple of months, but just enough that I'm\n> not ready to try and use it there yet. And I can't force the same\n> failure in testing, at least not yet.\n\nuh. Is there a report of the crash somewhere with details, say stack\ntraces and such?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 21 Jan 2010 12:51:51 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice?"
},
{
"msg_contents": "On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera\n<[email protected]> wrote:\n> Scott Marlowe escribió:\n>> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks\n>> <[email protected]> wrote:\n>\n>> > 4) Is this the right PG version for our needs?\n>>\n>> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've\n>> had, and still am having, problems with it crashing in production.\n>> Not often, maybe once every couple of months, but just enough that I'm\n>> not ready to try and use it there yet. And I can't force the same\n>> failure in testing, at least not yet.\n>\n> uh. Is there a report of the crash somewhere with details, say stack\n> traces and such?\n\nNo, the only server that does this is in production as our stats db\nand when it happens it usually gets restarted immediately. It does\nthis about once every two months. Do the PGDG releases have debugging\nsymbols and what not? I'll see about having a stack trace ready to\nrun for the next time it does this.\n",
"msg_date": "Thu, 21 Jan 2010 09:41:19 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
{
"msg_contents": "Scott Marlowe escribi�:\n> On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera\n> <[email protected]> wrote:\n> > Scott Marlowe escribi�:\n> >> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks\n> >> <[email protected]> wrote:\n> >\n> >> > 4) Is this the right PG version for our needs?\n> >>\n> >> 8.3 is very stable. �Update to the latest. �8.4 seems good, but I've\n> >> had, and still am having, problems with it crashing in production.\n> >> Not often, maybe once every couple of months, but just enough that I'm\n> >> not ready to try and use it there yet. �And I can't force the same\n> >> failure in testing, at least not yet.\n> >\n> > uh. �Is there a report of the crash somewhere with details, say stack\n> > traces and such?\n> \n> No, the only server that does this is in production as our stats db\n> and when it happens it usually gets restarted immediately. It does\n> this about once every two months. Do the PGDG releases have debugging\n> symbols and what not? I'll see about having a stack trace ready to\n> run for the next time it does this.\n\nYou mean the RPMs? Yes, I think Devrim publishes debuginfo packages\nwhich you need to install separately.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 21 Jan 2010 13:44:40 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice?"
},
{
"msg_contents": "On Thu, Jan 21, 2010 at 9:44 AM, Alvaro Herrera\n<[email protected]> wrote:\n> Scott Marlowe escribió:\n>> On Thu, Jan 21, 2010 at 8:51 AM, Alvaro Herrera\n>> <[email protected]> wrote:\n>> > Scott Marlowe escribió:\n>> >> On Thu, Jan 14, 2010 at 12:17 PM, Carlo Stonebanks\n>> >> <[email protected]> wrote:\n>> >\n>> >> > 4) Is this the right PG version for our needs?\n>> >>\n>> >> 8.3 is very stable. Update to the latest. 8.4 seems good, but I've\n>> >> had, and still am having, problems with it crashing in production.\n>> >> Not often, maybe once every couple of months, but just enough that I'm\n>> >> not ready to try and use it there yet. And I can't force the same\n>> >> failure in testing, at least not yet.\n>> >\n>> > uh. Is there a report of the crash somewhere with details, say stack\n>> > traces and such?\n>>\n>> No, the only server that does this is in production as our stats db\n>> and when it happens it usually gets restarted immediately. It does\n>> this about once every two months. Do the PGDG releases have debugging\n>> symbols and what not? I'll see about having a stack trace ready to\n>> run for the next time it does this.\n>\n> You mean the RPMs? Yes, I think Devrim publishes debuginfo packages\n> which you need to install separately.\n\nWell crap, this one was built from source, and not with debugging.\nGimme a day or so and I'll have it rebuilt with debug and can run a\nuseful backtrace on it.\n",
"msg_date": "Thu, 21 Jan 2010 09:49:55 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice?"
},
{
"msg_contents": "On Thu, 2010-01-21 at 13:44 -0300, Alvaro Herrera wrote:\n> I think Devrim publishes debuginfo packages which you need to install\n> separately. \n\nRight.\n-- \nDevrim GÜNDÜZ, RHCE\nCommand Prompt - http://www.CommandPrompt.com \ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz",
"msg_date": "Thu, 21 Jan 2010 21:45:09 +0200",
"msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice?"
},
{
"msg_contents": "Hi Greg,\n\nAs a follow up to this suggestion:\n\n>> I don't see effective_cache_size listed there. If that's at the default, \n>> I wouldn't be surprised that you're seeing sequential scans instead of \n>> indexed ones far too often.\n\nI found an article written by you \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-5minute.htm and thought \nthis was pretty useful, and especially this comment:\n\n<<effective_cache_size should be set to how much memory is leftover for disk \ncaching after taking into account what's used by the operating system, \ndedicated PostgreSQL memory, and other applications. If it's set too low, \nindexes may not be used for executing queries the way you'd expect. Setting \neffective_cache_size to 1/2 of total memory would be a normal conservative \nsetting. You might find a better estimate by looking at your operating \nsystem's statistics. On UNIX-like systems, add the free+cached numbers from \nfree or top. On Windows see the \"System Cache\" in the Windows Task Manager's \nPerformance tab.\n>>\n\nAre these values to look at BEFORE starting PG? If so, how do I relate the \nvalues returned to setting the effective_cache_size values?\n\nCarlo\n\nPS Loved your 1995 era pages. Being a musician, it was great to read your \nrecommendations on how to buy these things called \"CD's\". I Googled the \nterm, and they appear to be some ancient precursor to MP3s which people \nactually PAID for. What kind of stone were they engraved on? ;-D\n\n\n",
"msg_date": "Fri, 22 Jan 2010 12:37:56 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice?"
},
{
"msg_contents": "Carlo Stonebanks wrote:\n>\n> <<effective_cache_size should be set to how much memory is leftover \n> for disk caching after taking into account what's used by the \n> operating system, dedicated PostgreSQL memory, and other applications. \n> If it's set too low, indexes may not be used for executing queries the \n> way you'd expect. Setting effective_cache_size to 1/2 of total memory \n> would be a normal conservative setting. You might find a better \n> estimate by looking at your operating system's statistics. On \n> UNIX-like systems, add the free+cached numbers from free or top. On \n> Windows see the \"System Cache\" in the Windows Task Manager's \n> Performance tab.\n>>>\n> Are these values to look at BEFORE starting PG? If so, how do I relate \n> the values returned to setting the effective_cache_size values?\n>\nAfter starting the database. You can set effective_cache_size to a size \nin megabytes, so basically you'd look at the amount of free cache, maybe \nround down a bit, and set effective_cache_size to exactly that. It's \nnot super important to get the number right. The point is that the \ndefault is going to be a tiny number way smaller than the RAM in your \nsystem, and even getting it within a factor of 2 or 3 of reality will \nradically change some types of query plans.\n\n> PS Loved your 1995 era pages. Being a musician, it was great to read \n> your recommendations on how to buy these things called \"CD's\". I \n> Googled the term, and they appear to be some ancient precursor to MP3s \n> which people actually PAID for. What kind of stone were they engraved \n> on? ;-D\n\nThey're plastic, just like the iPod, iPhone, iToilet, or whatever other \nwhite plastic Apple products people listen to music during this new \nera. Since both my CD collection and the stereo I listen to them on are \neach individually worth more than my car, it's really tough to sell me \non all the terrible sounding MP3s I hear nowadays. I'm the guy who can \ntell you how the LP, regular CD, gold CD, and SACD/DVD-A for albums I \nlike all compare, so dropping below CD quality is right out. If you \never find yourself going \"hey, I wish I had six different versions of \n'Dark Side of the Moon' around so I could compare the subtle differences \nin the mastering and mix on each of them\", I'm your guy.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 26 Jan 2010 16:53:21 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice?"
}
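A sketch of what that looks like in practice; the 40GB figure below is purely illustrative and stands in for whatever free/top (or the Windows System Cache counter) actually reports on the machine:

-- in postgresql.conf:  effective_cache_size = 40GB
-- or, to experiment within one session before touching the config file:
SET effective_cache_size = '40GB';
SHOW effective_cache_size;
-- this is only a planner hint and allocates no memory, so a somewhat generous
-- estimate is far safer than leaving the tiny default in place.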
] |
[
{
"msg_contents": "Hi all,\n\nfollowing the simple but interesting air-traffic benchmark published at:\nhttp://www.mysqlperformanceblog.com/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/\n\nI decided to run the benchmark over postgres to get some more\nexperience and insights. Unfortunately, the query times I got from\npostgres were not the expected ones: the fast ones were in the order\nof 5 minutes while the slow ones in the order of 40 minutes. I came to\nthe conclusion that some parameters are not set correctly. I give all\nthe information I can think of in the sequel of this email, to\nhopefully get some useful suggestions to improve query times. Thank\nyou very much for your time!\n\nAll of the schema, data and query information can be found on the\noriginal blog post given above.\n\nThe machine runs Fedora 10:\nuname -a:\nLinux 2.6.27.30-170.2.82.fc10.x86_64 #1 SMP Mon Aug 17 08:18:34 EDT\n2009 x86_64 x86_64 x86_64 GNU/Linux\n\nThe hardware characteristics are:\nPlatform Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz with 8GB RAM and\nample disk space (2x 500 GB SATA disk @ 7200 RPM as SW-RAID-0)\n\nI used postgresql-8.4.2. Source download and compiled by hand, no\nspecial parameters where passed to the configure phase.\n\n-- DETAILS on loading, analyze and index creation on postgresql-8.4.2\n-- loading time was 77m15.976s\n-- total rows: 119790558\n-- total size of pgdata/base: 46G\nANALYZE;\n-- 219698.365ms aka 3m39.698s\nCREATE INDEX year_month_idx ON ontime (\"Year\", \"Month\");\n-- 517481.632ms aka 8m37.481s\nCREATE INDEX DepDelay_idx ON ontime (\"DepDelay\");\n-- 1257051.668ms aka 20m57.051s\n\nQuery 1:\n\nSELECT \"DayOfWeek\", count(*) AS c FROM ontime WHERE \"Year\" BETWEEN\n2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER BY c DESC;\n\nReported query times are (in sec):\nMonetDB 7.9s\nInfoBright 12.13s\nLucidDB 54.8s\n\nFor pg-8.4.2 I got with 3 consecutive runs on the server:\n5m52.384s\n5m55.885s\n5m54.309s\n\nFor more complex queries, postgres will become even more slower, that\nis for queries from Q2 to Q8 will need over 40 minutes, while on the\nrest ones around 5 minutes. By examining the explain analyze in these\nqueries, the main slow down factor is the HashAggregate and mainly\nSort operations. For query 1 the explain analyze returns:\n\nairtraffic=# EXPLAIN ANALYZE SELECT \"DayOfWeek\", count(*) AS c FROM\nontime WHERE \"Year\" BETWEEN 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER\nBY c DESC;\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\ntime=371188.821..371188.823 rows=7 loops=1)\n Sort Key: (count(*))\n Sort Method: quicksort Memory: 25kB\n -> HashAggregate (cost=7407754.04..7407754.08 rows=4 width=2)\n(actual time=371163.316..371163.320 rows=7 loops=1)\n -> Seq Scan on ontime (cost=0.00..7143875.40 rows=52775727\nwidth=2) (actual time=190938.959..346180.079 rows=52484047 loops=1)\n Filter: ((\"Year\" >= 2000) AND (\"Year\" <= 2008))\n Total runtime: 371201.156 ms\n(7 rows)\n\n\nI understand that the problem here is the memory used by the sort\nmethod. *But*, I already changed the work_mem parameter to 6gigs:)\n\nairtraffic=# show work_mem;\n work_mem\n----------\n 6GB\n(1 row)\n\nAnd when I saw that this did not work, I also changed the 8.3 style\n(?) 
parameter sort_mem:\nairtraffic=# show sort_mem;\n work_mem\n----------\n 6GB\n(1 row)\n\nwhich means that of course work_mem == sort_mem, as\nsuch, shouldn't be the case that the sort algorithm should have used\nmuch more memory?\n\nI also attach the output of 'show all;' so you can advice me in any\nother configuration settings that I might need to change to perform\nbetter.\n\n\nThank you very much for your feedback and all of your help!\n\nlefteris sidirourgos\n\nP.S. sorry if you get this message twice but the first one does not\nappear to have reached the list\n\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nairtraffic=# show all;\n\n name | setting\n |\n description\n---------------------------------+---------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------\n add_missing_from | off\n | Automatically adds missing table references to\nFROM clauses.\n allow_system_table_mods | off\n | Allows modifications of the structure of system\ntables.\n archive_command | (disabled)\n | Sets the shell command that will be called to\narchive a WAL file.\n archive_mode | off\n | Allows archiving of WAL files using\narchive_command.\n archive_timeout | 0\n | Forces a switch to the next xlog file if a new\nfile has not been started within N seconds.\n array_nulls | on\n | Enable input of NULL elements in arrays.\n authentication_timeout | 1min\n | Sets the maximum allowed time to complete client\nauthentication.\n autovacuum | on\n | Starts the autovacuum subprocess.\n autovacuum_analyze_scale_factor | 0.1\n | Number of tuple inserts, updates or deletes\nprior to analyze as a fraction of reltuples.\n autovacuum_analyze_threshold | 50\n | Minimum number of tuple inserts, updates or\ndeletes prior to analyze.\n autovacuum_freeze_max_age | 200000000\n | Age at which to autovacuum a table to prevent\ntransaction ID wraparound.\n autovacuum_max_workers | 3\n | Sets the maximum number of simultaneously\nrunning autovacuum worker processes.\n autovacuum_naptime | 1min\n | Time to sleep between autovacuum runs.\n autovacuum_vacuum_cost_delay | 20ms\n | Vacuum cost delay in milliseconds, for\nautovacuum.\n autovacuum_vacuum_cost_limit | -1\n | Vacuum cost amount available before napping, for\nautovacuum.\n autovacuum_vacuum_scale_factor | 0.2\n | Number of tuple updates or deletes prior to\nvacuum as a fraction of reltuples.\n autovacuum_vacuum_threshold | 50\n | Minimum number of tuple updates or deletes prior\nto vacuum.\n backslash_quote | safe_encoding\n | Sets whether \"\\'\" is allowed in string literals.\n bgwriter_delay | 200ms\n | Background writer sleep time between rounds.\n bgwriter_lru_maxpages | 100\n | Background writer maximum number of LRU pages to\nflush per round.\n bgwriter_lru_multiplier | 2\n | Multiple of the average buffer usage to free per\nround.\n block_size | 8192\n | Shows the size of a disk block.\n bonjour_name |\n | Sets the Bonjour broadcast service name.\n check_function_bodies | on\n | Check function bodies during CREATE FUNCTION.\n checkpoint_completion_target | 0.5\n | Time spent flushing dirty buffers during\ncheckpoint, as fraction of checkpoint interval.\n checkpoint_segments | 30\n | Sets the maximum distance in log segments\nbetween automatic WAL checkpoints.\n checkpoint_timeout | 5min\n | Sets the maximum time between automatic WAL\ncheckpoints.\n 
checkpoint_warning | 30s\n | Enables warnings if checkpoint segments are\nfilled more frequently than this.\n client_encoding | UTF8\n | Sets the client's character set encoding.\n client_min_messages | notice\n | Sets the message levels that are sent to the\nclient.\n commit_delay | 0\n | Sets the delay in microseconds between\ntransaction commit and flushing WAL to disk.\n commit_siblings | 5\n | Sets the minimum concurrent open transactions\nbefore performing commit_delay.\n config_file |\n/export/scratch0/lsidir/postgres/pgdata/postgresql.conf | Sets the\nserver's main configuration file.\n constraint_exclusion | partition\n | Enables the planner to use constraints to\noptimize queries.\n cpu_index_tuple_cost | 0.005\n | Sets the planner's estimate of the cost of\nprocessing each index entry during an index scan.\n cpu_operator_cost | 0.0025\n | Sets the planner's estimate of the cost of\nprocessing each operator or function call.\n cpu_tuple_cost | 0.01\n | Sets the planner's estimate of the cost of\nprocessing each tuple (row).\n cursor_tuple_fraction | 0.1\n | Sets the planner's estimate of the fraction of a\ncursor's rows that will be retrieved.\n custom_variable_classes |\n | Sets the list of known custom variable classes.\n data_directory |\n/export/scratch0/lsidir/postgres/pgdata | Sets the\nserver's data directory.\n DateStyle | ISO, DMY\n | Sets the display format for date and time\nvalues.\n db_user_namespace | off\n | Enables per-database user names.\n deadlock_timeout | 1s\n | Sets the time to wait on a lock before checking\nfor deadlock.\n debug_assertions | off\n | Turns on various assertion checks.\n debug_pretty_print | on\n | Indents parse and plan tree displays.\n debug_print_parse | off\n | Logs each query's parse tree.\n debug_print_plan | off\n | Logs each query's execution plan.\n debug_print_rewritten | off\n | Logs each query's rewritten parse tree.\n default_statistics_target | 100\n | Sets the default statistics target.\n default_tablespace |\n | Sets the default tablespace to create tables and\nindexes in.\n default_text_search_config | pg_catalog.english\n | Sets default text search configuration.\n default_transaction_isolation | read committed\n | Sets the transaction isolation level of each new\ntransaction.\n default_transaction_read_only | off\n | Sets the default read-only status of new\ntransactions.\n default_with_oids | off\n | Create new tables with OIDs by default.\n dynamic_library_path | $libdir\n | Sets the path for dynamically loadable modules.\n effective_cache_size | 8MB\n | Sets the planner's assumption about the size of\nthe disk cache.\n effective_io_concurrency | 1\n | Number of simultaneous requests that can be\nhandled efficiently by the disk subsystem.\n enable_bitmapscan | on\n | Enables the planner's use of bitmap-scan plans.\n enable_hashagg | on\n | Enables the planner's use of hashed aggregation\nplans.\n enable_hashjoin | on\n | Enables the planner's use of hash join plans.\n enable_indexscan | on\n | Enables the planner's use of index-scan plans.\n enable_mergejoin | on\n | Enables the planner's use of merge join plans.\n enable_nestloop | on\n | Enables the planner's use of nested-loop join\nplans.\n enable_seqscan | on\n | Enables the planner's use of sequential-scan\nplans.\n enable_sort | on\n | Enables the planner's use of explicit sort\nsteps.\n enable_tidscan | on\n | Enables the planner's use of TID scan plans.\n escape_string_warning | on\n | Warn about backslash escapes in ordinary string\nliterals.\n 
external_pid_file |\n | Writes the postmaster PID to the specified file.\n extra_float_digits | 0\n | Sets the number of digits displayed for\nfloating-point values.\n from_collapse_limit | 8\n | Sets the FROM-list size beyond which subqueries\nare not collapsed.\n fsync | on\n | Forces synchronization of updates to disk.\n full_page_writes | on\n | Writes full pages to WAL when first modified\nafter a checkpoint.\n geqo | on\n | Enables genetic query optimization.\n geqo_effort | 5\n | GEQO: effort is used to set the default for\nother GEQO parameters.\n geqo_generations | 0\n | GEQO: number of iterations of the algorithm.\n geqo_pool_size | 0\n | GEQO: number of individuals in the population.\n geqo_selection_bias | 2\n | GEQO: selective pressure within the population.\n geqo_threshold | 12\n | Sets the threshold of FROM items beyond which\nGEQO is used.\n gin_fuzzy_search_limit | 0\n | Sets the maximum allowed result for exact search\nby GIN.\n hba_file |\n/export/scratch0/lsidir/postgres/pgdata/pg_hba.conf | Sets the\nserver's \"hba\" configuration file.\n ident_file |\n/export/scratch0/lsidir/postgres/pgdata/pg_ident.conf | Sets the\nserver's \"ident\" configuration file.\n ignore_system_indexes | off\n | Disables reading from system indexes.\n integer_datetimes | on\n | Datetimes are integer based.\n IntervalStyle | postgres\n | Sets the display format for interval values.\n join_collapse_limit | 8\n | Sets the FROM-list size beyond which JOIN\nconstructs are not flattened.\n krb_caseins_users | off\n | Sets whether Kerberos and GSSAPI user names\nshould be treated as case-insensitive.\n krb_server_keyfile |\n | Sets the location of the Kerberos server key\nfile.\n krb_srvname | postgres\n | Sets the name of the Kerberos service.\n lc_collate | en_GB.UTF-8\n | Shows the collation order locale.\n lc_ctype | en_GB.UTF-8\n | Shows the character classification and case\nconversion locale.\n lc_messages | en_GB.UTF-8\n | Sets the language in which messages are\ndisplayed.\n lc_monetary | en_GB.UTF-8\n | Sets the locale for formatting monetary amounts.\n lc_numeric | en_GB.UTF-8\n | Sets the locale for formatting numbers.\n lc_time | en_GB.UTF-8\n | Sets the locale for formatting date and time\nvalues.\n listen_addresses | localhost\n | Sets the host name or IP address(es) to listen\nto.\n local_preload_libraries |\n | Lists shared libraries to preload into each\nbackend.\n log_autovacuum_min_duration | -1\n | Sets the minimum execution time above which\nautovacuum actions will be logged.\n log_checkpoints | off\n | Logs each checkpoint.\n log_connections | off\n | Logs each successful connection.\n log_destination | stderr\n | Sets the destination for server log output.\n log_directory | pg_log\n | Sets the destination directory for log files.\n log_disconnections | off\n | Logs end of a session, including duration.\n log_duration | off\n | Logs the duration of each completed SQL\nstatement.\n log_error_verbosity | default\n | Sets the verbosity of logged messages.\n log_executor_stats | off\n | Writes executor performance statistics to the\nserver log.\n log_filename | postgresql-%Y-%m-%d_%H%M%S.log\n | Sets the file name pattern for log files.\n log_hostname | off\n | Logs the host name in the connection logs.\n log_line_prefix |\n | Controls information prefixed to each log line.\n log_lock_waits | off\n | Logs long lock waits.\n log_min_duration_statement | -1\n | Sets the minimum execution time above which\nstatements will be logged.\n log_min_error_statement | error\n | Causes all 
statements generating error at or\nabove this level to be logged.\n log_min_messages | warning\n | Sets the message levels that are logged.\n log_parser_stats | off\n | Writes parser performance statistics to the\nserver log.\n log_planner_stats | off\n | Writes planner performance statistics to the\nserver log.\n log_rotation_age | 1d\n | Automatic log file rotation will occur after N\nminutes.\n log_rotation_size | 10MB\n | Automatic log file rotation will occur after N\nkilobytes.\n log_statement | none\n | Sets the type of statements logged.\n log_statement_stats | off\n | Writes cumulative performance statistics to the\nserver log.\n log_temp_files | -1\n | Log the use of temporary files larger than this\nnumber of kilobytes.\n log_timezone | Europe/Amsterdam\n | Sets the time zone to use in log messages.\n log_truncate_on_rotation | off\n | Truncate existing log files of same name during\nlog rotation.\n logging_collector | off\n | Start a subprocess to capture stderr output\nand/or csvlogs into log files.\n maintenance_work_mem | 2GB\n | Sets the maximum memory to be used for\nmaintenance operations.\n max_connections | 100\n | Sets the maximum number of concurrent\nconnections.\n max_files_per_process | 1000\n | Sets the maximum number of simultaneously open\nfiles for each server process.\n max_function_args | 100\n | Shows the maximum number of function arguments.\n max_identifier_length | 63\n | Shows the maximum identifier length.\n max_index_keys | 32\n | Shows the maximum number of index keys.\n max_locks_per_transaction | 64\n | Sets the maximum number of locks per\ntransaction.\n max_prepared_transactions | 0\n | Sets the maximum number of simultaneously\nprepared transactions.\n max_stack_depth | 2MB\n | Sets the maximum stack depth, in kilobytes.\n password_encryption | on\n | Encrypt passwords.\n port | 5432\n | Sets the TCP port the server listens on.\n post_auth_delay | 0\n | Waits N seconds on connection startup after\nauthentication.\n pre_auth_delay | 0\n | Waits N seconds on connection startup before\nauthentication.\n random_page_cost | 4\n | Sets the planner's estimate of the cost of a\nnonsequentially fetched disk page.\n regex_flavor | advanced\n | Sets the regular expression \"flavor\".\n search_path | \"$user\",public\n | Sets the schema search order for names that are\nnot schema-qualified.\n segment_size | 1GB\n | Shows the number of pages per disk file.\n seq_page_cost | 1\n | Sets the planner's estimate of the cost of a\nsequentially fetched disk page.\n server_encoding | UTF8\n | Sets the server (database) character set\nencoding.\n server_version | 8.4.2\n | Shows the server version.\n server_version_num | 80402\n | Shows the server version as an integer.\n session_replication_role | origin\n | Sets the session's behavior for triggers and\nrewrite rules.\n shared_buffers | 32MB\n | Sets the number of shared memory buffers used by\nthe server.\n shared_preload_libraries |\n | Lists shared libraries to preload into server.\n silent_mode | off\n | Runs the server silently.\n sql_inheritance | on\n | Causes subtables to be included by default in\nvarious commands.\n ssl | off\n | Enables SSL connections.\n standard_conforming_strings | off\n | Causes '...' 
strings to treat backslashes\nliterally.\n statement_timeout | 0\n | Sets the maximum allowed duration of any\nstatement.\n stats_temp_directory | pg_stat_tmp\n | Writes temporary statistics files to the\nspecified directory.\n superuser_reserved_connections | 3\n | Sets the number of connection slots reserved for\nsuperusers.\n synchronize_seqscans | on\n | Enable synchronized sequential scans.\n synchronous_commit | on\n | Sets immediate fsync at commit.\n syslog_facility | local0\n | Sets the syslog \"facility\" to be used when\nsyslog enabled.\n syslog_ident | postgres\n | Sets the program name used to identify\nPostgreSQL messages in syslog.\n tcp_keepalives_count | 0\n | Maximum number of TCP keepalive retransmits.\n tcp_keepalives_idle | 0\n | Time between issuing TCP keepalives.\n tcp_keepalives_interval | 0\n | Time between TCP keepalive retransmits.\n temp_buffers | 1024\n | Sets the maximum number of temporary buffers\nused by each session.\n temp_tablespaces |\n | Sets the tablespace(s) to use for temporary\ntables and sort files.\n TimeZone | Europe/Amsterdam\n | Sets the time zone for displaying and\ninterpreting time stamps.\n timezone_abbreviations | Default\n | Selects a file of time zone abbreviations.\n trace_notify | off\n | Generates debugging output for LISTEN and\nNOTIFY.\n trace_sort | off\n | Emit information about resource usage in\nsorting.\n track_activities | on\n | Collects information about executing commands.\n track_activity_query_size | 1024\n | Sets the size reserved for\npg_stat_activity.current_query, in bytes.\n track_counts | on\n | Collects statistics on database activity.\n track_functions | none\n | Collects function-level statistics on database\nactivity.\n transaction_isolation | read committed\n | Sets the current transaction's isolation level.\n transaction_read_only | off\n | Sets the current transaction's read-only status.\n transform_null_equals | off\n | Treats \"expr=NULL\" as \"expr IS NULL\".\n unix_socket_directory |\n | Sets the directory where the Unix-domain socket\nwill be created.\n unix_socket_group |\n | Sets the owning group of the Unix-domain socket.\n unix_socket_permissions | 511\n | Sets the access permissions of the Unix-domain\nsocket.\n update_process_title | on\n | Updates the process title to show the active SQL\ncommand.\n vacuum_cost_delay | 0\n | Vacuum cost delay in milliseconds.\n vacuum_cost_limit | 200\n | Vacuum cost amount available before napping.\n vacuum_cost_page_dirty | 20\n | Vacuum cost for a page dirtied by vacuum.\n vacuum_cost_page_hit | 1\n | Vacuum cost for a page found in the buffer\ncache.\n vacuum_cost_page_miss | 10\n | Vacuum cost for a page not found in the buffer\ncache.\n vacuum_freeze_min_age | 50000000\n | Minimum age at which VACUUM should freeze a\ntable row.\n vacuum_freeze_table_age | 150000000\n | Age at which VACUUM should scan whole table to\nfreeze tuples.\n wal_block_size | 8192\n | Shows the block size in the write ahead log.\n wal_buffers | 64kB\n | Sets the number of disk-page buffers in shared\nmemory for WAL.\n wal_segment_size | 16MB\n | Shows the number of pages per write ahead log\nsegment.\n wal_sync_method | fdatasync\n | Selects the method used for forcing WAL updates\nto disk.\n wal_writer_delay | 200ms\n | WAL writer sleep time between WAL flushes.\n work_mem | 6GB\n | Sets the maximum memory to be used for query\nworkspaces.\n xmlbinary | base64\n | Sets how binary values are to be encoded in XML.\n xmloption | content\n | Sets whether XML data in implicit parsing 
and\nserialization operations is to be considered as documents or content\nfragments.\n zero_damaged_pages | off\n | Continues processing past damaged page headers.\n(193 rows)\n",
"msg_date": "Thu, 7 Jan 2010 13:38:41 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Air-traffic benchmark"
},
{
"msg_contents": "Hello\n\n----- \"Lefteris\" <[email protected]> escreveu:\n> Hi all,\n> \n> following the simple but interesting air-traffic benchmark published\n> at:\n> http://www.mysqlperformanceblog.com/2009/10/02/analyzing-air-traffic-performance-with-infobright-and-monetdb/\n\nQuite interesting test, if you have the time to download all that raw data.\n\n> of 5 minutes while the slow ones in the order of 40 minutes. I came\n> to\n> the conclusion that some parameters are not set correctly. I give all\n\nI do think so too.\n\n> The hardware characteristics are:\n> Platform Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz with 8GB RAM and\n> ample disk space (2x 500 GB SATA disk @ 7200 RPM as SW-RAID-0)\n\nSATA disks are not the best for a \"benchmark\" like yours, but the results were so deeply different from expected that I'm sure that you have hardware enough.\n\n> I used postgresql-8.4.2. Source download and compiled by hand, no\n> special parameters where passed to the configure phase.\n> \n> -- DETAILS on loading, analyze and index creation on postgresql-8.4.2\n> -- loading time was 77m15.976s\n> -- total rows: 119790558\n> -- total size of pgdata/base: 46G\n> ANALYZE;\n> -- 219698.365ms aka 3m39.698s\n> CREATE INDEX year_month_idx ON ontime (\"Year\", \"Month\");\n> -- 517481.632ms aka 8m37.481s\n> CREATE INDEX DepDelay_idx ON ontime (\"DepDelay\");\n> -- 1257051.668ms aka 20m57.051s\n\nHere comes my first question:\nDid you ANALYZE your database (or at least related tables) _after_ index creation?\nIf not, try to do so. PostgreSQL needs statistics of the database when everything is in its place.\n\n> airtraffic=# EXPLAIN ANALYZE SELECT \"DayOfWeek\", count(*) AS c FROM\n> ontime WHERE \"Year\" BETWEEN 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER\n> BY c DESC;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\n> time=371188.821..371188.823 rows=7 loops=1)\n> Sort Key: (count(*))\n> Sort Method: quicksort Memory: 25kB\n> -> HashAggregate (cost=7407754.04..7407754.08 rows=4 width=2)\n> (actual time=371163.316..371163.320 rows=7 loops=1)\n> -> Seq Scan on ontime (cost=0.00..7143875.40 rows=52775727\n> width=2) (actual time=190938.959..346180.079 rows=52484047 loops=1)\n> Filter: ((\"Year\" >= 2000) AND (\"Year\" <= 2008))\n> Total runtime: 371201.156 ms\n> (7 rows)\n\nYou'll see here that PostgreSQL is not using the index you just created.\nANALYZE VERBOSE ontime;\nshould solve this.\n\n> I understand that the problem here is the memory used by the sort\n> method. *But*, I already changed the work_mem parameter to 6gigs:)\n\nIf you look to you explain you'll see that you don't need that much of memory.\nYou have 8GB of total RAM, if you use that much for sorting you'll start to swap.\n\n> which means that of course work_mem == sort_mem, as\n> such, shouldn't be the case that the sort algorithm should have used\n> much more memory?\n\nsort_mem is a part of the work_mem in recent versions of PostgreSQL.\nNo, the sort algorithm doesn't need that at all.\n\n> I also attach the output of 'show all;' so you can advice me in any\n> other configuration settings that I might need to change to perform\n> better.\n\nLet's take a look.\n\n> geqo | on\n\nIf in doubt, turn this off. Geqo is capable of making bad execution plans for you.\nBut this is not the most important to change.\n\n> shared_buffers | 32MB\n\nHere it is. 
The default setting for shared_buffers doesn't give enough space for the buffer cache.\nI would recommend increasing this to at least 40% of your total RAM.\nDon't forget to restart PostgreSQL after changing it, and it's possible that you'll need to increase some System V kernel parameters in Fedora. Read the PostgreSQL documentation about it.\n\n> effective_cache_size | 8MB\n\nIncrease this to at least shared_buffers + any caches that you have in your hardware (e.g. in the RAID controller).\n\n> wal_buffers | 64kB\n\nIncreasing this can make your bulk load faster. 8MB is a good number.\nIt will not make your SELECT queries faster.\n\n> wal_sync_method | fdatasync\n\nfdatasync is the recommended method for Solaris. I would use open_sync. It's not important for SELECTs either.\n\n> work_mem | 6GB\nStart here with 10MB. If you see temporary files being written when executing your SELECTs, try to increase it a bit.\n\nLet us know what happens in your new tests.\nBest regards\n\nFlavio Henrique A. Gurgel\ntel. 55-11-2125.4765\nfax. 55-11-2125.4777\nwww.4linux.com.br\nFREE SOFTWARE SOLUTIONS\n",
"msg_date": "Thu, 7 Jan 2010 11:24:24 -0200 (BRST)",
"msg_from": "\"Gurgel, Flavio\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
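For reference, the settings Flavio is pointing at, collected into one sketch for an 8 GB machine. The concrete values are illustrative starting points to be benchmarked, not figures from the thread; shared_buffers needs a server restart and possibly larger SysV shared memory limits on Fedora:

-- postgresql.conf (restart required for shared_buffers):
--   shared_buffers       = 3GB     -- up from the 32MB default
--   effective_cache_size = 6GB     -- planner hint, roughly what the OS can cache
--   wal_buffers          = 8MB     -- helps bulk loads, irrelevant to SELECTs
--   work_mem             = 10MB    -- per sort/hash per backend; 6GB is far too high
-- after the restart, confirm what the server actually picked up:
SHOW shared_buffers;
SHOW effective_cache_size;
SHOW work_mem;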
{
"msg_contents": "In response to Lefteris :\n> \n> airtraffic=# EXPLAIN ANALYZE SELECT \"DayOfWeek\", count(*) AS c FROM\n> ontime WHERE \"Year\" BETWEEN 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER\n> BY c DESC;\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\n> time=371188.821..371188.823 rows=7 loops=1)\n> Sort Key: (count(*))\n> Sort Method: quicksort Memory: 25kB\n> -> HashAggregate (cost=7407754.04..7407754.08 rows=4 width=2)\n> (actual time=371163.316..371163.320 rows=7 loops=1)\n> -> Seq Scan on ontime (cost=0.00..7143875.40 rows=52775727\n> width=2) (actual time=190938.959..346180.079 rows=52484047 loops=1)\n> Filter: ((\"Year\" >= 2000) AND (\"Year\" <= 2008))\n> Total runtime: 371201.156 ms\n> (7 rows)\n> \n> \n> I understand that the problem here is the memory used by the sort\n> method. *But*, I already changed the work_mem parameter to 6gigs:)\n\nNo.\n\nThe problem here is the Seq-Scan. It returns about 52.000.000 rows,\napproximately roughly table, it needs 346 seconds.\n\nThe sort needs only 25 KByte and only 0.02ms.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 7 Jan 2010 14:32:25 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On 7-1-2010 13:38 Lefteris wrote:\n> I decided to run the benchmark over postgres to get some more\n> experience and insights. Unfortunately, the query times I got from\n> postgres were not the expected ones:\n\nWhy were they not expected? In the given scenario, column databases are \nhaving a huge advantage. Especially the given simple example is the type \nof query a column database *should* excel.\nYou should, at the very least, compare the queries to MyISAM:\nhttp://www.mysqlperformanceblog.com/2009/11/05/air-traffic-queries-in-myisam-and-tokutek-tokudb/\n\nBut unfortunately, that one also beats your postgresql-results.\n\n> The hardware characteristics are:\n> Platform Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz with 8GB RAM and\n> ample disk space (2x 500 GB SATA disk @ 7200 RPM as SW-RAID-0)\n\nUnfortunately, the blogpost fails to mention the disk-subsystem. So it \nmay well be much faster than yours, although its not a new, big or fast \nserver, so unless it has external storage, it shouldn't be too different \nfor sequential scans.\n\n> SELECT \"DayOfWeek\", count(*) AS c FROM ontime WHERE \"Year\" BETWEEN\n> 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER BY c DESC;\n>\n> Reported query times are (in sec):\n> MonetDB 7.9s\n> InfoBright 12.13s\n> LucidDB 54.8s\n>\n> For pg-8.4.2 I got with 3 consecutive runs on the server:\n> 5m52.384s\n> 5m55.885s\n> 5m54.309s\n\nMaybe an index of the type 'year, dayofweek' will help for this query. \nBut it'll have to scan about half the table any way, so a seq scan isn't \na bad idea.\nIn this case, a partitioned table with partitions per year and \nconstraint exclusion enabled would help a bit more.\n\nBest regards,\n\nArjen\n",
"msg_date": "Thu, 07 Jan 2010 14:36:46 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
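A rough sketch of the two suggestions above in 8.4 syntax; the column names follow the benchmark schema quoted earlier, while everything else (child table names, the row-routing trigger left as a comment) is illustrative:

-- composite index covering both the year filter and the grouped column:
CREATE INDEX year_dayofweek_idx ON ontime ("Year", "DayOfWeek");

-- 8.4-style partitioning by inheritance, one child per year, so a predicate like
-- "Year" BETWEEN 2000 AND 2008 only touches the matching partitions:
CREATE TABLE ontime_2000 (CHECK ("Year" = 2000)) INHERITS (ontime);
CREATE TABLE ontime_2001 (CHECK ("Year" = 2001)) INHERITS (ontime);
-- ...one child per year, plus an INSERT trigger or rule to route new rows...
SET constraint_exclusion = 'partition';   -- already the default shown in the config dump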
{
"msg_contents": "Thank you all for your answers!\n\nAndrea, I see the other way around what you are saying:\n\nSort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\ntime=371188.821..371188.823 rows=7 loops=1)\nSeq Scan on ontime (cost=0.00..7143875.40 rows=52775727 width=2)\n(actual time=190938.959..346180.079 rows=52484047 loops=1)\n\n\nI dont see the seq scan to ba a problem, and it is the correct choice\nhere because Year spans from 1999 to 2009 and the query asks from 2000\nand on, so PG correctly decides to use seq scan and not index access.\n\nlefteris\n\nOn Thu, Jan 7, 2010 at 2:32 PM, A. Kretschmer\n<[email protected]> wrote:\n> In response to Lefteris :\n>>\n>> airtraffic=# EXPLAIN ANALYZE SELECT \"DayOfWeek\", count(*) AS c FROM\n>> ontime WHERE \"Year\" BETWEEN 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER\n>> BY c DESC;\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------------------------------------------------------------------------------------\n>> Sort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\n>> time=371188.821..371188.823 rows=7 loops=1)\n>> Sort Key: (count(*))\n>> Sort Method: quicksort Memory: 25kB\n>> -> HashAggregate (cost=7407754.04..7407754.08 rows=4 width=2)\n>> (actual time=371163.316..371163.320 rows=7 loops=1)\n>> -> Seq Scan on ontime (cost=0.00..7143875.40 rows=52775727\n>> width=2) (actual time=190938.959..346180.079 rows=52484047 loops=1)\n>> Filter: ((\"Year\" >= 2000) AND (\"Year\" <= 2008))\n>> Total runtime: 371201.156 ms\n>> (7 rows)\n>>\n>>\n>> I understand that the problem here is the memory used by the sort\n>> method. *But*, I already changed the work_mem parameter to 6gigs:)\n>\n> No.\n>\n> The problem here is the Seq-Scan. It returns about 52.000.000 rows,\n> approximately roughly table, it needs 346 seconds.\n>\n> The sort needs only 25 KByte and only 0.02ms.\n>\n>\n> Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 7 Jan 2010 14:40:39 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Hi Arjen,\n\nso I understand from all of you that you don't consider the use of 25k\nfor sorting to be the cause of the slowdown? Probably I am missing\nsomething on the specific sort algorithm used by PG. My RAM does fill\nup, mainly by file buffers from linux, but postgres process remains to\n0.1% consumption of main memory. There is no way to force sort to use\nsay blocks of 128MB ? wouldn't that make a difference?\n\nlefteris\n\np.s. i already started the analyze verbose again as Flavio suggested\nand reset the parrameters, although I think some of Flavioo's\nsuggestions have to do with multiple users/queries and not 1 long\nrunning query, like shared_buffers, or not?\n\nOn Thu, Jan 7, 2010 at 2:36 PM, Arjen van der Meijden\n<[email protected]> wrote:\n> On 7-1-2010 13:38 Lefteris wrote:\n>>\n>> I decided to run the benchmark over postgres to get some more\n>> experience and insights. Unfortunately, the query times I got from\n>> postgres were not the expected ones:\n>\n> Why were they not expected? In the given scenario, column databases are\n> having a huge advantage. Especially the given simple example is the type of\n> query a column database *should* excel.\n> You should, at the very least, compare the queries to MyISAM:\n> http://www.mysqlperformanceblog.com/2009/11/05/air-traffic-queries-in-myisam-and-tokutek-tokudb/\n>\n> But unfortunately, that one also beats your postgresql-results.\n>\n>> The hardware characteristics are:\n>> Platform Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz with 8GB RAM and\n>> ample disk space (2x 500 GB SATA disk @ 7200 RPM as SW-RAID-0)\n>\n> Unfortunately, the blogpost fails to mention the disk-subsystem. So it may\n> well be much faster than yours, although its not a new, big or fast server,\n> so unless it has external storage, it shouldn't be too different for\n> sequential scans.\n>\n>> SELECT \"DayOfWeek\", count(*) AS c FROM ontime WHERE \"Year\" BETWEEN\n>> 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER BY c DESC;\n>>\n>> Reported query times are (in sec):\n>> MonetDB 7.9s\n>> InfoBright 12.13s\n>> LucidDB 54.8s\n>>\n>> For pg-8.4.2 I got with 3 consecutive runs on the server:\n>> 5m52.384s\n>> 5m55.885s\n>> 5m54.309s\n>\n> Maybe an index of the type 'year, dayofweek' will help for this query. But\n> it'll have to scan about half the table any way, so a seq scan isn't a bad\n> idea.\n> In this case, a partitioned table with partitions per year and constraint\n> exclusion enabled would help a bit more.\n>\n> Best regards,\n>\n> Arjen\n>\n",
"msg_date": "Thu, 7 Jan 2010 14:47:36 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "In response to Lefteris :\n> Thank you all for your answers!\n> \n> Andrea, I see the other way around what you are saying:\n> \n> Sort (cost=7407754.12..7407754.13 rows=4 width=2) (actual\n> time=371188.821..371188.823 rows=7 loops=1)\n> Seq Scan on ontime (cost=0.00..7143875.40 rows=52775727 width=2)\n> (actual time=190938.959..346180.079 rows=52484047 loops=1)\n> \n> \n> I dont see the seq scan to ba a problem, and it is the correct choice\n> here because Year spans from 1999 to 2009 and the query asks from 2000\n> and on, so PG correctly decides to use seq scan and not index access.\n\nThats right.\n\nBut this is not a contradiction, this seq-scan *is* the real problem, not\nthe sort. And yes, as others said, increment the work_mem isn't the\nsolution. It is counterproductive, because you lost buffer-cache.\n\n\nAndreas, note the 's' at the end ;-)\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 7 Jan 2010 14:59:00 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "In response to Lefteris :\n> Hi Arjen,\n> \n> so I understand from all of you that you don't consider the use of 25k\n> for sorting to be the cause of the slowdown? Probably I am missing\n> something on the specific sort algorithm used by PG. My RAM does fill\n> up, mainly by file buffers from linux, but postgres process remains to\n> 0.1% consumption of main memory. There is no way to force sort to use\n> say blocks of 128MB ? wouldn't that make a difference?\n\nThe result-table fits in that piece of memory.\n\nYou have only this little table:\n\n[ 5, 7509643 ]\n[ 1, 7478969 ]\n[ 4, 7453687 ]\n[ 3, 7412939 ]\n[ 2, 7370368 ]\n[ 7, 7095198 ]\n[ 6, 6425690 ]\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 7 Jan 2010 15:02:44 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Thursday, January 7, 2010, 2:47:36 PM you wrote:\n\n> so I understand from all of you that you don't consider the use of 25k\n> for sorting to be the cause of the slowdown? Probably I am missing\n\nMaybe you are reading the plan wrong:\n\n- the sort needs only 25kB of memory, and finishes in sub-second time, \n mainly because the sort only sorts the already summarized data, and not\n the whole table \n- the sequential scan takes 346 seconds, and thus is the major factor in\n time to finish!\n \nSo the total query time is 371 seconds, of which 346 are required to \ncompletely scan the table once.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 7 Jan 2010 15:05:35 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
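Reading the plan from the original post with that in mind (the timings are the ones quoted above in the thread; the annotations merely spell out the actual time = startup..total convention):

-- Sort            (actual time=371188.821..371188.823)  -- first row after ~371 s; the sort
--                                                       -- itself adds only milliseconds
--   HashAggregate (actual time=371163.316..371163.320)
--     Seq Scan on ontime (actual time=190938.959..346180.079 rows=52484047)
--                                                       -- ~346 s to scan ~52M rows: the real cost
-- Total runtime: 371201.156 ms                           -- dominated by the sequential scan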
{
"msg_contents": "Yes, I am reading the plan wrong! I thought that each row from the\nplan reported the total time for the operation but it actually reports\nthe starting and ending point.\n\nSo we all agree that the problem is on the scans:)\n\nSo the next question is why changing shared memory buffers will fix\nthat? i only have one session with one connection, do I have like many\nreader workers or something?\n\nThank you and sorry for the plethora of questions, but I know few\nabout the inner parts of postgres:)\n\nlefteris\n\nOn Thu, Jan 7, 2010 at 3:05 PM, Jochen Erwied\n<[email protected]> wrote:\n> Thursday, January 7, 2010, 2:47:36 PM you wrote:\n>\n>> so I understand from all of you that you don't consider the use of 25k\n>> for sorting to be the cause of the slowdown? Probably I am missing\n>\n> Maybe you are reading the plan wrong:\n>\n> - the sort needs only 25kB of memory, and finishes in sub-second time,\n> mainly because the sort only sorts the already summarized data, and not\n> the whole table\n> - the sequential scan takes 346 seconds, and thus is the major factor in\n> time to finish!\n>\n> So the total query time is 371 seconds, of which 346 are required to\n> completely scan the table once.\n>\n> --\n> Jochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\n> Sauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\n> D-45470 Muelheim | mobile: [email protected] +49-173-5404164\n>\n>\n",
"msg_date": "Thu, 7 Jan 2010 15:10:20 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Lefteris escribi�:\n> Yes, I am reading the plan wrong! I thought that each row from the\n> plan reported the total time for the operation but it actually reports\n> the starting and ending point.\n> \n> So we all agree that the problem is on the scans:)\n> \n> So the next question is why changing shared memory buffers will fix\n> that? i only have one session with one connection, do I have like many\n> reader workers or something?\n\nNo amount of tinkering is going to change the fact that a seqscan is the\nfastest way to execute these queries. Even if you got it to be all in\nmemory, it would still be much slower than the other systems which, I\ngather, are using columnar storage and thus are perfectly suited to this\nproblem (unlike Postgres). The talk about \"compression ratios\" caught\nme by surprise until I realized it was columnar stuff. There's no way\nyou can get such high ratios on a regular, row-oriented storage.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 7 Jan 2010 11:14:23 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Alvaro Herrera escribi�:\n\n> No amount of tinkering is going to change the fact that a seqscan is the\n> fastest way to execute these queries. Even if you got it to be all in\n> memory, it would still be much slower than the other systems which, I\n> gather, are using columnar storage and thus are perfectly suited to this\n> problem (unlike Postgres). The talk about \"compression ratios\" caught\n> me by surprise until I realized it was columnar stuff. There's no way\n> you can get such high ratios on a regular, row-oriented storage.\n\nFWIW if you want a fair comparison, get InnoDB numbers.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 7 Jan 2010 11:16:38 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 3:14 PM, Alvaro Herrera\n<[email protected]> wrote:\n> Lefteris escribió:\n>> Yes, I am reading the plan wrong! I thought that each row from the\n>> plan reported the total time for the operation but it actually reports\n>> the starting and ending point.\n>>\n>> So we all agree that the problem is on the scans:)\n>>\n>> So the next question is why changing shared memory buffers will fix\n>> that? i only have one session with one connection, do I have like many\n>> reader workers or something?\n>\n> No amount of tinkering is going to change the fact that a seqscan is the\n> fastest way to execute these queries. Even if you got it to be all in\n> memory, it would still be much slower than the other systems which, I\n> gather, are using columnar storage and thus are perfectly suited to this\n> problem (unlike Postgres). The talk about \"compression ratios\" caught\n> me by surprise until I realized it was columnar stuff. There's no way\n> you can get such high ratios on a regular, row-oriented storage.\n>\n> --\n> Alvaro Herrera http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n\nI am aware of that and I totally agree. I would not expect from a row\nstore to have the same performance with a column. I was just trying to\ndouble check that all settings are correct because usually you have\ndifference of seconds and minutes between column-rows, not seconds and\nalmost an hour (for queries Q2-Q8).\n\nI think what you all said was very helpful and clear! The only part\nthat I still disagree/don't understand is the shared_buffer option:))\n\nLefteris\n",
"msg_date": "Thu, 7 Jan 2010 15:23:24 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On 7.1.2010 15:23, Lefteris wrote:\n\n> I think what you all said was very helpful and clear! The only part\n> that I still disagree/don't understand is the shared_buffer option:))\n\nDid you ever try increasing shared_buffers to what was suggested (around\n4 GB) and see what happens (I didn't see it in your posts)?\n\nShared_buffers can be thought as the PostgreSQLs internal cache. If the\npages being scanned for a particular query are in the cache, this will\nhelp performance very much on multiple exequtions of the same query.\nOTOH, since the file system's cache didn't help you significantly, there\nis low possibility shared_buffers will. It is still worth trying.\n\n From the description of the data (\"...from years 1988 to 2009...\") it\nlooks like the query for \"between 2000 and 2009\" pulls out about half of\nthe data. If an index could be used instead of seqscan, it could be\nperhaps only 50% faster, which is still not very comparable to others.\n\nThe table is very wide, which is probably why the tested databases can\ndeal with it faster than PG. You could try and narrow the table down\n(for instance: remove the Div* fields) to make the data more\n\"relational-like\". In real life, speedups in this circumstances would\nprobably be gained by normalizing the data to make the basic table\nsmaller and easier to use with indexing.\n\n",
"msg_date": "Thu, 07 Jan 2010 15:51:26 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras <[email protected]> wrote:\n> On 7.1.2010 15:23, Lefteris wrote:\n>\n>> I think what you all said was very helpful and clear! The only part\n>> that I still disagree/don't understand is the shared_buffer option:))\n>\n> Did you ever try increasing shared_buffers to what was suggested (around\n> 4 GB) and see what happens (I didn't see it in your posts)?\n\nNo I did not to that yet, mainly because I need the admin of the\nmachine to change the shmmax of the kernel and also because I have no\nmultiple queries running. Does Seq scan uses shared_buffers?\n\n>\n> Shared_buffers can be thought as the PostgreSQLs internal cache. If the\n> pages being scanned for a particular query are in the cache, this will\n> help performance very much on multiple exequtions of the same query.\n> OTOH, since the file system's cache didn't help you significantly, there\n> is low possibility shared_buffers will. It is still worth trying.\n>\n> From the description of the data (\"...from years 1988 to 2009...\") it\n> looks like the query for \"between 2000 and 2009\" pulls out about half of\n> the data. If an index could be used instead of seqscan, it could be\n> perhaps only 50% faster, which is still not very comparable to others.\n>\n> The table is very wide, which is probably why the tested databases can\n> deal with it faster than PG. You could try and narrow the table down\n> (for instance: remove the Div* fields) to make the data more\n> \"relational-like\". In real life, speedups in this circumstances would\n> probably be gained by normalizing the data to make the basic table\n> smaller and easier to use with indexing.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 7 Jan 2010 16:05:33 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 3:05 PM, Lefteris <[email protected]> wrote:\n> On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras <[email protected]> wrote:\n>> On 7.1.2010 15:23, Lefteris wrote:\n>>\n>>> I think what you all said was very helpful and clear! The only part\n>>> that I still disagree/don't understand is the shared_buffer option:))\n>>\n>> Did you ever try increasing shared_buffers to what was suggested (around\n>> 4 GB) and see what happens (I didn't see it in your posts)?\n>\n> No I did not to that yet, mainly because I need the admin of the\n> machine to change the shmmax of the kernel and also because I have no\n> multiple queries running. Does Seq scan uses shared_buffers?\nThink of the shared buffer as a cache. It will help subsequent queries\nrunning to not have to use disk.\n\n\n-- \nGJ\n",
"msg_date": "Thu, 7 Jan 2010 15:16:07 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "2010/1/7 Lefteris <[email protected]>:\n> On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras <[email protected]> wrote:\n>> On 7.1.2010 15:23, Lefteris wrote:\n>>\n>>> I think what you all said was very helpful and clear! The only part\n>>> that I still disagree/don't understand is the shared_buffer option:))\n>>\n>> Did you ever try increasing shared_buffers to what was suggested (around\n>> 4 GB) and see what happens (I didn't see it in your posts)?\n>\n> No I did not to that yet, mainly because I need the admin of the\n> machine to change the shmmax of the kernel and also because I have no\n> multiple queries running. Does Seq scan uses shared_buffers?\n\nEverything uses shared_buffers, even things that do not benefit from\nit. This is because shared_buffers is the part of the general database\nIO - it's unavoidable.\n",
"msg_date": "Thu, 7 Jan 2010 16:57:18 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 4:57 PM, Ivan Voras <[email protected]> wrote:\n> 2010/1/7 Lefteris <[email protected]>:\n>> On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras <[email protected]> wrote:\n>>> On 7.1.2010 15:23, Lefteris wrote:\n>>>\n>>>> I think what you all said was very helpful and clear! The only part\n>>>> that I still disagree/don't understand is the shared_buffer option:))\n>>>\n>>> Did you ever try increasing shared_buffers to what was suggested (around\n>>> 4 GB) and see what happens (I didn't see it in your posts)?\n>>\n>> No I did not to that yet, mainly because I need the admin of the\n>> machine to change the shmmax of the kernel and also because I have no\n>> multiple queries running. Does Seq scan uses shared_buffers?\n>\n> Everything uses shared_buffers, even things that do not benefit from\n> it. This is because shared_buffers is the part of the general database\n> IO - it's unavoidable.\n>\n\n\nI will increase the shared_buffers once my kernel is configured and I\nwill report back to you.\n\nAs for the index scan, I already build an b-tree on year/month but PG\n(correctly) decides not to use it. The data are from year 1999 up to\n2009 (my typo mistake) so it is almost 90% of the data to be accessed.\nWhen I ask for a specific year, like 2004 then the index is used and\nquery times become faster.\n\nlefteris\n",
"msg_date": "Thu, 7 Jan 2010 17:03:49 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> No amount of tinkering is going to change the fact that a seqscan is the\n> fastest way to execute these queries. Even if you got it to be all in\n> memory, it would still be much slower than the other systems which, I\n> gather, are using columnar storage and thus are perfectly suited to this\n> problem (unlike Postgres). The talk about \"compression ratios\" caught\n> me by surprise until I realized it was columnar stuff. There's no way\n> you can get such high ratios on a regular, row-oriented storage.\n\nOne of the \"good tricks\" with Postgres is to convert a very wide table into a set of narrow tables, then use a view to create something that looks like the original table. It requires you to modify the write portions of your app, but the read portions can stay the same.\n\nA seq scan on one column will *much* faster when you rearrange your database this way since it's only scanning relevant data. You pay the price of an extra join on primary keys, though.\n\nIf you have just a few columns in a very wide table that are seq-scanned a lot, you can pull out just those columns and leave the rest in the wide table.\n\nThe same trick is also useful if you have one or a few columns that are updated frequently: pull them out, and use a view to recreate the original appearance. It saves a lot on garbage collection.\n\nCraig\n\n",
"msg_date": "Thu, 07 Jan 2010 08:31:25 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
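The narrow-tables-plus-view trick Craig describes above can be sketched roughly as follows; the table and column names (a wide ontime table split into ontime_dayofweek and ontime_rest) are illustrative assumptions, not the benchmark's actual schema:

-- Pull the heavily seq-scanned column out into its own narrow table.
CREATE TABLE ontime_dayofweek (
    id        integer PRIMARY KEY,   -- same key as the wide table
    dayofweek smallint
);

-- The remaining, rarely scanned columns stay in a second table.
CREATE TABLE ontime_rest (
    id      integer PRIMARY KEY,
    carrier text,
    origin  text
    -- ... the other wide columns ...
);

-- A view recreates the original appearance for read-only code.
CREATE VIEW ontime AS
SELECT r.id, d.dayofweek, r.carrier, r.origin
FROM ontime_rest r
JOIN ontime_dayofweek d USING (id);

A GROUP BY dayofweek query can then scan only the small table, at the cost of an extra join elsewhere and extra work in the application's write path, as noted above.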
{
"msg_contents": "Lefteris wrote:\n> So we all agree that the problem is on the scans:)\n>\n> So the next question is why changing shared memory buffers will fix\n> that? i only have one session with one connection, do I have like many\n> reader workers or something?\n> \n\nI wouldn't expect it to. Large sequential scans like this one are \noptimized in PostgreSQL to only use up a small portion of the \nshared_buffers cache. Allocating more RAM to the database won't improve \nthe fact that you're spending the whole time waiting for physical I/O to \nhappen very much.\n\nWhat might help is increasing effective_cache_size a lot though, because \nthere you might discover the database switching to all new sorts of \nplans for some of these queries. But, again, that doesn't impact the \nsituation where a sequential scan is the only approach.\n\nI have this whole data set on my PC already and have been trying to find \ntime to get it loaded and start my own tests here, it is a quite \ninteresting set of information. Can you tell me what you had to do in \norder to get it running in PostgreSQL? If you made any customizations \nthere, I'd like to get a copy of them. Would save me some time and help \nme get to where I could give suggestions out if I had a \"pgdumpall \n--schema-only\" dump from your database for example, or however you got \nthe schema into there, and the set of PostgreSQL-compatible queries \nyou're using.\n\nBy the way: if anybody else wants to join in, here's a script that \ngenerates a script to download the whole data set:\n\n#!/usr/bin/env python\nfor y in range(1988,2010):\n for m in range(1,13):\n print \"wget --limit-rate=100k \nhttp://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_%s_%s.zip\" \n% (y,m)\n\nIt's 3.8GB of download that uncompresses into 46GB of CSV data, which is \nwhy I put the rate limiter on there--kept it from clogging my entire \nInternet connection.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 07 Jan 2010 17:21:47 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
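Greg's effective_cache_size suggestion can be tried per session before editing postgresql.conf; a rough sketch, with the 8GB value assumed from the 12GB machine described earlier and the DayOfWeek query quoted elsewhere in the thread:

-- Only a planner hint about how much data the OS cache can hold; allocates no memory.
SET effective_cache_size = '8GB';

EXPLAIN ANALYZE
SELECT "DayOfWeek", count(*) AS c
FROM ontime
WHERE "Year" BETWEEN 2000 AND 2008
GROUP BY "DayOfWeek"
ORDER BY c DESC;

Comparing the plan with and without the SET shows whether the planner would switch away from the sequential scan; as noted above, for this particular query it probably will not.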
{
"msg_contents": "Hi Greg,\n\nthank you for your help. The changes I did on the dataset was just\nremoving the last comma from the CSV files as it was interpreted by pg\nas an extra column. The schema I used, the load script and queries can\nbe found at:\n\nhttp://homepages.cwi.nl/~lsidir/postgres/\n\n(I understood that if I attach these files here, my email will not\nreach the list so I give you a link to download them).\n\nAlso since you are interesting on the benchmark, you can also check\n\nhttp://homepages.cwi.nl/~mk/ontimeReport\n\nfor a report of various experiments with MonetDB and comparison with\npreviously published numbers.\n\nThe schema I used for pg is slightly different from that one of\nMonetDB since the delay fields could not be parsed by pg as integers\nbut only as varchar(4). Hence the extra index on DepDelay field:)\n\nAlso at http://homepages.cwi.nl/~lsidir/PostgreSQL-ontimeReport you\ncan see the detailed times I got from postgres.\n\nI really appreciate your help! this is a great opportunity for me to\nget some feeling and insights on postgres since I never had the chance\nto use it in a large scale project.\n\nlefteris\n\n\nOn Thu, Jan 7, 2010 at 11:21 PM, Greg Smith <[email protected]> wrote:\n> Lefteris wrote:\n>>\n>> So we all agree that the problem is on the scans:)\n>>\n>> So the next question is why changing shared memory buffers will fix\n>> that? i only have one session with one connection, do I have like many\n>> reader workers or something?\n>>\n>\n> I wouldn't expect it to. Large sequential scans like this one are optimized\n> in PostgreSQL to only use up a small portion of the shared_buffers cache.\n> Allocating more RAM to the database won't improve the fact that you're\n> spending the whole time waiting for physical I/O to happen very much.\n>\n> What might help is increasing effective_cache_size a lot though, because\n> there you might discover the database switching to all new sorts of plans\n> for some of these queries. But, again, that doesn't impact the situation\n> where a sequential scan is the only approach.\n>\n> I have this whole data set on my PC already and have been trying to find\n> time to get it loaded and start my own tests here, it is a quite interesting\n> set of information. Can you tell me what you had to do in order to get it\n> running in PostgreSQL? If you made any customizations there, I'd like to\n> get a copy of them. Would save me some time and help me get to where I\n> could give suggestions out if I had a \"pgdumpall --schema-only\" dump from\n> your database for example, or however you got the schema into there, and the\n> set of PostgreSQL-compatible queries you're using.\n>\n> By the way: if anybody else wants to join in, here's a script that\n> generates a script to download the whole data set:\n>\n> #!/usr/bin/env python\n> for y in range(1988,2010):\n> for m in range(1,13):\n> print \"wget --limit-rate=100k\n> http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_%s_%s.zip\"\n> % (y,m)\n>\n> It's 3.8GB of download that uncompresses into 46GB of CSV data, which is why\n> I put the rate limiter on there--kept it from clogging my entire Internet\n> connection.\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n",
"msg_date": "Thu, 7 Jan 2010 23:57:04 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 11:57 PM, Lefteris <[email protected]> wrote:\n> Hi Greg,\n>\n> thank you for your help. The changes I did on the dataset was just\n> removing the last comma from the CSV files as it was interpreted by pg\n> as an extra column. The schema I used, the load script and queries can\n> be found at:\n>\n> http://homepages.cwi.nl/~lsidir/postgres/\n>\n> (I understood that if I attach these files here, my email will not\n> reach the list so I give you a link to download them).\n>\n> Also since you are interesting on the benchmark, you can also check\n>\n> http://homepages.cwi.nl/~mk/ontimeReport\n>\n> for a report of various experiments with MonetDB and comparison with\n> previously published numbers.\n>\n> The schema I used for pg is slightly different from that one of\n> MonetDB since the delay fields could not be parsed by pg as integers\n> but only as varchar(4). Hence the extra index on DepDelay field:)\n\nSorry, I mean the ArrTime DepTime fields were changed, because they\napear on the data as HHMM, but these fields were not used on the\nqueries. The index on DepDelay was done for q3,4,5 and 7\n\n>\n> Also at http://homepages.cwi.nl/~lsidir/PostgreSQL-ontimeReport you\n> can see the detailed times I got from postgres.\n>\n> I really appreciate your help! this is a great opportunity for me to\n> get some feeling and insights on postgres since I never had the chance\n> to use it in a large scale project.\n>\n> lefteris\n>\n>\n> On Thu, Jan 7, 2010 at 11:21 PM, Greg Smith <[email protected]> wrote:\n>> Lefteris wrote:\n>>>\n>>> So we all agree that the problem is on the scans:)\n>>>\n>>> So the next question is why changing shared memory buffers will fix\n>>> that? i only have one session with one connection, do I have like many\n>>> reader workers or something?\n>>>\n>>\n>> I wouldn't expect it to. Large sequential scans like this one are optimized\n>> in PostgreSQL to only use up a small portion of the shared_buffers cache.\n>> Allocating more RAM to the database won't improve the fact that you're\n>> spending the whole time waiting for physical I/O to happen very much.\n>>\n>> What might help is increasing effective_cache_size a lot though, because\n>> there you might discover the database switching to all new sorts of plans\n>> for some of these queries. But, again, that doesn't impact the situation\n>> where a sequential scan is the only approach.\n>>\n>> I have this whole data set on my PC already and have been trying to find\n>> time to get it loaded and start my own tests here, it is a quite interesting\n>> set of information. Can you tell me what you had to do in order to get it\n>> running in PostgreSQL? If you made any customizations there, I'd like to\n>> get a copy of them. 
Would save me some time and help me get to where I\n>> could give suggestions out if I had a \"pgdumpall --schema-only\" dump from\n>> your database for example, or however you got the schema into there, and the\n>> set of PostgreSQL-compatible queries you're using.\n>>\n>> By the way: if anybody else wants to join in, here's a script that\n>> generates a script to download the whole data set:\n>>\n>> #!/usr/bin/env python\n>> for y in range(1988,2010):\n>> for m in range(1,13):\n>> print \"wget --limit-rate=100k\n>> http://www.transtats.bts.gov/Download/On_Time_On_Time_Performance_%s_%s.zip\"\n>> % (y,m)\n>>\n>> It's 3.8GB of download that uncompresses into 46GB of CSV data, which is why\n>> I put the rate limiter on there--kept it from clogging my entire Internet\n>> connection.\n>>\n>> --\n>> Greg Smith 2ndQuadrant Baltimore, MD\n>> PostgreSQL Training, Services and Support\n>> [email protected] www.2ndQuadrant.com\n>>\n>>\n>\n",
"msg_date": "Fri, 8 Jan 2010 00:08:57 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, Jan 07, 2010 at 01:38:41PM +0100, Lefteris wrote:\n> airtraffic=# EXPLAIN ANALYZE SELECT \"DayOfWeek\", count(*) AS c FROM\n> ontime WHERE \"Year\" BETWEEN 2000 AND 2008 GROUP BY \"DayOfWeek\" ORDER\n> BY c DESC;\n\nWell, this query basically has to be slow. Correct approach to this\nproblem is to add precalculated aggregates - either with triggers or\nwith some cronjob.\nAfterwards query speed depends only on how good are your aggregates,\nand/or how detailed.\nOf course calculating them is not free, but is done on write (or\nperiodically), and not on user-request, which makes user-requests *much*\nfaster.\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n",
"msg_date": "Fri, 8 Jan 2010 00:39:33 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
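A minimal sketch of the precalculated-aggregate approach depesz describes, assuming the ontime table and the column names used in the queries above (a trigger-based version would keep the summary current on every insert instead of on a schedule):

-- Small summary table, rebuilt periodically (e.g. from cron after each load).
CREATE TABLE ontime_dayofweek_counts (
    "Year"      integer NOT NULL,
    "DayOfWeek" integer NOT NULL,
    c           bigint  NOT NULL,
    PRIMARY KEY ("Year", "DayOfWeek")
);

BEGIN;
TRUNCATE ontime_dayofweek_counts;
INSERT INTO ontime_dayofweek_counts
SELECT "Year", "DayOfWeek", count(*)
FROM ontime
GROUP BY "Year", "DayOfWeek";
COMMIT;

-- The user-facing query now reads a handful of summary rows instead of the whole table.
SELECT "DayOfWeek", sum(c) AS c
FROM ontime_dayofweek_counts
WHERE "Year" BETWEEN 2000 AND 2008
GROUP BY "DayOfWeek"
ORDER BY c DESC;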
{
"msg_contents": "hubert depesz lubaczewski wrote:\n> Well, this query basically has to be slow. Correct approach to this\n> problem is to add precalculated aggregates...\n\nThe point of this data set and associated queries is to see how fast the \ndatabase can do certain types of queries on its own. Some other types \nof database implementations automatically do well here due to column \nstorage and other techniques; the idea is to measure how you can do \nwithout trying to rewrite the app any though. If precomputed aggregates \nwere added, they'd improve performance for everybody else, too.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 07 Jan 2010 19:38:34 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, 2010-01-07 at 13:38 +0100, Lefteris wrote:\n> Reported query times are (in sec):\n> MonetDB 7.9s\n> InfoBright 12.13s\n> LucidDB 54.8s\n\nIt needs to be pointed out that those databases are specifically\noptimised for Data Warehousing, whereas Postgres core is optimised for\nconcurrent write workloads in production systems.\n\nIf you want a best-vs-best type of comparison, you should be looking at\na version of Postgres optimised for Data Warehousing. These results show\nthat Postgres-related options exist that clearly beat the above numbers.\nhttp://community.greenplum.com/showthread.php?t=111\nI note also that Greenplum's Single Node Edition is now free to use, so\nis a reasonable product for comparison on this list.\n\nAlso, I'm unimpressed by a Data Warehouse database that requires\neverything to reside in memory, e.g. MonetDB. That severely limits\nreal-world usability, in my experience because it implies the queries\nyou're running aren't ad-hoc.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 04 Feb 2010 10:41:13 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
}
] |
[
{
"msg_contents": "Hello, \n\nI've been lurking on this list a couple weeks now, and have asked some \"side questions\" to some of the list members, who have been gracious, and helpful, and encouraged me to just dive in and participate on the list.\n\nI'll not tell you the whole story right off the bat, but let me give you a basic outline.\n\nI dual report into two academic departments at the University of Alabama at Birmingham - Computer & Information Sciences and Justice Sciences. Although my background is on the CS side, I specialize in cybercrime investigations and our research focuses on things that help to train or equip cybercrime investigators.\n\nOne of our research projects is called the \"UAB Spam Data Mine\". Basically, we collect spam, use it to detect emerging malware threats, phishing sites, or spam campaigns, and share our findings with law enforcement and our corporate partners.\n\nWe started off small, with only around 10,000 to 20,000 emails per day running on a smallish server. Once we had our basic workflow down, we moved to nicer hardware, and opened the floodgates a bit. We're currently receiving about 1.2 million emails per day, and hope to very quickly grow that to more than 5 million emails per day.\n\nI've got very nice hardware - many TB of very fast disks, and several servers with 12GB of RAM and 8 pentium cores each. \n\nFor the types of investigative support we do, some of our queries are of the 'can you tell me what this botnet was spamming for the past six months', but most of them are more \"real time\", of the \"what is the top spam campaign today?\" or \"what domains are being spammed by this botnet RIGHT NOW\".\n\nWe currently use 15 minute batch queues, where we parse between 10,000 to 20,000 emails every 15 minutes. Each message is assigned a unique message_id, which is a combination of what date and time \"batch\" it is in, followed by a sequential number, so the most recent batch processed this morning starts with \"10Jan07.0\" and goes through \"10Jan07.13800\".\n\nOK, you've done the math . . . we're at 60 million records in the spam table. The other \"main\" table is \"spam_links\" which has the URL information. Its got 170 million records and grows by more than 3 million per day currently. \n\nBelieve it or not, many law enforcement cases actually seek evidence from two or more years ago when its time to go to trial. We're looking at a potential max retention size of a billion emails and 3 billion URLs.\n\n-------------\n\nI don't know what this list considers \"large databases\", but I'm going to go ahead and call 170 million records a \"large\" table.\n\n-------------\n\nI'll have several questions, but I'll limit this thread to:\n\n - if you have 8 pentium cores, 12GB of RAM and \"infinite\" diskspace, what sorts of memory settings would you have in your start up tables?\n\n My biggest question mark there really has to do with how many users I have and how that might alter the results. My research team has about 12 folks who might be using the UAB Spam Data Mine at any given time, plus we have the \"parser\" running pretty much constantly, and routines that are fetching IP addresses for all spammed URLs and nameservers for all spammed domains and constantly updating the databases with that information. In the very near future, we'll be accepting queries directly from law enforcement through a web interface and may have as many as another 12 simultaneous users, so maybe 25 max users. 
We plan to limit \"web users\" to a \"recent\" subset of the data, probably allowing \"today\" \"previous 24 hour\" and \"previous 7 days\" as query options within the web interface. The onsite researchers will bang the heck out of much larger datasets.\n\n\n(I'll let this thread run a bit, and then come back to ask questions about \"vacuum analyze\" and \"partitioned tables\" as a second and third round of questions.) \n\n--\n\n----------------------------------------------------------\n\nGary Warner\nDirector of Research in Computer Forensics\nThe University of Alabama at Birmingham\nDepartment of Computer & Information Sciences\n& Department of Justice Sciences\n205.934.8620 205.422.2113\[email protected] [email protected]\n\n-----------------------------------------------------------\n",
"msg_date": "Thu, 7 Jan 2010 09:23:17 -0600 (CST)",
"msg_from": "Gary Warner <[email protected]>",
"msg_from_op": true,
"msg_subject": "\"large\" spam tables and performance: postgres memory parameters"
},
{
"msg_contents": "Welcome out of the shadows, Gary! ;-)\n\nGary Warner <[email protected]> wrote:\n \n> My biggest question mark there really has to do with how many\n> users I have and how that might alter the results.\n \nIn benchmarks I've run with our software load, I've found that I get\nbest throughput when I use a connection pool which limits the active\ndatabase transaction count to (2 * CPU_count) + effective_spindles. \nCPU_count should be fairly obvious; effective_spindles is\nessentially \"what's the maximum number of random read requests your\ndisk subsystem can productively handle concurrently?\"\n \nOne or two others commented that their benchmark results seemed to\nfit with that formula. I don't know just how far to trust it as a\ngeneralization, but in the absence of anything else, it probably\nisn't a horrible rule of thumb. If you expect to have more logical\nconnections than that number, you might want to establish a\nconnection pool which limits to that number. Be sure that if it is\n\"full\" when a request to start a transaction comes in, the request\nqueues instead of failing.\n \nTo convince yourself that this really helps, picture a hypothetical\nmachine which can only make progress on one request at a time, but\nwill task-switch among as many requests as are presented. Then\npicture 100 requests being presented simultaneously, each of which\nneeds one second of time to complete. Even without figuring in the\noverhead of task switching or the cache effects, it's clear that a\nconnection \"pool\" of one connection improve response time by almost\n50% with no cost in throughput. When you factor in the very real\ncosts of having large numbers of requests competing, both throughput\nand response time win with connection pooling above some threshold.\nOf course, with multiple CPUs, multiple spindles, network latency,\netc., the pool should be large enough to tend to keep them all busy.\n \nOf course, the exact point at which a connection pool gives optimal\nperformance depends on so many factors that the only way to *really*\nget it right is to test with a realistic load through your actual\nsoftware. The above is just intended to suggest a reasonable\nstarting point.\n \n-Kevin\n",
"msg_date": "Thu, 07 Jan 2010 11:35:48 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"large\" spam tables and performance: postgres memory parameters"
},
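Worked through for the hardware described in this thread, the formula above gives roughly the following; the CPU count (8) comes from Gary's post, while the effective spindle count is purely an assumed figure since the disk layout was not given:

-- (2 * CPU_count) + effective_spindles, with 16 effective spindles assumed
SELECT (2 * 8) + 16 AS suggested_max_active_transactions;  -- 32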
{
"msg_contents": "On Thu, Jan 7, 2010 at 8:23 AM, Gary Warner <[email protected]> wrote:\n> Hello,\n>\n> I've been lurking on this list a couple weeks now, and have asked some \"side questions\" to some of the list members, who have been gracious, and helpful, and encouraged me to just dive in and participate on the list.\n>\n> I'll not tell you the whole story right off the bat, but let me give you a basic outline.\n>\n> I dual report into two academic departments at the University of Alabama at Birmingham - Computer & Information Sciences and Justice Sciences. Although my background is on the CS side, I specialize in cybercrime investigations and our research focuses on things that help to train or equip cybercrime investigators.\n>\n> One of our research projects is called the \"UAB Spam Data Mine\". Basically, we collect spam, use it to detect emerging malware threats, phishing sites, or spam campaigns, and share our findings with law enforcement and our corporate partners.\n>\n> We started off small, with only around 10,000 to 20,000 emails per day running on a smallish server. Once we had our basic workflow down, we moved to nicer hardware, and opened the floodgates a bit. We're currently receiving about 1.2 million emails per day, and hope to very quickly grow that to more than 5 million emails per day.\n>\n> I've got very nice hardware - many TB of very fast disks, and several servers with 12GB of RAM and 8 pentium cores each.\n\nAre you running 8.3.x? I'd go with that as a minimum for now.\n\nYou'll want to make sure those fast disks are under a fast RAID setup\nlike RAID-10, perhaps with a high quality RAID controller with battery\nbacked cache as well. I/O is going to be your real issue here, not\nCPU, most likely. Also look at increasing RAM to 48 or 96Gig if if\nyou can afford it. I assume your pentium cores are Nehalem since\nyou've got 12G of ram (multiple of 3). Those are a good choice here,\nthey're fast and have memory access.\n\n> For the types of investigative support we do, some of our queries are of the 'can you tell me what this botnet was spamming for the past six months', but most of them are more \"real time\", of the \"what is the top spam campaign today?\" or \"what domains are being spammed by this botnet RIGHT NOW\".\n\nThen you'll probably want to look at partitioning your data.\n\n> We currently use 15 minute batch queues, where we parse between 10,000 to 20,000 emails every 15 minutes. Each message is assigned a unique message_id, which is a combination of what date and time \"batch\" it is in, followed by a sequential number, so the most recent batch processed this morning starts with \"10Jan07.0\" and goes through \"10Jan07.13800\".\n\n> I don't know what this list considers \"large databases\", but I'm going to go ahead and call 170 million records a \"large\" table.\n\n\n> - if you have 8 pentium cores, 12GB of RAM and \"infinite\" diskspace, what sorts of memory settings would you have in your start up tables?\n\ncrank up shared_buffers and effective cache size. Probably could use\na bump for work_mem, something in the 16 to 32 Meg range. Especially\nsince you're only looking at a dozen or so, not hundreds, of\nconcurrent users. work_mem is per sort, so it can get out of hand\nfast if you crank it up too high, and for most users higher settings\nwon't help anyway.\n\n> My biggest question mark there really has to do with how many users I have and how that might alter the results. 
My research team has about 12 folks who might be using the UAB Spam Data Mine at any given time, plus we have the \"parser\" running pretty much constantly, and routines that are fetching IP addresses for all spammed URLs and nameservers for all spammed domains and constantly updating the databases with that information. In the very near future, we'll be accepting queries directly from law enforcement through a web interface and may have as many as another 12 simultaneous users, so maybe 25 max users. We plan to limit \"web users\" to a \"recent\" subset of the data, probably allowing \"today\" \"previous 24 hour\" and \"previous 7 days\" as query options within the web interface. The onsite researchers will bang the heck out of much larger datasets.\n\nYou might want to look at pre-rolling common requests if that isn't\nwhat you're already doing with the routines you're mentioning up\nthere.\n\n> (I'll let this thread run a bit, and then come back to ask questions about \"vacuum analyze\" and \"partitioned tables\" as a second and third round of questions.)\n\nYou shouldn't have a lot of vacuum problems since you won't be\ndeleting anything. You might wanna crank up autovacuum aggressiveness\nas regards analyze though. Partitioning is in your future.\n",
"msg_date": "Thu, 7 Jan 2010 18:59:39 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"large\" spam tables and performance: postgres memory parameters"
},
{
"msg_contents": "* Gary Warner ([email protected]) wrote:\n> - if you have 8 pentium cores, 12GB of RAM and \"infinite\" diskspace, what sorts of memory settings would you have in your start up tables?\n\nIf the PG database is the only thing on the system, I'd probably go with\nsomething like:\n\nshared_buffers = 4GB\ntemp_buffers = 1GB\nwork_mem = 128M # Maybe adjust this during a session if you have\n # big/complex queries\nmaintenance_work_mem = 256M\ncheckpoint_segments = 20 # Maybe more..\neffective_cache_size = 8GB # Maybe more if you have a SAN which is doing\n # cacheing for you too..\n\n\n> My biggest question mark there really has to do with how many users I have and how that might alter the results. \n\nPresuming what you mean by this is \"how would the number of users change\nthe settings I'm suggesting above\", I'd say \"probably not much for the\nnumber of users you're talking about.\". Really, 12-25 users just isn't\nall that many. You probably need a queueing system to handle requests\nthat are going to take a long time to complete (as in, don't expect the\nuser or their web browser to stick around while you run a query that\ntakes half an hour to complete...).\n\n> (I'll let this thread run a bit, and then come back to ask questions about \"vacuum analyze\" and \"partitioned tables\" as a second and third round of questions.) \n\nYou should definitely be lookig to do partitioning based on the type of\ndata and the way you want to restrict the queries. I'm not convinced\nyou'd actually *need* to restrict the queries to recent things if you\npartition correctly- you might restrict the total *range* to be\nsomething small enough that it won't take too long.\n\nIt sounds like you have a number of systems, in which case you might\nconsider sharding if you get a large number of users (where large is a\nwhole lot bigger than 25...) or you find that users really do need\nreal-time results on very large ranges. It involves a fair bit of code\nto do and do well though, so you really have to consider it carefully\nand make sure it will help your important use cases (and not too badly\nimpact your other use cases) before going that route.\n\nautovacuum is your friend.. though you might need to tune it.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Thu, 7 Jan 2010 21:02:58 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: \"large\" spam tables and performance: postgres memory\n\tparameters"
}
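Both replies recommend partitioning the spam tables by date; a minimal sketch using the inheritance-plus-CHECK-constraint approach available in 8.3/8.4, with table and column names that are only assumptions based on the description of the workload, not Gary's actual schema:

-- Parent table; each child holds one month of messages.
CREATE TABLE spam (
    message_id  text      NOT NULL,
    received_at timestamp NOT NULL,
    subject     text,
    body        text
);

CREATE TABLE spam_2010_01 (
    CHECK (received_at >= DATE '2010-01-01' AND received_at < DATE '2010-02-01')
) INHERITS (spam);

CREATE INDEX spam_2010_01_received_idx ON spam_2010_01 (received_at);

-- The 15-minute batch loader inserts directly into the current month's child.
-- With constraint_exclusion enabled, a "previous 7 days" query against the
-- parent only scans the one or two child tables that can contain those rows.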
] |
[
{
"msg_contents": "----- \"Lefteris\" <[email protected]> escreveu:\n> > Did you ever try increasing shared_buffers to what was suggested\n> (around\n> > 4 GB) and see what happens (I didn't see it in your posts)?\n> \n> No I did not to that yet, mainly because I need the admin of the\n> machine to change the shmmax of the kernel and also because I have no\n> multiple queries running. Does Seq scan uses shared_buffers?\n\nHaving multiple queries running is *not* the only reason you need lots of shared_buffers.\nThink of shared_buffers as a page cache, data in PostgreSQL is organized in pages.\nIf one single query execution had a step that brought a page to the buffercache, it's enough to increase another step speed and change the execution plan, since the data access in memory is (usually) faster then disk.\n\n> > help performance very much on multiple exequtions of the same\n> query.\n\nThis is also true.\nThis kind of test should, and will, give different results in subsequent executions.\n\n> > From the description of the data (\"...from years 1988 to 2009...\")\n> it\n> > looks like the query for \"between 2000 and 2009\" pulls out about\n> half of\n> > the data. If an index could be used instead of seqscan, it could be\n> > perhaps only 50% faster, which is still not very comparable to\n> others.\n\nThe use of the index over seqscan has to be tested. I don't agree in 50% gain, since simple integers stored on B-Tree have a huge possibility of beeing retrieved in the required order, and the discarded data will be discarder quickly too, so the gain has to be measured. \n\nI bet that an index scan will be a lot faster, but it's just a bet :)\n\n> > The table is very wide, which is probably why the tested databases\n> can\n> > deal with it faster than PG. You could try and narrow the table\n> down\n> > (for instance: remove the Div* fields) to make the data more\n> > \"relational-like\". In real life, speedups in this circumstances\n> would\n> > probably be gained by normalizing the data to make the basic table\n> > smaller and easier to use with indexing.\n\nUgh. I don't think so. That's why indexes were invented. PostgreSQL is smart enough to \"jump\" over columns using byte offsets.\nA better option for this table is to partition it in year (or year/month) chunks.\n\n45GB is not a so huge table compared to other ones I have seen before. I have systems where each partition is like 10 or 20GB and data is very fast to access even whith aggregation queries.\n\nFlavio Henrique A. Gurgel\ntel. 55-11-2125.4765\nfax. 55-11-2125.4777\nwww.4linux.com.br\nFREE SOFTWARE SOLUTIONS\n",
"msg_date": "Thu, 7 Jan 2010 13:45:16 -0200 (BRST)",
"msg_from": "\"Gurgel, Flavio\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On Thu, 7 Jan 2010, Gurgel, Flavio wrote:\n> If one single query execution had a step that brought a page to the \n> buffercache, it's enough to increase another step speed and change the \n> execution plan, since the data access in memory is (usually) faster then \n> disk.\n\nPostgres does not change a query plan according to the shared_buffers \nsetting. It does not anticipate one step contributing to another step in \nthis way. It does however make use of the effective_cache_size setting to \nestimate this effect, and that does affect the planner.\n\n> The use of the index over seqscan has to be tested. I don't agree in 50% \n> gain, since simple integers stored on B-Tree have a huge possibility of \n> beeing retrieved in the required order, and the discarded data will be \n> discarder quickly too, so the gain has to be measured.\n>\n> I bet that an index scan will be a lot faster, but it's just a bet :)\n\nIn a situation like this, the opposite will be true. If you were accessing \na very small part of a table, say to order by a field with a small limit, \nthen an index can be very useful by providing the results in the correct \norder. However, in this case, almost the entire table has to be read. \nChanging the order in which it is read will mean that the disc access is \nno longer sequential, which will slow things down, not speed them up. \nThe Postgres planner isn't stupid (mostly), there is probably a good \nreason why it isn't using an index scan.\n\n>> The table is very wide, which is probably why the tested databases can\n>> deal with it faster than PG. You could try and narrow the table down\n>> (for instance: remove the Div* fields) to make the data more\n>> \"relational-like\". In real life, speedups in this circumstances would\n>> probably be gained by normalizing the data to make the basic table\n>> smaller and easier to use with indexing.\n>\n> Ugh. I don't think so. That's why indexes were invented. PostgreSQL is \n> smart enough to \"jump\" over columns using byte offsets.\n> A better option for this table is to partition it in year (or year/month) chunks.\n\nPostgres (mostly) stores the columns for a row together with a row, so \nwhat you say is completely wrong. Postgres does not \"jump\" over columns \nusing byte offsets in this way. The index references a row in a page on \ndisc, and that page is fetched separately in order to retrieve the row. \nThe expensive part is physically moving the disc head to the right part of \nthe disc in order to fetch the correct page from the disc - jumping over \ncolumns will not help with that at all.\n\nReducing the width of the table will greatly improve the performance of a \nsequential scan, as it will reduce the size of the table on disc, and \ntherefore the time taken to read the entire table sequentially.\n\nMoreover, your suggestion of partitioning the table may not help much with \nthis query. It will turn a single sequential scan into a UNION of many \ntables, which may be harder for the planner to plan. Also, for queries \nthat access small parts of the table, indexes will help more than \npartitioning will.\n\nPartitioning will help most in the case where you want to summarise a \nsingle year's data. Not really otherwise.\n\nMatthew\n\n-- \n Q: What's the difference between ignorance and apathy?\n A: I don't know, and I don't care.\n",
"msg_date": "Thu, 7 Jan 2010 16:23:40 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On 7/01/2010 11:45 PM, Gurgel, Flavio wrote:\n\n>>> The table is very wide, which is probably why the tested databases\n>> can\n>>> deal with it faster than PG. You could try and narrow the table\n>> down\n>>> (for instance: remove the Div* fields) to make the data more\n>>> \"relational-like\". In real life, speedups in this circumstances\n>> would\n>>> probably be gained by normalizing the data to make the basic table\n>>> smaller and easier to use with indexing.\n>\n> Ugh. I don't think so. That's why indexes were invented. PostgreSQL is smart enough to \"jump\" over columns using byte offsets.\n\nEven if Pg tried to do so, it would generally not help. The cost of a \ndisk seek to the start of the next row would be much greater than the \ncost of continuing to sequentially read until that point was reached.\n\nWith the amazing sequential read speeds and still mediocre seek speeds \nmodern disks it's rarely worth seeking over unwanted data that's less \nthan a megabyte or two in size.\n\nAnyway, in practice the OS-level, array-level and/or disk-level \nreadahead would generally ensure that the data you were trying to skip \nhad already been read or was in the process of being read.\n\nCan Pg even read partial records ? I thought it all operated on a page \nlevel, where if an index indicates that a particular value is present on \na page the whole page gets read in and all records on the page are \nchecked for the value of interest. No?\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 08 Jan 2010 11:31:09 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "Craig Ringer <[email protected]> writes:\n> Can Pg even read partial records ? I thought it all operated on a page \n> level, where if an index indicates that a particular value is present on \n> a page the whole page gets read in and all records on the page are \n> checked for the value of interest. No?\n\nThe whole page gets read, but we do know which record on the page\nthe index entry is pointing at.\n\n(This statement is an oversimplification, because of lossy indexes\nand lossy bitmap scans, but most of the time it's not a matter of\n\"checking all records on the page\".)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 07 Jan 2010 22:43:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark "
}
] |
[
{
"msg_contents": "Ludwik Dylag <[email protected]> wrote:\n> I would suggest:\n> 1. turn off autovacuum\n> 1a. ewentually tune db for better performace for this kind of\n> operation (cant not help here)\n> 2. restart database\n> 3. drop all indexes\n> 4. update\n> 5. vacuum full table\n> 6. create indexes\n> 7. turn on autovacuum\n \nI've only ever attempted something like that with a few tens of\nmillions of rows. I gave up on waiting for the VACUUM FULL step\nafter a few days.\n \nI some scheduled down time is acceptable (with \"some\" kind of hard\nto estimate accurately) the best bet would be to add the column with\nthe USING clause to fill in the value. (I think that would cause a\ntable rewrite; if not, then add something to the ALTER TABLE which\nwould.) My impression is that the OP would rather stretch out the\nimplementation than to suffer down time, which can certainly be a\nvalid call.\n \nIf that is the goal, then the real question is whether there's a way\nto tune the incremental updates to speed that phase. Carlo, what\nversion of PostgreSQL is this? Can you show us the results of an\nEXPLAIN ANALYZE for the run of one iteration of the UPDATE? \nInformation on the OS, hardware, PostgreSQL build configuration, and\nthe contents of postgresql.conf (excluding all comments) could help\nus spot possible techniques to speed this up.\n \n-Kevin\n",
"msg_date": "Thu, 07 Jan 2010 10:57:59 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
},
{
"msg_contents": "On Thursday 07 January 2010 09:57:59 Kevin Grittner wrote:\n> Ludwik Dylag <[email protected]> wrote:\n> > I would suggest:\n> > 1. turn off autovacuum\n> > 1a. ewentually tune db for better performace for this kind of\n> > operation (cant not help here)\n> > 2. restart database\n> > 3. drop all indexes\n> > 4. update\n> > 5. vacuum full table\n> > 6. create indexes\n> > 7. turn on autovacuum\n> \n> I've only ever attempted something like that with a few tens of\n> millions of rows. I gave up on waiting for the VACUUM FULL step\n> after a few days.\n> \n> I some scheduled down time is acceptable (with \"some\" kind of hard\n> to estimate accurately) the best bet would be to add the column with\n> the USING clause to fill in the value. (I think that would cause a\n> table rewrite; if not, then add something to the ALTER TABLE which\n> would.) My impression is that the OP would rather stretch out the\n> implementation than to suffer down time, which can certainly be a\n> valid call.\n> \n> If that is the goal, then the real question is whether there's a way\n> to tune the incremental updates to speed that phase. Carlo, what\n> version of PostgreSQL is this? Can you show us the results of an\n> EXPLAIN ANALYZE for the run of one iteration of the UPDATE?\n> Information on the OS, hardware, PostgreSQL build configuration, and\n> the contents of postgresql.conf (excluding all comments) could help\n> us spot possible techniques to speed this up.\n> \n> -Kevin\n> \n\n\n\nIf you can come up with an effective method of tracking updates/deletes/inserts \nsuch as a trigger that writes the PK to a separate table upon any inserts, \nupdates or deletes to the table you could do something like this:\n\n\n\n1) create new table (no indexes) with the structure you want the table to have \nat the end of the process (i.e. the post-altered state) [new_tab]\n\n2) create the insert,update,delete triggers mentioned above on the existing \ntable [curr_tab] and write all the PK id's that change into a 3rd table \n[changed_keys]\n\n3) kick off a process that simply does a select from curr_tab into new_tab and \npopulates/back-fills the new column as part of the query\n\n4) let it run as long as it takes\n\n5) once it's complete do this:\n \n Create the all the indexes on the new_tab\n \n BEGIN;\n \n LOCK TABLE curr_tab;\n\n DELETE from new_tab \n where pk_id in (select distinct pk_id from changed_keys);\n\n INSERT into new_tab\n select * from curr_tab\n where curr_tab.pk_id in (select distinct pk_id from changed_keys);\n\n ALTER TABLE curr_tab RENAME to old_tab;\n\n ALTER TABLE new_tab RENAME to curr_tab;\n\n COMMIT;\n\n\n\nAlso you might want to consider partitioning this table in the process...\n\n\n\nOnce you're confident you no longer need the old table [old_tab] you can drop \nit \n\n\n",
"msg_date": "Thu, 7 Jan 2010 10:46:24 -0700",
"msg_from": "Kevin Kempter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Massive table (500M rows) update nightmare"
}
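Step 2 of the procedure above leaves the change-tracking trigger unspecified; a minimal PL/pgSQL sketch, reusing the curr_tab, changed_keys and pk_id names from the post:

CREATE TABLE changed_keys (pk_id bigint NOT NULL);

CREATE OR REPLACE FUNCTION log_changed_key() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO changed_keys (pk_id) VALUES (OLD.pk_id);
    ELSE
        INSERT INTO changed_keys (pk_id) VALUES (NEW.pk_id);
    END IF;
    RETURN NULL;  -- AFTER row trigger, so the return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER curr_tab_track_changes
    AFTER INSERT OR UPDATE OR DELETE ON curr_tab
    FOR EACH ROW EXECUTE PROCEDURE log_changed_key();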
] |
[
{
"msg_contents": "----- \"Matthew Wakeling\" <[email protected]> escreveu:\n> On Thu, 7 Jan 2010, Gurgel, Flavio wrote:\n> Postgres does not change a query plan according to the shared_buffers\n> \n> setting. It does not anticipate one step contributing to another step\n> in \n> this way. It does however make use of the effective_cache_size setting\n> to \n> estimate this effect, and that does affect the planner.\n\nThat was what I was trying to say :)\n\n> In a situation like this, the opposite will be true. If you were\n> accessing \n> a very small part of a table, say to order by a field with a small\n> limit, \n> then an index can be very useful by providing the results in the\n> correct \n> order. However, in this case, almost the entire table has to be read.\n> \n> Changing the order in which it is read will mean that the disc access\n> is \n> no longer sequential, which will slow things down, not speed them up.\n> \n> The Postgres planner isn't stupid (mostly), there is probably a good \n> reason why it isn't using an index scan.\n\nSorry but I disagree. This is the typical case where the test has to be made.\nThe results are partial, let's say 50% of the table. Considerind that the disk is fast enough, the cost estimation of sequential and random reads are in a proportion of 1 to 4, considering default settings in PostgreSQL.\n\nIf, and only if the data is time distributed in the table (which can be this case during bulk load) there will be some gain in seqscan.\nIf, let's say 50% of the 50% (25% of the data) is time distributed (which can be the case in most data warehouses), the cost of random reads * number of reads can be cheaper then seqscan.\n\nThe volume of data doesn't turn the index generations so deep, let's say 2^7 or 2^8. This can lead to very fast data retrieval.\n\n> > Ugh. I don't think so. That's why indexes were invented. PostgreSQL\n> is \n> > smart enough to \"jump\" over columns using byte offsets.\n> > A better option for this table is to partition it in year (or\n> year/month) chunks.\n> \n> Postgres (mostly) stores the columns for a row together with a row, so\n> \n> what you say is completely wrong. Postgres does not \"jump\" over\n> columns \n> using byte offsets in this way. The index references a row in a page\n> on \n> disc, and that page is fetched separately in order to retrieve the\n> row. \n> The expensive part is physically moving the disc head to the right\n> part of \n> the disc in order to fetch the correct page from the disc - jumping\n> over \n> columns will not help with that at all.\n\nIf the index point to the right pages, I keep the circumstance of 1 to 4 cost.\nAgreed about seqscans. When I talked about byte offsets I was talking of data in the same disk page, and this does not help I/O reduction at all.\n\n> Reducing the width of the table will greatly improve the performance\n> of a \n> sequential scan, as it will reduce the size of the table on disc, and\n> \n> therefore the time taken to read the entire table sequentially.\n\nI just don't understand if you're seeing this situation as OLTP or DW, sorry.\nDW tables are usually wider then OLTP.\n\n> Moreover, your suggestion of partitioning the table may not help much\n> with \n> this query. It will turn a single sequential scan into a UNION of many\n> \n> tables, which may be harder for the planner to plan. 
Also, for queries\n\nPartitioned plans are a collection of independent plans, one for each table in the inheritance.\nIf the data to be retrieved is confined to selected partitions, you won't seqscan the partitions you don't need.\nThe cost of the \"union\" of the aggregations in memory is a lot cheaper than the avoided seqscans.\n\nI have at least 3 cases of partitioning in queries exactly like this that dropped from 3min to 5s execution times.\nAll of them are DW tables, with aggregation and huge seqscans.\n\nI stand by my point that the right use of indexes here has to be tested.\n\nFlavio Henrique A. Gurgel\ntel. 55-11-2125.4786\ncel. 55-11-8389.7635\nwww.4linux.com.br\nFREE SOFTWARE SOLUTIONS\n",
"msg_date": "Thu, 7 Jan 2010 15:57:01 -0200 (BRST)",
"msg_from": "\"Gurgel, Flavio\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
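The "test has to be made" point above is cheap to check in practice: disable sequential scans for a single transaction and compare plans and timings. A minimal sketch, assuming the benchmark table is called ontime with "Year" and "Month" columns (names not shown in this part of the thread):

BEGIN;
SET LOCAL enable_seqscan = off;   -- for testing only, never in production
EXPLAIN ANALYZE
SELECT "Month", count(*)
  FROM ontime
 WHERE "Year" = 2009
 GROUP BY "Month";
ROLLBACK;   -- SET LOCAL settings vanish with the transaction

Comparing that plan and timing against the default seqscan plan settles the index-versus-seqscan argument for this particular data distribution and cache state.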
{
"msg_contents": "On Thu, Jan 7, 2010 at 10:57 AM, Gurgel, Flavio <[email protected]> wrote:\n> ----- \"Matthew Wakeling\" <[email protected]> escreveu:\n>> On Thu, 7 Jan 2010, Gurgel, Flavio wrote:\n>> Postgres does not change a query plan according to the shared_buffers\n>>\n>> setting. It does not anticipate one step contributing to another step\n>> in\n>> this way. It does however make use of the effective_cache_size setting\n>> to\n>> estimate this effect, and that does affect the planner.\n>\n> That was what I was trying to say :)\n>\n>> In a situation like this, the opposite will be true. If you were\n>> accessing\n>> a very small part of a table, say to order by a field with a small\n>> limit,\n>> then an index can be very useful by providing the results in the\n>> correct\n>> order. However, in this case, almost the entire table has to be read.\n>>\n>> Changing the order in which it is read will mean that the disc access\n>> is\n>> no longer sequential, which will slow things down, not speed them up.\n>>\n>> The Postgres planner isn't stupid (mostly), there is probably a good\n>> reason why it isn't using an index scan.\n>\n> Sorry but I disagree. This is the typical case where the test has to be made.\n> The results are partial, let's say 50% of the table. Considerind that the disk is fast enough, the cost estimation of sequential and random reads are in a proportion of 1 to 4, considering default settings in PostgreSQL.\n\nYou do know that indexes in postgresql are not \"covering\" right? I.e.\nafter hitting the index, the db then has to hit the table to see if\nthose rows are in fact visible. So there's no such thing in pgsql, at\nthe moment, as an index only scan.\n",
"msg_date": "Thu, 7 Jan 2010 11:02:13 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "----- \"Scott Marlowe\" <[email protected]> escreveu:\n \n> You do know that indexes in postgresql are not \"covering\" right? \n> I.e.\n> after hitting the index, the db then has to hit the table to see if\n> those rows are in fact visible. So there's no such thing in pgsql,\n> at\n> the moment, as an index only scan.\n\nThat was just an estimation of effort to reach a tuple through seqscan X indexscan. In both cases the tuple have to be checked, sure.\n\nFlavio Henrique A. Gurgel\ntel. 55-11-2125.4786\ncel. 55-11-8389.7635\nwww.4linux.com.br\nFREE SOFTWARE SOLUTIONS\n",
"msg_date": "Thu, 7 Jan 2010 16:10:21 -0200 (BRST)",
"msg_from": "\"Gurgel, Flavio\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "This table is totally unnormalized. Normalize it and try again. You'll\nprobably see a huge speedup. Maybe even 10x. My mantra has always been\nless data stored means less data to scan means faster scans.\n\nOn Thu, Jan 7, 2010 at 12:57 PM, Gurgel, Flavio <[email protected]>wrote:\n\n> ----- \"Matthew Wakeling\" <[email protected]> escreveu:\n> > On Thu, 7 Jan 2010, Gurgel, Flavio wrote:\n> > Postgres does not change a query plan according to the shared_buffers\n> >\n> > setting. It does not anticipate one step contributing to another step\n> > in\n> > this way. It does however make use of the effective_cache_size setting\n> > to\n> > estimate this effect, and that does affect the planner.\n>\n> That was what I was trying to say :)\n>\n> > In a situation like this, the opposite will be true. If you were\n> > accessing\n> > a very small part of a table, say to order by a field with a small\n> > limit,\n> > then an index can be very useful by providing the results in the\n> > correct\n> > order. However, in this case, almost the entire table has to be read.\n> >\n> > Changing the order in which it is read will mean that the disc access\n> > is\n> > no longer sequential, which will slow things down, not speed them up.\n> >\n> > The Postgres planner isn't stupid (mostly), there is probably a good\n> > reason why it isn't using an index scan.\n>\n> Sorry but I disagree. This is the typical case where the test has to be\n> made.\n> The results are partial, let's say 50% of the table. Considerind that the\n> disk is fast enough, the cost estimation of sequential and random reads are\n> in a proportion of 1 to 4, considering default settings in PostgreSQL.\n>\n> If, and only if the data is time distributed in the table (which can be\n> this case during bulk load) there will be some gain in seqscan.\n> If, let's say 50% of the 50% (25% of the data) is time distributed (which\n> can be the case in most data warehouses), the cost of random reads * number\n> of reads can be cheaper then seqscan.\n>\n> The volume of data doesn't turn the index generations so deep, let's say\n> 2^7 or 2^8. This can lead to very fast data retrieval.\n>\n> > > Ugh. I don't think so. That's why indexes were invented. PostgreSQL\n> > is\n> > > smart enough to \"jump\" over columns using byte offsets.\n> > > A better option for this table is to partition it in year (or\n> > year/month) chunks.\n> >\n> > Postgres (mostly) stores the columns for a row together with a row, so\n> >\n> > what you say is completely wrong. Postgres does not \"jump\" over\n> > columns\n> > using byte offsets in this way. The index references a row in a page\n> > on\n> > disc, and that page is fetched separately in order to retrieve the\n> > row.\n> > The expensive part is physically moving the disc head to the right\n> > part of\n> > the disc in order to fetch the correct page from the disc - jumping\n> > over\n> > columns will not help with that at all.\n>\n> If the index point to the right pages, I keep the circumstance of 1 to 4\n> cost.\n> Agreed about seqscans. 
When I talked about byte offsets I was talking of\n> data in the same disk page, and this does not help I/O reduction at all.\n>\n> > Reducing the width of the table will greatly improve the performance\n> > of a\n> > sequential scan, as it will reduce the size of the table on disc, and\n> >\n> > therefore the time taken to read the entire table sequentially.\n>\n> I just don't understand if you're seeing this situation as OLTP or DW,\n> sorry.\n> DW tables are usually wider then OLTP.\n>\n> > Moreover, your suggestion of partitioning the table may not help much\n> > with\n> > this query. It will turn a single sequential scan into a UNION of many\n> >\n> > tables, which may be harder for the planner to plan. Also, for queries\n>\n> Partitioned plans are a collection of an independent plan for each table in\n> the inheritance.\n> If the data to be retrieved is confined in selected partitions, you won't\n> seqscan the partitions you don't need.\n> The cost of the \"union\" of the aggregations in memory is a lot cheaper then\n> the avoided seqscans.\n>\n> I have at least 3 cases of partitioning in queries exactly like this that\n> droped from 3min to 5s execution times.\n> All of that DW tables, with aggregation and huge seqscans.\n>\n> I keep my word that the right use of indexes here has to be tested.\n>\n> Flavio Henrique A. Gurgel\n> tel. 55-11-2125.4786\n> cel. 55-11-8389.7635\n> www.4linux.com.br\n> FREE SOFTWARE SOLUTIONS\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Thu, 7 Jan 2010 13:11:15 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
},
{
"msg_contents": "On 8/01/2010 2:11 AM, Nikolas Everett wrote:\n> This table is totally unnormalized. Normalize it and try again. You'll\n> probably see a huge speedup. Maybe even 10x. My mantra has always been\n> less data stored means less data to scan means faster scans.\n\nSometimes one intentionally denormalizes storage, though. JOIN costs can \nbe considerable too, and if most of the time you're interested in all \nthe data for a record not just a subset of it, storing it denormalized \nis often faster and cheaper than JOINing for it or using subqueries to \nfetch it.\n\nNormalization or any other splitting of record into multiple separately \nstored records also has costs in complexity, management, the need for \nadditional indexes, storage of foreign key references, all the extra \ntuple headers you need to store, etc.\n\nIt's still generally the right thing to do, but it should be thought \nabout, not just tackled blindly. I only tend to view it as a no-brainer \nif the alternative is storing numbered fields (\"field0\", \"field1\", \n\"field2\", etc) ... and even then there are exceptions. One of my schema \nat the moment has address_line_1 through address_line_4 in a `contact' \nentity, and there's absolutely *no* way I'm splitting that into a \nseparate table of address_lines accessed by join and sort! (Arguably I \nshould be using a single `text' field with embedded newlines instead, \nthough).\n\nSometimes it's even better to hold your nose and embed an array in a \nrecord rather than join to an external table. Purism can be taken too far.\n\nNote that Pg's TOAST mechanism plays a part here, too. If you have a big \n`text' field, it's probably going to get stored out-of-line (TOASTed) \nanyway, and TOAST is going to be cleverer about fetching it than you \nwill be using a JOIN. So storing it in-line is likely to be the right \nway to go. You can even force out-of-line storage if you're worried.\n\nIn the case of this benchmark, even if they split much of this data out \ninto other tables by reference, it's likely to be slower rather than \nfaster if they still want the data they've split out for most of their \nqueries.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 08 Jan 2010 11:45:00 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Air-traffic benchmark"
}
] |
[
{
"msg_contents": "On 01/05/2010 08:34 PM, Robert Haas [[email protected]] wrote:\n> - If you have other queries where this index helps (even though it is\n> hurting this one), then you're going to have to find a way to execute\n> the query without using bound parameters - i.e. with the actual values\n> in there instead of $1 through $4. That will allow the planner to see\n> that the index scan is a loser because it will see that there are a\n> lot of rows in the specified range of ts_interval_start_times.\nI think that this is possible without too much work.\n\nFYI - this test is still running and the same query has been executed at \nleast 2 more times (it gets done 1-24 times per day) since it took 124M \nms with acceptable response times (several secs). I don't see how either \nof the 2 query plans posted could've taken that long (and the actually \nexecution times I posted confirm this), so I'm assuming that some other \nplan was used for the 124M ms execution. Seems like it must have been \nsome NxM plan. Do you think that autovacuuming more frequently would \nprevent the query planner from making this poor choice?\n\nBrian\n",
"msg_date": "Thu, 07 Jan 2010 10:43:25 -0800",
"msg_from": "Brian Cox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query looping?"
},
{
"msg_contents": "On Thu, Jan 7, 2010 at 1:43 PM, Brian Cox <[email protected]> wrote:\n> On 01/05/2010 08:34 PM, Robert Haas [[email protected]] wrote:\n>>\n>> - If you have other queries where this index helps (even though it is\n>> hurting this one), then you're going to have to find a way to execute\n>> the query without using bound parameters - i.e. with the actual values\n>> in there instead of $1 through $4. That will allow the planner to see\n>> that the index scan is a loser because it will see that there are a\n>> lot of rows in the specified range of ts_interval_start_times.\n>\n> I think that this is possible without too much work.\n\nOh, good.\n\n> FYI - this test is still running and the same query has been executed at\n> least 2 more times (it gets done 1-24 times per day) since it took 124M ms\n> with acceptable response times (several secs). I don't see how either of the\n> 2 query plans posted could've taken that long (and the actually execution\n> times I posted confirm this), so I'm assuming that some other plan was used\n> for the 124M ms execution. Seems like it must have been some NxM plan. Do\n> you think that autovacuuming more frequently would prevent the query planner\n> from making this poor choice?\n\nThat seems pretty speculative... I'm not really sure.\n\n...Robert\n",
"msg_date": "Thu, 7 Jan 2010 15:16:39 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query looping?"
}
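To see the bound-parameter effect Robert describes, one can compare a prepared statement's generic plan with the plan built for literal values. This is only a sketch; the table name ts_stats is a stand-in, and ts_interval_start_time is taken from the column mentioned above:

PREPARE range_count (timestamptz, timestamptz) AS
  SELECT count(*)
    FROM ts_stats
   WHERE ts_interval_start_time >= $1
     AND ts_interval_start_time < $2;

EXPLAIN EXECUTE range_count('2009-12-01', '2010-01-01');  -- plan chosen without seeing the values

EXPLAIN ANALYZE
SELECT count(*)
  FROM ts_stats
 WHERE ts_interval_start_time >= '2009-12-01'
   AND ts_interval_start_time < '2010-01-01';             -- plan chosen with the real range

On releases of that era (pre-9.2) the prepared form is planned once, blind to how wide the range is, which is why inlining the actual values can avoid the bad index-scan plan.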
] |
[
{
"msg_contents": "Hi,\n\nI've got a Fusion IO disk (actually a HP StorageWorks IO Accelerator)\nthat I'm going to performance test with postgresql on windows. I'm\nguessing people here may be interested in the results.\n\nSo, does there exist any simple way to find some... comparable numbers\nfor a server - should I use pgbech here? Is bonnie++ something I\ncan/should run?\n\n\n-- \nEld på åren og sol på eng gjer mannen fegen og fjåg. [Jøtul]\n<demo> 2010 Tore Halvorsen || +052 0553034554\n",
"msg_date": "Fri, 8 Jan 2010 12:03:55 +0100",
"msg_from": "Tore Halvorsen <[email protected]>",
"msg_from_op": true,
"msg_subject": "FusionIO performance"
}
] |
[
{
"msg_contents": "Hi ,\n\nI have a simple query with two tables.\nms_data ~ 4500000 rows\nms_commands_history ~ 500000 rows\n\nI have done analyze and there are indexes.\nMy question is why the planner didn't do the index scan first on ms_data \nto reduce the rows to ~ 11000 and the use the PK index on \nms_commands_history.\n\nNow, if I red the explain correctly it first do the seq_scan on \nms_commands_history the then the index scan on ms_data.\n\nAny Ideas?\n\nThanks in advance.\n\nKaloyan Iliev\n\nSELECT version();\n version\n--------------------------------------------------------------------------------------------------------- \n\nPostgreSQL 8.4.0 on i386-portbld-freebsd7.2, compiled by GCC cc (GCC) \n4.2.1 20070719 [FreeBSD], 32-bit\n(1 row)\n\n\nexplain analyze SELECT COUNT(*) as count\n FROM\n ms_data AS DT,\n \nms_commands_history AS CH\n WHERE \nDT.ms_command_history_id = CH.id AND\n CH.ms_device_id = 1 \nAND\n DT.ms_value_type_id \n= 1 AND\n \nDT.meassure_date::date >= '2010-01-01' AND\n \nDT.meassure_date::date <= '2010-01-08';\n\n\n \nQUERY \nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------ \n\nAggregate (cost=88778.73..88778.74 rows=1 width=0) (actual \ntime=16979.109..16979.112 rows=1 loops=1)\n -> Hash Join (cost=63056.45..88750.77 rows=11183 width=0) (actual \ntime=13774.132..16958.507 rows=11093 loops=1)\n Hash Cond: (dt.ms_command_history_id = ch.id)\n -> Index Scan using ms_data_meassure_date_idx on ms_data dt \n(cost=0.01..23485.68 rows=11183 width=8) (actual time=58.869..2701.928 \nrows=11093 loops=1)\n Index Cond: (((meassure_date)::date >= '2010-01-01'::date) \nAND ((meassure_date)::date <= '2010-01-08'::date))\n Filter: (ms_value_type_id = 1)\n -> Hash (cost=55149.22..55149.22 rows=481938 width=8) (actual \ntime=13590.853..13590.853 rows=481040 loops=1)\n -> Seq Scan on ms_commands_history ch \n(cost=0.00..55149.22 rows=481938 width=8) (actual time=0.078..12321.037 \nrows=481040 loops=1)\n Filter: (ms_device_id = 1)\nTotal runtime: 16979.326 ms\n(10 rows)\n\n",
"msg_date": "Fri, 08 Jan 2010 19:58:02 +0200",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Change query join order"
},
{
"msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> My question is why the planner didn't do the index scan first on ms_data \n> to reduce the rows to ~ 11000 and the use the PK index on \n> ms_commands_history.\n\n11000 index probes aren't exactly free. If they take more than about\n1msec apiece, the planner picked the right plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jan 2010 13:27:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change query join order "
},
{
"msg_contents": "On Fri, Jan 8, 2010 at 1:27 PM, Tom Lane <[email protected]> wrote:\n> Kaloyan Iliev Iliev <[email protected]> writes:\n>> My question is why the planner didn't do the index scan first on ms_data\n>> to reduce the rows to ~ 11000 and the use the PK index on\n>> ms_commands_history.\n>\n> 11000 index probes aren't exactly free. If they take more than about\n> 1msec apiece, the planner picked the right plan.\n\nThe OP could try setting enable_hashjoin to false (just for testing,\nnever for production) and do EXPLAIN ANALYZE again. That might\ngenerate the desired plan, and we could see which one is actually\nfaster.\n\nIf the other plan does turn out to be faster (and I agree with Tom\nthat there is no guarantee of that), then one thing to check is\nwhether seq_page_cost and random_page_cost are set too high. If the\ndata is all cached, the default values of 4 and 1 are three orders of\nmagnitude too large, and they should also be set to equal rather than\nunequal values.\n\n...Robert\n",
"msg_date": "Fri, 8 Jan 2010 14:01:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change query join order"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Fri, Jan 8, 2010 at 1:27 PM, Tom Lane <[email protected]> wrote:\n>> 11000 index probes aren't exactly free. �If they take more than about\n>> 1msec apiece, the planner picked the right plan.\n\n> The OP could try setting enable_hashjoin to false (just for testing,\n> never for production) and do EXPLAIN ANALYZE again. That might\n> generate the desired plan, and we could see which one is actually\n> faster.\n\nRight, sorry for the overly brief response. It might switch to a merge\njoin next, in which case try enable_mergejoin = off as well.\n\n> If the other plan does turn out to be faster (and I agree with Tom\n> that there is no guarantee of that), then one thing to check is\n> whether seq_page_cost and random_page_cost are set too high. If the\n> data is all cached, the default values of 4 and 1 are three orders of\n> magnitude too large, and they should also be set to equal rather than\n> unequal values.\n\nTweaking the cost parameters to suit your local situation is the\nrecommended cure for planner misjudgments; but I'd recommend against\nchanging them on the basis of only one example. You could easily\nfind yourself making other cases worse. Get a collection of common\nqueries for your app and look at the overall effects.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 08 Jan 2010 14:23:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change query join order "
},
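One way to run the test Robert and Tom suggest without affecting other sessions is to flip the planner switches inside a transaction, using the query from the original post:

BEGIN;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_mergejoin = off;   -- only needed if the planner then switches to a merge join
EXPLAIN ANALYZE
SELECT COUNT(*) AS count
  FROM ms_data AS DT,
       ms_commands_history AS CH
 WHERE DT.ms_command_history_id = CH.id
   AND CH.ms_device_id = 1
   AND DT.ms_value_type_id = 1
   AND DT.meassure_date::date >= '2010-01-01'
   AND DT.meassure_date::date <= '2010-01-08';
ROLLBACK;   -- the SET LOCAL changes disappear here, so nothing leaks into other queries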
{
"msg_contents": "On Fri, Jan 8, 2010 at 2:23 PM, Tom Lane <[email protected]> wrote:\n>> If the other plan does turn out to be faster (and I agree with Tom\n>> that there is no guarantee of that), then one thing to check is\n>> whether seq_page_cost and random_page_cost are set too high. If the\n>> data is all cached, the default values of 4 and 1 are three orders of\n>> magnitude too large, and they should also be set to equal rather than\n>> unequal values.\n>\n> Tweaking the cost parameters to suit your local situation is the\n> recommended cure for planner misjudgments; but I'd recommend against\n> changing them on the basis of only one example. You could easily\n> find yourself making other cases worse. Get a collection of common\n> queries for your app and look at the overall effects.\n\nNo argument, and well said -- just trying to point out that the\ndefault values really are FAR too high for people with databases that\nfit in OS cache.\n\n...Robert\n",
"msg_date": "Fri, 8 Jan 2010 14:55:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Change query join order"
},
{
"msg_contents": "\n\n\n\n\nThanks You,\n I changed the random_page_cost to 2 and the query plan has changed and\nspeeds up.\n I will check the other queries but I think I will leave it at this\nvalue.\n\nThank you again.\n Kaloyan Iliev\n\n\nRobert Haas wrote:\n\nOn Fri, Jan 8, 2010 at 2:23 PM, Tom Lane <[email protected]> wrote:\n \n\n\nIf the other plan does turn out to be faster (and I agree with Tom\nthat there is no guarantee of that), then one thing to check is\nwhether seq_page_cost and random_page_cost are set too high. If the\ndata is all cached, the default values of 4 and 1 are three orders of\nmagnitude too large, and they should also be set to equal rather than\nunequal values.\n \n\nTweaking the cost parameters to suit your local situation is the\nrecommended cure for planner misjudgments; but I'd recommend against\nchanging them on the basis of only one example. You could easily\nfind yourself making other cases worse. Get a collection of common\nqueries for your app and look at the overall effects.\n \n\n\nNo argument, and well said -- just trying to point out that the\ndefault values really are FAR too high for people with databases that\nfit in OS cache.\n\n...Robert\n\n \n\n\n\n",
"msg_date": "Wed, 20 Jan 2010 18:06:51 +0200",
"msg_from": "Kaloyan Iliev Iliev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Change query join order"
}
] |
[
{
"msg_contents": "Hi 2 all,\n\nHere is my typical configuration: 1(2) GB of RAM, HP ML 350(150) series \nserver, SATA raid, Linux.\n\nI have 1 big table (called \"archive\") which contains short text messages \nwith a plenty of additional service info.\nCurrently this table contains more than 4M rows for a period of 4,5 \nmonths, i.e. each row has average size of 1K.\n\nI'm going to make our application work with partitions of this table \ninstead of one large table. The primary reason is that eventually we'd \nneed to remove old rows and it would be pretty hard with one table \nbecause of blocking (and rows are being added constantly).\n\n1. What would be your recommendations on how to partition this table (by \nmonths, years or quarters)?\n2. What is recommended PG settings for such configuration? Would it be \nok to set shared_buffers to let's say 512M (if RAM is 1Gig may be \nshared_buffers is to be 400M?)? What other settings would you recommend?\n\nThanks in advance,\nNick.\n",
"msg_date": "Sat, 09 Jan 2010 13:24:53 +0300",
"msg_from": "Nickolay <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG optimization question"
},
{
"msg_contents": "maybe that 'one big table' needs something called 'normalisation'\nfirst. See how much that will shed off. You might be surprised.\nThe partitioning needs to be done by some constant intervals, of time\n- in your case. Whatever suits you, I would suggest to use the rate\nthat will give you both ease of archiving/removal of old data (so not\ntoo wide), and also, one that would make sure that most of the data\nyou'll be searching for in your queries will be in one , two\npartitions per query.\n",
"msg_date": "Sat, 9 Jan 2010 12:18:04 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "Nickolay wrote on 09.01.2010 11:24:\n> it would be pretty hard with one table because of blocking\n\nWhat do you man with \"because of blocking\"?\n\nThomas\n\n",
"msg_date": "Sat, 09 Jan 2010 13:32:49 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "I do not see any way to normalize this table anymore. it's size is 4Gig \nfor ~4M rows, i.e. 1Kb per row, i think it's ok.\nAlso there are 2 indexes: by date_time and by a couple of service fields \n(total index size is 250Mb now).\nI think i'll be going to partition by months (approx. 1M rows or 1Gig \nper month), so it would be like 60 partitions for 5 years. Is that OK \nfor postgres?\nOh, btw, 95% of queries are searching rows for current date (last 24 hours).\nAlso we use SELECT...FOR UPDATE row-level locking for updating the rows \nin archive (i.e. we INSERT new row when starting outgoing message \ntransmittion and then doing SELECT...FOR UPDATE and UPDATE for source \n(incoming) message when outgoing message was sent), so I guess we would \nhave to explicitly write the name of partition table (i.e. \n\"archive_2009_12\" instead of \"archive\") for SELECT...FOR UPDATE and \nUPDATE requests, as they may need to access row in previous partition \ninstead of the current one.\n\nGrzegorz Jaśkiewicz wrote:\n> maybe that 'one big table' needs something called 'normalisation'\n> first. See how much that will shed off. You might be surprised.\n> The partitioning needs to be done by some constant intervals, of time\n> - in your case. Whatever suits you, I would suggest to use the rate\n> that will give you both ease of archiving/removal of old data (so not\n> too wide), and also, one that would make sure that most of the data\n> you'll be searching for in your queries will be in one , two\n> partitions per query.\n>\n>\n> \n",
"msg_date": "Sat, 09 Jan 2010 15:42:08 +0300",
"msg_from": "Nickolay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG optimization question"
},
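For reference, monthly partitioning on the PostgreSQL releases of this era (8.x) is done with table inheritance plus CHECK constraints. A minimal sketch, assuming date_time is the timestamp column mentioned above and that the parent table is the "archive" table already described:

CREATE TABLE archive_2009_12 (
    CHECK (date_time >= '2009-12-01' AND date_time < '2010-01-01')
) INHERITS (archive);

CREATE INDEX archive_2009_12_date_time_idx ON archive_2009_12 (date_time);

-- New rows have to be routed to the right child, either by the application
-- (as discussed above) or by an INSERT trigger on archive.
-- With constraint_exclusion enabled, a query on the last 24 hours touches
-- only the current month's child, and expiring a month becomes a cheap
-- DROP TABLE instead of a blocking DELETE plus VACUUM.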
{
"msg_contents": "On Sat, Jan 09, 2010 at 03:42:08PM +0300, Nickolay wrote:\n> I do not see any way to normalize this table anymore. it's size is 4Gig for \n> ~4M rows, i.e. 1Kb per row, i think it's ok.\n> Also there are 2 indexes: by date_time and by a couple of service fields \n> (total index size is 250Mb now).\n> I think i'll be going to partition by months (approx. 1M rows or 1Gig per \n> month), so it would be like 60 partitions for 5 years. Is that OK for \n> postgres?\n\nNot a problem. We have a log server that has 64 daily partitions.\n\n> Oh, btw, 95% of queries are searching rows for current date (last 24 \n> hours).\n\nYou may want to use a daily staging table and then flush to the \nmonthly archive tables at the end of the day.\n\nKen\n",
"msg_date": "Sat, 9 Jan 2010 06:46:46 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "That may help with the queries speed (not a problem now), but we'll then \nhave to add UNION statement for daily staging table for other 5% of \nrequests, right? And there would be a moment when daily message is in \narchive table AND in daily table (while transferring from daily table to \narchive).\nOur main problem is in blocking when doing DELETE (app sometimes freezes \nfor a long time), and also we have to do VACUUM on live table, which is \nnot acceptable in our app.\n\nThanks for your reply, I was kinda worried about number of partitions \nand how this would affect PG query execution speed.\n\nKenneth Marshall wrote:\n>> Oh, btw, 95% of queries are searching rows for current date (last 24 \n>> hours).\n>> \n>\n> You may want to use a daily staging table and then flush to the \n> monthly archive tables at the end of the day.\n>\n> \n",
"msg_date": "Sat, 09 Jan 2010 15:59:08 +0300",
"msg_from": "Nickolay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "\n> That may help with the queries speed (not a problem now), but we'll then \n> have to add UNION statement for daily staging table for other 5% of \n> requests, right? And there would be a moment when daily message is in \n> archive table AND in daily table (while transferring from daily table to \n> archive).\n> Our main problem is in blocking when doing DELETE (app sometimes freezes \n> for a long time), and also we have to do VACUUM on live table, which is \n> not acceptable in our app.\n>\n> Thanks for your reply, I was kinda worried about number of partitions \n> and how this would affect PG query execution speed.\n>\n> Kenneth Marshall wrote:\n>>> Oh, btw, 95% of queries are searching rows for current date (last 24 \n>>> hours).\n>>>\n>>\n>> You may want to use a daily staging table and then flush to the monthly \n>> archive tables at the end of the day.\n\n\tIf the rows in the archive tables are never updated, this strategy means \nyou never need to vacuum the big archive tables (and indexes), which is \ngood. Also you can insert the rows into the archive table in the order of \nyour choice, the timestamp for example, which makes it nicely clustered, \nwithout needing to ever run CLUSTER.\n\n\tAnd with partitioning you can have lots of indexes on the staging table \n(and current months partition) (to speed up your most common queries which \nare likely to be more OLTP), while using less indexes on the older \npartitions (saves disk space) if queries on old partitions are likely to \nbe reporting queries which are going to grind through a large part of the \ntable anyway.\n",
"msg_date": "Sat, 09 Jan 2010 16:37:29 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "Okay, I see your point with staging table. That's a good idea!\nThe only problem I see here is the transfer-to-archive-table process. As \nyou've correctly noticed, the system is kind of a real-time and there \ncan be dozens of processes writing to the staging table, i cannot see \nhow to make the transfer/flush process right and clear...\n\nPierre Frédéric Caillaud wrote:\n>>>> Oh, btw, 95% of queries are searching rows for current date (last \n>>>> 24 hours).\n>>>>\n>>>\n>>> You may want to use a daily staging table and then flush to the \n>>> monthly archive tables at the end of the day.\n>\n> If the rows in the archive tables are never updated, this strategy \n> means you never need to vacuum the big archive tables (and indexes), \n> which is good. Also you can insert the rows into the archive table in \n> the order of your choice, the timestamp for example, which makes it \n> nicely clustered, without needing to ever run CLUSTER.\n>\n> And with partitioning you can have lots of indexes on the staging \n> table (and current months partition) (to speed up your most common \n> queries which are likely to be more OLTP), while using less indexes on \n> the older partitions (saves disk space) if queries on old partitions \n> are likely to be reporting queries which are going to grind through a \n> large part of the table anyway.\n>\n>\n\n",
"msg_date": "Sat, 09 Jan 2010 21:50:07 +0300",
"msg_from": "Nickolay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "2010/1/9 Nickolay <[email protected]>\n\n> Okay, I see your point with staging table. That's a good idea!\n> The only problem I see here is the transfer-to-archive-table process. As\n> you've correctly noticed, the system is kind of a real-time and there can be\n> dozens of processes writing to the staging table, i cannot see how to make\n> the transfer/flush process right and clear...\n>\n> The simplest way to do this is to create view and add/remove first/last day\nby recreating the view on daily interval.\n\n-- \nLudwik Dyląg\n\n2010/1/9 Nickolay <[email protected]>\nOkay, I see your point with staging table. That's a good idea!\nThe only problem I see here is the transfer-to-archive-table process. As you've correctly noticed, the system is kind of a real-time and there can be dozens of processes writing to the staging table, i cannot see how to make the transfer/flush process right and clear...\nThe simplest way to do this is to create view and add/remove first/last day by recreating the view on daily interval.-- Ludwik Dyląg",
"msg_date": "Sat, 9 Jan 2010 19:58:22 +0100",
"msg_from": "Ludwik Dylag <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
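A sketch of the view idea above, so applications keep selecting from one name while the underlying tables change; the names messages, staging and archive are assumptions, not taken from the real schema:

CREATE OR REPLACE VIEW messages AS
    SELECT * FROM staging
    UNION ALL
    SELECT * FROM archive;

Recreating the view (or changing which partitions it unions) is a quick, transactional way to move the boundary between "today" and the archive.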
{
"msg_contents": "\n> If you transfer (delete from staging, insert into archive) in one\n> transaction , then it will be always visible in exactly one of them,\n> and exatly once in a view over both staging and archive(s).\n\n\tDoes the latest version implement this :\n\nINSERT INTO archive (...) DELETE FROM staging WHERE ... RETURNING ...\n\n",
"msg_date": "Sun, 10 Jan 2010 11:52:01 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "2010/1/10 Pierre Frédéric Caillaud <[email protected]>:\n>\n>> If you transfer (delete from staging, insert into archive) in one\n>> transaction , then it will be always visible in exactly one of them,\n>> and exatly once in a view over both staging and archive(s).\n>\n> Does the latest version implement this :\n>\n> INSERT INTO archive (...) DELETE FROM staging WHERE ... RETURNING ...\n\nNo. There are no plans to support that, though there are proposals to support:\n\nWITH x AS (DELETE FROM staging WHERE ... RETURNING ...) INSERT INTO\narchive (...) SELECT ... FROM x\n\nI'm not sure how much that will help though since, in the designs so\nfar discused, the tuples won't be pipelined.\n\n...Robert\n",
"msg_date": "Sun, 10 Jan 2010 13:45:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
},
{
"msg_contents": "On Sun, 10 Jan 2010 19:45:32 +0100, Robert Haas <[email protected]> \nwrote:\n\n> 2010/1/10 Pierre Frédéric Caillaud <[email protected]>:\n>>\n>>> If you transfer (delete from staging, insert into archive) in one\n>>> transaction , then it will be always visible in exactly one of them,\n>>> and exatly once in a view over both staging and archive(s).\n>>\n>> Does the latest version implement this :\n>>\n>> INSERT INTO archive (...) DELETE FROM staging WHERE ... RETURNING ...\n>\n> No. There are no plans to support that, though there are proposals to \n> support:\n>\n> WITH x AS (DELETE FROM staging WHERE ... RETURNING ...) INSERT INTO\n> archive (...) SELECT ... FROM x\n>\n> I'm not sure how much that will help though since, in the designs so\n> far discused, the tuples won't be pipelined.\n>\n> ...Robert\n>\n\n\tYeah, but it's a lot more user-friendly than SELECT FOR UPDATE, INSERT \nSELECT, DELETE...\n\n\n",
"msg_date": "Mon, 11 Jan 2010 12:25:26 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
}
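Until an INSERT-from-DELETE form like the ones above exists, the flush Pierre describes can be done as two statements in one transaction, so a row is visible in exactly one of the two tables at any moment. A sketch, assuming writers only insert rows timestamped "now" into the staging table:

BEGIN;
INSERT INTO archive
SELECT * FROM staging
 WHERE date_time < date_trunc('day', now());   -- now() is frozen for the whole transaction
DELETE FROM staging
 WHERE date_time < date_trunc('day', now());   -- same cutoff, so the same rows
COMMIT;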
] |
[
{
"msg_contents": "Hi 2 all,\n\nHere is my typical configuration: 1(2) GB of RAM, HP ML 350(150) series \nserver, SATA raid, Linux.\n\nI have 1 big table (called \"archive\") which contains short text messages \nwith a plenty of additional service info.\nCurrently this table contains more than 4M rows for a period of 4,5 \nmonths, i.e. each row has average size of 1K.\n\nI'm going to make our application work with partitions of this table \ninstead of one large table. The primary reason is that eventually we'd \nneed to remove old rows and it would be pretty hard with one table \nbecause of blocking (and rows are being added constantly).\n\n1. What would be your recommendations on how to partition this table (by \nmonths, years or quarters)?\n2. What is recommended PG settings for such configuration? Would it be \nok to set shared_buffers to let's say 512M (if RAM is 1Gig may be \nshared_buffers is to be 400M?)? What other settings would you recommend?\n\nThanks in advance,\nNick.\n",
"msg_date": "Sat, 09 Jan 2010 13:32:39 +0300",
"msg_from": "Nickolay <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG optimization question"
},
{
"msg_contents": "On 9/01/2010 6:32 PM, Nickolay wrote:\n> Hi 2 all,\n>\n> Here is my typical configuration: 1(2) GB of RAM, HP ML 350(150) series\n> server, SATA raid, Linux.\n>\n> I have 1 big table (called \"archive\") which contains short text messages\n> with a plenty of additional service info.\n> Currently this table contains more than 4M rows for a period of 4,5\n> months, i.e. each row has average size of 1K.\n>\n> I'm going to make our application work with partitions of this table\n> instead of one large table. The primary reason is that eventually we'd\n> need to remove old rows and it would be pretty hard with one table\n> because of blocking (and rows are being added constantly).\n\nDELETEs shouldn't block concurrent INSERTs.\n\nThat said, dropping a partition is a lot more convenient than DELETEing \nfrom a big table.\n\n--\nCraig Ringer\n",
"msg_date": "Sun, 10 Jan 2010 09:24:26 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG optimization question"
}
] |
[
{
"msg_contents": "Dear All,\n\nI'm trying to optimise the speed of some selects with the where condition:\n\nWHERE id =\n (SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code='43024')\n\n\nThis is relatively slow, taking about 15-20ms, even though I have a \njoint index on both fields:\n\nCREATE INDEX testidx3 ON tbl_sort_report (id, parcel_id_code);\n\n\nSo, my question is, is there any way to improve this? I'd expect that an \nindex on ( max(id),parcel_id_code ) would be ideal, excepting that \npostgres won't allow that (and such an index probably doesn't make much \nconceptual sense).\n\n\nExplain Analyze is below.\n\nThanks,\n\nRichard\n\n\n\nHere is part of the schema. id is the primary key; parcel_id_code loops \nfrom 0...99999 and back again every few hours.\n\nfsc_log=> \\d tbl_sort_report\n Table \"public.tbl_sort_report\"\n Column | Type | \n Modifiers\n----------------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default \nnextval('master_id_seq'::regclass)\n timestamp | timestamp with time zone |\n parcel_id_code | integer |\n(etc)\n\n\n\n\nEXPLAIN ANALYZE (SELECT MAX(id) FROM tbl_sort_report WHERE \nparcel_id_code='43024');\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=7.34..7.35 rows=1 width=0) (actual time=17.712..17.714 \nrows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Limit (cost=0.00..7.34 rows=1 width=8) (actual \ntime=17.705..17.705 rows=0 loops=1)\n -> Index Scan Backward using testidx3 on tbl_sort_report \n(cost=0.00..14.67 rows=2 width=8) (actual time=17.700..17.700 rows=0 \nloops=1)\n Index Cond: (parcel_id_code = 43024)\n Filter: (id IS NOT NULL)\n Total runtime: 17.786 ms\n\n",
"msg_date": "Sat, 09 Jan 2010 12:46:07 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Joint index including MAX() ?"
},
{
"msg_contents": "you can also try :\n\nselect val FROM table ORDER BY val DESC LIMIT 1;\n\nwhich usually is much quicker.\n",
"msg_date": "Sat, 9 Jan 2010 12:51:41 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joint index including MAX() ?"
},
{
"msg_contents": "Hi,\n\nI first suggestion would be to either build the index only on\nparcel_id_code or on (parcel_id_code, id).\n\nBut I am not sure because I am new in pg:)\n\ncheers,\nlefteris\n\nOn Sat, Jan 9, 2010 at 1:46 PM, Richard Neill <[email protected]> wrote:\n> Dear All,\n>\n> I'm trying to optimise the speed of some selects with the where condition:\n>\n> WHERE id =\n> (SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code='43024')\n>\n>\n> This is relatively slow, taking about 15-20ms, even though I have a joint\n> index on both fields:\n>\n> CREATE INDEX testidx3 ON tbl_sort_report (id, parcel_id_code);\n>\n>\n> So, my question is, is there any way to improve this? I'd expect that an\n> index on ( max(id),parcel_id_code ) would be ideal, excepting that\n> postgres won't allow that (and such an index probably doesn't make much\n> conceptual sense).\n>\n>\n> Explain Analyze is below.\n>\n> Thanks,\n>\n> Richard\n>\n>\n>\n> Here is part of the schema. id is the primary key; parcel_id_code loops from\n> 0...99999 and back again every few hours.\n>\n> fsc_log=> \\d tbl_sort_report\n> Table \"public.tbl_sort_report\"\n> Column | Type | Modifiers\n> ----------------------+--------------------------+-----------------------------------------------------\n> id | bigint | not null default\n> nextval('master_id_seq'::regclass)\n> timestamp | timestamp with time zone |\n> parcel_id_code | integer |\n> (etc)\n>\n>\n>\n>\n> EXPLAIN ANALYZE (SELECT MAX(id) FROM tbl_sort_report WHERE\n> parcel_id_code='43024');\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=7.34..7.35 rows=1 width=0) (actual time=17.712..17.714 rows=1\n> loops=1)\n> InitPlan 1 (returns $0)\n> -> Limit (cost=0.00..7.34 rows=1 width=8) (actual time=17.705..17.705\n> rows=0 loops=1)\n> -> Index Scan Backward using testidx3 on tbl_sort_report\n> (cost=0.00..14.67 rows=2 width=8) (actual time=17.700..17.700 rows=0\n> loops=1)\n> Index Cond: (parcel_id_code = 43024)\n> Filter: (id IS NOT NULL)\n> Total runtime: 17.786 ms\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Sat, 9 Jan 2010 13:52:31 +0100",
"msg_from": "Lefteris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joint index including MAX() ?"
},
{
"msg_contents": "Richard Neill <[email protected]> writes:\n> I'm trying to optimise the speed of some selects with the where condition:\n> WHERE id =\n> (SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code='43024')\n> This is relatively slow, taking about 15-20ms, even though I have a \n> joint index on both fields:\n> CREATE INDEX testidx3 ON tbl_sort_report (id, parcel_id_code);\n\nYou've got the index column order backwards: to make this query fast,\nit has to be on (parcel_id_code, id). The reason should be apparent\nif you think about the index ordering. With the correct index, the\nbackend can descend the btree looking for the last entry with\nparcel_id_code='43024', and when it hits it, that's the max id.\nThe other way round, the best available strategy using the index\nis to search backwards from the end (highest id) hoping to hit a\nrow with parcel_id_code='43024'. That could take a long time.\nFrequently the planner will think it's so slow that it shouldn't\neven bother with the index, just seqscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 09 Jan 2010 12:07:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Joint index including MAX() ? "
}
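Putting Tom's column order together with the ORDER BY form suggested earlier, against the table from the original post (the index name is made up):

CREATE INDEX tbl_sort_report_parcel_id_idx
    ON tbl_sort_report (parcel_id_code, id);

-- Either of these can now descend the btree straight to the answer:
SELECT MAX(id) FROM tbl_sort_report WHERE parcel_id_code = '43024';

SELECT id
  FROM tbl_sort_report
 WHERE parcel_id_code = '43024'
 ORDER BY id DESC
 LIMIT 1;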
] |
[
{
"msg_contents": "Hi,\n\nPart of a larger problem, I'm trying to optimize a rather simple query which\nis basically:\nSELECT * FROM table WHERE indexed_column > ... ORDER BY indexed_column DESC;\n\n(see attachment for all details: table definition, query, query plans)\n\nFor small ranges it will choose an index scan which is very good. For\nsomewhat larger ranges (not very large yet) it will switch to a bitmap scan\n+ sorting. Pgsql probably thinks that the larger the range, the better a\nbitmap scan is because it reads more effectively. However, in my case, the\nlarger the query, the worse bitmap+sort performs compared to index scan:\n\nSmall range (5K rows): 5.4 msec (b+s) vs 3.3 msec (i) -- performance penalty\nof ~50%\nLarge range (1.5M rows): 6400 sec (b+s) vs 2100 msec (i) -- performance\npenalty of ~200%\n\nHow can I make pgsql realize that it should always pick the index scan?\n\nThanks!\n\nKind regards,\nMathieu",
"msg_date": "Sun, 10 Jan 2010 13:28:11 +0100",
"msg_from": "Mathieu De Zutter <[email protected]>",
"msg_from_op": true,
"msg_subject": "Choice of bitmap scan over index scan"
},
{
"msg_contents": "On 01/10/2010 12:28 PM, Mathieu De Zutter wrote:\n>Sort (cost=481763.31..485634.61 rows=1548520 width=338) (actual time=5423.628..6286.148 rows=1551923 loops=1)\n> Sort Key: event_timestamp\n > Sort Method: external merge Disk: 90488kB\n> -> Seq Scan on log_event (cost=0.00..79085.92 rows=1548520 width=338) (actual time=0.022..2195.527 rows=1551923 loops=1)\n> Filter: (event_timestamp > (now() - '1 year'::interval))\n> Total runtime: 6407.377 ms\n\nNeeding to use an external (on-disk) sort method, when taking\nonly 90MB, looks odd.\n\n- Jeremy\n",
"msg_date": "Sun, 10 Jan 2010 14:04:47 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris <[email protected]> wrote:\n> On 01/10/2010 12:28 PM, Mathieu De Zutter wrote:\n>>\n>> Sort (cost=481763.31..485634.61 rows=1548520 width=338) (actual\n>> time=5423.628..6286.148 rows=1551923 loops=1)\n>> Sort Key: event_timestamp\n>\n> > Sort Method: external merge Disk: 90488kB\n>>\n>> -> Seq Scan on log_event (cost=0.00..79085.92 rows=1548520 width=338)\n>> (actual time=0.022..2195.527 rows=1551923 loops=1)\n>> Filter: (event_timestamp > (now() - '1 year'::interval))\n>> Total runtime: 6407.377 ms\n>\n> Needing to use an external (on-disk) sort method, when taking\n> only 90MB, looks odd.\n>\n> - Jeremy\n\nWell, you'd need to have work_mem > 90 MB for that not to happen, and\nvery few people can afford to set that setting that high. If you have\na query with three or four sorts using 90 MB a piece, and five or ten\nusers running them, you can quickly kill the box...\n\n...Robert\n",
"msg_date": "Sun, 10 Jan 2010 21:53:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
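To see what the external merge is actually costing, work_mem can be raised for one session only; 128MB here is simply a value above the 90MB the sort reported, not a suggested server-wide setting. The query is reconstructed from the plan shown above:

SET work_mem = '128MB';        -- session-local, other backends unaffected
EXPLAIN ANALYZE
SELECT *
  FROM log_event
 WHERE event_timestamp > now() - interval '1 year'
 ORDER BY event_timestamp DESC;
RESET work_mem;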
{
"msg_contents": "On 01/11/2010 02:53 AM, Robert Haas wrote:\n> On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris<[email protected]> wrote:\n>> Needing to use an external (on-disk) sort method, when taking\n>> only 90MB, looks odd.\n[...]\n> Well, you'd need to have work_mem> 90 MB for that not to happen, and\n> very few people can afford to set that setting that high. If you have\n> a query with three or four sorts using 90 MB a piece, and five or ten\n> users running them, you can quickly kill the box...\n\nOh. That's, um, a real pity given the cost of going external. Any hope\nof a more dynamic allocation of memory resource in the future?\nWithin a single query plan, the number of sorts is known; how about\na sort-mem limit per query rather than per sort (as a first step)?\n\nCheers,\n Jeremy\n\n\n",
"msg_date": "Mon, 11 Jan 2010 19:41:08 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 2:41 PM, Jeremy Harris <[email protected]> wrote:\n> On 01/11/2010 02:53 AM, Robert Haas wrote:\n>>\n>> On Sun, Jan 10, 2010 at 9:04 AM, Jeremy Harris<[email protected]> wrote:\n>>>\n>>> Needing to use an external (on-disk) sort method, when taking\n>>> only 90MB, looks odd.\n>\n> [...]\n>>\n>> Well, you'd need to have work_mem> 90 MB for that not to happen, and\n>> very few people can afford to set that setting that high. If you have\n>> a query with three or four sorts using 90 MB a piece, and five or ten\n>> users running them, you can quickly kill the box...\n>\n> Oh. That's, um, a real pity given the cost of going external. Any hope\n> of a more dynamic allocation of memory resource in the future?\n> Within a single query plan, the number of sorts is known; how about\n> a sort-mem limit per query rather than per sort (as a first step)?\n\nUnfortunately, it's not the case that the number of sorts is known -\nthe amount of memory available to the sort affects its cost, so if we\nknew we were going to have more or less memory it would affect whether\nwe chose the plan involving the sort in the first place.\n\nMy previous musings on this topic are here:\n\nhttp://archives.postgresql.org/pgsql-hackers/2009-10/msg00125.php\n\n...Robert\n",
"msg_date": "Mon, 11 Jan 2010 17:15:37 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
}
] |
[
{
"msg_contents": "Mathieu De Zutter wrote:\n \nYou didn't include any information on your hardware and OS, which can\nbe very important. Also, what version of PostgreSQL is this?\nSELECT version(); output would be good.\n \n> How can I make pgsql realize that it should always pick the index\n> scan?\n \nThat would probably be a very bad thing to do, in a general sense.\nI'm not even convinced yet it's really what you want in this case.\n \n> shared_buffers = 24MB\n> work_mem = 8MB\n> #effective_cache_size = 128MB\n \nThose are probably not optimal; however, without information on your\nhardware and runtime environment, I can't make any concrete\nsuggestion.\n \n> #seq_page_cost = 1.0\n> #random_page_cost = 4.0\n \nIt's entirely possible that you will get plans more appropriate to\nyour hardware and runtime environment by adjusting these. Again, I\nlack data to propose anything specific yet.\n \n-Kevin\n\n\n",
"msg_date": "Sun, 10 Jan 2010 09:18:10 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Sun, Jan 10, 2010 at 4:18 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Mathieu De Zutter wrote:\n>\n> You didn't include any information on your hardware and OS, which can\n> be very important. Also, what version of PostgreSQL is this?\n> SELECT version(); output would be good.\n>\n\nIntel(R) Core(TM)2 Duo CPU E7200 @ 2.53GHz\n2GB RAM\n2x500GB RAID-1\n\nRunning Debian/Etch AMD64\nPG version: PostgreSQL 8.3.8 on x86_64-pc-linux-gnu, compiled by GCC\ngcc-4.3.real (Debian 4.3.2-1.1) 4.3.2\n\nServer also runs DNS/Mail/Web/VCS/... for budget reasons.\nDatabase size is 1-2 GB. Also running copies of it for testing/dev.\n\n\nKind regards,\nMathieu\n\nOn Sun, Jan 10, 2010 at 4:18 PM, Kevin Grittner <[email protected]> wrote:\nMathieu De Zutter wrote:\n\nYou didn't include any information on your hardware and OS, which can\nbe very important. Also, what version of PostgreSQL is this?\nSELECT version(); output would be good.\nIntel(R) Core(TM)2 Duo CPU E7200 @ 2.53GHz\n2GB RAM\n2x500GB RAID-1 \n\nRunning Debian/Etch AMD64\nPG version: PostgreSQL 8.3.8 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.3.real (Debian 4.3.2-1.1) 4.3.2\n\nServer also runs DNS/Mail/Web/VCS/... for budget reasons.\nDatabase size is 1-2 GB. Also running copies of it for testing/dev. \n\n\nKind regards,\nMathieu",
"msg_date": "Sun, 10 Jan 2010 16:43:40 +0100",
"msg_from": "Mathieu De Zutter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
}
] |
[
{
"msg_contents": "Mathieu De Zutter wrote:\n \n> Intel(R) Core(TM)2 Duo CPU E7200 @ 2.53GHz\n> 2GB RAM\n> 2x500GB RAID-1\n \n> Running Debian/Etch AMD64\n> PG version: PostgreSQL 8.3.8 on x86_64\n \n> Server also runs DNS/Mail/Web/VCS/... for budget reasons.\n> Database size is 1-2 GB. Also running copies of it for testing/dev.\n \nI would try something like this and see how it goes:\n \nshared_buffers = 200MB\nwork_mem = 32MB\neffective_cache_size = 1.2GB\nseq_page_cost = 0.1\nrandom_page_cost = 0.1\n \nSome of these settings require a PostgreSQL restart.\n \nI may have gone too aggressively low on the page costs, but it seems\nlikely with a \"1-2 GB\" database and 2 GB RAM, the active portions of\nyour database will be fully cached in spite of the other\napplications. Besides looking at the impact on this one query, you\nshould keep an eye on all queries after such changes, and post for\nany which become unacceptably slow. Properly tuning something like\nthis can take a few iterations.\n \n-Kevin\n\n\n",
"msg_date": "Sun, 10 Jan 2010 09:53:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Sun, Jan 10, 2010 at 10:53 AM, Kevin Grittner\n<[email protected]> wrote:\n> seq_page_cost = 0.1\n> random_page_cost = 0.1\n\nThese might not even be low enough. The reason why bitmap index scans\nwin over plain index scans, in general, is because you make one pass\nthrough the heap to get all the rows you need instead of bouncing\naround doing lots of random access. But of course if all the data is\nin RAM then this argument falls down.\n\nIf these aren't enough to get the query planner to DTRT, then the OP\nmight want to try lowering them further and seeing how low he has to\ngo to flip the plan...\n\n...Robert\n",
"msg_date": "Sun, 10 Jan 2010 21:52:25 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 3:52 AM, Robert Haas <[email protected]> wrote:\n\n> On Sun, Jan 10, 2010 at 10:53 AM, Kevin Grittner\n> <[email protected]> wrote:\n> > seq_page_cost = 0.1\n> > random_page_cost = 0.1\n>\n> These might not even be low enough. The reason why bitmap index scans\n> win over plain index scans, in general, is because you make one pass\n> through the heap to get all the rows you need instead of bouncing\n> around doing lots of random access. But of course if all the data is\n> in RAM then this argument falls down.\n>\n> If these aren't enough to get the query planner to DTRT, then the OP\n> might want to try lowering them further and seeing how low he has to\n> go to flip the plan...\n>\n\nSo if this query usually does *not* hit the cache, it will be probably\nfaster if I leave it like that? While testing a query I execute it that much\nthat it's always getting into the cache. However, since other applications\nrun on the same server, I think that infrequently used data gets flushed\nafter a while, even if the DB could fit in the RAM.\n\n-- \nMathieu\n\nOn Mon, Jan 11, 2010 at 3:52 AM, Robert Haas <[email protected]> wrote:\nOn Sun, Jan 10, 2010 at 10:53 AM, Kevin Grittner\n<[email protected]> wrote:\n> seq_page_cost = 0.1\n> random_page_cost = 0.1\n\nThese might not even be low enough. The reason why bitmap index scans\nwin over plain index scans, in general, is because you make one pass\nthrough the heap to get all the rows you need instead of bouncing\naround doing lots of random access. But of course if all the data is\nin RAM then this argument falls down.\n\nIf these aren't enough to get the query planner to DTRT, then the OP\nmight want to try lowering them further and seeing how low he has to\ngo to flip the plan... So if this query usually does *not* hit the cache, it will be probably faster if I leave it like that? While testing a query I execute it that much that it's always getting into the cache. However, since other applications run on the same server, I think that infrequently used data gets flushed after a while, even if the DB could fit in the RAM.\n-- Mathieu",
"msg_date": "Mon, 11 Jan 2010 09:36:40 +0100",
"msg_from": "Mathieu De Zutter <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "On Mon, 11 Jan 2010, Mathieu De Zutter wrote:\n> > seq_page_cost = 0.1\n> > random_page_cost = 0.1\n\n> So if this query usually does *not* hit the cache, it will be probably faster if I leave\n> it like that? While testing a query I execute it that much that it's always getting into\n> the cache. However, since other applications run on the same server, I think that\n> infrequently used data gets flushed after a while, even if the DB could fit in the RAM.\n\nPostgres is being conservative. The plan it uses (bitmap index scan) will \nperform much better than an index scan when the data is not in the cache, \nby maybe an order of magnitude, depending on your hardware setup.\n\nThe index scan may perform better at the moment, but the bitmap index scan \nis safer.\n\nMatthew\n\n-- \n Change is inevitable, except from vending machines.\n",
"msg_date": "Mon, 11 Jan 2010 10:36:50 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "> Postgres is being conservative. The plan it uses (bitmap index scan) \n> will perform much better than an index scan when the data is not in the \n> cache, by maybe an order of magnitude, depending on your hardware setup.\n>\n> The index scan may perform better at the moment, but the bitmap index \n> scan is safer.\n\n\tSuppose you make a query that will need to retrieve 5% of the rows in a \ntable...\n\n\tIf the table is nicely clustered (ie you want the latest rows in a table \nwhere they are always appended at the end with no holes, for instance), \nbitmap index scan will mark 5% of the pages for reading, and read them \nsequentially (fast). Plain index scan will also scan the rows more or less \nsequentially, so it's going to be quite fast too.\n\n\tNow if your table is not clustered at all, or clustered on something \nwhich has no correlation to your current query, you may hit the worst case \n: reading a ramdom sampling of 5% of the pages. Bitmap index scan will \nsort these prior to reading, so the HDD/OS will do smart things. Plain \nindex scan won't.\n\n\t- worst case for bitmap index scan is a seq scan... slow, but if you have \nno other choice, it's OK.\n\t- worst case for plain index scan is a lot worse since it's a random \nseekfest.\n\n\tIf everything is cached in RAM, there is not much difference (plain index \nscan can be faster if the bitmap \"recheck cond\" is slow).\n",
"msg_date": "Mon, 11 Jan 2010 12:23:53 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Choice of bitmap scan over index scan"
},
{
"msg_contents": "Mathieu De Zutter <[email protected]> wrote:\n \n> So if this query usually does *not* hit the cache, it will be\n> probably faster if I leave it like that? While testing a query I\n> execute it that much that it's always getting into the cache.\n> However, since other applications run on the same server, I think\n> that infrequently used data gets flushed after a while, even if\n> the DB could fit in the RAM.\n \nYou definitely were hitting the cache almost exclusively in the\nEXPLAIN ANALYZE results you sent. If that's not typically what\nhappens, we'd be able to better analyze the situation with an\nEXPLAIN ANALYZE of a more typical run. That said, if you are doing\nphysical reads, reading backwards on the index is going to degrade\npretty quickly if you're using a normal rotating magnetic medium,\nbecause the blocks are arranged on the disk in a format to support\nfast reads in a forward direction. Given that and other factors,\nthe bitmap scan will probably be much faster if you do wind up going\nto the disk most of the time.\n \nOn the other hand, there's no reason to lie to the optimizer about\nhow much memory is on the machine. You can't expect it to make sane\nchoices on the basis of misleading assumptions. For starters, try\nsetting effective_cache_size to at least 1GB. That doesn't reserve\nany space, it just tells the optimizer what it can assume about how\nmuch data can be cached, and a large setting will tend to encourage\nmore indexed access.\n \nGiven that when you focus on one piece of the database, the caching\neffects are pretty dramatic, you really should reduce\nrandom_page_cost to 2, even with the in-and-out-of-cache scenario\nyou describe. These aren't \"magic bullets\" that solve all\nperformance problems, but you would be giving the optimizer a\nfighting chance at costing plans in a way that the one with the\nlowest calculated cost is actually the one which will run the\nfastest.\n \nAlso, if the pressure on RAM is that high on this machine, it would\nprobably be cost-effective to add another 2GB of RAM, so that you\ncould be a bit more generous in your allocation of RAM to the\ndatabase. It might make your problems queries an order of magnitude\nor more faster with very little effort. If a quick google search is\nany indication, you can get that much RAM for less than $50 these\ndays, if you've got a slot open.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jan 2010 09:11:12 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Choice of bitmap scan over index scan"
}
] |
[
{
"msg_contents": "I wrote:\n \n> work_mem = 32MB\n \nHmmm... With 100 connections and 2 GB RAM, that is probably on the\nhigh side, at least if you sometimes use a lot of those connections\nat the same time to run queries which might use sorts or hashes. It's\nprobably safer to go down to 16MB or even back to where you had it.\n \nSorry I missed that before.\n \n-Kevin\n\n\n",
"msg_date": "Sun, 10 Jan 2010 10:01:42 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Choice of bitmap scan over index scan"
}
] |
[
{
"msg_contents": "Hello,\n\nWe're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n\n4X E7420 Xeon, Four cores (for a total of 16 cores)\n2.13 GHz, 8M Cache, 1066 Mhz FSB\n32 Gigs of RAM\n15 K RPM drives in striped raid\n\nThings run fine, but when we get a lot of concurrent queries running, we see\na pretty good slow down.\n\nWe don't have much experience with this sort of hardware. Does anyone have\nan example config file we could use as a good starting point for this sort\nof hardware?\n\nWe have a fair amount of row-level inserts and deletes going on (probably as\nmuch data is inserted and deleted in a day than permanently resides in the\ndb).\n\nBob\n\nHello,We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...4X E7420 Xeon, Four cores (for a total of 16 cores)2.13 GHz, 8M Cache, 1066 Mhz FSB32 Gigs of RAM15 K RPM drives in striped raid\nThings run fine, but when we get a lot of concurrent queries running, we see a pretty good slow down. We don't have much experience with this sort of hardware. Does anyone have an example config file we could use as a good starting point for this sort of hardware? \nWe have a fair amount of row-level inserts and deletes going on (probably as much data is inserted and deleted in a day than permanently resides in the db).Bob",
"msg_date": "Mon, 11 Jan 2010 08:44:04 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek <[email protected]> wrote:\n> Hello,\n>\n> We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n>\n> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> 32 Gigs of RAM\n> 15 K RPM drives in striped raid\n\nWhat method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?\n\n> Things run fine, but when we get a lot of concurrent queries running, we see\n> a pretty good slow down.\n\nDefinte \"a lot\".\n\n> We don't have much experience with this sort of hardware. Does anyone have\n> an example config file we could use as a good starting point for this sort\n> of hardware?\n>\n> We have a fair amount of row-level inserts and deletes going on (probably as\n> much data is inserted and deleted in a day than permanently resides in the\n> db).\n\nWhat do the following commands tell you?\n\niostat -x 10 (first iteration doesn't count)\nvmstat 10 (again, first iteration doesn't count)\ntop\n\nWhat you're looking for is iowait / utilization.\n",
"msg_date": "Mon, 11 Jan 2010 06:50:48 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "In response to Bob Dusek :\n> Hello,\n> \n> We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n> \n> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> 32 Gigs of RAM\n> 15 K RPM drives in striped raid\n> \n> Things run fine, but when we get a lot of concurrent queries running, we see a\n> pretty good slow down.\n> \n> We don't have much experience with this sort of hardware.�� Does anyone have an\n> example config file we could use as a good starting point for this sort of\n> hardware?\n\nHave you tuned your postgresql.conf? (memory-parameter)\n\nHere are some links for you:\n\n15:07 < akretschmer> ??performance\n15:07 < rtfm_please> For information about performance\n15:07 < rtfm_please> see http://revsys.com/writings/postgresql-performance.html\n15:07 < rtfm_please> or http://wiki.postgresql.org/wiki/Performance_Optimization\n15:07 < rtfm_please> or http://www.depesz.com/index.php/2007/07/05/how-to-insert-data-to-database-as-fast-as-possible/\n\n\nHTH, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Mon, 11 Jan 2010 15:07:53 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe <[email protected]>wrote:\n\n> On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek <[email protected]> wrote:\n> > Hello,\n> >\n> > We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n> >\n> > 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> > 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> > 32 Gigs of RAM\n> > 15 K RPM drives in striped raid\n>\n> What method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?\n>\n\nRAID-0\n\n\n>\n> > Things run fine, but when we get a lot of concurrent queries running, we\n> see\n> > a pretty good slow down.\n>\n> Definte \"a lot\".\n>\n>\nWe have an application server that is processing requests. Each request\nconsists of a combination of selects, inserts, and deletes. We actually see\ndegredation when we get more than 40 concurrent requests. The exact number\nof queries executed by each request isn't known. It varies per request.\nBut, an example request would be about 16 inserts and 113 selects. Any\ngiven request can't execute more than a single query at a time.\n\nTo be more specific about the degradation, we've set the\n\"log_min_duration_statement=200\", and when we run with 40 concurrent\nrequests, we don't see queries showing up in there. When we run with 60\nconcurrent requests, we start seeing queries show up, and when we run 200+\nrequests, we see multi-second queries.\n\nThis is to be expected, to some extent, as we would expect some perfromance\ndegradation with higher utilization. But, the hardware doesn't appear to be\nvery busy, and that's where we're hoping for some help.\n\nWe want to have Postgres eating up as many resources as possible to chug\nthrough our queries faster. Right now, it's running slower with more\nutilization, but there's still too much idle time for the CPUs.\n\n> We don't have much experience with this sort of hardware. 
Does anyone\n> have\n> > an example config file we could use as a good starting point for this\n> sort\n> > of hardware?\n> >\n> > We have a fair amount of row-level inserts and deletes going on (probably\n> as\n> > much data is inserted and deleted in a day than permanently resides in\n> the\n> > db).\n>\n> What do the following commands tell you?\n>\n> iostat -x 10 (first iteration doesn't count)\n>\n\nHere's some iostat output (from the 3rd data point from iostat -x 10)...\nthis was taken while we were processing 256 simultaneous requests.\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 34.29 0.00 7.09 0.03 0.00 58.58\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\navgqu-sz await svctm %util\nsda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n0.42 3.17 0.04 0.48\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\nsda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\nsda5 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n0.42 3.17 0.04 0.48\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\ndm-0 0.00 0.00 0.00 0.40 0.00 3.20 8.00\n0.00 0.00 0.00 0.00\ndm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 99.90 0.00 799.20 8.00\n0.15 1.50 0.01 0.12\ndm-3 0.00 0.00 0.00 0.60 0.00 4.80 8.00\n0.00 0.33 0.17 0.01\ndm-4 0.00 0.00 0.00 144.70 0.00 1157.60 8.00\n0.46 3.17 0.02 0.35\ndm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n0.00 0.00 0.00 0.00\n\nThe iowait seems pretty low, doesn't it?\n\nvmstat 10 (again, first iteration doesn't count)\n>\n\nHere's the vmstat output, with the 6th data element clipped out, which seems\nrepresentative of the whole... (also taken during 256 simultaneous request)\n\n[root@ecpe1 pg_log]# vmstat 10\nprocs -----------memory---------- ---swap-- -----io---- --system--\n-----cpu------\nr b swpd free buff cache si so bi bo in cs us sy id wa\nst\n8 0 0 21818136 696692 3871172 0 0 0 1042 2261 48418 34 7\n59 0 0\n\nThat's a lot of free mem, which is to be expected. 
Our database is not very\nlarge.\n\ntop\n>\n\n(taken during 256 simultaneous requests)\n\ntop - 10:11:43 up 12 days, 20:48, 6 users, load average: 10.48, 4.16, 2.83\nTasks: 798 total, 8 running, 790 sleeping, 0 stopped, 0 zombie\nCpu0 : 33.3%us, 7.6%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu1 : 31.6%us, 5.9%sy, 0.0%ni, 62.5%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu2 : 33.0%us, 6.6%sy, 0.0%ni, 60.4%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu3 : 35.4%us, 6.2%sy, 0.0%ni, 58.0%id, 0.0%wa, 0.0%hi, 0.3%si,\n0.0%st\nCpu4 : 36.3%us, 5.6%sy, 0.0%ni, 58.1%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu5 : 37.4%us, 6.2%sy, 0.0%ni, 56.1%id, 0.0%wa, 0.0%hi, 0.3%si,\n0.0%st\nCpu6 : 38.1%us, 6.0%sy, 0.0%ni, 56.0%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu7 : 39.2%us, 7.5%sy, 0.0%ni, 52.9%id, 0.0%wa, 0.0%hi, 0.3%si,\n0.0%st\nCpu8 : 35.5%us, 7.2%sy, 0.0%ni, 56.9%id, 0.0%wa, 0.0%hi, 0.3%si,\n0.0%st\nCpu9 : 37.8%us, 7.6%sy, 0.0%ni, 54.3%id, 0.0%wa, 0.0%hi, 0.3%si,\n0.0%st\nCpu10 : 39.5%us, 5.9%sy, 0.0%ni, 54.6%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu11 : 34.5%us, 7.2%sy, 0.0%ni, 58.2%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu12 : 41.1%us, 6.9%sy, 0.0%ni, 50.3%id, 0.0%wa, 0.0%hi, 1.6%si,\n0.0%st\nCpu13 : 38.0%us, 7.3%sy, 0.0%ni, 54.8%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu14 : 36.2%us, 6.2%sy, 0.0%ni, 57.6%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nCpu15 : 36.8%us, 8.2%sy, 0.0%ni, 54.9%id, 0.0%wa, 0.0%hi, 0.0%si,\n0.0%st\nMem: 28817004k total, 8008372k used, 20808632k free, 705772k buffers\nSwap: 30867448k total, 0k used, 30867448k free, 4848376k cached\n\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n2641 postgres 16 0 8524m 21m 16m S 3.3 0.1 0:00.25 postmaster\n2650 postgres 15 0 8524m 21m 16m S 3.3 0.1 0:00.25 postmaster\n2706 postgres 16 0 8524m 21m 15m S 3.3 0.1 0:00.20 postmaster\n2814 postgres 15 0 8523m 18m 14m S 3.3 0.1 0:00.10 postmaster\n2829 postgres 15 0 8523m 18m 14m S 3.3 0.1 0:00.10 postmaster\n2618 postgres 15 0 8524m 21m 16m S 3.0 0.1 0:00.25 postmaster\n2639 postgres 15 0 8524m 21m 16m R 3.0 0.1 0:00.25 postmaster\n2671 postgres 15 0 8524m 21m 16m S 3.0 0.1 0:00.23 postmaster\n2675 postgres 16 0 8524m 21m 16m S 3.0 0.1 0:00.23 postmaster\n2694 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.23 postmaster\n2698 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.21 postmaster\n2702 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.19 postmaster\n2767 postgres 15 0 8524m 20m 14m S 3.0 0.1 0:00.13 postmaster\n2776 postgres 15 0 8524m 20m 14m S 3.0 0.1 0:00.14 postmaster\n2812 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.11 postmaster\n2819 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster\n2823 postgres 16 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster\n2828 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster\n2631 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.24 postmaster\n2643 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.23 postmaster\n2656 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.22 postmaster\n2658 postgres 16 0 8524m 21m 15m S 2.6 0.1 0:00.22 postmaster\n2664 postgres 16 0 8524m 21m 16m S 2.6 0.1 0:00.24 postmaster\n2674 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.23 postmaster\n2679 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.22 postmaster\n2684 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.21 postmaster\n2695 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster\n2699 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster\n2703 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster\n2704 postgres 15 0 8524m 20m 15m R 2.6 0.1 0:00.17 postmaster\n2713 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 
postmaster\n2734 postgres 15 0 8524m 20m 14m S 2.6 0.1 0:00.14 postmaster\n2738 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.14 postmaster\n\n\n\n> What you're looking for is iowait / utilization.\n>\n\nOn Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe <[email protected]> wrote:\nOn Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek <[email protected]> wrote:\n> Hello,\n>\n> We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n>\n> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> 32 Gigs of RAM\n> 15 K RPM drives in striped raid\n\nWhat method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?RAID-0 \n\n> Things run fine, but when we get a lot of concurrent queries running, we see\n> a pretty good slow down.\n\nDefinte \"a lot\".\nWe have an application server that is processing requests. Each request consists of a combination of selects, inserts, and deletes. We actually see degredation when we get more than 40 concurrent requests. The exact number of queries executed by each request isn't known. It varies per request. But, an example request would be about 16 inserts and 113 selects. Any given request can't execute more than a single query at a time. \nTo be more specific about the degradation, we've set the \"log_min_duration_statement=200\", and when we run with 40 concurrent requests, we don't see queries showing up in there. When we run with 60 concurrent requests, we start seeing queries show up, and when we run 200+ requests, we see multi-second queries.\nThis is to be expected, to some extent, as we would expect some perfromance degradation with higher utilization. But, the hardware doesn't appear to be very busy, and that's where we're hoping for some help.\nWe want to have Postgres eating up as many resources as possible to chug through our queries faster. Right now, it's running slower with more utilization, but there's still too much idle time for the CPUs. \n\n> We don't have much experience with this sort of hardware. Does anyone have\n> an example config file we could use as a good starting point for this sort\n> of hardware?\n>\n> We have a fair amount of row-level inserts and deletes going on (probably as\n> much data is inserted and deleted in a day than permanently resides in the\n> db).\n\nWhat do the following commands tell you?\n\niostat -x 10 (first iteration doesn't count)Here's some iostat output (from the 3rd data point from iostat -x 10)... this was taken while we were processing 256 simultaneous requests.\n avg-cpu: %user %nice %system %iowait %steal %idle 34.29 0.00 7.09 0.03 0.00 58.58Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73 0.42 3.17 0.04 0.48sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\nsda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00sda5 0.00 112.20 0.00 133.40 0.00 1964.80 14.73 0.42 3.17 0.04 0.48\nsdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00dm-0 0.00 0.00 0.00 0.40 0.00 3.20 8.00 0.00 0.00 0.00 0.00dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\ndm-2 0.00 0.00 0.00 99.90 0.00 799.20 8.00 0.15 1.50 0.01 0.12dm-3 0.00 0.00 0.00 0.60 0.00 4.80 8.00 0.00 0.33 0.17 0.01dm-4 0.00 0.00 0.00 144.70 0.00 1157.60 8.00 0.46 3.17 0.02 0.35\ndm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00The iowait seems pretty low, doesn't it? 
\n\nvmstat 10 (again, first iteration doesn't count)Here's the vmstat output, with the 6th data element clipped out, which seems representative of the whole... (also taken during 256 simultaneous request)\n[root@ecpe1 pg_log]# vmstat 10procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------r b swpd free buff cache si so bi bo in cs us sy id wa st8 0 0 21818136 696692 3871172 0 0 0 1042 2261 48418 34 7 59 0 0\n That's a lot of free mem, which is to be expected. Our database is not very large. \n\ntop(taken during 256 simultaneous requests)top - 10:11:43 up 12 days, 20:48, 6 users, load average: 10.48, 4.16, 2.83Tasks: 798 total, 8 running, 790 sleeping, 0 stopped, 0 zombie\nCpu0 : 33.3%us, 7.6%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu1 : 31.6%us, 5.9%sy, 0.0%ni, 62.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu2 : 33.0%us, 6.6%sy, 0.0%ni, 60.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu3 : 35.4%us, 6.2%sy, 0.0%ni, 58.0%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%stCpu4 : 36.3%us, 5.6%sy, 0.0%ni, 58.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu5 : 37.4%us, 6.2%sy, 0.0%ni, 56.1%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st \nCpu6 : 38.1%us, 6.0%sy, 0.0%ni, 56.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu7 : 39.2%us, 7.5%sy, 0.0%ni, 52.9%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%stCpu8 : 35.5%us, 7.2%sy, 0.0%ni, 56.9%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%st\nCpu9 : 37.8%us, 7.6%sy, 0.0%ni, 54.3%id, 0.0%wa, 0.0%hi, 0.3%si, 0.0%stCpu10 : 39.5%us, 5.9%sy, 0.0%ni, 54.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu11 : 34.5%us, 7.2%sy, 0.0%ni, 58.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu12 : 41.1%us, 6.9%sy, 0.0%ni, 50.3%id, 0.0%wa, 0.0%hi, 1.6%si, 0.0%stCpu13 : 38.0%us, 7.3%sy, 0.0%ni, 54.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stCpu14 : 36.2%us, 6.2%sy, 0.0%ni, 57.6%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nCpu15 : 36.8%us, 8.2%sy, 0.0%ni, 54.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%stMem: 28817004k total, 8008372k used, 20808632k free, 705772k buffers Swap: 30867448k total, 0k used, 30867448k free, 4848376k cached \nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND2641 postgres 16 0 8524m 21m 16m S 3.3 0.1 0:00.25 postmaster2650 postgres 15 0 8524m 21m 16m S 3.3 0.1 0:00.25 postmaster\n2706 postgres 16 0 8524m 21m 15m S 3.3 0.1 0:00.20 postmaster2814 postgres 15 0 8523m 18m 14m S 3.3 0.1 0:00.10 postmaster2829 postgres 15 0 8523m 18m 14m S 3.3 0.1 0:00.10 postmaster\n2618 postgres 15 0 8524m 21m 16m S 3.0 0.1 0:00.25 postmaster2639 postgres 15 0 8524m 21m 16m R 3.0 0.1 0:00.25 postmaster2671 postgres 15 0 8524m 21m 16m S 3.0 0.1 0:00.23 postmaster\n2675 postgres 16 0 8524m 21m 16m S 3.0 0.1 0:00.23 postmaster 2694 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.23 postmaster2698 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.21 postmaster\n2702 postgres 15 0 8524m 21m 15m S 3.0 0.1 0:00.19 postmaster2767 postgres 15 0 8524m 20m 14m S 3.0 0.1 0:00.13 postmaster2776 postgres 15 0 8524m 20m 14m S 3.0 0.1 0:00.14 postmaster\n2812 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.11 postmaster2819 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster2823 postgres 16 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster \n2828 postgres 15 0 8523m 18m 14m S 3.0 0.1 0:00.09 postmaster 2631 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.24 postmaster 2643 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.23 postmaster\n2656 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.22 postmaster2658 postgres 16 0 8524m 21m 15m S 2.6 0.1 0:00.22 postmaster2664 postgres 16 0 8524m 21m 16m S 2.6 0.1 0:00.24 postmaster \n2674 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.23 postmaster 2679 postgres 15 0 
8524m 21m 15m S 2.6 0.1 0:00.22 postmaster2684 postgres 15 0 8524m 21m 15m S 2.6 0.1 0:00.21 postmaster\n2695 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster2699 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster 2703 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster \n2704 postgres 15 0 8524m 20m 15m R 2.6 0.1 0:00.17 postmaster2713 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.18 postmaster 2734 postgres 15 0 8524m 20m 14m S 2.6 0.1 0:00.14 postmaster\n2738 postgres 15 0 8524m 20m 15m S 2.6 0.1 0:00.14 postmaster \nWhat you're looking for is iowait / utilization.",
"msg_date": "Mon, 11 Jan 2010 11:42:03 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 9:07 AM, A. Kretschmer <\[email protected]> wrote:\n\n> In response to Bob Dusek :\n> > Hello,\n> >\n> > We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n> >\n> > 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> > 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> > 32 Gigs of RAM\n> > 15 K RPM drives in striped raid\n> >\n> > Things run fine, but when we get a lot of concurrent queries running, we\n> see a\n> > pretty good slow down.\n> >\n> > We don't have much experience with this sort of hardware. Does anyone\n> have an\n> > example config file we could use as a good starting point for this sort\n> of\n> > hardware?\n>\n> Have you tuned your postgresql.conf? (memory-parameter)\n>\n>\nHere's a list of what I consider to be key changes we've made to the config\nfile (from default)..\n\nfor comparison purposes, the diff command was \"diff postgresql.conf.dist\npostgresql.conf.mod\"\n64c64\n< max_connections = 100 # (change requires restart)\n---\n> max_connections = 300 # (change requires restart)\n78c78\n< ssl = true # (change requires restart)\n---\n> #ssl = off # (change requires restart)\n106c106,107\n< shared_buffers = 32MB # min 128kB\n---\n> #shared_buffers = 32MB # min 128kB\n> shared_buffers = 8GB # min 128kB (rdk)\n115a117\n> work_mem = 64MB # min 64kB (vrk) (rdk)\n117c119,121\n< #max_stack_depth = 2MB # min 100kB\n---\n> maintenance_work_mem = 2GB # min 1MB (rdk)\n> #max_stack_depth = 1MB # min 100kB\n> max_stack_depth = 9MB # min 100kB (vrk)\n127a132\n> vacuum_cost_delay = 15ms # 0-100 milliseconds (rdk)\n150c155\n< #fsync = on # turns forced synchronization on or off\n---\n> fsync = off # turns forced synchronization on or off (rdk)\n\nPlease note, I've been reading this list a bit lately, and I'm aware of the\nkind of advice that some offer with respect to fsync. I understand that\nwith 8.4 we can turn this on and shut off \"synchronous_commit\". I would be\ninterested in more information on that. But the bottom line is that we've\ngotten in the habit of shutting this off (on production servers) using\nPostgres 7.4, as the performance gain is enormous, and with fsync=on, we\ncouldn't get the performance we needed.\n\n151a157\n> synchronous_commit = off # immediate fsync at commit\n152a159\n> wal_sync_method = open_sync # the default is the first option (vrk)\n159c166\n< #full_page_writes = on # recover from partial page writes\n---\n> full_page_writes = off # recover from partial page writes (rdk)\n160a168\n> wal_buffers = 8MB # min 32kB (rdk)\n164c172\n< #commit_delay = 0 # range 0-100000, in microseconds\n---\n> commit_delay = 10 # range 0-100000, in microseconds (vrk)\n169a178\n> checkpoint_segments = 256 # in logfile segments, min 1, 16MB each\n(rdk)\n170a180\n> checkpoint_timeout = 15min # range 30s-1h (rdk)\n171a182\n> checkpoint_completion_target = 0.7 # checkpoint target duration, 0.0 -\n1.0 (rdk)\n206a218\n> effective_cache_size = 24GB # (rdk)\n\nI would be willing to send our entire config file to someone if that would\nhelp... 
I didn't want to attach it to this email, because I'm not sure about\nthe etiquette of attaching files to emails on this list.\n\n\n> Here are some links for you:\n>\n> 15:07 < akretschmer> ??performance\n> 15:07 < rtfm_please> For information about performance\n> 15:07 < rtfm_please> see\n> http://revsys.com/writings/postgresql-performance.html\n> 15:07 < rtfm_please> or\n> http://wiki.postgresql.org/wiki/Performance_Optimization\n> 15:07 < rtfm_please> or\n> http://www.depesz.com/index.php/2007/07/05/how-to-insert-data-to-database-as-fast-as-possible/\n>\n\nWe will spend more time on this and see if we can learn more.\n\nBut, we're hoping someone on this list can offer us some quick tips to help\nus use up more of the 16 cpus we have available.\n\nThanks for pointing all of that out.\n\n\n>\n>\n> HTH, Andreas\n> --\n> Andreas Kretschmer\n> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n> GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOn Mon, Jan 11, 2010 at 9:07 AM, A. Kretschmer <[email protected]> wrote:\nIn response to Bob Dusek :\n> Hello,\n>\n> We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty hardware...\n>\n> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> 32 Gigs of RAM\n> 15 K RPM drives in striped raid\n>\n> Things run fine, but when we get a lot of concurrent queries running, we see a\n> pretty good slow down.\n>\n> We don't have much experience with this sort of hardware. Does anyone have an\n> example config file we could use as a good starting point for this sort of\n> hardware?\n\nHave you tuned your postgresql.conf? (memory-parameter)\nHere's a list of what I consider to be key changes we've made to the config file (from default)..for comparison purposes, the diff command was \"diff postgresql.conf.dist postgresql.conf.mod\"\n64c64< max_connections = 100 # (change requires restart)---> max_connections = 300 # (change requires restart)78c78< ssl = true # (change requires restart)\n---> #ssl = off # (change requires restart)106c106,107< shared_buffers = 32MB # min 128kB---> #shared_buffers = 32MB # min 128kB> shared_buffers = 8GB # min 128kB (rdk)\n115a117> work_mem = 64MB # min 64kB (vrk) (rdk)117c119,121< #max_stack_depth = 2MB # min 100kB---> maintenance_work_mem = 2GB # min 1MB (rdk)> #max_stack_depth = 1MB # min 100kB\n> max_stack_depth = 9MB # min 100kB (vrk)127a132> vacuum_cost_delay = 15ms # 0-100 milliseconds (rdk)150c155< #fsync = on # turns forced synchronization on or off\n---> fsync = off # turns forced synchronization on or off (rdk)Please note, I've been reading this list a bit lately, and I'm aware of the kind of advice that some offer with respect to fsync. I understand that with 8.4 we can turn this on and shut off \"synchronous_commit\". I would be interested in more information on that. But the bottom line is that we've gotten in the habit of shutting this off (on production servers) using Postgres 7.4, as the performance gain is enormous, and with fsync=on, we couldn't get the performance we needed. 
\n151a157> synchronous_commit = off # immediate fsync at commit152a159> wal_sync_method = open_sync # the default is the first option (vrk)159c166< #full_page_writes = on # recover from partial page writes\n---> full_page_writes = off # recover from partial page writes (rdk)160a168> wal_buffers = 8MB # min 32kB (rdk)164c172< #commit_delay = 0 # range 0-100000, in microseconds\n---> commit_delay = 10 # range 0-100000, in microseconds (vrk)169a178> checkpoint_segments = 256 # in logfile segments, min 1, 16MB each (rdk)170a180> checkpoint_timeout = 15min # range 30s-1h (rdk)\n171a182> checkpoint_completion_target = 0.7 # checkpoint target duration, 0.0 - 1.0 (rdk)206a218> effective_cache_size = 24GB # (rdk)I would be willing to send our entire config file to someone if that would help... I didn't want to attach it to this email, because I'm not sure about the etiquette of attaching files to emails on this list. \n \nHere are some links for you:\n\n15:07 < akretschmer> ??performance\n15:07 < rtfm_please> For information about performance\n15:07 < rtfm_please> see http://revsys.com/writings/postgresql-performance.html\n15:07 < rtfm_please> or http://wiki.postgresql.org/wiki/Performance_Optimization\n15:07 < rtfm_please> or http://www.depesz.com/index.php/2007/07/05/how-to-insert-data-to-database-as-fast-as-possible/\nWe will spend more time on this and see if we can learn more. But, we're hoping someone on this list can offer us some quick tips to help us use up more of the 16 cpus we have available. \nThanks for pointing all of that out. \n\n\nHTH, Andreas\n--\nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 11 Jan 2010 12:10:52 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek wrote:\n> On Mon, Jan 11, 2010 at 8:50 AM, Scott Marlowe <[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> On Mon, Jan 11, 2010 at 6:44 AM, Bob Dusek <[email protected]\n> <mailto:[email protected]>> wrote:\n> > Hello,\n> >\n> > We're running Postgres 8.4.2 on Red Hat 5, on pretty hefty\n> hardware...\n> >\n> > 4X E7420 Xeon, Four cores (for a total of 16 cores)\n> > 2.13 GHz, 8M Cache, 1066 Mhz FSB\n> > 32 Gigs of RAM\n> > 15 K RPM drives in striped raid\n> \n> What method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?\n> \n> \n> RAID-0\n\nAnd how many drives?\n\n> \n> > Things run fine, but when we get a lot of concurrent queries\n> running, we see\n> > a pretty good slow down.\n> \n> Definte \"a lot\".\n> \n> \n> We have an application server that is processing requests. Each request \n> consists of a combination of selects, inserts, and deletes. We actually \n> see degredation when we get more than 40 concurrent requests. The exact \n> number of queries executed by each request isn't known. It varies per \n> request. But, an example request would be about 16 inserts and 113 \n> selects. Any given request can't execute more than a single query at a \n> time. \n\nSo, you are concurrently trying to achieve more than around 640 writes \nand 4520 reads (worst case figures...). This should be interesting. For \n40 concurrent requests you will probably need at least 4 drives in \nRAID-0 to sustain the write rates (and I'll guess 5-6 to be sure to \ncover read requests also, together with plenty of RAM).\n\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 34.29 0.00 7.09 0.03 0.00 58.58\n> \n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \n> avgqu-sz await svctm %util\n> sda 0.00 112.20 0.00 133.40 0.00 1964.80 \n> 14.73 0.42 3.17 0.04 0.48\n> \n> The iowait seems pretty low, doesn't it?\n\nYes, but you are issuing 133 write operations per seconds per drive(s) - \nthis is nearly the limit of what you can get with 15k RPM drives \n(actually, the limit should be somewhere around 200..250 IOPS but 133 \nisn't that far).\n\n> top - 10:11:43 up 12 days, 20:48, 6 users, load average: 10.48, 4.16, 2.83\n> Tasks: 798 total, 8 running, 790 sleeping, 0 stopped, 0 zombie\n> Cpu0 : 33.3%us, 7.6%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si, \n> 0.0%st\n\nThere is one other possibility - since the CPUs are not very loaded, are \nyou sure the client application with which you are testing is fast \nenough to issue enough request to saturate the database?\n\n",
"msg_date": "Mon, 11 Jan 2010 18:12:59 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek <[email protected]> wrote:\n> Scott Marlowe <[email protected]>wrote:\n>> Bob Dusek <[email protected]> wrote:\n \n>>> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n \n>> What method of striped RAID?\n> \n> RAID-0\n \nI hope you have a plan for what to do when any one drive in this\narray fails, and the entire array is unusable.\n \nAnyway, my benchmarks tend to show that best throughput occurs at\nabout (CPU_count * 2) plus effective_spindle_count. Since you seem\nto be fully cached, effective_spindle_count would be zero, so I\nwould expect performance to start to degrade when you have more than\nabout 32 sessions active.\n \n> We actually see degredation when we get more than 40 concurrent\n> requests.\n \nClose enough.\n \n> when we run 200+ requests, we see multi-second queries.\n \nNo surprise there. Your vmstat output suggests that context\nswitches are becoming a problem, and I wouldn't be surprised if I\nheard that the network is an issue. You might want to have someone\ntake a look at the network side to check.\n \nYou want to use some connection pooling which queues requests when\nmore than some configurable number of connections is already active\nwith a request. You probably want to run that on the server side. \nAs for the postgresql.conf, could you show what you have right now,\nexcluding all comments?\n \n-Kevin\n",
"msg_date": "Mon, 11 Jan 2010 11:17:53 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Ivan Voras wrote:\n\n> Yes, but you are issuing 133 write operations per seconds per drive(s) - \n> this is nearly the limit of what you can get with 15k RPM drives \n> (actually, the limit should be somewhere around 200..250 IOPS but 133 \n> isn't that far).\n\nI saw in your other post you have fsync turned off so ignore this, it's \nnot an IO problem in your case.\n\n",
"msg_date": "Mon, 11 Jan 2010 18:19:40 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 9:42 AM, Bob Dusek <[email protected]> wrote:\n>> What method of striped RAID? RAID-5? RAID-10? RAID-4? RAID-0?\n>\n> RAID-0\n\nJust wondering how many drives?\n\n> To be more specific about the degradation, we've set the\n> \"log_min_duration_statement=200\", and when we run with 40 concurrent\n> requests, we don't see queries showing up in there. When we run with 60\n> concurrent requests, we start seeing queries show up, and when we run 200+\n> requests, we see multi-second queries.\n>\n> This is to be expected, to some extent, as we would expect some perfromance\n> degradation with higher utilization. But, the hardware doesn't appear to be\n> very busy, and that's where we're hoping for some help.\n\nIt's likely in io wait.\n\n>> What do the following commands tell you?\n>>\n>> iostat -x 10 (first iteration doesn't count)\n>\n> Here's some iostat output (from the 3rd data point from iostat -x 10)...\n> this was taken while we were processing 256 simultaneous requests.\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 34.29 0.00 7.09 0.03 0.00 58.58\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> 0.42 3.17 0.04 0.48\n> sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda5 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> 0.42 3.17 0.04 0.48\n> sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> dm-0 0.00 0.00 0.00 0.40 0.00 3.20 8.00\n> 0.00 0.00 0.00 0.00\n> dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> dm-2 0.00 0.00 0.00 99.90 0.00 799.20 8.00\n> 0.15 1.50 0.01 0.12\n> dm-3 0.00 0.00 0.00 0.60 0.00 4.80 8.00\n> 0.00 0.33 0.17 0.01\n> dm-4 0.00 0.00 0.00 144.70 0.00 1157.60 8.00\n> 0.46 3.17 0.02 0.35\n> dm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n>\n> The iowait seems pretty low, doesn't it?\n\nDepends, is that the first iteration of output? if so, ignore it and\nshow me the second and further on. Same for vmstat... In fact let\nthem run for a minute or two and attach the results... OTOH, if that\nis the second or later set of output, then you're definitely not IO\nbound, and I don't see why the CPUs are not being better utilized.\n\nHow many concurrent queries are you running when you take these\nmeasurements? Can you take them with lower and higher numbers of\nconcurrent users and compare the two? normally I'd be looking for\ncontext switching taking more and more time in a heavily loaded\nsystem, but I'm not seeing it in your vmstat numbers either.\n\nWhat are your settings for\n\neffective_cache_size\nrandom_page_cost\nwork_mem\n\nwith your machine and the extra memory, you can probably uptune the\nwork_mem to 8 Megs safely if it's at the default of 1MB. With a\ndatabase that fits in RAM, you can often turn down random_page_cost to\nnear 1.0 (1.2 to 1.4 is common for such scenarios.) And effective\ncache size being larger (in the 20G range) will hint the planner that\nit's likely to find everything it needs in ram somewhere and not on\nthe disk.\n\nThere are several common bottlenecks you can try to tune away from.\nIO doesn't look like a problem for you. Neither does CPU load. So,\nthen we're left with context switching time and memory to CPU\nbandwidth. 
If your CPUs are basically becoming data pumps then the\nspeed of your FSB becomes VERY critical, and some older Intel mobos\ndidn't have a lot of CPU to Memory bandwidth and adding CPUs made it\nworse, not better. More modern Intel chipsets have much faster CPU to\nMemory BW, since they're using the same kind of fabric switching that\nAMD uses on highly parallel machines.\n\nIf your limit is your hardware, then the only solution is a faster\nmachine. It may well be that a machine with dual fast Nehalem\n(2.4GHz+) quad core CPUs will be faster. Or 4 or 8 AMD CPUs with\ntheir faster fabric.\n",
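As with the earlier thread, those planner hints can be tried out in a single backend before anything goes into postgresql.conf; a minimal sketch using the values suggested above:

SET random_page_cost = 1.2;        -- 1.2 to 1.4 is typical for fully cached data
SET effective_cache_size = '20GB';
SET work_mem = '8MB';              -- only if it is still at the 1MB default
-- then replay a few of the logged slow statements with EXPLAIN ANALYZE
-- and compare against the default plans.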
"msg_date": "Mon, 11 Jan 2010 10:34:09 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "We may want to start looking at query plans for the slowest queries.\nUse explain analyze to find them and attach them here. I kinda have a\nfeeling you're running into a limit on the speed of your memory\nthough, and there's no real cure for that. You can buy a little time\nwith some query or db tuning, but 250 or more concurrent users is a\nLOT.\n",
"msg_date": "Mon, 11 Jan 2010 10:47:10 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": ">\n>\n> > This is to be expected, to some extent, as we would expect some\n> perfromance\n> > degradation with higher utilization. But, the hardware doesn't appear to\n> be\n> > very busy, and that's where we're hoping for some help.\n>\n> It's likely in io wait.\n>\n> >> What do the following commands tell you?\n> >>\n> >> iostat -x 10 (first iteration doesn't count)\n> >\n> > Here's some iostat output (from the 3rd data point from iostat -x 10)...\n> > this was taken while we were processing 256 simultaneous requests.\n> >\n> > avg-cpu: %user %nice %system %iowait %steal %idle\n> > 34.29 0.00 7.09 0.03 0.00 58.58\n> >\n> > Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> > avgqu-sz await svctm %util\n> > sda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> > 0.42 3.17 0.04 0.48\n> > sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > sda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > sda5 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> > 0.42 3.17 0.04 0.48\n> > sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > dm-0 0.00 0.00 0.00 0.40 0.00 3.20 8.00\n> > 0.00 0.00 0.00 0.00\n> > dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> > dm-2 0.00 0.00 0.00 99.90 0.00 799.20 8.00\n> > 0.15 1.50 0.01 0.12\n> > dm-3 0.00 0.00 0.00 0.60 0.00 4.80 8.00\n> > 0.00 0.33 0.17 0.01\n> > dm-4 0.00 0.00 0.00 144.70 0.00 1157.60 8.00\n> > 0.46 3.17 0.02 0.35\n> > dm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> > 0.00 0.00 0.00 0.00\n> >\n> > The iowait seems pretty low, doesn't it?\n>\n> Depends, is that the first iteration of output? if so, ignore it and\n> show me the second and further on. Same for vmstat... In fact let\n> them run for a minute or two and attach the results... OTOH, if that\n> is the second or later set of output, then you're definitely not IO\n> bound, and I don't see why the CPUs are not being better utilized.\n>\n> I was probably not clear... the output I pasted was from the third\niteration of output from iostat. And, the vmstat output I pasted was from\nthe sixth iteration of output\n\nHow many concurrent queries are you running when you take these\n> measurements? Can you take them with lower and higher numbers of\n> concurrent users and compare the two? normally I'd be looking for\n> context switching taking more and more time in a heavily loaded\n> system, but I'm not seeing it in your vmstat numbers either.\n>\n> We took those measurements with 256 concurrent requests being processed.\nSo, at most, we have 256 concurrent queries executed by our application.\nThere aren't other applications using the db in our tests.\n\nWe can take some measurements at 40 concurrent requests and see where we\nstand.\n\n\n> What are your settings for\n>\n> effective_cache_size\n>\n\n effective_cache_size = 24GB\n\n\n> random_page_cost\n>\n\nUsing the default...\n\n#random_page_cost = 4.0\n\n\n> work_mem\n>\n\n work_mem = 64MB\n\n\n> with your machine and the extra memory, you can probably uptune the\n> work_mem to 8 Megs safely if it's at the default of 1MB. With a\n> database that fits in RAM, you can often turn down random_page_cost to\n> near 1.0 (1.2 to 1.4 is common for such scenarios.) 
And effective\n> cache size being larger (in the 20G range) will hint the planner that\n> it's likely to find everything it needs in ram somewhere and not on\n> the disk.\n>\n\nSo, we should probably try cranking our random_page_cost value down. When\nwe dump our db with \"pg_dump --format=t\", it's about 15 MB. We should be\nable to keep the thing in memory.\n\n>\n> There are several common bottlenecks you can try to tune away from.\n> IO doesn't look like a problem for you. Neither does CPU load. So,\n> then we're left with context switching time and memory to CPU\n> bandwidth. If your CPUs are basically becoming data pumps then the\n> speed of your FSB becomes VERY critical, and some older Intel mobos\n> didn't have a lot of CPU to Memory bandwidth and adding CPUs made it\n> worse, not better. More modern Intel chipsets have much faster CPU to\n> Memory BW, since they're using the same kind of fabric switching that\n> AMD uses on highly parallel machines.\n>\n\nEach CPU is 2.13 GHz, with 8MB Cache, and the FSB is 1066 MHz. Does that\nbus speed seem slow?\n\nIt's hard to go to the money tree and say \"we're only using about half of\nyour CPUs, but you need to get better ones.\"\n\n\n> If your limit is your hardware, then the only solution is a faster\n> machine. It may well be that a machine with dual fast Nehalem\n> (2.4GHz+) quad core CPUs will be faster. Or 4 or 8 AMD CPUs with\n> their faster fabric.\n>\n\nIt sounds like we could spend less money on memory and more on faster hard\ndrives and faster CPUs.\n\nBut, man, that's a tough sell. This box is a giant, relative to anything\nelse we've worked with.\n\n\n> This is to be expected, to some extent, as we would expect some perfromance\n> degradation with higher utilization. But, the hardware doesn't appear to be\n> very busy, and that's where we're hoping for some help.\n\nIt's likely in io wait.\n\n>> What do the following commands tell you?\n>>\n>> iostat -x 10 (first iteration doesn't count)\n>\n> Here's some iostat output (from the 3rd data point from iostat -x 10)...\n> this was taken while we were processing 256 simultaneous requests.\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n> 34.29 0.00 7.09 0.03 0.00 58.58\n>\n> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n> avgqu-sz await svctm %util\n> sda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> 0.42 3.17 0.04 0.48\n> sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda4 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> sda5 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n> 0.42 3.17 0.04 0.48\n> sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> dm-0 0.00 0.00 0.00 0.40 0.00 3.20 8.00\n> 0.00 0.00 0.00 0.00\n> dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n> dm-2 0.00 0.00 0.00 99.90 0.00 799.20 8.00\n> 0.15 1.50 0.01 0.12\n> dm-3 0.00 0.00 0.00 0.60 0.00 4.80 8.00\n> 0.00 0.33 0.17 0.01\n> dm-4 0.00 0.00 0.00 144.70 0.00 1157.60 8.00\n> 0.46 3.17 0.02 0.35\n> dm-5 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n> 0.00 0.00 0.00 0.00\n>\n> The iowait seems pretty low, doesn't it?\n\nDepends, is that the first iteration of output? if so, ignore it and\nshow me the second and further on. Same for vmstat... In fact let\nthem run for a minute or two and attach the results... 
OTOH, if that\nis the second or later set of output, then you're definitely not IO\nbound, and I don't see why the CPUs are not being better utilized.\nI was probably not clear... the output I pasted was from the third iteration of output from iostat. And, the vmstat output I pasted was from the sixth iteration of output \n\nHow many concurrent queries are you running when you take these\nmeasurements? Can you take them with lower and higher numbers of\nconcurrent users and compare the two? normally I'd be looking for\ncontext switching taking more and more time in a heavily loaded\nsystem, but I'm not seeing it in your vmstat numbers either.\nWe took those measurements with 256 concurrent requests being processed. So, at most, we have 256 concurrent queries executed by our application. There aren't other applications using the db in our tests. \nWe can take some measurements at 40 concurrent requests and see where we stand. \n\nWhat are your settings for\n\neffective_cache_size effective_cache_size = 24GB \nrandom_page_costUsing the default...#random_page_cost = 4.0 \n\nwork_mem work_mem = 64MB \nwith your machine and the extra memory, you can probably uptune the\nwork_mem to 8 Megs safely if it's at the default of 1MB. With a\ndatabase that fits in RAM, you can often turn down random_page_cost to\nnear 1.0 (1.2 to 1.4 is common for such scenarios.) And effective\ncache size being larger (in the 20G range) will hint the planner that\nit's likely to find everything it needs in ram somewhere and not on\nthe disk.So, we should probably try cranking our random_page_cost value down. When we dump our db with \"pg_dump --format=t\", it's about 15 MB. We should be able to keep the thing in memory. \n\n\nThere are several common bottlenecks you can try to tune away from.\nIO doesn't look like a problem for you. Neither does CPU load. So,\nthen we're left with context switching time and memory to CPU\nbandwidth. If your CPUs are basically becoming data pumps then the\nspeed of your FSB becomes VERY critical, and some older Intel mobos\ndidn't have a lot of CPU to Memory bandwidth and adding CPUs made it\nworse, not better. More modern Intel chipsets have much faster CPU to\nMemory BW, since they're using the same kind of fabric switching that\nAMD uses on highly parallel machines.Each CPU is 2.13 GHz, with 8MB Cache, and the FSB is 1066 MHz. Does that bus speed seem slow? It's hard to go to the money tree and say \"we're only using about half of your CPUs, but you need to get better ones.\"\n \nIf your limit is your hardware, then the only solution is a faster\nmachine. It may well be that a machine with dual fast Nehalem\n(2.4GHz+) quad core CPUs will be faster. Or 4 or 8 AMD CPUs with\ntheir faster fabric.\nIt sounds like we could spend less money on memory and more on faster hard drives and faster CPUs. But, man, that's a tough sell. This box is a giant, relative to anything else we've worked with.",
"msg_date": "Mon, 11 Jan 2010 12:49:39 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 12:17 PM, Kevin Grittner <\[email protected]> wrote:\n\n> Bob Dusek <[email protected]> wrote:\n> > Scott Marlowe <[email protected]>wrote:\n> >> Bob Dusek <[email protected]> wrote:\n>\n> >>> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n>\n> >> What method of striped RAID?\n> >\n> > RAID-0\n>\n> I hope you have a plan for what to do when any one drive in this\n> array fails, and the entire array is unusable.\n>\n\nPoint noted.\n\n\n> Anyway, my benchmarks tend to show that best throughput occurs at\n> about (CPU_count * 2) plus effective_spindle_count. Since you seem\n> to be fully cached, effective_spindle_count would be zero, so I\n> would expect performance to start to degrade when you have more than\n> about 32 sessions active.\n>\n\nThat's a little disheartening for a single or dual CPU system.\n\n\n> > We actually see degredation when we get more than 40 concurrent\n> > requests.\n>\n> Close enough.\n>\n> > when we run 200+ requests, we see multi-second queries.\n>\n> No surprise there. Your vmstat output suggests that context\n> switches are becoming a problem, and I wouldn't be surprised if I\n> heard that the network is an issue. You might want to have someone\n> take a look at the network side to check.\n>\n\nThis is all happening on a LAN, and network throughput doesn't seem to be an\nissue. It may be a busy network, but I'm not sure about a problem. Can you\nelaborate on your suspicion, based on the vmstat? I haven't used vmstat\nmuch.\n\n>\n> You want to use some connection pooling which queues requests when\n> more than some configurable number of connections is already active\n> with a request. You probably want to run that on the server side.\n> As for the postgresql.conf, could you show what you have right now,\n> excluding all comments?\n>\n\nThe problem with connection pooling is that we actually have to achieve more\nthan 40 per second, which happens to be the sweet spot with our current\nconfig.\n\nI posted our changes from the default in a reply to another message. I\ndon't want to duplicate. Can you check those out?\n\n>\n> -Kevin\n>\n\nOn Mon, Jan 11, 2010 at 12:17 PM, Kevin Grittner <[email protected]> wrote:\nBob Dusek <[email protected]> wrote:\n> Scott Marlowe <[email protected]>wrote:\n>> Bob Dusek <[email protected]> wrote:\n\n>>> 4X E7420 Xeon, Four cores (for a total of 16 cores)\n\n>> What method of striped RAID?\n>\n> RAID-0\n\nI hope you have a plan for what to do when any one drive in this\narray fails, and the entire array is unusable.Point noted. \n\nAnyway, my benchmarks tend to show that best throughput occurs at\nabout (CPU_count * 2) plus effective_spindle_count. Since you seem\nto be fully cached, effective_spindle_count would be zero, so I\nwould expect performance to start to degrade when you have more than\nabout 32 sessions active.That's a little disheartening for a single or dual CPU system. \n\n> We actually see degredation when we get more than 40 concurrent\n> requests.\n\nClose enough.\n\n> when we run 200+ requests, we see multi-second queries.\n\nNo surprise there. Your vmstat output suggests that context\nswitches are becoming a problem, and I wouldn't be surprised if I\nheard that the network is an issue. You might want to have someone\ntake a look at the network side to check.This is all happening on a LAN, and network throughput doesn't seem to\nbe an issue. It may be a busy network, but I'm not sure about a\nproblem. Can you elaborate on your suspicion, based on the vmstat? 
I\nhaven't used vmstat much. \n\nYou want to use some connection pooling which queues requests when\nmore than some configurable number of connections is already active\nwith a request. You probably want to run that on the server side.\nAs for the postgresql.conf, could you show what you have right now,\nexcluding all comments?The problem with connection pooling is that we actually have to achieve\nmore than 40 per second, which happens to be the sweet spot with our\ncurrent config. I posted our changes from the default in a reply to another message. I don't want to duplicate. Can you check those out? \n\n-Kevin",
"msg_date": "Mon, 11 Jan 2010 12:54:48 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 10:49 AM, Bob Dusek <[email protected]> wrote:\n>> Depends, is that the first iteration of output? if so, ignore it and\n>> show me the second and further on. Same for vmstat... In fact let\n>> them run for a minute or two and attach the results... OTOH, if that\n>> is the second or later set of output, then you're definitely not IO\n>> bound, and I don't see why the CPUs are not being better utilized.\n>>\n> I was probably not clear... the output I pasted was from the third iteration\n> of output from iostat. And, the vmstat output I pasted was from the sixth\n> iteration of output\n\nYeah, you're definitely CPU/Memory bound it seems.\n\n> We can take some measurements at 40 concurrent requests and see where we\n> stand.\n\nWe'll probably not see much difference, if you're waiting on memory.\n\n> So, we should probably try cranking our random_page_cost value down. When\n> we dump our db with \"pg_dump --format=t\", it's about 15 MB. We should be\n> able to keep the thing in memory.\n\nYeah, I doubt that changing it will make a huge difference given how\nsmall your db is.\n\n>> There are several common bottlenecks you can try to tune away from.\n>> IO doesn't look like a problem for you. Neither does CPU load. So,\n>> then we're left with context switching time and memory to CPU\n>> bandwidth. If your CPUs are basically becoming data pumps then the\n>> speed of your FSB becomes VERY critical, and some older Intel mobos\n>> didn't have a lot of CPU to Memory bandwidth and adding CPUs made it\n>> worse, not better. More modern Intel chipsets have much faster CPU to\n>> Memory BW, since they're using the same kind of fabric switching that\n>> AMD uses on highly parallel machines.\n>\n> Each CPU is 2.13 GHz, with 8MB Cache, and the FSB is 1066 MHz. Does that\n> bus speed seem slow?\n\nWhen 16 cores are all sharing the same bus (which a lot of older\ndesigns do) then yes. I'm not that familiar with the chipset you're\nrunning, but I don't think that series CPU has an integrated memory\ncontroller. Does it break the memory into separate chunks that\ndifferent cpus can access without stepping on each other's toes?\n\nLater Intel and all AMDs since the Opteron have built in memory\ncontrollers. This meant that going from 2 to 4 cpus in an AMD server\ndoubled your memory bandwidth, while going from 2 to 4 cpus on older\nintel designs left it the same so that each cpu got 1/2 as much\nbandwidth as it had before when there were 2.\n\n> It's hard to go to the money tree and say \"we're only using about half of\n> your CPUs, but you need to get better ones.\"\n\nWell, if the problem is that you've got a chipset that can't utilize\nall your CPUs because of memory bw starvation, it's your only fix.\nYou should set up some streaming read / write to memory tests you can\nrun singly, then on 2, 4, 8 16 cores and see how fast memory access is\nas you add more threads. I'm betting you'll get throughput well\nbefore 16 cores are working on the problem.\n\n>> If your limit is your hardware, then the only solution is a faster\n>> machine. It may well be that a machine with dual fast Nehalem\n>> (2.4GHz+) quad core CPUs will be faster. Or 4 or 8 AMD CPUs with\n>> their faster fabric.\n>\n> It sounds like we could spend less money on memory and more on faster hard\n> drives and faster CPUs.\n\nI'm pretty sure you could live with slower hard drives here, and fsync\non as well possibly. 
It looks like it's all cpu <-> memory bandwidth.\n But I'm just guessing.\n\n> But, man, that's a tough sell. This box is a giant, relative to anything\n> else we've worked with.\n\nYeah, I understand. We're looking at having to upgrade our dual cpu /\nquad core AMD 2.1GHz machine to 4 hex core cpus this summer, possibly\ndodecacore cpus even.\n\nSo, I took a break from writing and searched for some more info on the\n74xx series CPUs, and from reading lots of articles, including this\none:\n\nhttp://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3414\n\nIt seems apparent that the 74xx series is a great CPU, as long as\nyou're not memory bound.\n",
"msg_date": "Mon, 11 Jan 2010 11:19:34 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": ">\n>> RAID-0\n>>\n>\n> And how many drives?\n>\n> Just two.\n\nWe have an application server that is processing requests. Each request\n>> consists of a combination of selects, inserts, and deletes. We actually see\n>> degredation when we get more than 40 concurrent requests. The exact number\n>> of queries executed by each request isn't known. It varies per request.\n>> But, an example request would be about 16 inserts and 113 selects. Any\n>> given request can't execute more than a single query at a time.\n>>\n>\n> So, you are concurrently trying to achieve more than around 640 writes and\n> 4520 reads (worst case figures...). This should be interesting. For 40\n> concurrent requests you will probably need at least 4 drives in RAID-0 to\n> sustain the write rates (and I'll guess 5-6 to be sure to cover read\n> requests also, together with plenty of RAM).\n>\n> avg-cpu: %user %nice %system %iowait %steal %idle\n>> 34.29 0.00 7.09 0.03 0.00 58.58\n>>\n>> Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz\n>> avgqu-sz await svctm %util\n>> sda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73\n>> 0.42 3.17 0.04 0.48\n>>\n>> The iowait seems pretty low, doesn't it?\n>>\n>\n> Yes, but you are issuing 133 write operations per seconds per drive(s) -\n> this is nearly the limit of what you can get with 15k RPM drives (actually,\n> the limit should be somewhere around 200..250 IOPS but 133 isn't that far).\n>\n>\nAs you mentioned in a separate response, we have fsync shut off.\nRegardless, we shut off a lot of logging in our app and reduced that number\nto approx 20 per second. So, a lot of those writes were coming from outside\nthe db. We do a lot of logging. We should consider turning some off, it\nseems.\n\n\n top - 10:11:43 up 12 days, 20:48, 6 users, load average: 10.48, 4.16,\n>> 2.83\n>> Tasks: 798 total, 8 running, 790 sleeping, 0 stopped, 0 zombie\n>> Cpu0 : 33.3%us, 7.6%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si,\n>> 0.0%st\n>>\n>\n> There is one other possibility - since the CPUs are not very loaded, are\n> you sure the client application with which you are testing is fast enough to\n> issue enough request to saturate the database?\n>\n\nEach of the 256 requests was being processed by a php process. So, it could\ncertainly be faster. But, the fact that we're seeing the db performance\ndegrade would seem to indicate that our application is fast enough to punish\nthe db. Isn't that true?\n\n\n\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\nRAID-0\n\n\nAnd how many drives?Just two.\nWe have an application server that is processing requests. Each request consists of a combination of selects, inserts, and deletes. We actually see degredation when we get more than 40 concurrent requests. The exact number of queries executed by each request isn't known. It varies per request. But, an example request would be about 16 inserts and 113 selects. Any given request can't execute more than a single query at a time. \n\n\nSo, you are concurrently trying to achieve more than around 640 writes and 4520 reads (worst case figures...). This should be interesting. 
For 40 concurrent requests you will probably need at least 4 drives in RAID-0 to sustain the write rates (and I'll guess 5-6 to be sure to cover read requests also, together with plenty of RAM).\n\n\n avg-cpu: %user %nice %system %iowait %steal %idle\n 34.29 0.00 7.09 0.03 0.00 58.58\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util\nsda 0.00 112.20 0.00 133.40 0.00 1964.80 14.73 0.42 3.17 0.04 0.48\n\nThe iowait seems pretty low, doesn't it?\n\n\nYes, but you are issuing 133 write operations per seconds per drive(s) - this is nearly the limit of what you can get with 15k RPM drives (actually, the limit should be somewhere around 200..250 IOPS but 133 isn't that far).\n As you mentioned in a separate response, we have fsync shut off. Regardless, we shut off a lot of logging in our app and reduced that number to approx 20 per second. So, a lot of those writes were coming from outside the db. We do a lot of logging. We should consider turning some off, it seems.\n\n\ntop - 10:11:43 up 12 days, 20:48, 6 users, load average: 10.48, 4.16, 2.83\nTasks: 798 total, 8 running, 790 sleeping, 0 stopped, 0 zombie\nCpu0 : 33.3%us, 7.6%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\n\n\nThere is one other possibility - since the CPUs are not very loaded, are you sure the client application with which you are testing is fast enough to issue enough request to saturate the database?\nEach of the 256 requests was being processed by a php process. So, it could certainly be faster. But, the fact that we're seeing the db performance degrade would seem to indicate that our application is fast enough to punish the db. Isn't that true? \n \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 11 Jan 2010 13:57:31 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 10:54 AM, Bob Dusek <[email protected]> wrote:\n>> You want to use some connection pooling which queues requests when\n>> more than some configurable number of connections is already active\n>> with a request. You probably want to run that on the server side.\n>> As for the postgresql.conf, could you show what you have right now,\n>> excluding all comments?\n>\n> The problem with connection pooling is that we actually have to achieve more\n> than 40 per second, which happens to be the sweet spot with our current\n> config.\n\nNumber of parallel processes doesn't equal # reqs/second. If your\nmaximum throughput occurs at 40 parallel requests, you'll get more\ndone reducing the maximum number of concurrent processes to 40 and\nletting them stack up in a queue waiting for a spot to run.\n",
"msg_date": "Mon, 11 Jan 2010 12:30:20 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
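A connection pool or application-side queue is the real fix Scott is describing above. The sketch below is only a blunt way to hard-cap concurrency while probing for the throughput knee, since clients beyond the limit are refused with an error rather than queued; it is not from the original thread, and the role and database names are made up for illustration (syntax available from 8.1 on):

    -- Hypothetical names; connections beyond the limit are rejected, not queued.
    ALTER ROLE app_user CONNECTION LIMIT 40;
    ALTER DATABASE app_db CONNECTION LIMIT 40;

A real pooler keeps the extra requests waiting instead of erroring them out, which is why it wins on average response time.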
{
"msg_contents": "Scott Marlowe wrote:\n> So, I took a break from writing and searched for some more info on the\n> 74xx series CPUs, and from reading lots of articles, including this\n> one:\n> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3414\n> It seems apparent that the 74xx series if a great CPU, as long as\n> you're not memory bound.\n> \n\nThis is why I regularly end up recommending people consider Intel's \ndesigns here so often, the ones that use triple-channel DDR3, instead of \nany of the AMD ones. It's extremely easy to end up with a memory-bound \nworkload nowadays, at which point all the CPU power in the world doesn't \nhelp you anymore.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 11 Jan 2010 17:04:08 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 3:04 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> So, I took a break from writing and searched for some more info on the\n>> 74xx series CPUs, and from reading lots of articles, including this\n>> one:\n>> http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3414\n>> It seems apparent that the 74xx series if a great CPU, as long as\n>> you're not memory bound.\n>>\n>\n> This is why I regularly end up recommending people consider Intel's designs\n> here so often, the ones that use triple-channel DDR3, instead of any of the\n> AMD ones. It's extremely easy to end up with a memory-bound workload\n> nowadays, at which point all the CPU power in the world doesn't help you\n> anymore.\n\nThe DDR3 Nehalem and DDR2 AMD are both actually pretty close in real\nworld use on 4 or more socket machines. Most benchmarks on memory\nbandwidth give no huge advantage to either one or the other. They\nboth max out at about 25GB/s.\n\nIt's the older Xeon base 74xx chipsets without integrated memory\ncontrollers that seem to have such horrible bandwidth because they're\nnot multi-channel.\n\nFor dual socket the Nehalem is pretty much the king. By the time you\nget to 8 sockets AMD is still ahead. Just avoid anything older than\nnehalem or istanbul.\n",
"msg_date": "Mon, 11 Jan 2010 15:44:18 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Scott Marlowe wrote:\n> The DDR3 Nehalem and DDR2 AMD are both actually pretty close in real\n> world use on 4 or more socket machines. Most benchmarks on memory\n> bandwidth give no huge advantage to either one or the other. They\n> both max out at about 25GB/s.\n> \nThe most fair comparison I've seen so far is \nhttp://www.advancedclustering.com/company-blog/stream-benchmarking.html \nwhich puts the faster Intel solutions at 37GB/s, while the Opterons bog \ndown at 20GB/s. That matches my own tests pretty well too--Intel's got \nat least a 50% lead here in many cases.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 11 Jan 2010 18:17:08 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 4:17 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> The DDR3 Nehalem and DDR2 AMD are both actually pretty close in real\n>> world use on 4 or more socket machines. Most benchmarks on memory\n>> bandwidth give no huge advantage to either one or the other. They\n>> both max out at about 25GB/s.\n>>\n>\n> The most fair comparison I've seen so far is\n> http://www.advancedclustering.com/company-blog/stream-benchmarking.html\n> which puts the faster Intel solutions at 37GB/s, while the Opterons bog down\n> at 20GB/s. That matches my own tests pretty well too--Intel's got at least\n> a 50% lead here in many cases.\n\nBut that's with only 2 sockets. I'd like to see something comparing 4\nor 8 socket machines. Hmmm, off to googol.\n",
"msg_date": "Mon, 11 Jan 2010 16:29:29 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "\n> Each of the 256 requests was being processed by a php process. So, it \n> could\n> certainly be faster. But, the fact that we're seeing the db performance\n> degrade would seem to indicate that our application is fast enough to \n> punish\n> the db. Isn't that true?\n\n\tNot necessarily. Your DB still has lots of idle CPU, so perhaps it's your \nclient which is getting over the top. Or you have locking problems in your \nDB.\n\tThings to test :\n\n\t- vmstat on the benchmark client\n\t- iptraf on the network link\n\t- monitor ping times between client and server during load test\n\n\tSome time ago, I made a benchmark simulating a forum. Postgres was \nsaturating the gigabit ethernet between server and client...\n\n\tIf those PHP processes run inside Apache, I'd suggest switching to \nlighttpd/fastcgi, which has higher performance, and uses a limited, \ncontrollable set of PHP processes (and therefore DB connections), which in \nturn uses much less memory.\n\n\tPS : try those settings :\n\nfsync = fdatasync\nwal_buffers = 64MB\nwalwriter_delay = 2ms\nsynchronous commits @ 1 s delay\n",
"msg_date": "Tue, 12 Jan 2010 00:53:29 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
}
] |
[
{
"msg_contents": "Hi,\n\nLWLockPadded is either 16 or 32 bytes, so modern systems (e.g. Core2\nor AMD Opteron [1]) with cacheline size of 64 bytes can get\nfalse-sharing in lwlocks.\n\nI changed LWLOCK_PADDED_SIZE in src/backend/storage/lmgr/lwlock.c to\n64, and ran sysbench OLTP read-only benchmark, and got a slight\nimprovement in throughput:\n\nHardware: single-socket Core 2, quad-core, Q6600 @ 2.40GHz\nSoftware: Linux 2.6.28-17, glibc 2.9, gcc 4.3.3\n\nPostgreSQL: 8.5alpha3\n\nsysbench parameters: sysbench --num-threads=4 --max-requests=0\n--max-time=120 --oltp-read-only=on --test=oltp\n\noriginal: 3227, 3243, 3243\nafter: 3256, 3255, 3253\n\nSo there is a speedup of 1.005 or what other people usually call it, a\n0.5% improvement.\n\nHowever, it's a single socket machine, so all the cache traffic does\nnot need to go off-chip. Can someone with a multi-socket machine help\nme run some test so that we can get a better idea of how this change\n(patch attached) performs in bigger systems??\n\nThanks,\nRayson\n\nP.S. And I just googled and found similar discussions about padding\nLWLOCK_PADDED_SIZE, but the previous work was done on an IBM POWER\nsystem, and the benchmark used was apachebench. IMO, the setup was too\ncomplex to measure a small performance improvement in PostgreSQL.\n\n[1] Performance Guidelines for AMD Athlon™ 64 and AMD Opteron™ ccNUMA\nMultiprocessor Systems Application Note",
"msg_date": "Mon, 11 Jan 2010 12:24:43 -0500",
"msg_from": "Rayson Ho <[email protected]>",
"msg_from_op": true,
"msg_subject": "cache false-sharing in lwlocks"
}
] |
[
{
"msg_contents": "Bob Dusek <[email protected]> wrote:\n> Kevin Grittner <[email protected]> wrote:\n>> Bob Dusek <[email protected]> wrote:\n \n>> Anyway, my benchmarks tend to show that best throughput occurs at\n>> about (CPU_count * 2) plus effective_spindle_count. Since you\n>> seem to be fully cached, effective_spindle_count would be zero,\n>> so I would expect performance to start to degrade when you have\n>> more than about 32 sessions active.\n>>\n> That's a little disheartening for a single or dual CPU system.\n \nNot at all. You only have so many resources to keep busy at any one\nmoment. It is generally more efficient to only context switch\nbetween as many processes as can keep those resources relatively\nbusy; otherwise valuable resources are spent switching among the\nvarious processes rather than doing useful work.\n \n[Regular readers of this list might want to skip ahead while I run\nthrough my usual \"thought experiment on the topic. ;-) ]\n \nImagine this hypothetical environment -- you have one CPU running\nrequests. There are no other resources to worry about and no\nlatency to the clients. Let's say that the requests take one second\neach. The client suddenly has 100 requests to run. Assuming\ncontext switching is free, you could submit all at once, and 100\nseconds later, you get 100 responses, with an average response time\nof 100 seconds. Let's put a (again free) connection pooler in\nthere. You submit those 100 requests, but they are fed to the\ndatabase one at a time. You get one response back in one second,\nthe next in two seconds, the last in 100 seconds. No request took\nany longer, and the average response time was 50.5 seconds -- almost\na 50% reduction.\n \nNow context switching is not free, and you had tens of thousands of\nthem per second. Besides the hit on CPU availability during each\nswitch, you're reducing the value of the L1 and L2 caches. So in\nreality, you could expect your \"request storm\" to perform\nsignificantly worse in comparison to the connection pooled\nconfiguration. In reality, you have more than one resource to keep\nbusy, so the pool should be sized greater than one; but it's still\ntrue that there is some point at which getting a request to the\ndatabase server delays the response to that request more than\nqueuing it for later execution would. Some database products build\nin a way to manage this; in PostgreSQL it's on you to do so.\n \n>> Your vmstat output suggests that context switches are becoming a\n>> problem, and I wouldn't be surprised if I heard that the network\n>> is an issue. You might want to have someone take a look at the\n>> network side to check.\n>>\n> This is all happening on a LAN, and network throughput doesn't\n> seem to be an issue. It may be a busy network, but I'm not sure\n> about a problem. Can you elaborate on your suspicion, based on\n> the vmstat? I haven't used vmstat much.\n \nIt was simply this: all that CPU idle time while it was swamped with\nrequests suggests that there might be a bottleneck outside the\ndatabase server. That could be, as another post suggests, the\nclient software. It could also be the network. 
(It could also be\ncontention on locks within PostgreSQL from the large number of\nrequests, but that's covered by the connection pooling suggestion.)\n \n> The problem with connection pooling is that we actually have to\n> achieve more than 40 per second, which happens to be the sweet\n> spot with our current config.\n \nWell, if you're considering a connection pool which can only submit\none request per second, you're looking at the wrong technology. We\nuse a custom connection pool built into our software, so I'm not\nvery familiar with the \"drop in\" packages out there, but we start\nthe next queued request based on the completion of a request --\nthere's no polling involved.\n \nBetween the RAID 0, fsync = off, and full_page_writes = off -- you\nreally had better not be staking anything important on this data. \nThis configuration would make The Flying Wallendas break out in a\nsweat. It suggests to me that you might want to look into a better\nRAID controller -- a high quality controller with battery-backup\n(BBU) cache, configured for write-back, might allow you to change\nall these to safe settings. If you also switch to a RAID\nconfiguration with some redundancy, you'll be much safer....\n \n-Kevin\n\n",
"msg_date": "Mon, 11 Jan 2010 12:20:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
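The 50.5-second figure in Kevin's thought experiment is simply the mean of the serialized completion times 1, 2, ..., 100 seconds. A quick sanity check in psql, not part of the original exchange:

    -- Average completion time when 100 one-second jobs run strictly one at a time.
    SELECT avg(t) FROM generate_series(1, 100) AS t;  -- returns 50.5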
{
"msg_contents": "On Mon, Jan 11, 2010 at 1:20 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Bob Dusek <[email protected]> wrote:\n> > Kevin Grittner <[email protected]> wrote:\n> >> Bob Dusek <[email protected]> wrote:\n>\n> >> Anyway, my benchmarks tend to show that best throughput occurs at\n> >> about (CPU_count * 2) plus effective_spindle_count. Since you\n> >> seem to be fully cached, effective_spindle_count would be zero,\n> >> so I would expect performance to start to degrade when you have\n> >> more than about 32 sessions active.\n> >>\n> > That's a little disheartening for a single or dual CPU system.\n>\n> Not at all. You only have so many resources to keep busy at any one\n> moment. It is generally more efficient to only context switch\n> between as many processes as can keep those resources relatively\n> busy; otherwise valuable resources are spent switching among the\n> various processes rather than doing useful work.\n>\n> [Regular readers of this list might want to skip ahead while I run\n> through my usual \"thought experiment on the topic. ;-) ]\n>\n> Imagine this hypothetical environment -- you have one CPU running\n> requests. There are no other resources to worry about and no\n> latency to the clients. Let's say that the requests take one second\n> each. The client suddenly has 100 requests to run. Assuming\n> context switching is free, you could submit all at once, and 100\n> seconds later, you get 100 responses, with an average response time\n> of 100 seconds. Let's put a (again free) connection pooler in\n> there. You submit those 100 requests, but they are fed to the\n> database one at a time. You get one response back in one second,\n> the next in two seconds, the last in 100 seconds. No request took\n> any longer, and the average response time was 50.5 seconds -- almost\n> a 50% reduction.\n>\n> Now context switching is not free, and you had tens of thousands of\n> them per second. Besides the hit on CPU availability during each\n> switch, you're reducing the value of the L1 and L2 caches. So in\n> reality, you could expect your \"request storm\" to perform\n> significantly worse in comparison to the connection pooled\n> configuration. In reality, you have more than one resource to keep\n> busy, so the pool should be sized greater than one; but it's still\n> true that there is some point at which getting a request to the\n> database server delays the response to that request more than\n> queuing it for later execution would. Some database products build\n> in a way to manage this; in PostgreSQL it's on you to do so.\n>\n\nI appreciate the explanation. We were thinking that since we have so much\nCPU available, we weren't hitting Postgres' peak and that maybe a config\nchange would help. But, thus far, it sounds like we're hardware-bound, and\nan application connection pool seems inevitable.\n\n>> Your vmstat output suggests that context switches are becoming a\n> >> problem, and I wouldn't be surprised if I heard that the network\n> >> is an issue. You might want to have someone take a look at the\n> >> network side to check.\n> >>\n> > This is all happening on a LAN, and network throughput doesn't\n> > seem to be an issue. It may be a busy network, but I'm not sure\n> > about a problem. Can you elaborate on your suspicion, based on\n> > the vmstat? I haven't used vmstat much.\n>\n> It was simply this: all that CPU idle time while it was swamped with\n> requests suggests that there might be a bottleneck outside the\n> database server. 
That could be, as another post suggests, the\n> client software. It could also be the network. (It could also be\n> contention on locks within PostgreSQL from the large number of\n> requests, but that's covered by the connection pooling suggestion.)\n>\n\nI'm curious if it would be worth our effort to enable the pg_stat stuff and\ntry to analyze the system that way. We don't have a lot of experience with\nthat, but if we could learn something critical from it, we will do it.\n\n\n> > The problem with connection pooling is that we actually have to\n> > achieve more than 40 per second, which happens to be the sweet\n> > spot with our current config.\n>\n> Well, if you're considering a connection pool which can only submit\n> one request per second, you're looking at the wrong technology. We\n> use a custom connection pool built into our software, so I'm not\n> very familiar with the \"drop in\" packages out there, but we start\n> the next queued request based on the completion of a request --\n> there's no polling involved.\n>\n\nI'm thinking we'll have to roll our own. In a way, we have already done the\nconnection pooling. We're experimenting with a new architecture with much\nmore demanding performance requirements. We were emboldened by the hardware\nspecs.\n\n\n> Between the RAID 0, fsync = off, and full_page_writes = off -- you\n> really had better not be staking anything important on this data.\n> This configuration would make The Flying Wallendas break out in a\n> sweat. It suggests to me that you might want to look into a better\n> RAID controller -- a high quality controller with battery-backup\n> (BBU) cache, configured for write-back, might allow you to change\n> all these to safe settings. If you also switch to a RAID\n> configuration with some redundancy, you'll be much safer....\n>\n\nYeah :) We haven't run into much trouble. But, we cut our teeth doing\nperformance analysis of our app using PG 7.4. And, people on this list seem\nto be adamantly against this config these days. Is this safer in older\nversions of PG? Or, are the risks the same?\n\nWe have some asynchronous communications processes that communicate\npermanent db changes to an enterprise-level data warehouse. And, we can\nrecover that data back down to the server if the server goes belly-up. If\nsomething does go belly up, we really only lose the bit of data that hasn't\nbeen communicated yet. It's true, that this data is important. However,\nit's also true that it's very costly to guarantee this that very small\namount of data isn't lost. And, practically speaking (for our purposes) it\nseems that the data's not worth the cost.\n\n-Kevin\n>\n>\n",
"msg_date": "Mon, 11 Jan 2010 14:34:44 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
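On the "pg_stat stuff" Bob mentions: the activity view is populated by default (assuming track_activities has not been turned off), so a first look needs no configuration change. A minimal sketch, not from the original thread, of the kind of sampling query that separates busy backends from ones blocked on a lock, using the 8.x column names:

    -- Busy vs. lock-waiting backends at this instant; run repeatedly under load.
    SELECT waiting, count(*)
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     GROUP BY waiting;

If the count with waiting = true climbs as concurrency rises past 40, lock contention is at least part of the story.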
{
"msg_contents": "On Mon, Jan 11, 2010 at 11:20 AM, Kevin Grittner\n<[email protected]> wrote:\n> Bob Dusek <[email protected]> wrote:\n>> Kevin Grittner <[email protected]> wrote:\n>>> Bob Dusek <[email protected]> wrote:\n>\n>>> Anyway, my benchmarks tend to show that best throughput occurs at\n>>> about (CPU_count * 2) plus effective_spindle_count. Since you\n>>> seem to be fully cached, effective_spindle_count would be zero,\n>>> so I would expect performance to start to degrade when you have\n>>> more than about 32 sessions active.\n>>>\n>> That's a little disheartening for a single or dual CPU system.\n>\n> Not at all. You only have so many resources to keep busy at any one\n> moment. It is generally more efficient to only context switch\n> between as many processes as can keep those resources relatively\n> busy; otherwise valuable resources are spent switching among the\n> various processes rather than doing useful work.\n>\n> [Regular readers of this list might want to skip ahead while I run\n> through my usual \"thought experiment on the topic. ;-) ]\n>\n> Imagine this hypothetical environment -- you have one CPU running\n> requests. There are no other resources to worry about and no\n> latency to the clients. Let's say that the requests take one second\n> each. The client suddenly has 100 requests to run. Assuming\n> context switching is free, you could submit all at once, and 100\n> seconds later, you get 100 responses, with an average response time\n> of 100 seconds. Let's put a (again free) connection pooler in\n> there. You submit those 100 requests, but they are fed to the\n> database one at a time. You get one response back in one second,\n> the next in two seconds, the last in 100 seconds. No request took\n> any longer, and the average response time was 50.5 seconds -- almost\n> a 50% reduction.\n>\n> Now context switching is not free, and you had tens of thousands of\n> them per second.\n\nFYI, on an 8 or 16 core machine, 10k to 30k context switches per\nsecond aren't that much. If you're climbing past 100k you might want\nto look out.\n\nThe more I read up on the 74xx CPUs and look at the numbers here the\nmore I think it's just that this machine has X bandwidth and it's\nusing it all up. You could put 1,000 cores in it, and it wouldn't go\nany faster. My guess is that a 4x6 core AMD machine or even a 2x6\nNehalem would be much faster at this job. Only way to tell is to run\nsomething like the stream benchmark and see how it scales,\nmemory-wise, as you add cores to the benchmark.\n",
"msg_date": "Mon, 11 Jan 2010 12:36:39 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 12:36 PM, Scott Marlowe <[email protected]> wrote:\n> FYI, on an 8 or 16 core machine, 10k to 30k context switches per\n> second aren't that much. If you're climbing past 100k you might want\n> to look out.\n>\n> The more I read up on the 74xx CPUs and look at the numbers here the\n> more I think it's just that this machine has X bandwidth and it's\n> using it all up. You could put 1,000 cores in it, and it wouldn't go\n> any faster. My guess is that a 4x6 core AMD machine or even a 2x6\n> Nehalem would be much faster at this job. Only way to tell is to run\n> something like the stream benchmark and see how it scales,\n> memory-wise, as you add cores to the benchmark.\n\nAlso I'm guessing that query profiling may help, if we can get the\nqueries to request less data to trundle through then we might be able\nto get Bob the performance needed to keep up.\n\nBut at some point he's gonna have to look at partitioning his database\nonto multiple machines some how.\n",
"msg_date": "Mon, 11 Jan 2010 12:38:42 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 12:34 PM, Bob Dusek <[email protected]> wrote:\n> Yeah :) We haven't run into much trouble. But, we cut our teeth doing\n> performance analysis of our app using PG 7.4. And, people on this list seem\n> to be adamantly against this config these days. Is this safer in older\n> versions of PG? Or, are the risks the same?\n\nIt's always been unsafe. Just that 7.4 was so slow that sometimes you\ndidn't really get to choose.\n\n> We have some asynchronous communications processes that communicate\n> permanent db changes to an enterprise-level data warehouse. And, we can\n> recover that data back down to the server if the server goes belly-up. If\n> something does go belly up, we really only lose the bit of data that hasn't\n> been communicated yet. It's true, that this data is important. However,\n> it's also true that it's very costly to guarantee this that very small\n> amount of data isn't lost. And, practically speaking (for our purposes) it\n> seems that the data's not worth the cost.\n\nI have slave dbs running on four 7200RPM SATA drives with fsync off.\nThey only get updated from the master db so if they go boom, I just\nrecreate their node. There's times fsync off is ok, you just have to\nknow that that db is now considered \"disposable\".\n\nHowever, I'd suggest doing some benchmarking to PROVE that you're\nseeing an improvement from fsync being off. If there's no\nimprovement, then you might as well leave it on and save yourself some\nheadache later on when the machine gets powered off suddenly etc.\n",
"msg_date": "Mon, 11 Jan 2010 12:45:44 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "\n> I have slave dbs running on four 7200RPM SATA drives with fsync off.\n> They only get updated from the master db so if they go boom, I just\n> recreate their node. There's times fsync off is ok, you just have to\n> know that that db is now considered \"disposable\".\n> \n> However, I'd suggest doing some benchmarking to PROVE that you're\n> seeing an improvement from fsync being off. If there's no\n> improvement, then you might as well leave it on and save yourself some\n> headache later on when the machine gets powered off suddenly etc.\n\nI haven't been involved in any benchmarking of PG8 with fsync=off, but we certainly did it with PG 7.4. fsync=0ff, for our purposes, was MUCH faster.\n \n> -- \n> Sent via pgsql-performance mailing list \n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> ",
"msg_date": "Mon, 11 Jan 2010 15:13:12 -0500",
"msg_from": "\"Dusek, Bob\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Mon, Jan 11, 2010 at 1:13 PM, Dusek, Bob <[email protected]> wrote:\n> I haven't been involved in any benchmarking of PG8 with fsync=off, but we certainly did it with PG 7.4. fsync=0ff, for our purposes, was MUCH faster.\n\nAnd many changes have been made since then to make fsyncing much\nfaster. You may be grinding the valves on a 2009 Ferrari because\npappy used to have to do it on his 1958 pickup truck here.\n",
"msg_date": "Mon, 11 Jan 2010 13:16:45 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Mon, Jan 11, 2010 at 1:13 PM, Dusek, Bob <[email protected]> wrote:\n>> I haven't been involved in any benchmarking of PG8 with fsync=off, but we certainly did it with PG 7.4. �fsync=0ff, for our purposes, was MUCH faster.\n\n> And many changes have been made since then to make fsyncing much\n> faster. You may be grinding the valves on a 2009 Ferrari because\n> pappy used to have to do it on his 1958 pickup truck here.\n\nPerhaps more to the point, synchronous_commit can get most of the same\nspeedup with much less risk to your database. You really owe it to\nyourself to redo that benchmarking with a recent PG release.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 11 Jan 2010 15:25:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help "
},
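For reference, Tom's suggestion can be tried per session or per database without touching fsync. A sketch for 8.3 or later, with a hypothetical database name, not taken from the thread:

    -- fsync stays on; commits may return before their WAL is flushed.
    -- A crash can lose the last fraction of a second of commits, but it
    -- cannot corrupt the cluster the way fsync = off can.
    SET synchronous_commit TO off;                       -- current session
    ALTER DATABASE app_db SET synchronous_commit = off;  -- future sessions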
{
"msg_contents": "Scott Marlowe <[email protected]> wrote:\n \n> FYI, on an 8 or 16 core machine, 10k to 30k context switches per\n> second aren't that much.\n \nYeah, on our 16 core machines under heavy load we hover around 30k. \nHe was around 50k, which is why I said it looked like it was\n\"becoming a problem.\"\n \n> If you're climbing past 100k you might want to look out.\n \nWe hit that at one point; cutting our connection pool size brought\nit down and improved performance dramatically. I don't think I'd\nwait for 100k to address it next time.\n \n> The more I read up on the 74xx CPUs and look at the numbers here\n> the more I think it's just that this machine has X bandwidth and\n> it's using it all up. You could put 1,000 cores in it, and it\n> wouldn't go any faster. My guess is that a 4x6 core AMD machine\n> or even a 2x6 Nehalem would be much faster at this job. Only way\n> to tell is to run something like the stream benchmark and see how\n> it scales, memory-wise, as you add cores to the benchmark.\n \nI haven't been keeping up on the hardware, so I defer to you on\nthat. It certainly seems like it would fit with the symptoms. On\nthe other hand, I haven't seen anything yet to convince me that it\n*couldn't* be a client-side or network bottleneck, or the sort of\nlock contention bottleneck that showed up in some of Sun's\nbenchmarks. If it were my problem, I'd be trying to rule out\nwhichever one of those could be tested most easily, iteratively.\n \nAlso, as you suggested, identifying what queries are taking most of\nthe time and trying to optimize them is a route that might help,\nregardless.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jan 2010 14:55:50 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": ">\n>\n> I haven't been keeping up on the hardware, so I defer to you on\n> that. It certainly seems like it would fit with the symptoms. On\n> the other hand, I haven't seen anything yet to convince me that it\n> *couldn't* be a client-side or network bottleneck, or the sort of\n> lock contention bottleneck that showed up in some of Sun's\n> benchmarks. If it were my problem, I'd be trying to rule out\n> whichever one of those could be tested most easily, iteratively.\n>\n>\nHow do I learn more about the actual lock contention in my db? Lock\ncontention makes some sense. Each of the 256 requests are relatively\nsimilar. So, I don't doubt that lock contention could be an issue. I just\ndon't know how to observe it or correct it. It seems like if we have\nprocesses that are contending for locks, there's not much we can do about\nit.\n\nAlso, as you suggested, identifying what queries are taking most of\n> the time and trying to optimize them is a route that might help,\n> regardless.\n>\n\nWe often undertake query optimization. And, we often learn things about our\napp or make small performance gains from it. Sometimes, we are even able to\nmake big changes to the application to make large gains based on how we see\nqueries performing.\n\nSo, I agree that it's a good thing. However, query optimizing is tough,\nsince you can't necessarily predict the sizes of your tables in a real-time\nsystem that is used differently by different users.\n\n\n> -Kevin\n>\n\n\nI haven't been keeping up on the hardware, so I defer to you on\nthat. It certainly seems like it would fit with the symptoms. On\nthe other hand, I haven't seen anything yet to convince me that it\n*couldn't* be a client-side or network bottleneck, or the sort of\nlock contention bottleneck that showed up in some of Sun's\nbenchmarks. If it were my problem, I'd be trying to rule out\nwhichever one of those could be tested most easily, iteratively.\nHow do I learn more about the actual lock contention in my db? Lock contention makes some sense. Each of the 256 requests are relatively similar. So, I don't doubt that lock contention could be an issue. I just don't know how to observe it or correct it. It seems like if we have processes that are contending for locks, there's not much we can do about it. \n\nAlso, as you suggested, identifying what queries are taking most of\nthe time and trying to optimize them is a route that might help,\nregardless.We often undertake query optimization. And, we often learn things about our app or make small performance gains from it. Sometimes, we are even able to make big changes to the application to make large gains based on how we see queries performing. \nSo, I agree that it's a good thing. However, query optimizing is tough, since you can't necessarily predict the\nsizes of your tables in a real-time system that is used differently by\ndifferent users. \n\n-Kevin",
"msg_date": "Mon, 11 Jan 2010 16:23:39 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek <[email protected]> wrote:\n \n> How do I learn more about the actual lock contention in my db? \n> Lock contention makes some sense. Each of the 256 requests are\n> relatively similar. So, I don't doubt that lock contention could\n> be an issue. I just don't know how to observe it or correct it. \n> It seems like if we have processes that are contending for locks,\n> there's not much we can do about it.\n \nI'm not sure what the best way would be to measure it, but in prior\ndiscussions the general mood seemed to be that if you had so many\nactive sessions that you were running into the issue, the best\nsolution was to use a connection pool to avoid it.\n \n-Kevin\n",
"msg_date": "Mon, 11 Jan 2010 15:58:34 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "> > How do I learn more about the actual lock contention in my db? \n> > Lock contention makes some sense. Each of the 256 requests are\n> > relatively similar. So, I don't doubt that lock contention could\n> > be an issue. I just don't know how to observe it or correct it. \n> > It seems like if we have processes that are contending for locks,\n> > there's not much we can do about it.\n> \n> I'm not sure what the best way would be to measure it, but in prior\n> discussions the general mood seemed to be that if you had so many\n> active sessions that you were running into the issue, the best\n> solution was to use a connection pool to avoid it.\n\nSorry.. by \"not much we can do about it\", I meant, from a query perspective. I mean, we can't use locking hints or anything like that in Postgres that I know of. \n\nI do understand that the connection pool will help this. \n\n\n> \n> -Kevin\n> \n> -- \n> Sent via pgsql-performance mailing list \n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> ",
"msg_date": "Mon, 11 Jan 2010 17:06:24 -0500",
"msg_from": "\"Dusek, Bob\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek wrote:\n>\n> How do I learn more about the actual lock contention in my db?\n\nThere's a page with a sample query and links to more info at \nhttp://wiki.postgresql.org/wiki/Lock_Monitoring\n\nOne other little thing: when you're running \"top\", try using \"top -c\" \ninstead. That should show you exactly what all the postmaster backends \nare actually doing, which is really handy to sort out issues in this area.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 11 Jan 2010 17:12:33 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
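The wiki page Greg points to carries the maintained version of the query; what follows is a simplified sketch of the same idea against the 8.x catalogs (pg_stat_activity still exposes procpid and current_query there), not the wiki's exact text. It only catches sessions waiting on another backend's transaction ID, which covers row-level conflicts but not every lock type:

    -- Ungranted lock requests paired with the sessions that hold the
    -- conflicting transaction-ID lock.
    SELECT bl.pid           AS blocked_pid,
           a.current_query  AS blocked_query,
           kl.pid           AS blocking_pid,
           ka.current_query AS blocking_query
      FROM pg_catalog.pg_locks bl
      JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
      JOIN pg_catalog.pg_locks kl         ON kl.transactionid = bl.transactionid
                                         AND kl.pid <> bl.pid
      JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
     WHERE NOT bl.granted;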
{
"msg_contents": "On Mon, 11 Jan 2010, Bob Dusek wrote:\n> How do I learn more about the actual lock contention in my db? Lock contention makes\n> some sense. Each of the 256 requests are relatively similar. So, I don't doubt that\n> lock contention could be an issue. I just don't know how to observe it or correct it. \n> It seems like if we have processes that are contending for locks, there's not much we can\n> do about it. \n\nTo me:\n\n1. This doesn't look like an IO bandwidth issue, as the database is small.\n2. This doesn't look like a CPU speed issue, as usage is low.\n3. This doesn't look like a memory bandwidth issue, as that would count as\n CPU active in top.\n4. This doesn't look like a memory size problem either.\n\nSo, what's left? It could be a network bandwidth problem, if your client \nis on a separate server. You haven't really given much detail about the \nnature of the queries, so it is difficult for us to tell if the size of \nthe results means that you are maxing out your network. However, it \ndoesn't sound like it is likely to me that this is the problem.\n\nIt could be a client bottleneck problem - maybe your server is performing \nreally well, but your client can't keep up. You may be able to determine \nthis by switching on logging of long-running queries in Postgres, and \ncomparing that with what your client says. Also, look at the resource \nusage on the client machine.\n\nIt could be a lock contention problem. To me, this feels like the most \nlikely. You say that the queries are similar. If you are reading and \nwriting from a small set of the same objects in each of the transactions, \nthen you will suffer badly from lock contention, as only one backend can \nbe in a read-modify-write cycle on a given object at a time. We don't know \nenough about the data and queries to say whether this is the case. \nHowever, if you have a common object that every request touches (like a \nstatus line or something), then try to engineer that out of the system.\n\nHope this helps. Synchronising forty processes around accessing a single \nobject for high performance is really hard, and Postgres does it \nincredibly well, but it is still by far the best option to avoid \ncontention completely if possible.\n\n> -Kevin\n\nIt'd really help us reading your emails if you could make sure that it is \neasy to distinguish your words from words you are quoting. It can be very \nconfusing reading some of your emails, trying to remember which bits I \nhave seen before written by someone else. This is one of the few lines \nthat I know you didn't write - you're a Bob, not a Kevin. A few \">\" \ncharacters at the beginning of lines, which most mail readers will add \nautomatically, make all the difference.\n\nMatthew\n\n-- \n Me... a skeptic? I trust you have proof?",
"msg_date": "Tue, 12 Jan 2010 17:12:30 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
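Matthew's point about logging long-running statements maps onto log_min_duration_statement, which Bob's follow-up confirms is already set to 200 ms in their setup. For completeness, a sketch of how that threshold is usually set (value in milliseconds, -1 disables it; the database name is hypothetical):

    -- In postgresql.conf:  log_min_duration_statement = 200
    -- Or, without a restart (superuser):
    ALTER DATABASE app_db SET log_min_duration_statement = 200;
    SET log_min_duration_statement = 200;  -- current session only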
{
"msg_contents": "Matthew Wakeling <[email protected]> wrote:\n \n>> -Kevin\n> \n> It'd really help us reading your emails if you could make sure\n> that it is easy to distinguish your words from words you are\n> quoting. It can be very confusing reading some of your emails,\n> trying to remember which bits I have seen before written by\n> someone else. This is one of the few lines that I know you didn't\n> write - you're a Bob, not a Kevin. A few \">\" characters at the\n> beginning of lines, which most mail readers will add\n> automatically, make all the difference.\n \nThat took me by surprise, because outside of that one line, where\nBob apparently lost the leading character, I've been seeing his\nmessages properly quoted. I went back and looked at Bob's old\nmessages and found that he's sending them in multiple mime formats,\ntext/plain with the '>' characters and the following:\n \n--0016e6d77e63233088047ce8a128\nContent-Type: text/html; charset=ISO-8859-1\nContent-Transfer-Encoding: quoted-printable\n \nI hadn't noticed, because I have my email reader set up to default\nto text format if available. Your reader must be looking at the\nhtml format and not handling the this stuff:\n \n<blockquote class=3D\"gmail_quote\" style=3D\"border-left: 1px solid=\n rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left:\n1ex;\"><div><d=\niv class=3D\"h5\">\n \nYou might want to adjust your reader.\n \nBob, you might want to just send plain text, to avoid such problems.\n \n-Kevin\n",
"msg_date": "Tue, 12 Jan 2010 11:38:27 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Tue, Jan 12, 2010 at 12:12 PM, Matthew Wakeling <[email protected]>wrote:\n\n> On Mon, 11 Jan 2010, Bob Dusek wrote:\n>\n>> How do I learn more about the actual lock contention in my db? Lock\n>> contention makes\n>> some sense. Each of the 256 requests are relatively similar. So, I don't\n>> doubt that\n>> lock contention could be an issue. I just don't know how to observe it or\n>> correct it.\n>> It seems like if we have processes that are contending for locks, there's\n>> not much we can\n>> do about it.\n>>\n>\n> To me:\n>\n> 1. This doesn't look like an IO bandwidth issue, as the database is small.\n> 2. This doesn't look like a CPU speed issue, as usage is low.\n> 3. This doesn't look like a memory bandwidth issue, as that would count as\n> CPU active in top.\n> 4. This doesn't look like a memory size problem either.\n>\n> So, what's left? It could be a network bandwidth problem, if your client is\n> on a separate server. You haven't really given much detail about the nature\n> of the queries, so it is difficult for us to tell if the size of the results\n> means that you are maxing out your network. However, it doesn't sound like\n> it is likely to me that this is the problem.\n>\n>\nThe connections to postgres are happening on the localhost. Our application\nserver accepts connections from the network, and the application queries\nPostgres using a standard pg_pconnect on the localhost.\n\n\n> It could be a client bottleneck problem - maybe your server is performing\n> really well, but your client can't keep up. You may be able to determine\n> this by switching on logging of long-running queries in Postgres, and\n> comparing that with what your client says. Also, look at the resource usage\n> on the client machine.\n>\n\nWe've been logging long-running queries (200 ms). That's how we know\nPostgres is degrading. We don't see any queries showing up when we have 40\nclients running. But, we start seeing quite a bit show up after that.\n\n\n>\n> It could be a lock contention problem. To me, this feels like the most\n> likely. You say that the queries are similar. If you are reading and writing\n> from a small set of the same objects in each of the transactions, then you\n> will suffer badly from lock contention, as only one backend can be in a\n> read-modify-write cycle on a given object at a time. We don't know enough\n> about the data and queries to say whether this is the case. However, if you\n> have a common object that every request touches (like a status line or\n> something), then try to engineer that out of the system.\n>\n> Hope this helps. Synchronising forty processes around accessing a single\n> object for high performance is really hard, and Postgres does it incredibly\n> well, but it is still by far the best option to avoid contention completely\n> if possible.\n>\n\nEach of the concurrent clients does a series of selects, inserts, updates,\nand deletes. The requests would generally never update or delete the same\nrows in a table. However, the requests do generally write to the same\ntables. And, they are all reading from the same tables that they're writing\nto. For the inserts, I imagine they are blocking on access to the sequence\nthat controls the primary keys for the insert tables.\n\nBut, I'm not sure about locking beyond that. When we delete from the\ntables, we generally delete where \"clientid=X\", which deletes all of the\nrows that a particular client inserted (each client cleans up its own rows\nafter it finishes what its doing). 
Would that be blocking inserts on that\ntable for other clients?\n\n\n>\n> -Kevin\n>>\n>\n> It'd really help us reading your emails if you could make sure that it is\n> easy to distinguish your words from words you are quoting. It can be very\n> confusing reading some of your emails, trying to remember which bits I have\n> seen before written by someone else. This is one of the few lines that I\n> know you didn't write - you're a Bob, not a Kevin. A few \">\" characters at\n> the beginning of lines, which most mail readers will add automatically, make\n> all the difference.\n>\n>\nI'm really sorry. I'm using gmail's interface. I just saw the \"<< Plain\nText\" formatter at the top of this compose message. But, if I convert it to\nPlain Text now, I may lose my portion of the message. I'll use the Plain\nText when posting future messages.\n\nSorry for the hassle.\n\nMatthew\n>\n> --\n> Me... a skeptic? I trust you have proof?",
"msg_date": "Tue, 12 Jan 2010 13:01:39 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
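An illustrative way to act on the "how do I observe lock contention" question in the message above is to poll pg_stat_activity while the concurrent clients are running; the waiting column is true for any backend currently blocked on a lock. This is only a sketch: the column names procpid and current_query are the ones used by the 8.x servers discussed in this thread (later releases renamed them), and the '<IDLE>' filter matches how those versions report idle backends.

    SELECT procpid, usename, waiting,
           now() - query_start AS runtime,
           current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY runtime DESC;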
{
"msg_contents": "> Bob, you might want to just send plain text, to avoid such problems.\n\nWill do. Looks like gmail's interface does it nicely.\n\n>\n> -Kevin\n",
"msg_date": "Tue, 12 Jan 2010 13:09:03 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Tue, 12 Jan 2010, Bob Dusek wrote:\n> Each of the concurrent clients does a series of selects, inserts, updates,\n> and deletes. The requests would generally never update or delete the same\n> rows in a table. However, the requests do generally write to the same\n> tables. And, they are all reading from the same tables that they're writing\n> to. For the inserts, I imagine they are blocking on access to the sequence\n> that controls the primary keys for the insert tables.\n\nI'm going to have to bow out at this stage, and let someone else who knows \nmore about what gets locked in a transaction help instead. The sequence \nmay be significant, but I would have thought it would have to be something \na bit bigger that is gumming up the works.\n\n> I'm really sorry. I'm using gmail's interface.\n\nActually, you weren't doing anything particularly wrong as it turns out. \nIt is partly a case of alpine being too clever for its own good, just as \nKevin pointed out. My mail reader is taking the \"most preferred\" mime \nalternative, which is the HTML version, and interpreting it to its best \nability, which isn't very well. It is the email that says which \nalternative is preferred, by the way. I have just forced alpine to view \nthe plain text version instead, and it is much better.\n\n> I just saw the \"<< Plain Text\" formatter at the top of this compose \n> message. But, if I convert it to Plain Text now, I may lose my portion \n> of the message. I'll use the Plain Text when posting future messages.\n\nTo be honest, that's always a good idea, although you didn't actually do \nwrong. I do know people whose spam filters immediately discard emails that \ncontain a HTML alternative - that's taking it to the extreme!\n\nMatthew\n\n-- \n Beware of bugs in the above code; I have only proved it correct, not\n tried it. --Donald Knuth\n",
"msg_date": "Tue, 12 Jan 2010 18:29:44 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
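As a small aside on the sequence question raised above: nextval() hands out values outside normal transaction rules, so concurrent inserters never queue up on it, and a rolled-back transaction simply leaves a gap. A minimal sketch, with an invented sequence name:

    BEGIN;
    SELECT nextval('orders_id_seq');  -- suppose this returns 101
    ROLLBACK;                         -- 101 is not handed back
    SELECT nextval('orders_id_seq');  -- returns 102; no session ever blocks here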
{
"msg_contents": "On 13/01/2010 2:01 AM, Bob Dusek wrote:\n\n> The connections to postgres are happening on the localhost. Our\n> application server accepts connections from the network, and the\n> application queries Postgres using a standard pg_pconnect on the localhost.\n\nWell, that's a good reason to have all those CPUs - if your app server \nruns on the same host as Pg.\n\n> We've been logging long-running queries (200 ms). That's how we know\n> Postgres is degrading. We don't see any queries showing up when we have\n> 40 clients running. But, we start seeing quite a bit show up after that.\n\nIt might be informative to see how fast query times are increasing with \nclient count. You can probably figure this out by progressively lowering \nyour query time logging theshold.\n\n> Each of the concurrent clients does a series of selects, inserts,\n> updates, and deletes. The requests would generally never update or\n> delete the same rows in a table. However, the requests do generally\n> write to the same tables. And, they are all reading from the same\n> tables that they're writing to.\n\nAFAIK None of that should interfere with each other, so long as they're \nnot working with the same sets of tuples.\n\n> For the inserts, I imagine they are\n> blocking on access to the sequence that controls the primary keys for\n> the insert tables.\n\nI doubt it. Sequences are outside transactional rules for that reason. \nIt takes an incredibly short time for nextval(...) to obtain the next \nvalue for the sequence, and after that the sequence is unlocked and \nready for the next use.\n\n> But, I'm not sure about locking beyond that. When we delete from the\n> tables, we generally delete where \"clientid=X\", which deletes all of the\n> rows that a particular client inserted (each client cleans up its own\n> rows after it finishes what its doing). Would that be blocking inserts\n> on that table for other clients?\n\nNow would be a good time to start watching pg_locks - rather than \nguessing, try to actually observe if there's lock contention.\n\nI'd also consider looking into a connection pool so that as the number \nof clients to your appserver increases you can keep the number of active \nPg connections at the \"sweet spot\" for your server to maximise overall \nthroughput.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 13 Jan 2010 13:01:51 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
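Following up on the suggestion above to watch pg_locks rather than guess, here is a minimal sketch of a query one might poll during a busy period. It lists only ungranted lock requests together with what each waiting backend is running; procpid and current_query are the pre-9.2 column names in use at the time of this thread, and pairing each waiter with the session that actually holds the conflicting lock would take a further self-join on pg_locks.

    SELECT w.pid                AS waiting_pid,
           w.relation::regclass AS relation,
           w.mode,
           a.current_query
      FROM pg_locks w
      JOIN pg_stat_activity a ON a.procpid = w.pid
     WHERE NOT w.granted;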
{
"msg_contents": "FYI - We have implemented a number of changes...\n\na) some query and application optimizations\nb) connection pool (on the cheap: set max number of clients on\nPostgres server and created a blocking wrapper to pg_pconnect that\nwill block until it gets a connection)\nc) moved the application server to a separate box\n\nAnd, we pretty much doubled our capacity... from approx 40 \"requests\"\nper second to approx 80.\n\nThe problem with our \"cheap\" connection pool is that the persistent\nconnections don't seem to be available immediately after they're\nreleased by the previous process. pg_close doesn't seem to help the\nsituation. We understand that pg_close doesn't really close a\npersistent connection, but we were hoping that it would cleanly\nrelease it for another client to use. Curious.\n\nWe've also tried third-party connection pools and they don't seem to\nbe real fast.\n\nThanks for all of your input. We really appreciate it.\n\nBob\n",
"msg_date": "Wed, 13 Jan 2010 15:10:04 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "On Wed, Jan 13, 2010 at 1:10 PM, Bob Dusek <[email protected]> wrote:\n> And, we pretty much doubled our capacity... from approx 40 \"requests\"\n> per second to approx 80.\n\nExcellent!\n\n> The problem with our \"cheap\" connection pool is that the persistent\n> connections don't seem to be available immediately after they're\n> released by the previous process. pg_close doesn't seem to help the\n> situation. We understand that pg_close doesn't really close a\n> persistent connection, but we were hoping that it would cleanly\n> release it for another client to use. Curious.\n\nYeah, the persistent connects in php are kinda as dangerous as they\nare useful.. Have you tried using regular connects just to compare\nperformance? On Linux they're not too bad, but on Windows (the pg\nserver that is) it's pretty horrible performance-wise.\n\n> We've also tried third-party connection pools and they don't seem to\n> be real fast.\n\nWhat have you tried? Would pgbouncer work for you?\n",
"msg_date": "Wed, 13 Jan 2010 13:43:29 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": ">> The problem with our \"cheap\" connection pool is that the persistent\n>> connections don't seem to be available immediately after they're\n>> released by the previous process. pg_close doesn't seem to help the\n>> situation. We understand that pg_close doesn't really close a\n>> persistent connection, but we were hoping that it would cleanly\n>> release it for another client to use. Curious.\n>\n> Yeah, the persistent connects in php are kinda as dangerous as they\n> are useful.. Have you tried using regular connects just to compare\n> performance? On Linux they're not too bad, but on Windows (the pg\n> server that is) it's pretty horrible performance-wise.\n\nYes we have. Regular connections are pretty slow, even when our\napplication server is on the same box as the db server.\n\n>> We've also tried third-party connection pools and they don't seem to\n>> be real fast.\n>\n> What have you tried? Would pgbouncer work for you?\n\nWe've tried pgbouncer. It's pretty good.\n\nHere are more details on what we're running:\n\nWe have three servers: A, B, and C. All of them are on the same rack,\nsharing a gb switch.\n\nWe have a test application (Apache bench) running on A. The test app\nsends 5000 requests to our application server. We can control how\nmany requests it will send concurrently. For purposes of explanation,\nI'll refer to the concurrency parameter of the test server TCON.\n\nThe application server is (now) running on B. It's basically Apache\nwith the PHP5 module.\n\nAnd, good ol' Postgres is running on C.\n\nWe have two basic configurations.\n\nThe first configuration is with the application server using the\n\"cheap\" connection pooling. Like I said before, we do this by\nconfiguring Postgres to only allow 40 clients, and the application\nserver uses a pconnect wrapper that blocks until it gets a db\nconnection (I guess you'd call this a \"polling connection pool\"). We\nhave run the first configuration using persistent and non-persistent\nconnections.\n\nWhen we run it with persistent connections using a TCON of 40, Apache\nBench tells us that we are processing ~100 requests per second and our\nCPU utilization is up to about %80.\n\nWhen we run it with non-persistent connections using the same TCON, we\nprocess about ~30 requests per second, and our cpu utilization is at\nabout %30 (sort of a surprising drop).\n\nIf we change TCON to 200 using the persistent connection\nconfiguration, we're only able to process ~23 per second. It seems\nlike lots of failing connections from our pconnect wrapper are killing\ndb performance.\n\nThe second configuration is with pgBouncer. We configure pgBouncer to\nrun on the same server as Postgres, and we configure it to allow an\ninfinite number of incoming connections and only 40 connections to the\nactual Postgres db. We change the Postgres configuration to allow up\nto 60 clients (just to set it higher than what pgBouncer should be\nusing). Using this configuration, with TCON set to any number >= 40,\nwe can process ~83 requests per second.\n\nSo, pgBouncer is pretty good. It doesn't appear to be as good as\nlimiting TCON and using pconnect, but since we can't limit TCON in a\nproduction environment, we may not have a choice.\n\nDoes anyone know why failed db connections would have such a drastic\nperformance hit on the system? I suppose it matters how many\nconnections were attempting. Maybe we're killing ourselves with a\ndenial of service attack of sorts (hence, polling is bad). 
But, I'm\ntold that we were only checking every 1/2 second (so, basically 160\nprocesses attempting to connect every 1/2 second). I suppose 320\nattempts per second could cause a lot of interrupts and context\nswitches. I don't think we have the context switch numbers handy for\nall of those runs.\n\nMaybe we can get those numbers tomorrow.\n\nAnyway, thanks again for your help thus far.\n",
"msg_date": "Wed, 13 Jan 2010 17:37:56 -0500",
"msg_from": "Bob Dusek <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek wrote:\n\n> So, pgBouncer is pretty good. It doesn't appear to be as good as\n> limiting TCON and using pconnect, but since we can't limit TCON in a\n> production environment, we may not have a choice.\n\nIt may be worth looking into pgpool, as well. If you have a very\ncheap-to-connect-to local pool you can use non-persistent connections\n(for quick release) and the local pool takes care of maintaining and\nsharing out the expensive-to-establish real connections to Pg its self.\n\nIf you find you still can't get the throughput you need, an alternative\nto adding more hardware capacity and/or more server tuning is to look\ninto using memcached to satisfy many of the read requests for your app\nserver. Use some of that 16GB of RAM on the app server to populate a\nmemcached instance with less-frequently-changing data, and prefer to\nfetch things from memcached rather than from Pg. With a bit of work on\ndata access indirection and on invalidating things in memcached when\nthey're changed in Pg, you can get truly insane boosts to performance\n... and get more real work done in Pg by getting rid of repetitive\nqueries of relatively constant data.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 14 Jan 2010 13:45:05 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek wrote:\n>>> The problem with our \"cheap\" connection pool is that the persistent\n>>> connections don't seem to be available immediately after they're\n>>> released by the previous process. pg_close doesn't seem to help the\n>>> situation. We understand that pg_close doesn't really close a\n>>> persistent connection, but we were hoping that it would cleanly\n>>> release it for another client to use. Curious.\n>> Yeah, the persistent connects in php are kinda as dangerous as they\n>> are useful.. Have you tried using regular connects just to compare\n>> performance? On Linux they're not too bad, but on Windows (the pg\n>> server that is) it's pretty horrible performance-wise.\n> \n> Yes we have. Regular connections are pretty slow, even when our\n> application server is on the same box as the db server.\n> \n>>> We've also tried third-party connection pools and they don't seem to\n>>> be real fast.\n>> What have you tried? Would pgbouncer work for you?\n> \n> We've tried pgbouncer. It's pretty good.\n\nOh, also look into mod_dbd . With the threaded MPM it can apparently\nprovide excellent in-apache connection pooling.\n\n--\nCraig Ringer\n\n",
"msg_date": "Thu, 14 Jan 2010 13:47:13 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
{
"msg_contents": "Bob Dusek <[email protected]> writes:\n> So, pgBouncer is pretty good. It doesn't appear to be as good as\n> limiting TCON and using pconnect, but since we can't limit TCON in a\n> production environment, we may not have a choice.\n\nYou can still use pconnect() with pgbouncer, in transaction mode, if\nyour application is compatible with that (no advisory locks or other\nsession level tricks). \n\nRegards,\n-- \ndim\n",
"msg_date": "Thu, 14 Jan 2010 11:44:15 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
},
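To make "advisory locks or other session level tricks" concrete: these are the kinds of statements that misbehave behind a transaction-mode pooler, because consecutive transactions from the same client may be served by different server connections. The object names below are invented for illustration; whether any of this applies depends on the application.

    SELECT pg_advisory_lock(42);             -- the lock belongs to whichever server session ran it
    PREPARE get_user(int) AS
        SELECT * FROM users WHERE id = $1;   -- prepared statements live in one session
    SET search_path = myapp;                 -- lost once a different connection is handed out
    LISTEN cache_refresh;                    -- notifications arrive on a single session only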
{
"msg_contents": "\n> So, pgBouncer is pretty good. It doesn't appear to be as good as\n> limiting TCON and using pconnect, but since we can't limit TCON in a\n> production environment, we may not have a choice.\n\n\tActually, you can : use lighttpd and php/fastcgi.\n\n\tLighttpd handles the network stuff, and funnels/queues any number of \nclient connections into a limited number of PHP fastcgi processes. You can \nconfigure this process pool to your tastes.\n\n\tRather than instanciating 1 PHP interpreter (and 1 postgres) per client \nconnection, you can set it up for a max of N PHP procs. If PHP waits a lot \non IO (you use url fopen, that kind of things) you can set N=5..10 per \ncore, but if you don't use that, N=2-3 per core is good. It needs to be \ntuned to your application's need.\n\n\tThe idea is that if you got enough processes to keep your CPU busy, \nadding more will just fill your RAM, trash your CPU cache, add more \ncontext swithes, and generally lower your total throughput. Same is true \nfor Postgres, too.\n\n\tI've switched from apache to lighttpd on a rather busy community site and \nthe difference in performance and memory usage were quite noticeable. \nAlso, this site used MySQL (argh) so the occasional locking on some MyISAM \ntables would become really itchy unless the number of concurrent processes \nwas kept to a manageable level.\n\n\tWhen you bring down your number of postgres processes to some manageable \nlevel (plot a curve of throughput versus processes and select the \nmaximum), if postgres still spends idle time waiting for locks, you'll \nneed to do some exploration :\n\n\t- use the lock view facility in postgres\n\t- check your triggers : are you using some trigger that updates a count \nas rows are modified ? This can be a point of contention.\n\t- check your FKs too.\n\t- try fsync=off\n\t- try to put the WAL and tables on a ramdisk.\n\tIf you have even a few % iowait, maybe that hides the fact that 1 \npostmaster is fsyncing and perhaps 10 others are waiting on it to finish, \nwhich doesn't count as iowait...\n\n\t- recompile postgres and enable lwlock timing\n\n\n",
"msg_date": "Thu, 14 Jan 2010 16:59:15 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance config help"
}
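A concrete, hypothetical example of the "trigger that updates a count" contention mentioned above: when every insert has to update the same summary row, that row's lock serialises transactions that are otherwise independent. Table, column and function names are invented.

    CREATE OR REPLACE FUNCTION bump_order_count() RETURNS trigger AS $$
    BEGIN
        -- every concurrent INSERT queues up on this one row's lock
        UPDATE row_counts SET n = n + 1 WHERE tablename = 'orders';
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_bump_count
        AFTER INSERT ON orders
        FOR EACH ROW EXECUTE PROCEDURE bump_order_count();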
] |
[
{
"msg_contents": "Hi list, I'm having a problem when dealing with operations that asks too\nmuch CPU from the server.\nThe scenario is this:\n\nI have a multithreaded server, each thread with its own connection to the\ndatabase. Everything is working fine, actually great, actually\noutstandingly, in normal operation.\n\nI've a table named \"a\" with 1.8 million records, and growing, but I'm ok\nwith it, at least for the moment. Maybe in the near future we will cut it\ndown, backup old data, and free it up. But this is not the issue, as I said,\neverything is working great. I have a cpl of indexes to help some queries,\nand that's it.\n\nNow my problem started when I tried to do some model refactoring on this\nproduction table.\n\nFirst I tried a dumb approach.\nI connected from pgadmin, opened a new session.\nI tried an ALTER TABLE on this table just to turn a char(255) field into\nchar(250), and it locked up my system.\n\nNo surprise, since I had many threads waiting for this alter table to\nfinish. What I did not foresee was that this alter table would take up so\nmuch time. Ok, my fault, for not having calculated the time that it would\ntake the ALTER TABLE to complete.\n\nNow, with this experience, I tried a simple workaround.\nCreated an empty version of \"a\" named \"a_empty\", identical in every sense.\nrenamed \"a\" to \"a_full\", and \"a_empty\" to \"a\". This procedure costed me like\n0 seconds of downtime, and everything kept working smoothly. Maybe a cpl of\noperations could have failed if they tried to write in the very second that\nthere was actually no table named \"a\", but since the operation was\ntransactional, the worst scenario was that if the operation should have\nfailed, the client application would just inform of the error and ask the\nuser for a retry. No big deal.\n\nNow, this table, that is totally unattached to the system in every way (no\none references this table, its like a dumpster for old records), is not\nbegin accessed by no other thread in the system, so an ALTER table on it, to\nturn a char(255) to char(250), should have no effect on the system.\n\nSo, with this in mind, I tried the ALTER TABLE this time on the \"a_full\"\n(totally unrelated) table.\nThe system went non-responsive again, and this time it had nothing to do\nwith threads waiting for the alter table to complete. The pgAdmin GUI went\nnon-responsive, as well as the application's server GUI, whose threads kept\nworking on the background, but starting to take more and more time for every\nclients request (up to 25 seconds, which are just ridiculous and completely\nunacceptable in normal conditions).\n\nThis resulted in my client applications to start disconnecting after their\noperations failed due to timeout, and the system basically went down again,\nfrom a users point of view.\n\nThis time, since I saw no relation between my operation on a totally\nunrelated table, and the server BIG slowdown, I blamed the servers memory.\n\nAfter some tests, I came up to the conclusion that any heavy duty operation\non any thread (ALTER TABLE on 1.8 million records tables, updates on this\ntable, or an infinite loop, just to make my point), would affect the whole\nserver.\n\nBottom line is, I can't seem to do any heavy processing on the database (or\nany operation that would require the server to enter into high CPU usage),\nand still expect the server to behave normally. 
Whatever heavy duty\noperation, DDL, DML, on whatever table (related, or unrelated), on whatever\nthread, would tear down my server's integrity.\n\nMy question then is: is there a way to limit the CPU assigned to a specific\nconnection?\nI mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.\n\nSomething like:\npg_set_max_cpu_usage(2/100);\n\nand rest assured that no matter what that thread is asking the database to\ndo, it just won't affect the other running threads. Obviously, assuring that\nthe process itself does not involve any locking of the other threads.\n\nIs something like that possible?\n\nThanks in advance,\nEduardo.",
"msg_date": "Wed, 13 Jan 2010 01:59:11 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "a heavy duty operation on an \"unused\" table kills my server"
},
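For reference, the near-zero-downtime swap described above comes down to a pair of renames in one transaction; ALTER TABLE ... RENAME only updates the catalogs, so it needs a brief exclusive lock but no table rewrite. Table names follow the message above.

    BEGIN;
    ALTER TABLE a       RENAME TO a_full;
    ALTER TABLE a_empty RENAME TO a;
    COMMIT;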
{
"msg_contents": "Eduardo Piombino wrote:\n> Hi list, I'm having a problem when dealing with operations that asks too \n> much CPU from the server.\n> The scenario is this:\n\nA nice description below, but ... you give no information about your system: number of CPUs, disk types and configuration, how much memory, what have you changed in your Postgres configuration? And what operating system, what version of Postgres, etc., etc. The more information you give, the better the answer.\n\nIf you're operating on a single disk with a tiny amount of memory, and old, misconfigured Postgres on a laptop computer, that's a whole different problem than if you're on a big sytem with 16 CPUs and a huge RAID 1+0 with battery-backed cache.\n\nCraig\n\n> \n> I have a multithreaded server, each thread with its own connection to \n> the database. Everything is working fine, actually great, actually \n> outstandingly, in normal operation.\n> \n> I've a table named \"a\" with 1.8 million records, and growing, but I'm ok \n> with it, at least for the moment. Maybe in the near future we will cut \n> it down, backup old data, and free it up. But this is not the issue, as \n> I said, everything is working great. I have a cpl of indexes to help \n> some queries, and that's it.\n> \n> Now my problem started when I tried to do some model refactoring on this \n> production table.\n> \n> First I tried a dumb approach.\n> I connected from pgadmin, opened a new session.\n> I tried an ALTER TABLE on this table just to turn a char(255) field into \n> char(250), and it locked up my system.\n> \n> No surprise, since I had many threads waiting for this alter table to \n> finish. What I did not foresee was that this alter table would take up \n> so much time. Ok, my fault, for not having calculated the time that it \n> would take the ALTER TABLE to complete.\n> \n> Now, with this experience, I tried a simple workaround.\n> Created an empty version of \"a\" named \"a_empty\", identical in every sense.\n> renamed \"a\" to \"a_full\", and \"a_empty\" to \"a\". This procedure costed me \n> like 0 seconds of downtime, and everything kept working smoothly. Maybe \n> a cpl of operations could have failed if they tried to write in the very \n> second that there was actually no table named \"a\", but since the \n> operation was transactional, the worst scenario was that if the \n> operation should have failed, the client application would just inform \n> of the error and ask the user for a retry. No big deal.\n> \n> Now, this table, that is totally unattached to the system in every way \n> (no one references this table, its like a dumpster for old records), is \n> not begin accessed by no other thread in the system, so an ALTER table \n> on it, to turn a char(255) to char(250), should have no effect on the \n> system.\n> \n> So, with this in mind, I tried the ALTER TABLE this time on the \"a_full\" \n> (totally unrelated) table.\n> The system went non-responsive again, and this time it had nothing to do \n> with threads waiting for the alter table to complete. 
The pgAdmin GUI \n> went non-responsive, as well as the application's server GUI, whose \n> threads kept working on the background, but starting to take more and \n> more time for every clients request (up to 25 seconds, which are just \n> ridiculous and completely unacceptable in normal conditions).\n> \n> This resulted in my client applications to start disconnecting after \n> their operations failed due to timeout, and the system basically went \n> down again, from a users point of view.\n> \n> This time, since I saw no relation between my operation on a totally \n> unrelated table, and the server BIG slowdown, I blamed the servers memory.\n> \n> After some tests, I came up to the conclusion that any heavy duty \n> operation on any thread (ALTER TABLE on 1.8 million records tables, \n> updates on this table, or an infinite loop, just to make my point), \n> would affect the whole server.\n> \n> Bottom line is, I can't seem to do any heavy processing on the database \n> (or any operation that would require the server to enter into high CPU \n> usage), and still expect the server to behave normally. Whatever heavy \n> duty operation, DDL, DML, on whatever table (related, or unrelated), on \n> whatever thread, would tear down my servers integrity.\n> \n> My question then is: is there a way to limit the CPU assigned to a \n> specific connection?\n> I mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.\n> \n> Something like:\n> pg_set_max_cpu_usage(2/100);\n> \n> and rest assured that no matter what that thread is asking the database \n> to do, it just wont affect the other running threads. Obviosly, assuring \n> that the process itself does not involve any locking of the other threads.\n> \n> Is something like that possible?\n> \n> Thanks in advance,\n> Eduardo.\n> \n\n",
"msg_date": "Tue, 12 Jan 2010 21:14:00 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "On 13/01/2010 12:59 PM, Eduardo Piombino wrote:\n\n> My question then is: is there a way to limit the CPU assigned to a\n> specific connection?\n> I mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.\n>\n> Something like:\n> pg_set_max_cpu_usage(2/100);\n\nYou're assuming the issue is CPU. I think that unlikely. In general, a \nsingle thread/process that wants as much CPU as it can get won't bring \nany machine with a half-decent OS to its knees. Any UNIX system should \nbarely notice - everything else will slow down somewhat, depending on \nits scheduler, but in any sane setup shouldn't slow down by more than \n1/2. Modern Windows tends to be fairly well behaved here too.\n\nWhat's much more likely is that you're working with a crappy disk setup \n- such as a RAID 5 array without battery-backed cache, or a single slow \ndisk. You probably also have quite deep write queuing in the RAID \ncontroller / disk / OS. This means that your disk-intensive ALTER TABLE \nmakes your disk subsystem so busy that it takes ages before any other \nprocess gets a look-in. It's not unlikely that I/O requests are being \nqueued so deeply that it (often) takes several seconds for the \ncontroller to get around to executing a newly submitted read or write \nrequest. If your other queries need to do more than a few steps where \nthey read some data, think about it, and read other data depending on \nthe first read, then they're going to take forever, because they're \ngoing to have to ensure a long delay before disk access each time.\n\nOf course, that's just a guess, since you've provided no information on \nyour hardware. Try collecting up some of the information shown here:\n\nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems#Things_you_need_to_mention\n\nIn any case, if it *is* I/O related, what to do about it depends on \nexactly what sort of I/O issue it is. Extremely deep queuing? Looks good \nfor throughput benchmarks, but is stupid if you care about latency and \nhave some I/O that's higher priority than others, so reduce your queue \ndepth. Very slow writes hammering reads? Don't use RAID 5. Etc.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 13 Jan 2010 13:41:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "I'm sorry.\n\nThe server is a production server HP Proliant, I don't remember the exact\nmodel, but the key features were:\n4 cores, over 2GHz each (I'm sorry I don't remember the actual specs), I\nthink it had 16G of RAM (if that is possible?)\nIt has two 320G disks in RAID (mirrored).\n\nI don't even have the emails with the specs here, but I can give you the\nexact configuration by tomorrow.\n\nOperating system: Windows 2003 server, with latest patches.\nPostgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n\nAt any given time, the server is on 0% CPU load, with peaks of 1%, 2%, max.\nIn normal operation.\n\nI've been digging a little in the archives, and one thing that it helped me\ncome up with, is that I don't really remember seeing high CPU usage (fact\nthat surprised me, but i do remember seeing high IO activity). I'm sorry,\nits pretty late here.\nI know this single statement is enough to almost change everything I've just\nasked.\nPlease try interpreting again my original mail, considering that when I said\n\"high CPU usage\" It might very well be \"high IO usage\".\n\nThe final effect was that the server went non-responsive, for all matters,\nnot even the TaskManager would come up when i hit CTRL-ALT-DEL, and of\ncourse, every client would suffer horrific (+20 secs) for the simplest\noperations like SELECT NOW();\n\nI've just made a little modification to my original questions, to extend to\nthe possibility of a IO usage issue, instead of just CPU.\n\n*\n*\n>\n> *Bottom line is, I can't seem to do any heavy processing on the database\n> (or any operation that would require the server to enter into high CPU usage\n> **or IO USAGE), and still expect the server to behave normally. Whatever\n> heavy duty operation, DDL, DML, on whatever table (related, or unrelated),\n> on whatever thread, would tear down my servers integrity.*\n>\n> * My question then is: is there a way to limit the CPU or IO USAGEassigned to a specific connection?\n> *\n> * I mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.*\n>\n> * Something like:*\n> * pg_set_max_cpu _or_io_usage(2/100);*\n>\n\n\nOn Wed, Jan 13, 2010 at 2:14 AM, Craig James <[email protected]>wrote:\n\n> Eduardo Piombino wrote:\n>\n>> Hi list, I'm having a problem when dealing with operations that asks too\n>> much CPU from the server.\n>> The scenario is this:\n>>\n>\n> A nice description below, but ... you give no information about your\n> system: number of CPUs, disk types and configuration, how much memory, what\n> have you changed in your Postgres configuration? And what operating system,\n> what version of Postgres, etc., etc. The more information you give, the\n> better the answer.\n>\n> If you're operating on a single disk with a tiny amount of memory, and old,\n> misconfigured Postgres on a laptop computer, that's a whole different\n> problem than if you're on a big sytem with 16 CPUs and a huge RAID 1+0 with\n> battery-backed cache.\n>\n\nCraig\n>\n>\n>\n>> I have a multithreaded server, each thread with its own connection to the\n>> database. Everything is working fine, actually great, actually\n>> outstandingly, in normal operation.\n>>\n>> I've a table named \"a\" with 1.8 million records, and growing, but I'm ok\n>> with it, at least for the moment. Maybe in the near future we will cut it\n>> down, backup old data, and free it up. But this is not the issue, as I said,\n>> everything is working great. 
I have a cpl of indexes to help some queries,\n>> and that's it.\n>>\n>> Now my problem started when I tried to do some model refactoring on this\n>> production table.\n>>\n>> First I tried a dumb approach.\n>> I connected from pgadmin, opened a new session.\n>> I tried an ALTER TABLE on this table just to turn a char(255) field into\n>> char(250), and it locked up my system.\n>>\n>> No surprise, since I had many threads waiting for this alter table to\n>> finish. What I did not foresee was that this alter table would take up so\n>> much time. Ok, my fault, for not having calculated the time that it would\n>> take the ALTER TABLE to complete.\n>>\n>> Now, with this experience, I tried a simple workaround.\n>> Created an empty version of \"a\" named \"a_empty\", identical in every sense.\n>> renamed \"a\" to \"a_full\", and \"a_empty\" to \"a\". This procedure costed me\n>> like 0 seconds of downtime, and everything kept working smoothly. Maybe a\n>> cpl of operations could have failed if they tried to write in the very\n>> second that there was actually no table named \"a\", but since the operation\n>> was transactional, the worst scenario was that if the operation should have\n>> failed, the client application would just inform of the error and ask the\n>> user for a retry. No big deal.\n>>\n>> Now, this table, that is totally unattached to the system in every way (no\n>> one references this table, its like a dumpster for old records), is not\n>> begin accessed by no other thread in the system, so an ALTER table on it, to\n>> turn a char(255) to char(250), should have no effect on the system.\n>>\n>> So, with this in mind, I tried the ALTER TABLE this time on the \"a_full\"\n>> (totally unrelated) table.\n>> The system went non-responsive again, and this time it had nothing to do\n>> with threads waiting for the alter table to complete. The pgAdmin GUI went\n>> non-responsive, as well as the application's server GUI, whose threads kept\n>> working on the background, but starting to take more and more time for every\n>> clients request (up to 25 seconds, which are just ridiculous and completely\n>> unacceptable in normal conditions).\n>>\n>> This resulted in my client applications to start disconnecting after their\n>> operations failed due to timeout, and the system basically went down again,\n>> from a users point of view.\n>>\n>> This time, since I saw no relation between my operation on a totally\n>> unrelated table, and the server BIG slowdown, I blamed the servers memory.\n>>\n>> After some tests, I came up to the conclusion that any heavy duty\n>> operation on any thread (ALTER TABLE on 1.8 million records tables, updates\n>> on this table, or an infinite loop, just to make my point), would affect the\n>> whole server.\n>>\n>> Bottom line is, I can't seem to do any heavy processing on the database\n>> (or any operation that would require the server to enter into high CPU usage\n>> *or IO USAGE*), and still expect the server to behave normally. Whatever\n>> heavy duty operation, DDL, DML, on whatever table (related, or unrelated),\n>> on whatever thread, would tear down my servers integrity.\n>>\n>> My question then is: is there a way to limit the CPU* or **IO USAGE* assigned\n>> to a specific connection?\n>> I mean, I don't care if my ALTER TABLE takes 4 days instead of 4 hours.\n>>\n>> Something like:\n>> pg_set_max_cpu*_or_io_*usage(2/100);\n>>\n>> and rest assured that no matter what that thread is asking the database to\n>> do, it just wont affect the other running threads. 
Obviosly, assuring that\n>> the process itself does not involve any locking of the other threads.\n>>\n>> Is something like that possible?\n>>\n>> Thanks in advance,\n>> Eduardo.\n>>\n>>\n>",
"msg_date": "Wed, 13 Jan 2010 02:47:30 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
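PostgreSQL has no per-connection throttle of the pg_set_max_cpu_usage() kind asked about above, so a common workaround (not something proposed in this thread so far) is to spread the heavy rewrite out in the application: copy the table into its new shape in small key ranges, sleeping between batches so queued reads get serviced. A sketch only, assuming an integer key column id on a_full and a pre-created empty table a_new with the desired definition:

    INSERT INTO a_new SELECT * FROM a_full WHERE id >= 0     AND id < 50000;
    SELECT pg_sleep(5);   -- deliberately idle so other I/O gets a turn
    INSERT INTO a_new SELECT * FROM a_full WHERE id >= 50000 AND id < 100000;
    SELECT pg_sleep(5);
    -- ...repeat from application code until all rows are copied, then swap the names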
{
"msg_contents": "On 13/01/2010 1:47 PM, Eduardo Piombino wrote:\n> I'm sorry.\n>\n> The server is a production server HP Proliant, I don't remember the\n> exact model, but the key features were:\n> 4 cores, over 2GHz each (I'm sorry I don't remember the actual specs), I\n> think it had 16G of RAM (if that is possible?)\n> It has two 320G disks in RAID (mirrored).\n\nPlain 'ol SATA disks in RAID-1?\n\nHardware RAID (and if so, controller model)? With battery backup? Write \ncache on or off?\n\nOr software RAID? If so, Windows build-in sw raid, or some vendor's \nfakeraid (Highpoint, Promise, Adaptec, etc) ?\n\nAnyway, with two disks in RAID-1 I'm not surprised you're seeing some \nperformance issues with heavy writes, especially since it seems unlikely \nthat you have a BBU hardware RAID controller. In RAID-1 a write must hit \nboth disks, so a 1Mb write effectively costs twice as much as a 1Mb \nread. Since many controllers try for high throughput (because it looks \ngood in benchmarks) at the expense of latency they also tend to try to \nbatch writes into long blocks, which keeps the disks busy in extended \nbursts. That slaughters read latencies.\n\nI had this sort of issue with a 3Ware 8500-8, and landed up modifying \nand recompiling the driver to reduce its built-in queue depth. I also \nincreased readahead. It was still pretty awful as I was working with \nRAID 5 on SATA disks, but it made a big difference and more importantly \nmeant that my Linux server was able to honour `ionice' priorities and \nfeed more important requests to the controller first.\n\nOn windows, I really don't know what to do about it beyond getting a \nbetter I/O subsystem. Google may help - look into I/O priorities, queue \ndepths, reducing read latencies, etc.\n\n> I don't even have the emails with the specs here, but I can give you the\n> exact configuration by tomorrow.\n>\n> Operating system: Windows 2003 server, with latest patches.\n> Postgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n\nUrk. 8.2 ?\n\nPg on Windows improves a lot with each release, and that's an old buggy \nversion of 8.2 at that. Looking into an upgrade would be a really, \nREALLY good idea.\n\n> Please try interpreting again my original mail, considering that when I\n> said \"high CPU usage\" It might very well be \"high IO usage\".\n>\n> The final effect was that the server went non-responsive, for all\n> matters, not even the TaskManager would come up when i hit CTRL-ALT-DEL,\n> and of course, every client would suffer horrific (+20 secs) for the\n> simplest operations like SELECT NOW();\n\nThat sounds a LOT like horrible read latencies caused by total I/O \noverload. It could also be running out of memory and swapping heavily, \nso do keep an eye out for that, but I wouldn't expect to see that with \nan ALTER TABLE - especially on a 16GB server.\n\n> / My question then is: is there a way to limit the CPU* or **IO\n> USAGE* assigned to a specific connection?/\n\nIn win32 you can set CPU priorities manually in Task Manager, but only \nonce you already know the process ID of the Pg backend that's going to \nbe hammering the machine. Not helpful.\n\nI don't know of any way to do per-process I/O priorities in Win32, but I \nonly use win32 reluctantly and don't use it for anything I care about \n(like a production Pg server) so I'm far from a definitive source.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 13 Jan 2010 14:02:02 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "Excellent, lots of useful information in your message.\nI will follow your advices, and keep you posted on any progress. I have yet\nto confirm you with some technical details of my setup, but I'm pretty sure\nyou hit the nail in any case.\n\nOne last question, this IO issue I'm facing, do you think it is just a\nmatter of RAID configuration speed, or a matter of queue gluttony (and not\nleaving time for other processes to get into the IO queue in a reasonable\ntime)?\n\nBecause if it was just a matter of speed, ok, with my actual RAID\nconfiguration lets say it takes 10 minutes to process the ALTER TABLE\n(leaving no space to other IOs until the ALTER TABLE is done), lets say then\ni put the fastest possible RAID setup, or even remove RAID for the sake of\nspeed, and it completes in lets say again, 10 seconds (an unreal\nassumption). But if my table now grows 60 times, I would be facing the very\nsame problem again, even with the best RAID configuration.\n\nThe problem would seem to be in the way the OS (or hardware, or someone\nelse, or all of them) is/are inserting the IO requests into the queue.\nWhat can I do to control the order in which these IO requests are finally\nentered into the queue?\nI mean .. what i would like to obtain is:\n\nConsidering the ALTER TABLE as a sequence of 100.000 READ/WRITE OPERATIONS\nConsidering the SELECT * FROM xxx as a sequence of 100 READ OPERATIONS\n(totally unrelated in disk)\n\nFirst i run the ALTER TABLE on a thread...\nLets say by the time it generates 1.000 READ/WRITE OPERATIONS, the other\nthread starts with the SELECT * FROM xxx ...\nI would expect the IO system to give chance to the those 100 READ OPERATIONS\nto execute immediately (with no need to wait for the remaining 990.000\nREAD/WRITE OPERATIONS finish), that is, to enter the queue at *almost* the\nvery same moment the IO request were issued.\n\nIf I can not guarantee that, I'm kinda doomed, because the largest the\namount of IO operations requested by a \"heavy duty operation\", the longest\nit will take any other thread to start doing anything.\n\nWhat cards do I have to manipulate the order the IO requests are entered\ninto the \"queue\"?\nCan I disable this queue?\nShould I turn disk's IO operation caches off?\nNot use some specific disk/RAID vendor, for instance?\n\nI think I have some serious reading to do on this matter, google will help\nof course, but as always, every advice for small it may seem, will be very\nmuch appreciated.\n\nNonetheless, thanks a lot for all the light you already brought me on this\nmatter.\nI really appreciate it.\n\nEduardo.\n\n\n\nOn Wed, Jan 13, 2010 at 3:02 AM, Craig Ringer\n<[email protected]>wrote:\n\n> On 13/01/2010 1:47 PM, Eduardo Piombino wrote:\n>\n>> I'm sorry.\n>>\n>> The server is a production server HP Proliant, I don't remember the\n>> exact model, but the key features were:\n>> 4 cores, over 2GHz each (I'm sorry I don't remember the actual specs), I\n>> think it had 16G of RAM (if that is possible?)\n>> It has two 320G disks in RAID (mirrored).\n>>\n>\n> Plain 'ol SATA disks in RAID-1?\n>\n> Hardware RAID (and if so, controller model)? With battery backup? Write\n> cache on or off?\n>\n> Or software RAID? If so, Windows build-in sw raid, or some vendor's\n> fakeraid (Highpoint, Promise, Adaptec, etc) ?\n>\n> Anyway, with two disks in RAID-1 I'm not surprised you're seeing some\n> performance issues with heavy writes, especially since it seems unlikely\n> that you have a BBU hardware RAID controller. 
In RAID-1 a write must hit\n> both disks, so a 1Mb write effectively costs twice as much as a 1Mb read.\n> Since many controllers try for high throughput (because it looks good in\n> benchmarks) at the expense of latency they also tend to try to batch writes\n> into long blocks, which keeps the disks busy in extended bursts. That\n> slaughters read latencies.\n>\n> I had this sort of issue with a 3Ware 8500-8, and landed up modifying and\n> recompiling the driver to reduce its built-in queue depth. I also increased\n> readahead. It was still pretty awful as I was working with RAID 5 on SATA\n> disks, but it made a big difference and more importantly meant that my Linux\n> server was able to honour `ionice' priorities and feed more important\n> requests to the controller first.\n>\n> On windows, I really don't know what to do about it beyond getting a better\n> I/O subsystem. Google may help - look into I/O priorities, queue depths,\n> reducing read latencies, etc.\n>\n>\n> I don't even have the emails with the specs here, but I can give you the\n>> exact configuration by tomorrow.\n>>\n>> Operating system: Windows 2003 server, with latest patches.\n>> Postgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n>>\n>\n> Urk. 8.2 ?\n>\n> Pg on Windows improves a lot with each release, and that's an old buggy\n> version of 8.2 at that. Looking into an upgrade would be a really, REALLY\n> good idea.\n>\n>\n> Please try interpreting again my original mail, considering that when I\n>> said \"high CPU usage\" It might very well be \"high IO usage\".\n>>\n>> The final effect was that the server went non-responsive, for all\n>> matters, not even the TaskManager would come up when i hit CTRL-ALT-DEL,\n>> and of course, every client would suffer horrific (+20 secs) for the\n>> simplest operations like SELECT NOW();\n>>\n>\n> That sounds a LOT like horrible read latencies caused by total I/O\n> overload. It could also be running out of memory and swapping heavily, so do\n> keep an eye out for that, but I wouldn't expect to see that with an ALTER\n> TABLE - especially on a 16GB server.\n>\n> / My question then is: is there a way to limit the CPU* or **IO\n>> USAGE* assigned to a specific connection?/\n>>\n>\n> In win32 you can set CPU priorities manually in Task Manager, but only once\n> you already know the process ID of the Pg backend that's going to be\n> hammering the machine. Not helpful.\n>\n> I don't know of any way to do per-process I/O priorities in Win32, but I\n> only use win32 reluctantly and don't use it for anything I care about (like\n> a production Pg server) so I'm far from a definitive source.\n>\n> --\n> Craig Ringer\n>\n\nExcellent, lots of useful information in your message.I will follow your advices, and keep you posted on any progress. I have yet to confirm you with some technical details of my setup, but I'm pretty sure you hit the nail in any case.\nOne last question, this IO issue I'm facing, do you think it is just a matter of RAID configuration speed, or a matter of queue gluttony (and not leaving time for other processes to get into the IO queue in a reasonable time)?\nBecause if it was just a matter of speed, ok, with my actual RAID configuration lets say it takes 10 minutes to process the ALTER TABLE (leaving no space to other IOs until the ALTER TABLE is done), lets say then i put the fastest possible RAID setup, or even remove RAID for the sake of speed, and it completes in lets say again, 10 seconds (an unreal assumption). 
But if my table now grows 60 times, I would be facing the very same problem again, even with the best RAID configuration.\nThe problem would seem to be in the way the OS (or hardware, or someone else, or all of them) is/are inserting the IO requests into the queue.What can I do to control the order in which these IO requests are finally entered into the queue?\nI mean .. what i would like to obtain is:Considering the ALTER TABLE as a sequence of 100.000 READ/WRITE OPERATIONSConsidering the SELECT * FROM xxx as a sequence of 100 READ OPERATIONS (totally unrelated in disk)\nFirst i run the ALTER TABLE on a thread...Lets say by the time it generates 1.000 READ/WRITE OPERATIONS, the other thread starts with the SELECT * FROM xxx ...I would expect the IO system to give chance to the those 100 READ OPERATIONS to execute immediately (with no need to wait for the remaining 990.000 READ/WRITE OPERATIONS finish), that is, to enter the queue at *almost* the very same moment the IO request were issued.\nIf I can not guarantee that, I'm kinda doomed, because the largest the amount of IO operations requested by a \"heavy duty operation\", the longest it will take any other thread to start doing anything.\nWhat cards do I have to manipulate the order the IO requests are entered into the \"queue\"?Can I disable this queue?Should I turn disk's IO operation caches off?Not use some specific disk/RAID vendor, for instance?\nI think I have some serious reading to do on this matter, google will help of course, but as always, every advice for small it may seem, will be very much appreciated.Nonetheless, thanks a lot for all the light you already brought me on this matter.\nI really appreciate it.Eduardo.On Wed, Jan 13, 2010 at 3:02 AM, Craig Ringer <[email protected]> wrote:\nOn 13/01/2010 1:47 PM, Eduardo Piombino wrote:\n\nI'm sorry.\n\nThe server is a production server HP Proliant, I don't remember the\nexact model, but the key features were:\n4 cores, over 2GHz each (I'm sorry I don't remember the actual specs), I\nthink it had 16G of RAM (if that is possible?)\nIt has two 320G disks in RAID (mirrored).\n\n\nPlain 'ol SATA disks in RAID-1?\n\nHardware RAID (and if so, controller model)? With battery backup? Write cache on or off?\n\nOr software RAID? If so, Windows build-in sw raid, or some vendor's fakeraid (Highpoint, Promise, Adaptec, etc) ?\n\nAnyway, with two disks in RAID-1 I'm not surprised you're seeing some performance issues with heavy writes, especially since it seems unlikely that you have a BBU hardware RAID controller. In RAID-1 a write must hit both disks, so a 1Mb write effectively costs twice as much as a 1Mb read. Since many controllers try for high throughput (because it looks good in benchmarks) at the expense of latency they also tend to try to batch writes into long blocks, which keeps the disks busy in extended bursts. That slaughters read latencies.\n\nI had this sort of issue with a 3Ware 8500-8, and landed up modifying and recompiling the driver to reduce its built-in queue depth. I also increased readahead. It was still pretty awful as I was working with RAID 5 on SATA disks, but it made a big difference and more importantly meant that my Linux server was able to honour `ionice' priorities and feed more important requests to the controller first.\n\nOn windows, I really don't know what to do about it beyond getting a better I/O subsystem. 
Google may help - look into I/O priorities, queue depths, reducing read latencies, etc.\n\n\nI don't even have the emails with the specs here, but I can give you the\nexact configuration by tomorrow.\n\nOperating system: Windows 2003 server, with latest patches.\nPostgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n\n\nUrk. 8.2 ?\n\nPg on Windows improves a lot with each release, and that's an old buggy version of 8.2 at that. Looking into an upgrade would be a really, REALLY good idea.\n\n\nPlease try interpreting again my original mail, considering that when I\nsaid \"high CPU usage\" It might very well be \"high IO usage\".\n\nThe final effect was that the server went non-responsive, for all\nmatters, not even the TaskManager would come up when i hit CTRL-ALT-DEL,\nand of course, every client would suffer horrific (+20 secs) for the\nsimplest operations like SELECT NOW();\n\n\nThat sounds a LOT like horrible read latencies caused by total I/O overload. It could also be running out of memory and swapping heavily, so do keep an eye out for that, but I wouldn't expect to see that with an ALTER TABLE - especially on a 16GB server.\n\n\n / My question then is: is there a way to limit the CPU* or **IO\n USAGE* assigned to a specific connection?/\n\n\nIn win32 you can set CPU priorities manually in Task Manager, but only once you already know the process ID of the Pg backend that's going to be hammering the machine. Not helpful.\n\nI don't know of any way to do per-process I/O priorities in Win32, but I only use win32 reluctantly and don't use it for anything I care about (like a production Pg server) so I'm far from a definitive source.\n\n--\nCraig Ringer",
"msg_date": "Wed, 13 Jan 2010 04:03:04 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "Eduardo Piombino wrote:\n> Postgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n\nUgh...there are several features in PostgreSQL 8.3 and later \nspecifically to address the sort of issue you're running into. If you \nwant to get good write performance out of this system, you may need to \nupgrade to at least that version. It's impossible to resolve several of \nthe common problems in write operations being too intense using any 8.2 \nversion. \n\n> The final effect was that the server went non-responsive, for all \n> matters, not even the TaskManager would come up when i hit \n> CTRL-ALT-DEL, and of course, every client would suffer horrific (+20 \n> secs) for the simplest operations like SELECT NOW();\n\nThe thing that you have to realize is that altering a table is basically \nmaking a new copy of that table, which is a really heavy amount of \nwriting. It's quite easy for an I/O heavy operation like that to fill \nup a lot of RAM with data to be written out, and when the database \nperiodically needs to force all that data out to disk the whole system \ngrinds to a halt when it happens. There's no way I'm aware of to \nthrottle that writing down to a reasonable amount under Windows either, \nto achieve your goal of just making the ALTER run using less resources.\n\nSome reading:\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over \nbasic tuning of the database server. If you haven't already increased \nthe checkpoint_segments parameters of your system, that's the first \nthing to try--increase it *a lot* (32 or more, default is 3) because it \ncan really help with this problem. A moderate increase to \nshared_buffers is in order too; since you're on Windows, increasing it \nto 256MB is a reasonable change. The rest of the changes in there \naren't likely to help out with this specific problem.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm : \ncovers the most likely cause of the issue you're running into. \nUnfortunately, most of the solutions you'll see there are things changed \nin 8.3.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 13 Jan 2010 02:39:30 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
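A minimal sketch of the tuning pass Greg describes above, assuming a stock 8.2 install; the concrete values are simply the ones suggested in the message, not measured recommendations, and changing them means editing postgresql.conf and restarting the service:

    -- inspect the current values from psql
    SHOW checkpoint_segments;      -- 8.2 default is 3, far too low for heavy bulk writes
    SHOW shared_buffers;

    -- then, in postgresql.conf (restart required on Windows):
    --   checkpoint_segments = 32      -- spreads checkpoint I/O over much larger intervals
    --   shared_buffers = 256MB        -- moderate value suggested for Windows builds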
{
"msg_contents": "Yes, one of the things I will do asap is to migrate to the latest version.\n\nOn other occasion I went through the checkpoint parameters you mentioned,\nbut left them untouched since they seemed logical.\nI'm a little reluctant of changing the checkpoint configuration just to let\nme do a -once in a lifetime- ALTER.\nThe checkpoints would then remain too far away in time (or in traffic).\nAnd thinking of touching it and retouching it every time I need to do sthing\ndifferent bugs me a little. But if there is no other option I will\ndefinitely give it a try.\n\nAre you sure, for instance, that the ALTER command (and the internal data it\nmay require to handle, lets say 1.8 million records * 1024 bytes/record\n(aprox)) goes to RAM, then to disk, and gets logged in the WAL during the\nwhole process? Maybe it does not get logged at all until the ALTER is\ncompleted? Since the original table can be left untouched until this copy of\nthe table gets updated ... Just guessing here.\n\n\nOn Wed, Jan 13, 2010 at 4:39 AM, Greg Smith <[email protected]> wrote:\n\n> Eduardo Piombino wrote:\n>\n>> Postgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n>>\n>\n> Ugh...there are several features in PostgreSQL 8.3 and later specifically\n> to address the sort of issue you're running into. If you want to get good\n> write performance out of this system, you may need to upgrade to at least\n> that version. It's impossible to resolve several of the common problems in\n> write operations being too intense using any 8.2 version.\n>\n>> The final effect was that the server went non-responsive, for all matters,\n>> not even the TaskManager would come up when i hit CTRL-ALT-DEL, and of\n>> course, every client would suffer horrific (+20 secs) for the simplest\n>> operations like SELECT NOW();\n>>\n>\n> The thing that you have to realize is that altering a table is basically\n> making a new copy of that table, which is a really heavy amount of writing.\n> It's quite easy for an I/O heavy operation like that to fill up a lot of\n> RAM with data to be written out, and when the database periodically needs to\n> force all that data out to disk the whole system grinds to a halt when it\n> happens. There's no way I'm aware of to throttle that writing down to a\n> reasonable amount under Windows either, to achieve your goal of just making\n> the ALTER run using less resources.\n>\n> Some reading:\n>\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over\n> basic tuning of the database server. If you haven't already increased the\n> checkpoint_segments parameters of your system, that's the first thing to\n> try--increase it *a lot* (32 or more, default is 3) because it can really\n> help with this problem. A moderate increase to shared_buffers is in order\n> too; since you're on Windows, increasing it to 256MB is a reasonable change.\n> The rest of the changes in there aren't likely to help out with this\n> specific problem.\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm<http://www.westnet.com/%7Egsmith/content/postgresql/chkp-bgw-83.htm>: covers the most likely cause of the issue you're running into.\n> Unfortunately, most of the solutions you'll see there are things changed in\n> 8.3.\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n\nYes, one of the things I will do asap is to migrate to the latest version. 
On other occasion I went through the checkpoint parameters you mentioned, but left them untouched since they seemed logical.I'm a little reluctant of changing the checkpoint configuration just to let me do a -once in a lifetime- ALTER.\nThe checkpoints would then remain too far away in time (or in traffic).And thinking of touching it and retouching it every time I need to do sthing different bugs me a little. But if there is no other option I will definitely give it a try.\nAre you sure, for instance, that the ALTER command (and the internal data it may require to handle, lets say 1.8 million records * 1024 bytes/record (aprox)) goes to RAM, then to disk, and gets logged in the WAL during the whole process? Maybe it does not get logged at all until the ALTER is completed? Since the original table can be left untouched until this copy of the table gets updated ... Just guessing here.\nOn Wed, Jan 13, 2010 at 4:39 AM, Greg Smith <[email protected]> wrote:\nEduardo Piombino wrote:\n\nPostgres version: 8.2.4, with all defaults, except DateStyle and TimeZone.\n\n\nUgh...there are several features in PostgreSQL 8.3 and later specifically to address the sort of issue you're running into. If you want to get good write performance out of this system, you may need to upgrade to at least that version. It's impossible to resolve several of the common problems in write operations being too intense using any 8.2 version. \n\n\nThe final effect was that the server went non-responsive, for all matters, not even the TaskManager would come up when i hit CTRL-ALT-DEL, and of course, every client would suffer horrific (+20 secs) for the simplest operations like SELECT NOW();\n\n\nThe thing that you have to realize is that altering a table is basically making a new copy of that table, which is a really heavy amount of writing. It's quite easy for an I/O heavy operation like that to fill up a lot of RAM with data to be written out, and when the database periodically needs to force all that data out to disk the whole system grinds to a halt when it happens. There's no way I'm aware of to throttle that writing down to a reasonable amount under Windows either, to achieve your goal of just making the ALTER run using less resources.\n\nSome reading:\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server goes over basic tuning of the database server. If you haven't already increased the checkpoint_segments parameters of your system, that's the first thing to try--increase it *a lot* (32 or more, default is 3) because it can really help with this problem. A moderate increase to shared_buffers is in order too; since you're on Windows, increasing it to 256MB is a reasonable change. The rest of the changes in there aren't likely to help out with this specific problem.\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm : covers the most likely cause of the issue you're running into. Unfortunately, most of the solutions you'll see there are things changed in 8.3.\n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Wed, 13 Jan 2010 05:27:01 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
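One way to check how much of the rewrite really goes through WAL is to compare the WAL insert position before and after the statement. A rough sketch, using placeholder table/column names and assuming pg_current_xlog_location() is available (it is documented from 8.2 onward):

    SELECT pg_current_xlog_location();   -- note the position, e.g. 0/1A2B3C4D
    ALTER TABLE a_full ALTER COLUMN some_col TYPE char(250);
    SELECT pg_current_xlog_location();   -- the gap between the two positions is
                                         -- roughly the WAL generated by the rewrite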
{
"msg_contents": "On 13/01/2010 3:03 PM, Eduardo Piombino wrote:\n> One last question, this IO issue I'm facing, do you think it is just a\n> matter of RAID configuration speed, or a matter of queue gluttony (and\n> not leaving time for other processes to get into the IO queue in a\n> reasonable time)?\n\nHard to say with the data provided. It's not *just* a matter of a slow \narray, but that might contribute.\n\nSpecifically, though, by \"slow array\" in this case I'm looking at \nlatency rather than throughput, particularly read latency under heavy \nwrite load. Simple write throughput isn't really the issue, though bad \nwrite throughput can make it fall apart under a lighter load than it \nwould otherwise.\n\nHigh read latencies may not be caused by deep queuing, though that's one \npossible cause. A controller that prioritizes batching sequential writes \nefficiently over serving random reads would cause it too - though \nreducing its queue depth so it can't see as many writes to batch would help.\n\nLet me stress, again, that if you have a decent RAID controller with a \nbattery backed cache unit you can enable write caching and most of these \nissues just go away. Using an array format with better read/write \nconcurrency, like RAID 10, may help as well.\n\nHonestly, though, at this point you need to collect data on what the \nsystem is actually doing, what's slowing it down and where. *then* look \ninto how to address it. I can't advise you much on that as you're using \nWindows, but there must be lots of info on optimising windows I/O \nlatencies and throughput on the 'net...\n\n> Because if it was just a matter of speed, ok, with my actual RAID\n> configuration lets say it takes 10 minutes to process the ALTER TABLE\n> (leaving no space to other IOs until the ALTER TABLE is done), lets say\n> then i put the fastest possible RAID setup, or even remove RAID for the\n> sake of speed, and it completes in lets say again, 10 seconds (an unreal\n> assumption). But if my table now grows 60 times, I would be facing the\n> very same problem again, even with the best RAID configuration.\n\nOnly if the issue is one of pure write throughput. I don't think it is. \nYou don't care how long the ALTER takes, only how much it impacts other \nusers. Reducing the impact on other users so your ALTER can complete in \nits own time without stamping all over other work is the idea.\n\n> The problem would seem to be in the way the OS (or hardware, or someone\n> else, or all of them) is/are inserting the IO requests into the queue.\n\nIt *might* be. There's just not enough information to tell that yet. \nYou'll need to do quite a bit more monitoring. I don't have the \nexpertise to advise you on what to do and how to do it under Windows.\n\n> What can I do to control the order in which these IO requests are\n> finally entered into the queue?\n\nNo idea. You probably need to look into I/O priorities on Windows.\n\nIdeally you shouldn't have to, though. If you can keep read latencies at \nsane levels under high write load on your array, you don't *need* to \nmess with this.\n\nNote that I'm still guessing about the issue being high read latencies \nunder write load. 
It fits what you describe, but there isn't enough data \nto be sure, and I don't know how to collect it on Windows.\n\n> What cards do I have to manipulate the order the IO requests are entered\n> into the \"queue\"?\n> Can I disable this queue?\n> Should I turn disk's IO operation caches off?\n> Not use some specific disk/RAID vendor, for instance?\n\nDon't know. Contact your RAID card tech support, Google, search MSDN, etc.\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 13 Jan 2010 18:58:43 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
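Part of the "collect data first" advice can be done from inside the database while a stall is happening. A sketch using the 8.2-era column names (procpid and current_query were renamed in later releases), assuming command-string statistics are enabled; backends that are slow but not marked waiting point at I/O rather than locks:

    SELECT procpid, usename, waiting, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;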
{
"msg_contents": "Eduardo Piombino escreveu:\n> Maybe it does not get logged at all until the ALTER is completed?\n> \nThis feature [1] was implemented a few months ago and it will be available\nonly in the next PostgreSQL version (8.5).\n\n[1] http://archives.postgresql.org/pgsql-committers/2009-11/msg00018.php\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n",
"msg_date": "Wed, 13 Jan 2010 12:06:57 -0200",
"msg_from": "Euler Taveira de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "With that said, I assume my current version of pgsql DOES make all this\nheavy work go through WAL logging.\n\nCurious thing is that I remember (of course) reviewing logs of the crash\ntimes, and I didn't see anything strange, not even the famous warning \"you\nare making checkpoints too often. maybe you should consider using extending\nthe checkpoint_segments parameter\".\n\nI will check it again.\nBesides, I will try to gather as much information on the system itself (RAID\ncontrollers, disk vendors, etc).\nThank you, will keep you posted.\n\nOn Wed, Jan 13, 2010 at 11:06 AM, Euler Taveira de Oliveira <\[email protected]> wrote:\n\n> Eduardo Piombino escreveu:\n> > Maybe it does not get logged at all until the ALTER is completed?\n> >\n> This feature [1] was implemented a few months ago and it will be available\n> only in the next PostgreSQL version (8.5).\n>\n> [1] http://archives.postgresql.org/pgsql-committers/2009-11/msg00018.php\n>\n>\n> --\n> Euler Taveira de Oliveira\n> http://www.timbira.com/\n>\n\nWith that said, I assume my current version of pgsql DOES make all this heavy work go through WAL logging.Curious thing is that I remember (of course) reviewing logs of the crash times, and I didn't see anything strange, not even the famous warning \"you are making checkpoints too often. maybe you should consider using extending the checkpoint_segments parameter\".\nI will check it again.Besides, I will try to gather as much information on the system itself (RAID controllers, disk vendors, etc).Thank you, will keep you posted.On Wed, Jan 13, 2010 at 11:06 AM, Euler Taveira de Oliveira <[email protected]> wrote:\nEduardo Piombino escreveu:\n> Maybe it does not get logged at all until the ALTER is completed?\n>\nThis feature [1] was implemented a few months ago and it will be available\nonly in the next PostgreSQL version (8.5).\n\n[1] http://archives.postgresql.org/pgsql-committers/2009-11/msg00018.php\n\n\n--\n Euler Taveira de Oliveira\n http://www.timbira.com/",
"msg_date": "Wed, 13 Jan 2010 12:53:59 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
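If the "checkpoints are occurring too frequently" warning never appeared, it may just mean the checkpoint_warning threshold was never crossed. The settings involved can be inspected safely at any time:

    SHOW checkpoint_segments;   -- WAL volume between forced checkpoints (default 3 on 8.2)
    SHOW checkpoint_timeout;    -- time-based checkpoint interval (default 5min)
    SHOW checkpoint_warning;    -- warn in the log when checkpoints come closer together than this (default 30s)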
{
"msg_contents": "On Wed, Jan 13, 2010 at 2:03 AM, Eduardo Piombino <[email protected]> wrote:\n> Excellent, lots of useful information in your message.\n> I will follow your advices, and keep you posted on any progress. I have yet\n> to confirm you with some technical details of my setup, but I'm pretty sure\n> you hit the nail in any case.\n>\n> One last question, this IO issue I'm facing, do you think it is just a\n> matter of RAID configuration speed, or a matter of queue gluttony (and not\n> leaving time for other processes to get into the IO queue in a reasonable\n> time)?\n>\n> Because if it was just a matter of speed, ok, with my actual RAID\n> configuration lets say it takes 10 minutes to process the ALTER TABLE\n> (leaving no space to other IOs until the ALTER TABLE is done), lets say then\n> i put the fastest possible RAID setup, or even remove RAID for the sake of\n> speed, and it completes in lets say again, 10 seconds (an unreal\n> assumption). But if my table now grows 60 times, I would be facing the very\n> same problem again, even with the best RAID configuration.\n>\n> The problem would seem to be in the way the OS (or hardware, or someone\n> else, or all of them) is/are inserting the IO requests into the queue.\n> What can I do to control the order in which these IO requests are finally\n> entered into the queue?\n> I mean .. what i would like to obtain is:\n>\n> Considering the ALTER TABLE as a sequence of 100.000 READ/WRITE OPERATIONS\n> Considering the SELECT * FROM xxx as a sequence of 100 READ OPERATIONS\n> (totally unrelated in disk)\n>\n> First i run the ALTER TABLE on a thread...\n> Lets say by the time it generates 1.000 READ/WRITE OPERATIONS, the other\n> thread starts with the SELECT * FROM xxx ...\n> I would expect the IO system to give chance to the those 100 READ OPERATIONS\n> to execute immediately (with no need to wait for the remaining 990.000\n> READ/WRITE OPERATIONS finish), that is, to enter the queue at *almost* the\n> very same moment the IO request were issued.\n>\n> If I can not guarantee that, I'm kinda doomed, because the largest the\n> amount of IO operations requested by a \"heavy duty operation\", the longest\n> it will take any other thread to start doing anything.\n\nOne thing you can do - although it's a darn lot of work compared to\njust running a DDL command - is create a new empty table with the\nschema you want and then write a script that copies, say, 1000 records\nfrom the old table to the new table. If your table has a primary key\nwith a natural sort ordering, it's not too hard to keep track of where\nyou left off the last time and continue on from there. Then you can\nincrementally get all of your data over without swamping the system.\nI realize that's a pain in the neck, of course.\n\nI'm kind of surprised that there are disk I/O subsystems that are so\nbad that a single thread doing non-stop I/O can take down the whole\nserver. Is that normal? Does it happen on non-Windows operating\nsystems? What kind of hardware should I not buy to make sure this\ndoesn't happen to me?\n\n...Robert\n",
"msg_date": "Wed, 13 Jan 2010 11:23:00 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
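A sketch of Robert's batched-copy idea, assuming a hypothetical integer primary key named id, a pre-created empty target table a_new with the desired column definition and the same column order, and a driver script that remembers the last id copied. Each batch commits on its own, so the write load stays small and the process can be paused at any point:

    INSERT INTO a_new
    SELECT *
    FROM a_full
    WHERE id > 123000        -- placeholder: the highest id copied by the previous batch
    ORDER BY id
    LIMIT 1000;

    SELECT max(id) FROM a_new;   -- feed this back in as the starting point of the next batch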
{
"msg_contents": "On Tue, Jan 12, 2010 at 9:59 PM, Eduardo Piombino <[email protected]> wrote:\n...\n\n> Now, with this experience, I tried a simple workaround.\n> Created an empty version of \"a\" named \"a_empty\", identical in every sense.\n> renamed \"a\" to \"a_full\", and \"a_empty\" to \"a\". This procedure costed me like\n> 0 seconds of downtime, and everything kept working smoothly. Maybe a cpl of\n> operations could have failed if they tried to write in the very second that\n> there was actually no table named \"a\", but since the operation was\n> transactional, the worst scenario was that if the operation should have\n> failed, the client application would just inform of the error and ask the\n> user for a retry. No big deal.\n>\n> Now, this table, that is totally unattached to the system in every way (no\n> one references this table, its like a dumpster for old records), is not\n> begin accessed by no other thread in the system, so an ALTER table on it, to\n> turn a char(255) to char(250), should have no effect on the system.\n>\n> So, with this in mind, I tried the ALTER TABLE this time on the \"a_full\"\n> (totally unrelated) table.\n> The system went non-responsive again, and this time it had nothing to do\n> with threads waiting for the alter table to complete. The pgAdmin GUI went\n> non-responsive, as well as the application's server GUI, whose threads kept\n> working on the background, but starting to take more and more time for every\n> clients request (up to 25 seconds, which are just ridiculous and completely\n> unacceptable in normal conditions).\n\nOK, I'm not entirely sure this table is not still locking something\nelse. If you make a copy by doing something like:\n\nselect * into test_table from a;\n\nand then alter test_table do you still get the same problems? If so,\nthen it is an IO issue, most likely. If not, then there is some\nclient connection still referencing this table or something and that\ncould cause this type of behaviour as well.\n",
"msg_date": "Wed, 13 Jan 2010 09:52:23 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
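Spelled out, the isolation test Scott suggests would look something like this (table and column names are placeholders; the copy itself is a heavy write, so it is best tried off-hours):

    SELECT * INTO test_table FROM a_full;                         -- brand-new table, shares no locks with anything
    ALTER TABLE test_table ALTER COLUMN some_col TYPE char(250);  -- does the server still stall?
    DROP TABLE test_table;                                        -- clean up afterwards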
{
"msg_contents": "> OK, I'm not entirely sure this table is not still locking something\n> else. If you make a copy by doing something like:\n>\n> select * into test_table from a;\n>\n> and then alter test_table do you still get the same problems? If so,\n> then it is an IO issue, most likely. If not, then there is some\n> client connection still referencing this table or something and that\n> could cause this type of behaviour as well.\n>\n\nI can guarantee you that the table is not being referenced by any other\nthread, table or process, and that it is totally unrelated to everything\nelse in the system.\n\nIts just a plain table, with 1.8 million records, that no thread knows it\nexists. It has no foreign keys that would allow thinking of a possible\n\"lock\" on the parent table, nor it is being referenced by any other table in\nthe model. It has no triggers associated, and no indexes. It could very well\neven be on another database on the same physical server, and still do the\nsame damage. I did not try this, but I'm pretty sure of the outcome. I\nwould'nt like to bring the server down just to prove this, but I will do it\nif I find it necessary.\n\nThe only things that are common to this table and other tables in the\nsystem, as I see are:\nRAM, IO, and CPU, at a very low level. One of these is being stressed out by\nthe thread executing the ALTER, and the other threads (not just pgsql\napplication threads, but system processes in general) suffer from the lack\nof this resource. All the previous discussions tend to induce that the\nresource we are talking about is IO.\n\nThe fact that the Task Manager does not come up, would also not be explained\nby a lock in a client thread.\nBesides all that, all the client queries are NO WAIT, thus any lock would\njust return immediately, and no retry would be done until the response gets\nback to the user and the user confirms it. In that case, all the errors\npresented to the final users would be \"The element is being processed some\nother place\", as my default handler to pgsql error code \"55P03\", instead of\nthe horrible \"Operation timed out\", that is what final users got during the\nhuge slowdown/downtime.\n\n\n\n\n\n--\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nOK, I'm not entirely sure this table is not still locking something\nelse. If you make a copy by doing something like:\n\nselect * into test_table from a;\n\nand then alter test_table do you still get the same problems? If so,\nthen it is an IO issue, most likely. If not, then there is some\nclient connection still referencing this table or something and that\ncould cause this type of behaviour as well.I can guarantee you that the table is not being referenced by any other thread, table or process, and that it is totally unrelated to everything else in the system.\nIts just a plain table, with 1.8 million records, that no thread knows it exists. It has no foreign keys that would allow thinking of a possible \"lock\" on the parent table, nor it is being referenced by any other table in the model. It has no triggers associated, and no indexes. It could very well even be on another database on the same physical server, and still do the same damage. I did not try this, but I'm pretty sure of the outcome. 
I would'nt like to bring the server down just to prove this, but I will do it if I find it necessary.\nThe only things that are common to this table and other tables in the system, as I see are:RAM, IO, and CPU, at a very low level. One of these is being stressed out by the thread executing the ALTER, and the other threads (not just pgsql application threads, but system processes in general) suffer from the lack of this resource. All the previous discussions tend to induce that the resource we are talking about is IO. \nThe fact that the Task Manager does not come up, would also not be explained by a lock in a client thread. Besides all that, all the client queries are NO WAIT, thus any lock would just return immediately, and no retry would be done until the response gets back to the user and the user confirms it. In that case, all the errors presented to the final users would be \"The element is being processed some other place\", as my default handler to pgsql error code \"55P03\", instead of the horrible \"Operation timed out\", that is what final users got during the huge slowdown/downtime.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 13 Jan 2010 14:54:36 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
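For reference, the NOWAIT pattern described above looks roughly like this (placeholder names; SQLSTATE 55P03 is lock_not_available, which the client traps instead of blocking):

    BEGIN;
    SELECT * FROM some_table WHERE id = 42 FOR UPDATE NOWAIT;   -- fails with 55P03 if the row is locked
    -- ... do the work ...
    COMMIT;

    -- the same idea at table granularity (LOCK must run inside a transaction block):
    BEGIN;
    LOCK TABLE some_table IN ACCESS EXCLUSIVE MODE NOWAIT;
    COMMIT;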
{
"msg_contents": "On Wed, Jan 13, 2010 at 10:54 AM, Eduardo Piombino <[email protected]> wrote:\n>\n>> OK, I'm not entirely sure this table is not still locking something\n>> else. If you make a copy by doing something like:\n>>\n>> select * into test_table from a;\n>>\n>> and then alter test_table do you still get the same problems? If so,\n>> then it is an IO issue, most likely. If not, then there is some\n>> client connection still referencing this table or something and that\n>> could cause this type of behaviour as well.\n>\n> I can guarantee you that the table is not being referenced by any other\n> thread, table or process, and that it is totally unrelated to everything\n> else in the system.\n\nIf you rename a table that WAS being referenced by other threads, then\nit might still be being accessed or waited on etc by those threads, as\ntheir transaction would have started earlier.\n\nThe only way you can guarantee it's not being reference in some way is\nto create it fresh and new as I suggested and test on that. Until\nthen, your guarantee is based on a belief, not verifiable fact. I too\ntend to believe this is an IO problem btw, but claiming that it can't\nbe a problem with some locks without looking at pg_locks at least, is\na bit premature.\n",
"msg_date": "Wed, 13 Jan 2010 11:11:32 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
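One direct way to turn Scott's "belief vs. verifiable fact" point into data is to look at pg_locks while the ALTER is running. A sketch using 8.2/8.3 column names (pg_stat_activity.procpid became pid in later releases), with the table names as placeholders:

    SELECT l.pid, l.mode, l.granted, c.relname, a.current_query
    FROM pg_locks l
    JOIN pg_class c ON c.oid = l.relation
    LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE c.relname IN ('a', 'a_full');   -- any other session touching these tables will show up here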
{
"msg_contents": "Robert Haas wrote:\n> I'm kind of surprised that there are disk I/O subsystems that are so\n> bad that a single thread doing non-stop I/O can take down the whole\n> server. Is that normal? Does it happen on non-Windows operating\n> systems? What kind of hardware should I not buy to make sure this\n> doesn't happen to me?\n> \nYou can kill any hardware on any OS with the right abusive client. \nCreate a wide table and insert a few million records into it with \ngenerate_series one day and watch what it does to queries trying to run \nin parallel with that.\n\nI think the missing step here to nail down exactly what's happening on \nEduardo's system is that he should open up some of the Windows system \nmonitoring tools, look at both disk I/O and CPU usage, and then watch \nwhat changes when the troublesome ALTER TABLE shows up.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 13 Jan 2010 13:46:03 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
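For anyone who wants to reproduce that kind of abusive client on a test box (not on production), a minimal version is just a wide table filled from generate_series; the names and row count here are arbitrary:

    CREATE TABLE io_stress (id integer, payload text);
    INSERT INTO io_stress
    SELECT i, repeat('x', 1000)                  -- roughly 1 kB per row
    FROM generate_series(1, 5000000) AS s(i);    -- a few million rows = several GB of writes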
{
"msg_contents": "Greg, I will post more detailed data as soon as I'm able to gather it.\n\nI was trying out if the cancellation of the ALTER cmd worked ok, I might\ngive the ALTER another try, and see how much CPU, RAM and IO usage gets\ninvolved. I will be doing this monitoring with the process explorer from\nsysinternals, but I don't know how I can make it to log the results. Do you\nknow any tool that you have used that can help me generate this evidence? I\nwill google a little as soon as possible.\n\n\nOn Wed, Jan 13, 2010 at 3:46 PM, Greg Smith <[email protected]> wrote:\n\n> Robert Haas wrote:\n>\n>> I'm kind of surprised that there are disk I/O subsystems that are so\n>> bad that a single thread doing non-stop I/O can take down the whole\n>> server. Is that normal? Does it happen on non-Windows operating\n>> systems? What kind of hardware should I not buy to make sure this\n>> doesn't happen to me?\n>>\n>>\n> You can kill any hardware on any OS with the right abusive client. Create\n> a wide table and insert a few million records into it with generate_series\n> one day and watch what it does to queries trying to run in parallel with\n> that.\n>\n> I think the missing step here to nail down exactly what's happening on\n> Eduardo's system is that he should open up some of the Windows system\n> monitoring tools, look at both disk I/O and CPU usage, and then watch what\n> changes when the troublesome ALTER TABLE shows up.\n>\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n\nGreg, I will post more detailed data as soon as I'm able to gather it.I was trying out if the cancellation of the ALTER cmd worked ok, I might give the ALTER another try, and see how much CPU, RAM and IO usage gets involved. I will be doing this monitoring with the process explorer from sysinternals, but I don't know how I can make it to log the results. Do you know any tool that you have used that can help me generate this evidence? I will google a little as soon as possible.\nOn Wed, Jan 13, 2010 at 3:46 PM, Greg Smith <[email protected]> wrote:\nRobert Haas wrote:\n\nI'm kind of surprised that there are disk I/O subsystems that are so\nbad that a single thread doing non-stop I/O can take down the whole\nserver. Is that normal? Does it happen on non-Windows operating\nsystems? What kind of hardware should I not buy to make sure this\ndoesn't happen to me?\n \n\nYou can kill any hardware on any OS with the right abusive client. Create a wide table and insert a few million records into it with generate_series one day and watch what it does to queries trying to run in parallel with that.\n\nI think the missing step here to nail down exactly what's happening on Eduardo's system is that he should open up some of the Windows system monitoring tools, look at both disk I/O and CPU usage, and then watch what changes when the troublesome ALTER TABLE shows up.\n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Wed, 13 Jan 2010 16:13:20 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "Robert Haas wrote:\n\n> I'm kind of surprised that there are disk I/O subsystems that are so\n> bad that a single thread doing non-stop I/O can take down the whole\n> server. Is that normal?\n\nNo.\n\n> Does it happen on non-Windows operating\n> systems? \n\nYes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a\nterminal would go from a 1/4 second operation to a 5 minute operation\nunder heavy write load by one writer. I landed up having to modify the\ndriver to partially mitigate the issue, but a single user on the\nterminal server performing any sort of heavy writing would still\nabsolutely nuke performance.\n\nI landed up having dramatically better results by disabling the\ncontroller's RAID features, instead exposing each disk to the OS\nseparately and using Linux's software RAID.\n\n> What kind of hardware should I not buy to make sure this\n> doesn't happen to me?\n\n3ware's older cards. Apparently their new ones are a lot better, but I\nhaven't verified this personally.\n\nAnything in RAID-5 without a BBU.\n\nAnything at all without a BBU, preferably.\n\n--\nCraig Ringer\n",
"msg_date": "Thu, 14 Jan 2010 13:36:54 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "On 1/13/2010 11:36 PM, Craig Ringer wrote:\n> Robert Haas wrote:\n>\n>> I'm kind of surprised that there are disk I/O subsystems that are so\n>> bad that a single thread doing non-stop I/O can take down the whole\n>> server. Is that normal?\n>\n> No.\n>\n>> Does it happen on non-Windows operating\n>> systems?\n>\n> Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a\n> terminal would go from a 1/4 second operation to a 5 minute operation\n> under heavy write load by one writer. I landed up having to modify the\n> driver to partially mitigate the issue, but a single user on the\n> terminal server performing any sort of heavy writing would still\n> absolutely nuke performance.\n\nOn a side note, on linux, would using the deadline scheduler resolve that?\n\n-Andy\n",
"msg_date": "Thu, 14 Jan 2010 08:49:36 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "\n> \"high CPU usage\" It might very well be \"high IO usage\".\n\n\tTry this :\n\n\tCopy (using explorer, the shell, whatever) a huge file.\n\tThis will create load similar to ALTER TABLE.\n\tMeasure throughput, how much is it ?\n\n\tIf your server blows up just like it did on ALTER TABLE, you got a IO \nsystem problem.\n\tIf everything is smooth, you can look into other things.\n\n\tHow's your fragmentation ? Did the disk ever get full ? What does the \ntask manager say (swap in/out, disk queue lengthn etc)\n\n\tPS : try a separate tablespace on another disk.\n",
"msg_date": "Thu, 14 Jan 2010 17:16:11 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
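Pierre's tablespace suggestion, sketched with a placeholder path (the directory must already exist, be empty, and be owned by the service account, and it only helps if it really is a separate physical disk or array). Note that SET TABLESPACE itself rewrites the whole table, so it is the same kind of bulk write, just aimed at the other spindle:

    CREATE TABLESPACE bulkwork LOCATION 'E:/pg_bulkwork';
    ALTER TABLE a_full SET TABLESPACE bulkwork;   -- subsequent rewrites of this table hit the other disk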
{
"msg_contents": "Andy Colson wrote:\n> On 1/13/2010 11:36 PM, Craig Ringer wrote:\n>> Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a\n>> terminal would go from a 1/4 second operation to a 5 minute operation\n>> under heavy write load by one writer. I landed up having to modify the\n>> driver to partially mitigate the issue, but a single user on the\n>> terminal server performing any sort of heavy writing would still\n>> absolutely nuke performance.\n>\n> On a side note, on linux, would using the deadline scheduler resolve \n> that?\n\nI've never seen the deadline scheduler resolve anything. If you're out \nof I/O capacity and that's blocking other work, performance is dominated \nby the policies of the underlying controller/device caches. Think about \nit a minute: disks nowadays can easily have 32MB of buffer in them, \nright? And random read/write operations are lucky to clear 2MB/s on \ncheap drivers. So once the drive is filled with requests, you can \neasily sit there for ten seconds before the scheduler even has any input \non resolving the situation. That's even more true if you've got a \nlarger controller cache in the mix.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 14 Jan 2010 13:07:44 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "On 1/14/2010 12:07 PM, Greg Smith wrote:\n> Andy Colson wrote:\n>> On 1/13/2010 11:36 PM, Craig Ringer wrote:\n>>> Yes. My 3ware 8500-8 on a Debian Sarge box was so awful that launching a\n>>> terminal would go from a 1/4 second operation to a 5 minute operation\n>>> under heavy write load by one writer. I landed up having to modify the\n>>> driver to partially mitigate the issue, but a single user on the\n>>> terminal server performing any sort of heavy writing would still\n>>> absolutely nuke performance.\n>>\n>> On a side note, on linux, would using the deadline scheduler resolve\n>> that?\n>\n> I've never seen the deadline scheduler resolve anything. If you're out\n> of I/O capacity and that's blocking other work, performance is dominated\n> by the policies of the underlying controller/device caches. Think about\n> it a minute: disks nowadays can easily have 32MB of buffer in them,\n> right? And random read/write operations are lucky to clear 2MB/s on\n> cheap drivers. So once the drive is filled with requests, you can easily\n> sit there for ten seconds before the scheduler even has any input on\n> resolving the situation. That's even more true if you've got a larger\n> controller cache in the mix.\n>\n\nThat makes sense. So if there is very little io, or if there is way way \ntoo much, then the scheduler really doesn't matter. So there is a slim \nmiddle ground where the io is within a small percent of the HD capacity \nwhere the scheduler might make a difference?\n\n-Andy\n",
"msg_date": "Thu, 14 Jan 2010 12:15:33 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Andy Colson wrote:\n> So if there is very little io, or if there is way way too much, then \n> the scheduler really doesn't matter. So there is a slim middle ground \n> where the io is within a small percent of the HD capacity where the \n> scheduler might make a difference?\n\nThat's basically how I see it. There seem to be people who run into \nworkloads in the middle ground where the scheduler makes a world of \ndifference. I've never seen one myself, and suspect that some of the \nreports of deadline being a big improvement just relate to some buginess \nin the default CFQ implementation that I just haven't encountered.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 14 Jan 2010 13:30:46 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Regarding the hardware the system is running on:\n\nIt's an HP Proliant DL-180 G5 server.\n\nHere are the specs... our actual configuration only has one CPU, and 16G of\nRAM.\nThe model of the 2 disks I will post later today, when I get to the server.\nI was with many things, sorry.\n\nhttp://h18000.www1.hp.com/products/quickspecs/12903_na/12903_na.HTML\nhttp://h18004.www1.hp.com/products/quickspecs/DS_00126/DS_00126.pdf\n\n*At A Glance\n*The HP ProLiant DL180 G5 is a low cost high capacity storage optimized\n2-way server that delivers on a history of design excellence and 2U density\nfor a variety of rack deployments and applications.\n\n - Processors:\n - Supports up to two Quad-Core Intel® Xeon® processors: 5400 sequence\n with 12MB Level 2 cache\n - Intel® 5100 Chipset\n - Memory:\n - Up to 32 GB of memory supported by six (6) PC2-5300 (667 MHz) DDR2\n memory slots\n - Internal Drive Support:\n - Supports up to twelve via CTO with controller or up to eight via BTO\n with the addition of a controller:\n - Hot Plug Serial ATA (SATA) 3.5\"hard drives; or\n - Hot Plug Serial Attached SCSI (SAS) 3.5\"hard drives\n *NOTE:* 4 hard drives are supported standard via BTO. 8 hard drive\n support requires the addition of a Smart Array or HBA\ncontroller. Hot Plug\n and SAS functionality require the addition of a Smart Array or HBA\n controller. 12 hard drive support available via CTO only and\nrequires a SAS\n controller that supports expanders.\n - Internal storage capacity:\n - SATA Models: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard drives)\n - SAS Model: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard drives)\n - Network Controller:\n - One integrated NC105i PCI-e Gigabit NIC (embedded) (Wake on LAN and\n PXE capable)\n - Storage Controllers:\n - HP Embedded SATA RAID Controller (up to 4 hard drive support on\n standard BTO models)\n *NOTE:* Transfer rate 1.5 Gb/s SATA\n - Expansion Slots:\n - One available Low Profile x8 PCI-Express slot using a Low profile\n Riser.\n - Two Full Height/ Full Length Riser options\n - Option1: 2 full-length/full-height PCI-Express x8 connector slots\n (x4 electrical - Standard)\n - Option2: full-length/full-height riser with 2 PCI-X\n Slots(Optional)\n - Infrastructure Management:\n - Optional HP Lights Out 100c Remote Management card with Virtual KVM\n and Virtual Media support (includes IPMI2.0 and SMASH support)\n - USB Ports:\n - Seven USB ports (2) front, (4) rear, (1) internal\n - Optical Drive:\n - Support for one:\n - Optional Multi-bay DVD\n - Optional Floppy (USB only, USB key)\n - Power Supply:\n - 750W Power Supply (Optional Redundancy Hot Plug, Autoswitching) CSCI\n 2007/8\n - 1200W High Efficiency Power Supply (Optional Redundancy Hot Plug,\n Autoswitching) (Optional) CSCI 2007/8\n - *NOTE:* Climate Savers Computing Initiative, 2007-2008 Compliant\n - Form Factor:\n - 2U rack models\n\n\nRegarding the SATA RAID controller, on the other spec pages it says that for\nthe 8 disks model (ours), it comes with a Smart Array E200. 
I will try to\ncheck out if we are using the original, since I recall hearing something\nabout that our disks were SAS (Serial Attached SCSI), and I don't know if it\nis possible to connect those disks to embedded Smart Array E200 controller.\nWould it be possible?\n\nOn Wed, Jan 13, 2010 at 4:13 PM, Eduardo Piombino <[email protected]> wrote:\n\n> Greg, I will post more detailed data as soon as I'm able to gather it.\n>\n> I was trying out if the cancellation of the ALTER cmd worked ok, I might\n> give the ALTER another try, and see how much CPU, RAM and IO usage gets\n> involved. I will be doing this monitoring with the process explorer from\n> sysinternals, but I don't know how I can make it to log the results. Do you\n> know any tool that you have used that can help me generate this evidence? I\n> will google a little as soon as possible.\n>\n>\n>\n> On Wed, Jan 13, 2010 at 3:46 PM, Greg Smith <[email protected]> wrote:\n>\n>> Robert Haas wrote:\n>>\n>>> I'm kind of surprised that there are disk I/O subsystems that are so\n>>> bad that a single thread doing non-stop I/O can take down the whole\n>>> server. Is that normal? Does it happen on non-Windows operating\n>>> systems? What kind of hardware should I not buy to make sure this\n>>> doesn't happen to me?\n>>>\n>>>\n>> You can kill any hardware on any OS with the right abusive client. Create\n>> a wide table and insert a few million records into it with generate_series\n>> one day and watch what it does to queries trying to run in parallel with\n>> that.\n>>\n>> I think the missing step here to nail down exactly what's happening on\n>> Eduardo's system is that he should open up some of the Windows system\n>> monitoring tools, look at both disk I/O and CPU usage, and then watch what\n>> changes when the troublesome ALTER TABLE shows up.\n>>\n>>\n>> --\n>> Greg Smith 2ndQuadrant Baltimore, MD\n>> PostgreSQL Training, Services and Support\n>> [email protected] www.2ndQuadrant.com\n>>\n>>\n>\n\nRegarding the hardware the system is running on: It's an HP Proliant DL-180 G5 server.Here are the specs... our actual configuration only has one CPU, and 16G of RAM.The model of the 2 disks I will post later today, when I get to the server.\nI was with many things, sorry.http://h18000.www1.hp.com/products/quickspecs/12903_na/12903_na.HTMLhttp://h18004.www1.hp.com/products/quickspecs/DS_00126/DS_00126.pdf\nAt \nA Glance \n \n The HP ProLiant \n DL180 G5 is a low cost high capacity storage optimized 2-way server that \n delivers on a history of design excellence and 2U density for a variety \n of rack deployments and applications. \n Processors: \n Supports up to two Quad-Core Intel® Xeon® processors: \n 5400 sequence with 12MB Level 2 cacheIntel® 5100 Chipset \nMemory: \n Up to 32 GB of memory supported by six (6) PC2-5300 (667 MHz) \n DDR2 memory slots\nInternal Drive Support: \n Supports up to twelve via CTO with controller or up to eight via \n BTO with the addition of a controller: \n Hot Plug Serial ATA (SATA) 3.5\"hard drives; orHot Plug Serial Attached SCSI (SAS) 3.5\"hard drives\nNOTE: 4 hard drives are \n supported standard via BTO. 8 hard drive support requires the \n addition of a Smart Array or HBA controller. Hot Plug and SAS \n functionality require the addition of a Smart Array or HBA controller. 
\n 12 hard drive support available via CTO only and requires a \n SAS controller that supports expanders.\nInternal storage capacity: \n SATA Models: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard \n drives)SAS Model: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard \n drives)\n\nNetwork Controller: \n One integrated NC105i PCI-e Gigabit NIC (embedded) (Wake on LAN \n and PXE capable)\nStorage Controllers: \n HP Embedded SATA RAID Controller (up to 4 hard drive support on \n standard BTO models)\nNOTE: Transfer rate 1.5 Gb/s \n SATA\nExpansion Slots: \n One available Low Profile x8 PCI-Express slot using a Low profile \n Riser.Two Full Height/ Full Length Riser options \n Option1: 2 full-length/full-height PCI-Express x8 connector \n slots (x4 electrical - Standard)Option2: full-length/full-height riser with 2 PCI-X Slots(Optional)\n\nInfrastructure Management: \n Optional HP Lights Out 100c Remote Management card with Virtual \n KVM and Virtual Media support (includes IPMI2.0 and SMASH support)\nUSB Ports: \n Seven USB ports (2) front, (4) rear, (1) internal \nOptical Drive: \n Support for one: \n Optional Multi-bay DVD Optional Floppy (USB only, USB key)\n\nPower Supply: \n 750W Power Supply (Optional Redundancy Hot Plug, Autoswitching) \n CSCI 2007/81200W High Efficiency Power Supply (Optional Redundancy Hot Plug, \n Autoswitching) (Optional) CSCI 2007/8 \n NOTE: Climate Savers Computing \n Initiative, 2007-2008 Compliant\n\nForm Factor: \n 2U rack models \nRegarding the SATA RAID controller, on the other spec pages it says that for the 8 disks model (ours), it comes with a Smart Array E200. I will try to check out if we are using the original, since I recall hearing something about that our disks were SAS (Serial Attached SCSI), and I don't know if it is possible to connect those disks to embedded Smart Array E200 controller. Would it be possible?\nOn Wed, Jan 13, 2010 at 4:13 PM, Eduardo Piombino <[email protected]> wrote:\nGreg, I will post more detailed data as soon as I'm able to gather it.I was trying out if the cancellation of the ALTER cmd worked ok, I might give the ALTER another try, and see how much CPU, RAM and IO usage gets involved. I will be doing this monitoring with the process explorer from sysinternals, but I don't know how I can make it to log the results. Do you know any tool that you have used that can help me generate this evidence? I will google a little as soon as possible.\n\nOn Wed, Jan 13, 2010 at 3:46 PM, Greg Smith <[email protected]> wrote:\nRobert Haas wrote:\n\nI'm kind of surprised that there are disk I/O subsystems that are so\nbad that a single thread doing non-stop I/O can take down the whole\nserver. Is that normal? Does it happen on non-Windows operating\nsystems? What kind of hardware should I not buy to make sure this\ndoesn't happen to me?\n \n\nYou can kill any hardware on any OS with the right abusive client. Create a wide table and insert a few million records into it with generate_series one day and watch what it does to queries trying to run in parallel with that.\n\nI think the missing step here to nail down exactly what's happening on Eduardo's system is that he should open up some of the Windows system monitoring tools, look at both disk I/O and CPU usage, and then watch what changes when the troublesome ALTER TABLE shows up.\n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Thu, 14 Jan 2010 17:49:04 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "Regarding the EA-200 card, here are the specs.\nIt seems it has support for SAS disks, so it is most probably that we are\nusing the embedded/default controller.\n\nhttp://h18000.www1.hp.com/products/quickspecs/12460_div/12460_div.html\nhttp://h18000.www1.hp.com/products/quickspecs/12460_div/12460_div.pdf\n\n*Key Features *\n\n - Seamless upgrades from past generations and upgrades to next generation\n HP high performance and high capacity Serial Attached SCSI Smart Array\n controllers.\n - 3G SAS technology delivers high performance and data bandwidth up to\n 300 MB\\s per physical link and contains full compatibility with 1.5G SATA\n technology.\n - x4 2.5G PCI Express host interface technology delivers high performance\n and data bandwidth up to 2 GB/s maximum bandwidth.\n - Addition of the battery backed cache upgrade enables BBWC, RAID 5,\n Capacity Expansion, RAID migration, and Stripe Size Migration.\n - Mix-and-match SAS and SATA hard drives, lets you deploy drive\n technology as needed to fit your computing environment.\n - Support for up to 2 TB in a single logical drive.\n - Software consistency among all Smart Array family products: Array\n Configuration Utility (ACU), Option ROM Configuration for Arrays (ORCA),\n Systems Insight Manager, Array Diagnostic Utility (ADU) and SmartStart. Some\n of these features are not available with ProLiant 100 series platforms.\n - The SA-E200 controller supports up to 8 drives. The SA-E200i supports\n 2-8 drives depending on the server implementation.\n\n\n*Performance*\n\nHP's High Performance Architecture sets new boundaries of industry\nperformance expectations!\n\n - 3Gb/s SAS (300MB/s bandwidth per physical link)\n - x8 3Gb/s SAS physical links (compatible with 1.5G SATA)\n - 64 MB or 128 MB DDR1-266 battery-backed cache provides up to 4.2 GB/s\n maximum bandwidth.\n - x4 2.5G PCI Express host interface provides 2 GB/s maximum bandwidth.\n - MIPS 32-bit Processor\n - Read ahead caching\n - Write-back caching (with battery-backed write cache upgrade)\n\n\n*Capacity *\n\nGiven the increasing need for high performance and rapid capacity expansion,\nthe SA-E200 offers:\n\n - Up to 6TB of total storage with 6 x 1TB SATA MDL hard drives (3.5\")\n *NOTE:* Support for greater than 2TB in a single logical drive.\n - Up to 2.4TB of total storage with 8 x 300GB SFF SAS hard drives\n\n\nOn Thu, Jan 14, 2010 at 5:49 PM, Eduardo Piombino <[email protected]> wrote:\n\n> Regarding the hardware the system is running on:\n>\n> It's an HP Proliant DL-180 G5 server.\n>\n> Here are the specs... 
our actual configuration only has one CPU, and 16G of\n> RAM.\n> The model of the 2 disks I will post later today, when I get to the server.\n> I was with many things, sorry.\n>\n> http://h18000.www1.hp.com/products/quickspecs/12903_na/12903_na.HTML\n> http://h18004.www1.hp.com/products/quickspecs/DS_00126/DS_00126.pdf\n>\n> *At A Glance\n> *The HP ProLiant DL180 G5 is a low cost high capacity storage optimized\n> 2-way server that delivers on a history of design excellence and 2U density\n> for a variety of rack deployments and applications.\n>\n> - Processors:\n> - Supports up to two Quad-Core Intel® Xeon® processors: 5400\n> sequence with 12MB Level 2 cache\n> - Intel® 5100 Chipset\n> - Memory:\n> - Up to 32 GB of memory supported by six (6) PC2-5300 (667 MHz) DDR2\n> memory slots\n> - Internal Drive Support:\n> - Supports up to twelve via CTO with controller or up to eight via\n> BTO with the addition of a controller:\n> - Hot Plug Serial ATA (SATA) 3.5\"hard drives; or\n> - Hot Plug Serial Attached SCSI (SAS) 3.5\"hard drives\n> *NOTE:* 4 hard drives are supported standard via BTO. 8 hard\n> drive support requires the addition of a Smart Array or HBA controller. Hot\n> Plug and SAS functionality require the addition of a Smart Array or HBA\n> controller. 12 hard drive support available via CTO only and requires a SAS\n> controller that supports expanders.\n> - Internal storage capacity:\n> - SATA Models: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard drives)\n> - SAS Model: Up to 12.0TB (12 x 1TB Hot Plug 3.5\" hard drives)\n> - Network Controller:\n> - One integrated NC105i PCI-e Gigabit NIC (embedded) (Wake on LAN\n> and PXE capable)\n> - Storage Controllers:\n> - HP Embedded SATA RAID Controller (up to 4 hard drive support on\n> standard BTO models)\n> *NOTE:* Transfer rate 1.5 Gb/s SATA\n> - Expansion Slots:\n> - One available Low Profile x8 PCI-Express slot using a Low profile\n> Riser.\n> - Two Full Height/ Full Length Riser options\n> - Option1: 2 full-length/full-height PCI-Express x8 connector\n> slots (x4 electrical - Standard)\n> - Option2: full-length/full-height riser with 2 PCI-X\n> Slots(Optional)\n> - Infrastructure Management:\n> - Optional HP Lights Out 100c Remote Management card with Virtual\n> KVM and Virtual Media support (includes IPMI2.0 and SMASH support)\n> - USB Ports:\n> - Seven USB ports (2) front, (4) rear, (1) internal\n> - Optical Drive:\n> - Support for one:\n> - Optional Multi-bay DVD\n> - Optional Floppy (USB only, USB key)\n> - Power Supply:\n> - 750W Power Supply (Optional Redundancy Hot Plug, Autoswitching)\n> CSCI 2007/8\n> - 1200W High Efficiency Power Supply (Optional Redundancy Hot Plug,\n> Autoswitching) (Optional) CSCI 2007/8\n> - *NOTE:* Climate Savers Computing Initiative, 2007-2008\n> Compliant\n> - Form Factor:\n> - 2U rack models\n>\n>\n> Regarding the SATA RAID controller, on the other spec pages it says that\n> for the 8 disks model (ours), it comes with a Smart Array E200. 
I will try\n> to check out if we are using the original, since I recall hearing something\n> about that our disks were SAS (Serial Attached SCSI), and I don't know if it\n> is possible to connect those disks to embedded Smart Array E200 controller.\n> Would it be possible?\n>\n>\n> On Wed, Jan 13, 2010 at 4:13 PM, Eduardo Piombino <[email protected]>wrote:\n>\n>> Greg, I will post more detailed data as soon as I'm able to gather it.\n>>\n>> I was trying out if the cancellation of the ALTER cmd worked ok, I might\n>> give the ALTER another try, and see how much CPU, RAM and IO usage gets\n>> involved. I will be doing this monitoring with the process explorer from\n>> sysinternals, but I don't know how I can make it to log the results. Do you\n>> know any tool that you have used that can help me generate this evidence? I\n>> will google a little as soon as possible.\n>>\n>>\n>>\n>> On Wed, Jan 13, 2010 at 3:46 PM, Greg Smith <[email protected]> wrote:\n>>\n>>> Robert Haas wrote:\n>>>\n>>>> I'm kind of surprised that there are disk I/O subsystems that are so\n>>>> bad that a single thread doing non-stop I/O can take down the whole\n>>>> server. Is that normal? Does it happen on non-Windows operating\n>>>> systems? What kind of hardware should I not buy to make sure this\n>>>> doesn't happen to me?\n>>>>\n>>>>\n>>> You can kill any hardware on any OS with the right abusive client.\n>>> Create a wide table and insert a few million records into it with\n>>> generate_series one day and watch what it does to queries trying to run in\n>>> parallel with that.\n>>>\n>>> I think the missing step here to nail down exactly what's happening on\n>>> Eduardo's system is that he should open up some of the Windows system\n>>> monitoring tools, look at both disk I/O and CPU usage, and then watch what\n>>> changes when the troublesome ALTER TABLE shows up.\n>>>\n>>>\n>>> --\n>>> Greg Smith 2ndQuadrant Baltimore, MD\n>>> PostgreSQL Training, Services and Support\n>>> [email protected] www.2ndQuadrant.com\n>>>\n>>>\n>>\n>\n",
"msg_date": "Thu, 14 Jan 2010 18:01:33 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "On Thu, 14 Jan 2010, Greg Smith wrote:\n> Andy Colson wrote:\n>> So if there is very little io, or if there is way way too much, then the \n>> scheduler really doesn't matter. So there is a slim middle ground where \n>> the io is within a small percent of the HD capacity where the scheduler \n>> might make a difference?\n>\n> That's basically how I see it. There seem to be people who run into \n> workloads in the middle ground where the scheduler makes a world of \n> difference. I've never seen one myself, and suspect that some of the reports \n> of deadline being a big improvement just relate to some buginess in the \n> default CFQ implementation that I just haven't encountered.\n\nThat's the perception I get. CFQ is the default scheduler, but in most \nsystems I have seen, it performs worse than the other three schedulers, \nall of which seem to have identical performance. I would avoid \nanticipatory on a RAID array though.\n\nIt seems to me that CFQ is simply bandwidth limited by the extra \nprocessing it has to perform.\n\nMatthew\n\n-- \n Experience is what allows you to recognise a mistake the second time you\n make it.\n",
"msg_date": "Fri, 15 Jan 2010 10:55:00 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Matthew Wakeling wrote:\n> On Thu, 14 Jan 2010, Greg Smith wrote:\n>> Andy Colson wrote:\n>>> So if there is very little io, or if there is way way too much, then \n>>> the scheduler really doesn't matter. So there is a slim middle \n>>> ground where the io is within a small percent of the HD capacity \n>>> where the scheduler might make a difference?\n>>\n>> That's basically how I see it. There seem to be people who run into \n>> workloads in the middle ground where the scheduler makes a world of \n>> difference. I've never seen one myself, and suspect that some of the \n>> reports of deadline being a big improvement just relate to some \n>> buginess in the default CFQ implementation that I just haven't \n>> encountered.\n> \n> That's the perception I get. CFQ is the default scheduler, but in most \n> systems I have seen, it performs worse than the other three schedulers, \n> all of which seem to have identical performance. I would avoid \n> anticipatory on a RAID array though.\n\nI thought the best strategy for a good RAID controller was NOOP. Anything the OS does just makes it harder for the RAID controller to do its job. With a direct-attached disk, the OS knows where the heads are, but with a battery-backed RAID controller, the OS has no idea what's actually happening.\n\nCraig\n",
"msg_date": "Fri, 15 Jan 2010 09:09:46 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "On Fri, 15 Jan 2010, Craig James wrote:\n>> That's the perception I get. CFQ is the default scheduler, but in most \n>> systems I have seen, it performs worse than the other three schedulers, all \n>> of which seem to have identical performance. I would avoid anticipatory on \n>> a RAID array though.\n>\n> I thought the best strategy for a good RAID controller was NOOP.\n\nAgreed. That's what we use here. My observation is though that noop is \nidentical in performance to anticipatory and deadline. Theoretically, it \nshould be faster.\n\nMatthew\n\n-- \n\"Take care that thou useth the proper method when thou taketh the measure of\n high-voltage circuits so that thou doth not incinerate both thee and the\n meter; for verily, though thou has no account number and can be easily\n replaced, the meter doth have one, and as a consequence, bringeth much woe\n upon the Supply Department.\" -- The Ten Commandments of Electronics\n",
"msg_date": "Fri, 15 Jan 2010 17:16:54 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Matthew Wakeling wrote:\n> CFQ is the default scheduler, but in most systems I have seen, it \n> performs worse than the other three schedulers, all of which seem to \n> have identical performance. I would avoid anticipatory on a RAID array \n> though.\n>\n> It seems to me that CFQ is simply bandwidth limited by the extra \n> processing it has to perform.\n\nI'm curious what you are doing when you see this. I've got several \nhundred hours worth of pgbench data on CFQ vs. deadline from a couple of \nsystem collected over the last three years, and I've never seen either a \nclear deadline win or a major CFQ failing. Most results are an even tie, \nwith the occasional mild preference for CFQ under really brutal loads.\n\nMy theory has been that the \"extra processing it has to perform\" you \ndescribe just doesn't matter in the context of a fast system where \nphysical I/O is always the bottleneck. I'd really like to have a \ncompelling reason to prefer deadline, because the concept seems better, \nbut I just haven't seen the data to back that up.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 13:00:11 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Eduardo Piombino wrote:\n> Going to the disk properties (in windows), I just realized it does not \n> have the Write Cache enabled, and it doesn't also allow me to set it \n> up. I've read in google that the lack of ability to turn it on (that \n> is, that the checkbox remains checked after you apply the changes), \n> has to do with the lack of batter backup in the controller (which is \n> default bundle option for embedded EA-200, which is our case).\n>\n> Regarding actual disk performance, I did some silly tests:\n> Copied a 496 Mbytes file from a folder to another folder in C: and it \n> took almost 90 secs.\n> That would be 496MB/90 sec = 5.51MB/sec\n>\n\nI'd suggest http://www.hdtune.com/ as a better way to test transfer \nspeed here across the drive(s).\n\nI think you'll find that your server continues to underperform \nexpectations until you get the battery installed that allows turning the \nwrite cache on. A quick look at HP's literature suggests they believe \nyou only need the battery to enable the write-cache if you're using \nRAID5. That's completely wrong for database use, where you will greatly \nbenefit from it regardless of underlying RAID setup. If you've got an \nEA-200 but don't have a battery for it to unlock all the features, \nyou're unlikely to find a more cost effect way to improve your system \nthan to buy one.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 17:32:32 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
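A rough way to cross-check Eduardo's 5.51MB/sec file-copy number from inside PostgreSQL itself is to time a bulk load, which exercises the same write path the ALTER TABLE does. This is only a sketch, not a substitute for a proper tool like hdtune: the table name and row count are made up, and the result is a crude lower bound on sequential write throughput.

\timing
CREATE TABLE io_scratch AS
  SELECT g AS id, repeat('x', 500) AS filler
    FROM generate_series(1, 1000000) g;
SELECT pg_size_pretty(pg_relation_size('io_scratch'));  -- divide this by the elapsed time
DROP TABLE io_scratch;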
{
"msg_contents": "I will give it a try, thanks.\n\nHowever, besides all the analysis and tests and stats that I've been\ncollecting, I think the point of discussion turned into if my hardware is\ngood enough, and if it can keep up with the needs in normal, or even\nheaviest users load. And if that is the question, the answer would be yes,\nit does. The whole system performs outstandingly well under the maximum\nstress users can ever request.\nOf course it could be better, and that of course would be fantastic, but I\nhave the feeling that in this setup, buying more hardware, replace parts,\netc, would be just a temporary fix (maybe temporary = forever in this\ncontext). I'm not saying that I won't buy that battery for the card, no,\nbecause that will greatly boost my performance for this kind of\nadministrative background tasks, but my point is that current hardware,\nseems more than sufficient for current users needs.\n\nWhat I'm trying to say is that:\nI think pg is wasting resources, it could be very well taking advantage of,\nif you guys just tell me get better hardware. I mean ... the IO subsystem is\nobviously the bottleneck of my system. But most of the time it is on very\nvery light load, actually ALL of the time, unless I do some heavy background\nprocessing like the original ALTER, or the procedure that updates 800.000\nrows. What I would consider to be a great feature, would be able to tell\npgsql, that a certain operation, is not time critical, so that it does not\ntry to use all the available IO subsystem at once, but rather rationalize\nits use. Maybe that's the operating system / disk controller / other system\ncomponent responsibility, and in this case it's others module fault, that it\nturns out that a single process has full control over a shared resource.\n\nThe \"whole system\" failed to rationalize the IO subsystem use, and I agree\nthat it is not pgsql fault, at all.\nBut already knowing that the base system (i.e. components out of pg's\ncontrol, like OS, hardware, etc) may be \"buggy\" or that it can fail in\nrationalizing the IO, maybe it would be nice to tell to whoever is\nresponsible for making use of the IO subsystem (pg_bg_writer?), to use it in\na moderately manner. That is ... This operation is not critical, please do\nnot trash my system because it is not necessary. Use all the delays you\nwould like, go slowly, please, I don't really care if you take a month. Or\nat least, be aware of current status of the IO system. If it is being busy,\nslow down, if it is free, speed up. Of course I won't care if it takes less\ntime to complete.\n\nToday, one can rationalize use of CPU, with a simple pg_sleep() call.\nIt would be nice to have maybe an ALTER table option (for ALTERs) or an\noption in the BEGIN transaction command, that would let me say:\nBEGIN SLOW TRANSACTION;\nor BEGIN TRANSACTION RATIONALIZE_IO;\nindicating that all the IO operations that are going to be performed in this\ntransaction, are not time critical, and thus, there is no need to put the\nsystem in risk of a IO storm, just for a silly set of updates, that no one\nis waiting for.\n\nSo if that feature was available, there would be no need for me (or maybe,\nthousands of pg users), to upgrade hardware just to be able to perform a\nsingle, unrelated, operation. I mean, the hardware is there, and is working\npretty good. 
If I could just tell pg that I don't care if an operation takes\nall the time in the world, I think that would be awesome, and it would be\nmaking the MOST of every possible hardware configuration.\n\nI truly love pg. I just feel that something is not quite right the moment I\nam required to upgrade my hardware, knowing that at any given time, I have\n90% of the IO subsystem idle, that could be very well used in a better\nfashion, and now would be completely wasted.\n\nWell thank you, just some thoughts. And if the idea of a RATIONALIZED\ntransaction picks up, I would be more than glad to help implement it or to\nhelp in any other way I can.\n\nBest regards,\nEduardo.\n\n\nOn Fri, Jan 15, 2010 at 7:32 PM, Greg Smith <[email protected]> wrote:\n\n> Eduardo Piombino wrote:\n>\n>> Going to the disk properties (in windows), I just realized it does not\n>> have the Write Cache enabled, and it doesn't also allow me to set it up.\n>> I've read in google that the lack of ability to turn it on (that is, that\n>> the checkbox remains checked after you apply the changes), has to do with\n>> the lack of batter backup in the controller (which is default bundle option\n>> for embedded EA-200, which is our case).\n>>\n>> Regarding actual disk performance, I did some silly tests:\n>> Copied a 496 Mbytes file from a folder to another folder in C: and it took\n>> almost 90 secs.\n>> That would be 496MB/90 sec = 5.51MB/sec\n>>\n>>\n> I'd suggest http://www.hdtune.com/ as a better way to test transfer speed\n> here across the drive(s).\n>\n> I think you'll find that your server continues to underperform expectations\n> until you get the battery installed that allows turning the write cache on.\n> A quick look at HP's literature suggests they believe you only need the\n> battery to enable the write-cache if you're using RAID5. That's completely\n> wrong for database use, where you will greatly benefit from it regardless of\n> underlying RAID setup. If you've got an EA-200 but don't have a battery for\n> it to unlock all the features, you're unlikely to find a more cost effect\n> way to improve your system than to buy one.\n>\n>\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Fri, 15 Jan 2010 21:47:35 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
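For what it's worth, the closest thing to a "rationalized" bulk update available today is to drive it from the client in small batches, committing each batch and sleeping in between so the I/O queue can drain. A minimal sketch, assuming a hypothetical big_table(id, processed) standing in for the 800.000-row case; the batch size and sleep time are guesses to be tuned against the controller, not recommendations:

-- repeat from a client loop until 0 rows are updated, one transaction per batch
UPDATE big_table SET processed = true
 WHERE id IN (SELECT id FROM big_table WHERE NOT processed LIMIT 10000);
SELECT pg_sleep(5);  -- give the I/O subsystem breathing room before the next batch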
{
"msg_contents": "Eduardo Piombino wrote:\n> But already knowing that the base system (i.e. components out of pg's \n> control, like OS, hardware, etc) may be \"buggy\" or that it can fail in \n> rationalizing the IO, maybe it would be nice to tell to whoever is \n> responsible for making use of the IO subsystem (pg_bg_writer?), to use \n> it in a moderately manner. That is ... This operation is not critical, \n> please do not trash my system because it is not necessary. Use all the \n> delays you would like, go slowly, please, I don't really care if you \n> take a month. Or at least, be aware of current status of the IO \n> system. If it is being busy, slow down, if it is free, speed up. Of \n> course I won't care if it takes less time to complete.\n\nThere are three problems here:\n\n1) The background writer does not have a central role in the I/O of the \nsystem, and even if it did that would turn into a scalability issue. \nClients initiate a lot of work on their own, and it's not so easy to \nactually figure out where to put a limiter at given that.\n\n2) PostgreSQL aims to be cross-platform, and writing something that \nadjusts operations based on what the OS is doing requires a lot of \nOS-specific code. You end up needing to write a whole new library for \nevery platform you want to support.\n\n3) Everyone who is spending money/time improving PostgreSQL has things \nthey think are more important to work on than resource limiters, so \nthere's just not anybody working on this.\n\nYour request is completely reasonable and there are plenty of uses for \nit. It's just harder than it might seem to build. One day we may find \nsomeone with money to spend who can justify sponsoring development in \nthis area because it's a must-have for their PostgreSQL deployment. I \nassure you that any number of people reading this list would be happy to \nquote out that job.\n\nBut right now, there is no such sponsor I'm aware of. That means the \nbest we can do is try and help people work around the issues they do run \ninto in the most effective way possible, which in your case has wandered \ninto this investigation of your underlying disk subsystem. It's not \nthat we don't see that an alternate approach would make the problem go \naway, the code needed just isn't available, and other project \ndevelopment work (like the major replication advance that was just \ncommitted today) are seen as more important.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 21:25:43 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "Eduardo Piombino wrote:\n\n> I think pg is wasting resources, it could be very well taking advantage\n> of, if you guys just tell me get better hardware. I mean ... the IO\n> subsystem is obviously the bottleneck of my system. But most of the time\n> it is on very very light load, actually ALL of the time, unless I do\n> some heavy background processing like the original ALTER, or the\n> procedure that updates 800.000 rows. What I would consider to be a great\n> feature, would be able to tell pgsql, that a certain operation, is not\n> time critical, so that it does not try to use all the available IO\n> subsystem at once, but rather rationalize its use.\n\nRate-limiting (or preferably prioritizing) I/O from Pg would be nice.\n\nIt's already possible to prioritize I/O from Pg, though, albeit somewhat\nclumsily:\n\n http://wiki.postgresql.org/wiki/Priorities\n\n... as the OS provides I/O priority features. Pg shouldn't have to\nre-implement those, only provide more convenient access to them.\n\n( On Windows? Who knows. If you find out how to set I/O priorities on\nWindows please extend that article! )\n\nThe trouble is that if you have a crappy RAID setup, the OS's I/O\npriorities may be ineffective. The OS will do its best to prioritize\nanything else over your don't-care-how-long-it-takes backend's query,\nbut if the RAID controller is queuing requests seconds-deep nothing the\nOS does will make any real difference.\n\nTo my eternal frustration, there don't seem to be any RAID controllers\nthat have any concept of I/O priorities. I'd love Linux to be able to\nsubmit requests to different queues within the controller depending on\npriority, so low priority requests only got serviced if the\nhigher-priority queue was empty. AFAIK there isn't really anything like\nthat out there, though - all the RAID controllers seem to be built for\noverall throughput at the expense of request latency to one extent or\nanother.\n\nSo ... your can prioritize I/O in the OS as much as you like, but your\nRAID controller may merrily undermine all your work.\n\nDoing it within Pg would suffer from many of the same issues. Pg has no\nway to know how deeply the controller is queuing requests and when it's\nactually finished a request, so it it's very hard for Pg to rate-limit\nit's I/O effectively for low-priority work. It doesn't know how to\nstrike a balance between sending requests too fast (ruining latency for\nhigher priority work) and sending far too few (so taking forever for the\nlow priority work). What's insanely conservative on some hardware is\ninsanely too much to ask from other hardware. To be sure the controller\nis done with a set of writes and ready for another, you'd have to\nfsync() and that'd be murderous on performance, completely ruining any\nbenefits gained from pacing the work.\n\nIt's also complicated by the fact that Pg's architecture is very poorly\nsuited to prioritizing I/O based on query or process. (AFAIK) basically\nall writes go through shared_buffers and the bgwriter - neither Pg nor\nin fact the OS know what query or what backend created a given set of\nblock writes.\n\nTo be able to effectively prioritize I/O you'd really have to be able to\nbypass the bgwriter, instead doing the writes direct from the low\npriority backend after ionice()ing or otherwise setting up low OS-level\nI/O priorities. 
Even then, RAID-controller level queuing and buffering\nmight land up giving most of the I/O bandwidth to the low priority work\nanyway.\n\nI guess some kind of dynamic rate-limiting could theoretically also\nallow Pg to write at (say) 50% of the device's write capacity at any\ngiven time, but the multiple layers of buffering and the dynamic load\nchanges in the system would make it incredibly hard to effectively\nevaluate what the system's write capacity actually was. You'd probably\nhave to run a dedicated Pg benchmark to generate some parameters to\ncalibrate low priority write rates... but they'd still change depending\non the random vs seq I/O mix of other processes and Pg backends on the\nsystem, the amount of fsync() activity, etc etc etc. It's a more\ncomplicated (!) version of the problem of rate-limiting TCP/IP data sending.\n\n( Actually, implementing something akin to TCP/IP connection rate\nlimiting for allocating I/O write bandwidth in low-priority connections\nwould be ... fascinating. I'm sure the people who write OS write\nschedulers and priority systems like ionice have looked into it and\nfound reasons why it's not suitable. )\n\n\nThe point of all that rambling: it's not as easy as just adding query\npriorities to Pg!\n\n> responsible for making use of the IO subsystem (pg_bg_writer?), to use\n> it in a moderately manner. That is ... This operation is not critical,\n> please do not trash my system because it is not necessary. Use all the\n> delays you would like, go slowly, please, I don't really care if you\n> take a month.\n\nTrouble is, that's a rather rare case. Usually you *do* care if it takes\na month vs a week, because you're worried about lock times.\n\n> Or at least, be aware of current status of the IO system.\n> If it is being busy, slow down, if it is free, speed up. Of course I\n> won't care if it takes less time to complete.\n\nThere just isn't the visibility into the OS and hardware level to know\nthat. Alas. At best you can measure how long it takes for the OS to\nreturn from an I/O request or fsync() ... but all the caching and\nbuffering and queuing means that bears little relationship to the\ncapacity of the system.\n\n> Today, one can rationalize use of CPU, with a simple pg_sleep() call.\n> It would be nice to have maybe an ALTER table option (for ALTERs) or an\n> option in the BEGIN transaction command, that would let me say:\n> BEGIN SLOW TRANSACTION;\n> or BEGIN TRANSACTION RATIONALIZE_IO;\n> indicating that all the IO operations that are going to be performed in\n> this transaction, are not time critical, and thus, there is no need to\n> put the system in risk of a IO storm, just for a silly set of updates,\n> that no one is waiting for.\n\nI'd love that myself - if it could be made to work fairly simply. I'm\nnot sure it can.\n\nIn reality it'd probably have to look more like:\n\nBEGIN SLOW TRANSACTION WITH\n io_max_ops_persec = 5\n io_max_bytes_written_persec = 10000;\n\nwhere those params would pretty much be \"make it up and see what works\"\nstuff with a bit of benchmark guidance.\n\nMaybe that'd still be useful. If so, you'd need to answer how to\nseparate such low-priority I/O out so the bgwriter could rate-limit it\nseparately, or how to bypass the bgwriter for such I/O.\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 16 Jan 2010 11:59:57 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
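To make the wiki recipe concrete: the OS-level tools need the PID of the backend doing the low-priority work. A small sketch using the 8.x catalog column names (procpid/current_query); the ionice call in the comment is Linux-specific, needs appropriate privileges, and only has an effect under the CFQ scheduler:

SELECT procpid, current_query
  FROM pg_stat_activity
 WHERE current_query LIKE 'ALTER TABLE%';
-- then, on Linux:  ionice -c3 -p <procpid>   (puts that backend in the idle I/O class)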
{
"msg_contents": "Craig Ringer wrote:\n> It's also complicated by the fact that Pg's architecture is very poorly\n> suited to prioritizing I/O based on query or process. (AFAIK) basically\n> all writes go through shared_buffers and the bgwriter - neither Pg nor\n> in fact the OS know what query or what backend created a given set of\n> block writes.\n\nYou're correct that all writes go through shared_buffers, and all \ninformation about the query that dirties the page in the first place is \ngone by the time it's written out. In 8.3 and later, buffers get \nwritten three ways:\n\n(1) A backend needs to allocate a buffer to do some work. The buffer it \nis allocated is dirty. In this case, the backend itself ends up writing \nthe page out.\n\n(2) The background writer monitors how many allocations are going on, \nand it tries to keep ahead of the backends by writing pages likely to be \nre-used in the near future out before (1) happens. (This is the part \nthat was different in earlier versions--the background writer just \nroamed the whole buffer cache looking for work to do before, unrelated \nto the amount of activity on the system).\n\n(3) Checkpoints (which are also executed by the background writer) have \nto write out every dirty buffer in order to reconcile everything between \nmemory and disk.\n\nOne reason you can't just ionice the backend and make all the problems \ngo away is (3); you can't let a sluggish backend stop checkpoints from \nhappening.\n\nYou might note that only one of these sources--a backend allocating a \nbuffer--is connected to the process you want to limit. If you think of \nthe problem from that side, it actually becomes possible to do something \nuseful here. The most practical way to throttle something down without \na complete database redesign is to attack the problem via allocation. \nIf you limited the rate of how many buffers a backend was allowed to \nallocate and dirty in the first place, that would be extremely effective \nin limiting its potential damage to I/O too, albeit indirectly. Trying \nto limit the damage on the write and OS side instead is a dead end, \nyou'll never make that work without a major development job--one that I \nwould bet against ever being committed even if someone did it for a \nspecific platform, because they're all going to be so different and the \ncode so hackish.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 23:43:55 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
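As an aside, the three write paths listed above can be watched directly in the pg_stat_bgwriter view (8.3 and later), which is a cheap way to see how much of the I/O during a heavy operation really comes from the backend itself versus the background writer and checkpoints:

SELECT buffers_backend,     -- (1) writes done by backends that had to allocate dirty buffers
       buffers_clean,       -- (2) writes done by the background writer ahead of demand
       buffers_checkpoint   -- (3) writes done as part of checkpoints
  FROM pg_stat_bgwriter;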
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> You might note that only one of these sources--a backend allocating a \n> buffer--is connected to the process you want to limit. If you think of \n> the problem from that side, it actually becomes possible to do something \n> useful here. The most practical way to throttle something down without \n> a complete database redesign is to attack the problem via allocation. \n> If you limited the rate of how many buffers a backend was allowed to \n> allocate and dirty in the first place, that would be extremely effective \n> in limiting its potential damage to I/O too, albeit indirectly.\n\nThis is in fact exactly what the vacuum_cost_delay logic does.\nIt might be interesting to investigate generalizing that logic\nso that it could throttle all of a backend's I/O not just vacuum.\nIn principle I think it ought to work all right for any I/O-bound\nquery.\n\nBut, as noted upthread, this is not high on the priority list\nof any of the major developers.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 16 Jan 2010 00:18:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my server "
},
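For readers following along, this is what the existing vacuum-only throttle looks like in practice. The parameters are real GUCs that can be set per session; the values below are only illustrative starting points and the table name is hypothetical:

SET vacuum_cost_delay = 20;     -- sleep 20ms each time the cost budget is used up
SET vacuum_cost_limit = 200;    -- cost units accumulated before each sleep
VACUUM VERBOSE some_big_table;
RESET vacuum_cost_delay;
RESET vacuum_cost_limit;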
{
"msg_contents": "Tom Lane wrote:\n> This is in fact exactly what the vacuum_cost_delay logic does.\n> It might be interesting to investigate generalizing that logic\n> so that it could throttle all of a backend's I/O not just vacuum.\n> In principle I think it ought to work all right for any I/O-bound\n> query.\n> \n\nSo much for inventing a new idea; never considered that parallel \nbefore. The logic is perfectly reusable, not so sure how much of the \nimplementation would be though.\n\nI think the main difference is that there's one shared VacuumCostBalance \nto worry about, whereas each backend that might be limited would need \nits own clear scratchpad to accumulate costs into. That part seems \nsimilar to how the new EXPLAIN BUFFERS capability instruments things \nthough, which was the angle I was thinking of approaching this from. \nMake that instrumenting more global, periodically compute a total cost \nfrom that instrument snapshot, and nap whenever the delta between the \ncost at the last nap and the current cost exceeds your threshold.\n\nBet I could find some more consumers in user land who'd love to watch \nthat instrumented data too, if it were expanded to be available for \noperations beyond just plan execution. I know it would make a lot of \njobs easier if you could measure \"that <x> statement cost you <y>\" for \nmore than just queries--for example, tracking whether any given UPDATE \ngoes outside of the buffer cache or not would be fascinating tuning \nfodder. Ditto if you could get a roll-up of everything a particular \nconnection did.\n\nThe part specific to the rate limiting that I don't have any good idea \nabout yet is where to put the napping logic at, such that it would work \nacross everything an I/O limited backend might do. The only common \npoint here seems to be the calls into the buffer manager code, but since \nthat's happening with locks and pins you can't sleep in there. Not \nenthusiastic about sprinkling every type of backend operation with a \ncall to some nap check routine.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sat, 16 Jan 2010 04:09:26 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
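The instrumentation referred to here is the BUFFERS option that EXPLAIN grew for 9.0, which already exposes per-statement buffer accounting of the sort a limiter would need to track. A small sketch with a made-up table:

EXPLAIN (ANALYZE, BUFFERS)
  UPDATE big_table SET processed = true WHERE NOT processed;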
{
"msg_contents": "On Sat, Jan 16, 2010 at 4:09 AM, Greg Smith <[email protected]> wrote:\n> Tom Lane wrote:\n>>\n>> This is in fact exactly what the vacuum_cost_delay logic does.\n>> It might be interesting to investigate generalizing that logic\n>> so that it could throttle all of a backend's I/O not just vacuum.\n>> In principle I think it ought to work all right for any I/O-bound\n>> query.\n>>\n>\n> So much for inventing a new idea; never considered that parallel before.\n> The logic is perfectly reusable, not so sure how much of the implementation\n> would be though.\n>\n> I think the main difference is that there's one shared VacuumCostBalance to\n> worry about, whereas each backend that might be limited would need its own\n> clear scratchpad to accumulate costs into. That part seems similar to how\n> the new EXPLAIN BUFFERS capability instruments things though, which was the\n> angle I was thinking of approaching this from. Make that instrumenting more\n> global, periodically compute a total cost from that instrument snapshot, and\n> nap whenever the delta between the cost at the last nap and the current cost\n> exceeds your threshold.\n\nSeems like you'd also need to think about priority inversion, if the\n\"low-priority\" backend is holding any locks.\n\n...Robert\n",
"msg_date": "Sat, 16 Jan 2010 07:49:32 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "Robert Haas wrote:\n> Seems like you'd also need to think about priority inversion, if the\n> \"low-priority\" backend is holding any locks.\n> \n\nRight, that's what I was alluding to in the last part: the non-obvious \npiece here is not how to decide when the backend should nap because it's \ndone too much I/O, it's how to figure out when it's safe for it to do so \nwithout causing trouble for others.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sat, 16 Jan 2010 12:47:10 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "> Seems like you'd also need to think about priority inversion, if the\n> \"low-priority\" backend is holding any locks.\n>\n\nI'm not sure that priority inversion would be right in this scenario,\nbecause in that case the IO storm would still be able to exist, in the cases\nwhere the slow jobs collide with the need-to-remain-fast (aka real-time)\noperations on some lock . I'm using pg in a real time environment\ncommunicating with many different hardware, which all produce a light load,\nbut all require real time response times, and allowing a proiority inversion\nwould indirectly allow IO storms in those cases, going back to where\neverything started.\n\nHowever, if such a mechanism was to be implemented, maybe it (the inversion\nof priorities) could be left as an option in the configuration, that could\nbe turned on or off. In my case, I would just leave it off, but maybe for\nsome applications they find it useful, knowing that io storms may still\nappear, given a series of conditions.\n\nIn the case where priority inversion is not to be used, I would however\nstill greatly benefit from the slow jobs/fast jobs mechanism, just being\nextra-careful that the slow jobs, obviously, did not acquire any locks that\na fast job would ever require. This alone would be, still, a *huge* feature\nif it was ever to be introduced, reinforcing the real-time\nawareness/requirements, that many applications look for today.\n\nSeems like you'd also need to think about priority inversion, if the\n\n\"low-priority\" backend is holding any locks.I'm not sure that priority inversion would be right in this scenario,\nbecause in that case the IO storm would still be able to exist, in the\ncases where the slow jobs collide with the need-to-remain-fast (aka\nreal-time) operations on some lock . I'm using pg in a real time environment\ncommunicating with many different hardware, which all produce a light load,\nbut all require real time response times, and allowing a proiority\ninversion would indirectly allow IO storms in those cases, going back to where everything started.\n\nHowever, if such a mechanism was to be implemented, maybe it (the\ninversion of priorities) could be left as an option in the\nconfiguration, that could be turned on or off. In my case, I would just\nleave it off, but maybe for some applications they find it useful,\nknowing that io storms may still appear, given a series of conditions.In the case where priority inversion is not to be used, I would however\nstill greatly benefit from the slow jobs/fast jobs mechanism, just\nbeing extra-careful that the slow jobs, obviously, did not acquire any\nlocks that a fast job would ever require. This alone would be, still, a *huge*\nfeature if it was ever to be introduced, reinforcing the real-time\nawareness/requirements, that many applications look for today.",
"msg_date": "Sun, 17 Jan 2010 18:21:30 -0300",
"msg_from": "Eduardo Piombino <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n\tserver"
},
{
"msg_contents": "Eduardo Piombino wrote:\n> In the case where priority inversion is not to be used, I would \n> however still greatly benefit from the slow jobs/fast jobs mechanism, \n> just being extra-careful that the slow jobs, obviously, did not \n> acquire any locks that a fast job would ever require. This alone would \n> be, still, a *huge* feature if it was ever to be introduced, \n> reinforcing the real-time awareness/requirements, that many \n> applications look for today.\n\nIn this context, \"priority inversion\" is not a generic term related to \nrunning things with lower priorities. It means something very \nspecific: that you're allowing low-priority jobs to acquire locks on \nresources needed by high-priority ones, and therefore blocking the \nhigh-priority ones from running effectively. Unfortunately, much like \ndeadlock, it's impossible to avoid the problem in a generic way just by \nbeing careful. It's one of the harder issues that needs to be \nconsidered in order to make progress on implementing this feature one day.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sun, 17 Jan 2010 18:00:21 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n \tserver"
},
{
"msg_contents": "Greg Smith <[email protected]> writes:\n> In this context, \"priority inversion\" is not a generic term related to \n> running things with lower priorities. It means something very \n> specific: that you're allowing low-priority jobs to acquire locks on \n> resources needed by high-priority ones, and therefore blocking the \n> high-priority ones from running effectively. Unfortunately, much like \n> deadlock, it's impossible to avoid the problem in a generic way just by \n> being careful. It's one of the harder issues that needs to be \n> considered in order to make progress on implementing this feature one day.\n\nIt might be worth remarking on how the vacuum_cost_delay logic deals\nwith the issue. Basically, the additional sleeps are carefully inserted\nonly at places where we are not holding any low-level locks (such as\nbuffer content locks). We do not do anything about the table-level\nlock that vacuum has got, but vacuum's table lock is weak enough that it\nwon't block most ordinary queries. So in typical circumstances it's not\na problem if vacuum runs for a very long time. But you can definitely\nget burnt if you have a competing session trying to acquire an exclusive\nlock on the table being vacuumed, or if you enable vacuum_cost_delay on\na VACUUM FULL.\n\nAutovacuum takes a couple of extra precautions: it never does VACUUM\nFULL at all, and it is set up so that a request for a conflicting\nexclusive lock causes the autovacuum operation to get canceled.\n\nThe upshot is that you can enable autovacuum_cost_delay without much\nfear of creating priority-inversion delays for competing tasks. But\nit's not at all clear how we'd generalize this design to allow slowdown\nof other operations without creating significant inversion hazards.\n\nBTW, it was suggested upthread that the \"cost balance\" stuff represented\nan additional problem that'd have to be surmounted to get to a general\nsolution. I don't think this is necessarily the case. The point of the\ncost balance code is to ensure that multiple autovacuum workers don't\neat a disproportionate amount of resources. It's not clear that someone\nwould even want such a feature for user-level background queries, and\neven if desirable it's certainly not a must-have thing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 17 Jan 2010 18:14:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my server "
},
{
"msg_contents": "On Fri, 15 Jan 2010, Greg Smith wrote:\n>> It seems to me that CFQ is simply bandwidth limited by the extra processing \n>> it has to perform.\n>\n> I'm curious what you are doing when you see this.\n\n16 disc 15kRPM RAID0, when using fadvise with more than 100 simultaneous \n8kB random requests. I sent an email to the mailing list on 29 Jan 2008, \nbut it got thrown away by the mailing list spam filter because it had an \nimage in it (the graph showing interesting information). Gregory Stark \nreplied to it in \nhttp://archives.postgresql.org/pgsql-performance/2008-01/msg00285.php\n\nI was using his synthetic test case program.\n\n> My theory has been that the \"extra processing it has to perform\" you describe \n> just doesn't matter in the context of a fast system where physical I/O is \n> always the bottleneck.\n\nBasically, to an extent, that's right. However, when you get 16 drives or \nmore into a system, then it starts being an issue.\n\nMatthew\n\n-- \nFor every complex problem, there is a solution that is simple, neat, and wrong.\n -- H. L. Mencken \n",
"msg_date": "Wed, 20 Jan 2010 13:38:50 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "Matthew Wakeling wrote:\n> On Fri, 15 Jan 2010, Greg Smith wrote:\n>> My theory has been that the \"extra processing it has to perform\" you \n>> describe just doesn't matter in the context of a fast system where \n>> physical I/O is always the bottleneck.\n>\n> Basically, to an extent, that's right. However, when you get 16 drives \n> or more into a system, then it starts being an issue.\n\nI guess if I test a system with *only* 16 drives in it one day, maybe \nI'll find out.\n\nSeriously though, there is some difference between a completely \nsynthetic test like you noted issues with here, and anything you can see \nwhen running the database. I was commenting more on the state of things \nfrom the perspective of a database app, where I just haven't seen any of \nthe CFQ issues I hear reports of in other contexts. I'm sure there are \nplenty of low-level tests where the differences between the schedulers \nis completely obvious and it doesn't look as good anymore, and I'll take \na look at whether I can replicate the test case you saw a specific \nconcern with here.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 20 Jan 2010 15:34:21 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
},
{
"msg_contents": "On Wed, 20 Jan 2010, Greg Smith wrote:\n>> Basically, to an extent, that's right. However, when you get 16 drives or \n>> more into a system, then it starts being an issue.\n>\n> I guess if I test a system with *only* 16 drives in it one day, maybe I'll \n> find out.\n\n*Curious* What sorts of systems have you tried so far?\n\nAs the graph I just sent shows, the four schedulers are pretty-much \nidentical in performance, until you start saturating it with simultaneous \nrequests. CFQ levels out at a performance a little lower than the other \nthree.\n\n> Seriously though, there is some difference between a completely synthetic \n> test like you noted issues with here, and anything you can see when running \n> the database.\n\nGranted, this test is rather synthetic. It is testing the rather unusual \ncase of lots of simultaneous random small requests - more simultaneous \nrequests than we advise people to run backends on a server. You'd probably \nneed to get a RAID array a whole lot bigger than 16 drives to have a \n\"normal workload\" capable of demonstrating the performance difference, and \neven that isn't particularly major.\n\nWould be interesting research if anyone has a 200-spindle RAID array \nhanging around somewhere.\n\nMatthew\n\n-- \n A good programmer is one who looks both ways before crossing a one-way street.\n Considering the quality and quantity of one-way streets in Cambridge, it\n should be no surprise that there are so many good programmers there.\n",
"msg_date": "Thu, 21 Jan 2010 12:02:16 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: a heavy duty operation on an \"unused\" table kills my\n server"
}
] |
[
{
"msg_contents": "I'm having some performance problems in a few sales reports running on postgres 8.3, running on Redhat 4.1.2. The hardware is a bit old, but it performs well enough. The reports are the typical sales reporting fare: Gather the sales of a time period based some criteria, aggregate them by product, and then join with a bunch of other tables to display different kinds of product information.\n \nI'll spare you all the pain of looking at the entire queries: The ultimate issue appears to be the same: The innermost table of the queries is an inline view, which aggregates the data by product. It runs rather quickly, but postgres underestimates the number of rows that come out of it, making the rest of the query plan rather suboptimal. The inline view look like this\n \nselect sku_id, sum(rs.price) as dollarsSold, sum(rs.quantity) as units \n from reporting.sales rs \n where rs.sale_date between ? AND ? group by sku_id\n \nIn some cases, we see extra conditions aside of the dates, but they have the same shape. Barring a massive date range, the rest of the filters are less selective than the date, so postgres uses an index on sale_date,sku_id. I have increased the statistics calculations on sale_date quite a bit to make sure Postgres makes decent row estimates.The problem is in the aggregation:\n \n\"HashAggregate (cost=54545.20..54554.83 rows=642 width=24) (actual time=87.945..98.219 rows=11462 loops=1)\"\n\" -> Index Scan using reporting_sales_sale_date_idx on sales rs (cost=0.00..54288.63 rows=34209 width=24) (actual time=0.042..34.194 rows=23744 loops=1)\"\n\" Index Cond: ((sale_date >= '2009-07-01 00:00:00'::timestamp without time zone) AND (sale_date <= '2009-07-06 00:00:00'::timestamp without time zone))\"\n\"Total runtime: 10.110 ms\"\n \nAs you an seem the Index scan's estimate is pretty close when I use a single condition, but the aggregate estimate is off by a factor of 20. When I add further conditions, the estimate just gets worse and worse.\n \n\"HashAggregate (cost=8894.83..8894.85 rows=1 width=24) (actual time=6.444..6.501 rows=92 loops=1)\"\n\" -> Index Scan using reporting_sales_sale_date_sku_id_idx on sales rs (cost=0.00..8894.76 rows=9 width=24) (actual time=0.103..6.278 rows=94 loops=1)\"\n\" Index Cond: ((sale_date >= '2009-07-01 00:00:00'::timestamp without time zone) AND (sale_date <= '2009-07-06 00:00:00'::timestamp without time zone) AND ((sale_channel)::text = 'RETAIL'::text))\"\n\" Filter: ((activity_type)::text = 'RETURN'::text)\"\n\"Total runtime: 6.583 ms\"\nI think I've done what I could when it comes to altering statistics: For example, activity_type and sale_channel have full statistics, and they are rather independent as filtering mechanisms: If all Postgres did when trying to estimate their total filtering capacity was just multiply the frequency of each value, the estimates would not be far off.\n \nThe killer seems to be the row aggregation. There are about 95K different values of sku_id in the sales table, and even the best seller items are a very small percentage of all rows, so expecting the aggregation to consolidate the rows 50:1 like it does in one of the explains above is a pipe dream. 
I've increased statistics in sku_id into the three digits, but results are not any better\n \nschemaname;tablename;attname;null_frac;avg_width;n_distinct;most_common_freqs\n\"reporting\";\"sales\";\"sku_id\";0;11;58337;\"{0.00364167,0.0027125,0.00230417,0.00217083,0.00178333,0.001675,0.00136667,0.00135,0.0012875,0.0011875,....\"\n \nIs there any way I can coax Postgres into making a more realistic aggregation estimate? I could just delay aggregation until the rest of the data is joined, making the estimate's failure moot, but the price would be quite hefty in some of the reports, which could return 20K products and widths of over 150, so it's not optimal, especially when right now the same query that can request 100 rows could end up requesting 80K.",
"msg_date": "Wed, 13 Jan 2010 15:58:45 -0600",
"msg_from": "\"Jorge Montero\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hashaggregate estimates"
},
{
"msg_contents": "\"Jorge Montero\" <[email protected]> writes:\n> The killer seems to be the row aggregation. There are about 95K\n> different values of sku_id in the sales table, and even the best\n> seller items are a very small percentage of all rows, so expecting the\n> aggregation to consolidate the rows 50:1 like it does in one of the\n> explains above is a pipe dream. I've increased statistics in sku_id\n> into the three digits, but results are not any better\n\nYeah, estimating the number of distinct values from a sample of the data\nis a hard problem :-(.\n \n> Is there any way I can coax Postgres into making a more realistic\n> aggregation estimate?\n\nThere's no good way in 8.3. (In CVS HEAD there's a feature to manually\noverride the ndistinct estimate for a column.) In principle you could\nmanually update the pg_statistic.stadistinct value for the column, but\nthe trouble with that is the next ANALYZE will overwrite it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 13 Jan 2010 17:08:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hashaggregate estimates "
}
] |
[
{
"msg_contents": "Hello together,\n\nI need to increase the write performance when inserting\nbytea of 8MB. I am using 8.2.4 on windows with libpq.\n\nThe test setting is simple:\n\nI write 100x times a byte array (bytea) of 8 MB random data\ninto a table having a binary column (and oids and 3 other\nint columns, oids are indexed). I realized that writing 8 MB\nof 0-bytes is optimized away. With random data, the disk\nspace now is filled with 800MB each run as expected. I use a\ntransaction around the insert command.\n\nThis takes about 50s, so, 800MB/50s = 16MB/s.\n\nHowever the harddisk (sata) could write 43 MB/s in the worst\ncase! Why is write performance limited to 16 MB/s?\n\n\nSome more hints what I do:\n\nI use PQexecParams() and the INSERT ... $001 notation to NOT\ncreate a real escapted string from the data additionally but\nuse a pointer to the 8MB data buffer.\n\nI altered the binary column to STORAGE EXTERNAL.\n\nSome experiments with postgresql.conf (fsync off,\nshared_buffers=1000MB, checkpoint_segments=256) did not\nchange the 50s- much (somtimes 60s sometimes a little less).\n\n4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n\n\nDo you have any further idea why 16MB/s seems to be the\nlimit here?\n\nThank You\n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 15:29:03 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "[email protected] wrote:\n> Hello together,\n> \n> I need to increase the write performance when inserting\n> bytea of 8MB. I am using 8.2.4 on windows with libpq.\n\n> \n> This takes about 50s, so, 800MB/50s = 16MB/s.\n> \n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n> \n> \n> Do you have any further idea why 16MB/s seems to be the\n> limit here?\n\nAre you doing it locally or over a network? If you are accessing the \nserver over a network then it could be the location of the bottleneck.\n\n",
"msg_date": "Thu, 14 Jan 2010 15:49:16 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, 14 Jan 2010, [email protected] wrote:\n> This takes about 50s, so, 800MB/50s = 16MB/s.\n>\n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n\nSeveral reasons:\n\nThe data needs to be written first to the WAL, in order to provide \ncrash-safety. So you're actually writing 1600MB, not 800.\n\nPostgres needs to update a few other things on disc (indexes on the large \nobject table maybe?), and needs to call fsync a couple of times. That'll \nadd a bit of time.\n\nYour discs can't write 43MB/s in the *worst case* - the worst case is lots \nof little writes scattered over the disc, where it would be lucky to \nmanage 1MB/s. Not all of the writes Postgres makes are sequential. A handy \nway of knowing how sequential the writes are is to listen to the disc as \nit writes - the clicking sounds are where it has to waste time moving the \ndisc head from one part of the disc to another.\n\nMatthew\n\n-- \n No trees were killed in the sending of this message. However a large\n number of electrons were terribly inconvenienced.\n",
"msg_date": "Thu, 14 Jan 2010 15:04:02 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "* [email protected] <[email protected]> [100114 09:29]:\n \n> This takes about 50s, so, 800MB/50s = 16MB/s.\n> \n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n \n> I altered the binary column to STORAGE EXTERNAL.\n> \n> Some experiments with postgresql.conf (fsync off,\n> shared_buffers=1000MB, checkpoint_segments=256) did not\n> change the 50s- much (somtimes 60s sometimes a little less).\n> \n> 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n> \n> \n> Do you have any further idea why 16MB/s seems to be the\n> limit here?\n\nSo, your SATA disk can do 43MB/s of sequential writes, but you're example\nis doing:\n1) Sequential writes to WAL\n2) Random writes to your index\n3) Sequential writes to table heap\n4) Sequential writes to table' toast heap\n5) Any other OS-based FS overhead\n\nNow, writes #2,3 and 4 don't happen completely concurrently with your\nWAL, some of them are still in postgres buffers, but easily enough to\ninterrupt the stream of WAL enough to certainly make it believable that\nwith everything going on on the disk, you can only write WAL at a\n*sustained* 16 MB/s\n\nIf you're running a whole system on a single SATA which can stream\n43MB/s, remember that for *every* other read/write sent do the disk, you\nlose up to 1MB/s (12ms seek time, read/write, and back). And in that\n\"every other\", you have FS metadata updates, any other file writes the\nFS flushes, etc... 20 aditional blocks being that are either read or\nwritten to disk are going to completely chop your 43MB/s rate...\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.",
"msg_date": "Thu, 14 Jan 2010 10:07:35 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "> Do you have any further idea why 16MB/s seems to be the limit here?\n\nBYTEA deserialization is very slow, and this could be a factor here.\nHave you checked that you are in fact I/O bound?\n\nYou can speed things up by sending the data in binary, by passing\napproriate parameters to PQexecParams().\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Thu, 14 Jan 2010 15:14:46 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n>\n> Some more hints what I do:\n>\n> I use PQexecParams() and the INSERT ... $001 notation to NOT\n> create a real escapted string from the data additionally but\n> use a pointer to the 8MB data buffer.\n>\n> I altered the binary column to STORAGE EXTERNAL.\n>\n> Some experiments with postgresql.conf (fsync off,\n> shared_buffers=1000MB, checkpoint_segments=256) did not\n> change the 50s- much (somtimes 60s sometimes a little less).\n>\n> 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n\n\tBig CPU and slow disk...\n\n\tYou should add another disk just for the WAL -- disks are pretty cheap \nthese days.\n\tWriting the WAL on a second disk is the first thing to do on a \nconfiguration like yours, if you are limited by writes.\n\tIt also reduces the fsync lag a lot since the disk is only doing WAL.\n",
"msg_date": "Thu, 14 Jan 2010 17:08:59 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Thank You for your reply.\n\nIvan Voras:\n\n> Are you doing it locally or over a network? If you are accessing the \n> server over a network then it could be the location of the bottleneck.\n\nAll is done locally (for now).\n \n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 21:09:57 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nThanks a lot for your reply.\n\nHannu Krosing:\n\n> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n> \n> try inserting the same data using 4 parallel connections or even 8\n> parallel ones.\n\nInteresting idea -- I forgot to mention though that 2-3\ncores will be occupied soon with other tasks.\n\n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 21:14:17 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nThanks a lot for the detailed reply.\n\nMatthew Wakeling:\n\n> On Thu, 14 Jan 2010, [email protected] wrote:\n> > This takes about 50s, so, 800MB/50s = 16MB/s.\n> >\n> > However the harddisk (sata) could write 43 MB/s in the worst\n> > case! Why is write performance limited to 16 MB/s?\n> \n> Several reasons:\n> \n> The data needs to be written first to the WAL, in order to provide \n> crash-safety. So you're actually writing 1600MB, not 800.\n\nI understand. So the actual throughput is 32MB/s which is\ncloser to 43 MB/s, of course.\n\nCan I verify that by temporarily disabling WAL writes\ncompletely and see if the thoughput is then doubled?\n\n Felix\n\n",
"msg_date": "Thu, 14 Jan 2010 21:18:54 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Aidan Van Dyk:\n\n> So, your SATA disk can do 43MB/s of sequential writes, but you're example\n> is doing:\n> 1) Sequential writes to WAL\n> 2) Random writes to your index\n> 3) Sequential writes to table heap\n> 4) Sequential writes to table' toast heap\n> 5) Any other OS-based FS overhead\n\nOk, I see. Thanks a lot for the detailed answer! Especially\nwriting to WAL may eat up 50% as I've learned now. So,\n16MB/s x 2 would in fact be 32 MB/s, plus some extras...\n\n\nHowever, does that mean: If I have a raw sequential\nperformance of 100%, I will get a binary write (like in my\nexample) which is about 33% as a general rule of thumb?\n\nJust to mention:\n\n* The system has two hard disks, the first for\n WinXP, the second purely for the postgres data.\n\n* I was doing nothing else simultanously on the newly\n installed OS.\n\n* The consumed time (50s, see my test case) were needed to\n 99.9 % just by PGexecParam() function.\n\n* No network connect to the postgres server (everything\n local).\n \n* No complex sql command; just inserting 100x times using\n PGexecParam(), as a transaction.\n\n* The binary data was marked as such in PGexecParam\n (Format = 1).\n\n* What I meant by 43 MB/s \"worst case\": I downloaded\n some hd benchmarks which showed a performance of\n 43-70 MB/s. (Whereas repetitions of my postgres test did\n never vary, but *constantly* performed at 16MB/s).\n\nHm.\n\nNevertheless: If your explanation covers all what can be\nsaid about it then replacing the hard disk by a faster one\nshould increase the performance here (I'll try to check that\nout).\n\nThanks again!\n\n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 22:23:07 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Florian Weimer:\n\n> > Do you have any further idea why 16MB/s seems to be the limit here?\n> \n> BYTEA deserialization is very slow, and this could be a factor here.\n> Have you checked that you are in fact I/O bound?\n\nCould you elaborate that a bit? It sounds interesting but I\ndo not get what you mean by:\n\n\"bytea deserialization\": Do you mean from an escaped string\nback to real binary data? Does that apply to my case (I use\nPGexecParam and have the Format arg set to 1, binary) ?\n\n\"I/O bound\": What do you mean by that?\n\n\n> You can speed things up by sending the data in binary, by passing\n> approriate parameters to PQexecParams().\n\nDo you mean the Format arg =1 ? If not, what is appropriate\nhere?\n\n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 22:27:19 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Pierre Frédéric Caillaud:\n\n> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n> \n> \tBig CPU and slow disk...\n> \n> \tYou should add another disk just for the WAL -- disks are pretty cheap \n> these days.\n> \tWriting the WAL on a second disk is the first thing to do on a \n> configuration like yours, if you are limited by writes.\n> \tIt also reduces the fsync lag a lot since the disk is only doing WAL.\n\nGood idea -- where can I set the path to WAL?\n\n Felix\n\n\n",
"msg_date": "Thu, 14 Jan 2010 22:28:07 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: [email protected]\n\n> Nevertheless: If your explanation covers all what can be said \n> about it then replacing the hard disk by a faster one should \n> increase the performance here (I'll try to check that out).\n> \n\nMoving the pg_xlog directory to the OS drive should make a difference and it\nwill cost you zero.\n\n",
"msg_date": "Thu, 14 Jan 2010 18:33:11 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, 14 Jan 2010 22:28:07 +0100, [email protected] \n<[email protected]> wrote:\n\n> Pierre Frédéric Caillaud:\n>\n>> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n>>\n>> \tBig CPU and slow disk...\n>>\n>> \tYou should add another disk just for the WAL -- disks are pretty cheap\n>> these days.\n>> \tWriting the WAL on a second disk is the first thing to do on a\n>> configuration like yours, if you are limited by writes.\n>> \tIt also reduces the fsync lag a lot since the disk is only doing WAL.\n>\n> Good idea -- where can I set the path to WAL?\n\n\tAt install, or use a symlink (they exist on windows too !...)\n\n\thttp://stackoverflow.com/questions/1901405/postgresql-wal-on-windows\n\n\tI've no idea of the other needed NTFS tweaks, like if there is a \nnoatime/nodiratime ?...\n\n\n",
"msg_date": "Fri, 15 Jan 2010 00:19:56 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "2010/1/15 Pierre Frédéric Caillaud <[email protected]>:\n> On Thu, 14 Jan 2010 22:28:07 +0100, [email protected] <[email protected]> wrote:\n>\n>> Pierre Frédéric Caillaud:\n>>\n>>> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n>>>\n>>> Big CPU and slow disk...\n>>>\n>>> You should add another disk just for the WAL -- disks are pretty cheap\n>>> these days.\n>>> Writing the WAL on a second disk is the first thing to do on a\n>>> configuration like yours, if you are limited by writes.\n>>> It also reduces the fsync lag a lot since the disk is only doing WAL.\n>>\n>> Good idea -- where can I set the path to WAL?\n>\n> At install, or use a symlink (they exist on windows too !...)\n>\n> http://stackoverflow.com/questions/1901405/postgresql-wal-on-windows\n>\n> I've no idea of the other needed NTFS tweaks, like if there is a noatime/nodiratime ?...\n\nIt does. See http://www.hagander.net/talks/Advanced%20PostgreSQL%20on%20Windows.pdf\n\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n",
"msg_date": "Fri, 15 Jan 2010 06:38:48 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "> Florian Weimer:\n>\n>> > Do you have any further idea why 16MB/s seems to be the limit here?\n>> \n>> BYTEA deserialization is very slow, and this could be a factor here.\n>> Have you checked that you are in fact I/O bound?\n>\n> Could you elaborate that a bit? It sounds interesting but I\n> do not get what you mean by:\n>\n> \"bytea deserialization\": Do you mean from an escaped string\n> back to real binary data?\n\nYes, that is what I meant.\n\n> Does that apply to my case (I use PGexecParam and have the Format\n> arg set to 1, binary) ?\n\nYes, this was my suggestion. There is probably some other issue,\nthen.\n\n> \"I/O bound\": What do you mean by that?\n\nYou should check (presumably using the Windows performance monitoring\ntools, but I'm not familiar with Windows) if the PostgreSQL process is\nindeed waiting on disk I/O.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 15 Jan 2010 08:02:49 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\n> http://www.hagander.net/talks/Advanced%20PostgreSQL%20on%20Windows.pdf\n\n\tGreat doc ! I'm keeping that ;)\n\n\n",
"msg_date": "Fri, 15 Jan 2010 10:50:42 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, 14 Jan 2010, [email protected] wrote:\n>> The data needs to be written first to the WAL, in order to provide\n>> crash-safety. So you're actually writing 1600MB, not 800.\n>\n> I understand. So the actual throughput is 32MB/s which is\n> closer to 43 MB/s, of course.\n>\n> Can I verify that by temporarily disabling WAL writes\n> completely and see if the thoughput is then doubled?\n\nThere isn't a magic setting in Postgres to disable the WAL. That would be \nfar too tempting, and an easy way to break the database.\n\nHowever, what you can do is to insert the data into the table in the same \ntransaction as creating the table. Then Postgres knows that no other \ntransactions can see the table, so it doesn't need to be so careful.\n\nUnfortunately, I don't think even this strategy will work in your case, as \nyou will be writing to the large object table, which already exists. Could \nsomeone who knows confirm this?\n\nMatthew\n\n-- \n Let's say I go into a field and I hear \"baa baa baa\". Now, how do I work \n out whether that was \"baa\" followed by \"baa baa\", or if it was \"baa baa\"\n followed by \"baa\"?\n - Computer Science Lecturer\n",
"msg_date": "Fri, 15 Jan 2010 11:09:13 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, 14 Jan 2010, [email protected] wrote:\n> Nevertheless: If your explanation covers all what can be\n> said about it then replacing the hard disk by a faster one\n> should increase the performance here (I'll try to check that\n> out).\n\nProbably. However, it is worth you running the test again, and looking at \nhow busy the CPU on the machine is. The disc may be the bottleneck, or the \nCPU may be the bottleneck.\n\nMatthew\n\n-- \n\"Take care that thou useth the proper method when thou taketh the measure of\n high-voltage circuits so that thou doth not incinerate both thee and the\n meter; for verily, though thou has no account number and can be easily\n replaced, the meter doth have one, and as a consequence, bringeth much woe\n upon the Supply Department.\" -- The Ten Commandments of Electronics\n",
"msg_date": "Fri, 15 Jan 2010 11:25:51 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Matthew Wakeling:\n\n> On Thu, 14 Jan 2010, [email protected] wrote:\n> > Nevertheless: If your explanation covers all what can be\n> > said about it then replacing the hard disk by a faster one\n> > should increase the performance here (I'll try to check that\n> > out).\n> \n> Probably. However, it is worth you running the test again, and looking at \n> how busy the CPU on the machine is. The disc may be the bottleneck, or the \n> CPU may be the bottleneck.\n\nTrue.\n\nI've changed the setting a bit:\n\n(1) Replaced 7.200 disk by a 10.000 one, still sata though.\n\n(2) Inserting rows only 10x times (instead of 100x times)\nbut 80mb each, so having the same amount of 800mb in total.\n\n(3) Changed the WAL path to the system disk (by the\ngreat 'junction' trick mentioned in the other posting), so\nactually splitting the write access to the \"system\" disk and\nthe fast \"data\" disk.\n\n\n\nAnd here is the frustrating result:\n\n1. None of the 4 CPUs was ever more busy than 30% (never\nless idle than 70%),\n\n2. while both disks kept being far below the average write\nperformance: the \"data\" disk had 18 peaks of approx. 40 mb\nbut in total the average thoughput was 16-18 mb/s.\n\n\nBTW:\n\n* Disabling noatime and similar for ntfs did not change\nthings much (thanks though!).\n\n* A short cross check copying 800mb random data file from\n\"system\" to \"data\" disk showed a performance of constantly\n75 mb/s.\n\n\nSo, I have no idea what remains as the bottleneck.\n\n Felix\n\n\n",
"msg_date": "Fri, 15 Jan 2010 21:34:03 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Pierre Frédéric Caillaud:\n\n> \tAt install, or use a symlink (they exist on windows too !...)\n> \n> \thttp://stackoverflow.com/questions/1901405/postgresql-wal-on-windows\n\nVery interesting! Did not help much though (see other\nposting).\n\nThank You\n Felix\n\n",
"msg_date": "Fri, 15 Jan 2010 21:35:46 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, Jan 14, 2010 at 9:29 AM, [email protected]\n<[email protected]> wrote:\n> Hello together,\n>\n> I need to increase the write performance when inserting\n> bytea of 8MB. I am using 8.2.4 on windows with libpq.\n>\n> The test setting is simple:\n>\n> I write 100x times a byte array (bytea) of 8 MB random data\n> into a table having a binary column (and oids and 3 other\n> int columns, oids are indexed). I realized that writing 8 MB\n> of 0-bytes is optimized away. With random data, the disk\n> space now is filled with 800MB each run as expected. I use a\n> transaction around the insert command.\n>\n> This takes about 50s, so, 800MB/50s = 16MB/s.\n>\n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n>\n>\n> Some more hints what I do:\n>\n> I use PQexecParams() and the INSERT ... $001 notation to NOT\n> create a real escapted string from the data additionally but\n> use a pointer to the 8MB data buffer.\n>\n> I altered the binary column to STORAGE EXTERNAL.\n>\n> Some experiments with postgresql.conf (fsync off,\n> shared_buffers=1000MB, checkpoint_segments=256) did not\n> change the 50s- much (somtimes 60s sometimes a little less).\n>\n> 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n>\n>\n> Do you have any further idea why 16MB/s seems to be the\n> limit here?\n\npostgres is simply not geared towards this type of workload. 16mb\nisn't too bad actually, and I bet you could significantly beat that\nwith better disks and multiple clients sending data, maybe even close\nto saturate a gigabit line. However, there are other ways to do this\n(outside the db) that are more appropriate if efficiency is a big\nconcern.\n\nmerlin\n",
"msg_date": "Fri, 15 Jan 2010 16:15:47 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "I'd second this .... a database is doing all kinds of clever things to\nensure ACID consistency on every byte that gets written to it.\n\nIf you don't need that level of consistency for your 8MB blobs, write them\nto plain files named with some kind of id, and put the id in the database\ninstead of the blob. This will reduce the amount of disk I/O for storing\neach blob by nearly 50%, and will reduce marshaling overheads by a larger\nmagin.\n\n From your account, it sounds like the database is performing nicely on that\nhardware ... 16MB/sec to a raw disk or filesystem is rather slow by modern\nstandards, but 16MB/sec of database updates is pretty good for having\neverything on one slow-ish spindle.\n\nOn Fri, Jan 15, 2010 at 3:15 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Thu, Jan 14, 2010 at 9:29 AM, [email protected]\n> <[email protected]> wrote:\n> > Hello together,\n> >\n> > I need to increase the write performance when inserting\n> > bytea of 8MB. I am using 8.2.4 on windows with libpq.\n> >\n> > The test setting is simple:\n> >\n> > I write 100x times a byte array (bytea) of 8 MB random data\n> > into a table having a binary column (and oids and 3 other\n> > int columns, oids are indexed). I realized that writing 8 MB\n> > of 0-bytes is optimized away. With random data, the disk\n> > space now is filled with 800MB each run as expected. I use a\n> > transaction around the insert command.\n> >\n> > This takes about 50s, so, 800MB/50s = 16MB/s.\n> >\n> > However the harddisk (sata) could write 43 MB/s in the worst\n> > case! Why is write performance limited to 16 MB/s?\n> >\n> >\n> > Some more hints what I do:\n> >\n> > I use PQexecParams() and the INSERT ... $001 notation to NOT\n> > create a real escapted string from the data additionally but\n> > use a pointer to the 8MB data buffer.\n> >\n> > I altered the binary column to STORAGE EXTERNAL.\n> >\n> > Some experiments with postgresql.conf (fsync off,\n> > shared_buffers=1000MB, checkpoint_segments=256) did not\n> > change the 50s- much (somtimes 60s sometimes a little less).\n> >\n> > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n> >\n> >\n> > Do you have any further idea why 16MB/s seems to be the\n> > limit here?\n>\n> postgres is simply not geared towards this type of workload. 16mb\n> isn't too bad actually, and I bet you could significantly beat that\n> with better disks and multiple clients sending data, maybe even close\n> to saturate a gigabit line. However, there are other ways to do this\n> (outside the db) that are more appropriate if efficiency is a big\n> concern.\n>\n> merlin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI'd second this .... a database is doing all kinds of clever things to ensure ACID consistency on every byte that gets written to it. If you don't need that level of consistency for your 8MB blobs, write them to plain files named with some kind of id, and put the id in the database instead of the blob. This will reduce the amount of disk I/O for storing each blob by nearly 50%, and will reduce marshaling overheads by a larger magin.\nFrom your account, it sounds like the database is performing nicely on that hardware ... 
16MB/sec to a raw disk or filesystem is rather slow by modern standards, but 16MB/sec of database updates is pretty good for having everything on one slow-ish spindle.\nOn Fri, Jan 15, 2010 at 3:15 PM, Merlin Moncure <[email protected]> wrote:\nOn Thu, Jan 14, 2010 at 9:29 AM, [email protected]\n<[email protected]> wrote:\n> Hello together,\n>\n> I need to increase the write performance when inserting\n> bytea of 8MB. I am using 8.2.4 on windows with libpq.\n>\n> The test setting is simple:\n>\n> I write 100x times a byte array (bytea) of 8 MB random data\n> into a table having a binary column (and oids and 3 other\n> int columns, oids are indexed). I realized that writing 8 MB\n> of 0-bytes is optimized away. With random data, the disk\n> space now is filled with 800MB each run as expected. I use a\n> transaction around the insert command.\n>\n> This takes about 50s, so, 800MB/50s = 16MB/s.\n>\n> However the harddisk (sata) could write 43 MB/s in the worst\n> case! Why is write performance limited to 16 MB/s?\n>\n>\n> Some more hints what I do:\n>\n> I use PQexecParams() and the INSERT ... $001 notation to NOT\n> create a real escapted string from the data additionally but\n> use a pointer to the 8MB data buffer.\n>\n> I altered the binary column to STORAGE EXTERNAL.\n>\n> Some experiments with postgresql.conf (fsync off,\n> shared_buffers=1000MB, checkpoint_segments=256) did not\n> change the 50s- much (somtimes 60s sometimes a little less).\n>\n> 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n>\n>\n> Do you have any further idea why 16MB/s seems to be the\n> limit here?\n\npostgres is simply not geared towards this type of workload. 16mb\nisn't too bad actually, and I bet you could significantly beat that\nwith better disks and multiple clients sending data, maybe even close\nto saturate a gigabit line. However, there are other ways to do this\n(outside the db) that are more appropriate if efficiency is a big\nconcern.\n\nmerlin\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 15 Jan 2010 18:03:56 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\n> I've changed the setting a bit:\n>\n> (1) Replaced 7.200 disk by a 10.000 one, still sata though.\n>\n> (2) Inserting rows only 10x times (instead of 100x times)\n> but 80mb each, so having the same amount of 800mb in total.\n>\n> (3) Changed the WAL path to the system disk (by the\n> great 'junction' trick mentioned in the other posting), so\n> actually splitting the write access to the \"system\" disk and\n> the fast \"data\" disk.\n>\n>\n>\n> And here is the frustrating result:\n>\n> 1. None of the 4 CPUs was ever more busy than 30% (never\n> less idle than 70%),\n>\n> 2. while both disks kept being far below the average write\n> performance: the \"data\" disk had 18 peaks of approx. 40 mb\n> but in total the average thoughput was 16-18 mb/s.\n>\n>\n> BTW:\n>\n> * Disabling noatime and similar for ntfs did not change\n> things much (thanks though!).\n>\n> * A short cross check copying 800mb random data file from\n> \"system\" to \"data\" disk showed a performance of constantly\n> 75 mb/s.\n>\n>\n> So, I have no idea what remains as the bottleneck.\n>\n> Felix\n\n\tTry this :\n\nCREATE TABLE test AS SELECT * FROM yourtable;\n\n\tThis will test write speed, and TOAST compression speed.\n\tThen try this:\n\nCREATE TABLE test (LIKE yourtable);\nCOMMIT;\nINSERT INTO test SELECT * FROM yourtable;\n\n\tThis does the same thing but also writes WAL.\n\tI wonder what results you'll get.\n",
"msg_date": "Sun, 17 Jan 2010 00:23:30 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Hello Pierre,\n\nthank You for these useful test commands.\n\nHere is what I did:\n\n\nPierre Frédéric Caillaud:\n\n> \tTry this :\n> \n> CREATE TABLE test AS SELECT * FROM yourtable;\n> \n> \tThis will test write speed, and TOAST compression speed.\n> \tThen try this:\n\n(1)\n\nSetting:\n\n* pg_xlog sym'linked to another disk (to \"system disk\")\n* having approx 11.1 GB in 'yourtable' on \"data disk\"\n* executed SQL by pgAdmin III (as above, no transaction)\n\nSpeed:\n\n* 754 s (14.5 MB/s)\n\n\n> CREATE TABLE test (LIKE yourtable);\n> COMMIT;\n> INSERT INTO test SELECT * FROM yourtable;\n> \n> \tThis does the same thing but also writes WAL.\n> \tI wonder what results you'll get.\n\n(2)\n\nSetting: like (1), and 'test' table removed first\nSpeed: 752 s (so, the same since pg_xlog sym'linked)\n\n\n(3)\n\nSetting: like (2), but removed symlink of pg_xlog, so\nhaving it again on \"data disk\" where big data is\n\nSpeed: 801 s (so ~1 minute longer)\n\nBTW: I expected longer duration for scenario (3).\n\n\n\nIMHO: As neither the CPUs nor the disk throughput nor the\npostgres.exe task's CPU consumption was at its limits: I\nwonder what is the problem here. Maybe it is not postgresql\nrelated at all. I'll try to execute these tests on a SSD\nand/or Raid system.\n\n Felix\n\n\n",
"msg_date": "Mon, 18 Jan 2010 12:20:59 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Hannu Krosing:\n\n> On Thu, 2010-01-14 at 21:14 +0100, [email protected] wrote:\n> > Thanks a lot for your reply.\n> > \n> > Hannu Krosing:\n> > \n> > > > 4 Core CPU 3 Ghz, WinXP, 1 TB SATA disk.\n> > > \n> > > try inserting the same data using 4 parallel connections or even 8\n> > > parallel ones.\n> > \n> > Interesting idea -- I forgot to mention though that 2-3\n> > cores will be occupied soon with other tasks.\n> \n> Even one core will probably be idling at the througput you mentioned, so\n> the advice still stands, use more than one connection to get better\n> throughput.\n\nThank You. Since connecting more than once would mean some\nmajor changes in my db layer I fear considering it as a\nsolution.\n\nBTW: I do not get the idea behind that. Since firstly, I\nlater will have just one core free for postgres processes,\nand secondly neither the cpu nor the postgres processes seem\nto be really busy yet. Do you mean a postgres process may be\nprogrammed in a way that it waits for something unknown\nwhich can be surrounded by feeding another postgres process\nwith work, even on the same CPU?\n\nAs a short check, this is what I did (see other postings\nfrom today for further scenarios I tested):\n\nSetting:\n\n* About 11.1 GB data in the table \"bin_table\" on a\n separate \"data disk\" from the tests the last days (mostly\n rows of 80 MB bin data each)\n\n* WAL/pg_xlog not symlinked to another disk anymore.\n\n* created tables test + test2 \"LIKE bin_table\"\n\n* 2x times pgAdminIII, running:\n INSERT INTO test SELECT * FROM bin_table;\n\n resp.\n\n INSERT INTO test2 SELECT * FROM bin_table;\n\nResult:\n\n* To copy those 11.1 GB into test + test2 in parallel it\n took 1699 s (13,17 MB/s)\n\nThis is what was to expect. It took quite exactly 2 times of\nwhat 1 process needs for writing half of the data.\n\n\nThank You again.\n\n Felix\n\n\n",
"msg_date": "Mon, 18 Jan 2010 14:08:38 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Dave Crooke:\n\n> If you don't need that level of consistency for your 8MB blobs, write them\n> to plain files named with some kind of id, and put the id in the database\n> instead of the blob.\n\nThe problem here is that I later need to give access to the\ndatabase via TCP to an external client. This client will\nthen read out and *wipe* those masses of data\nasynchronously, while I'll continue to writing into to\ndatabase.\n\nSeparating the data into an ID value (in the database) and\nordinary binary files (on disk elsewhere) means, that I need\nto implement a separate TCP protocol and talk to the client\nwhenever it needs to read/delete the data. I try to avoid\nthat extra task. So postgres shall function here as a\ncommunicator, too, not only for saving data to disk.\n\nThank you.\n\n Felix\n\n\n",
"msg_date": "Mon, 18 Jan 2010 14:17:58 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Hannu Krosing:\n\n> did you also test this with fsync=off ?\n\nYes. No significant difference.\n\n\n> I suspect that what you are seeing is the effect of randomly writing to\n> the index files. While sequential write performance can be up to\n> 80MB/sec on modern drives, sequential writes are an order of magnitude\n> slower. And at your data sizes you are very likely to hit a \n> CHECKPOINT, which needs to do some random writes.\n\nYes, from the server log I noticed that I hit checkpoints\ntoo early and too often. I tried the astronomical value of\n1000 for checkpoint_segments to not hit a single one for the\nwhole test run (copying 800 MB) -- even though that is no\ngood idea in practice of course.\n\nIt took even longer then. Probably because the server\ncreated a lot of 16 MB log files (about 300 in my case)\nwhich is presumly more costy (at least for the first run?)\nthan overwriting existing files. I am not too much into\nthat, though, since this is not a solution anyway on the\nlong run IMHO.\n\nThanks again.\n\n Felix\n \n\n",
"msg_date": "Mon, 18 Jan 2010 17:04:20 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Matthew Wakeling:\n\n> The data needs to be written first to the WAL, in order to provide \n> crash-safety. So you're actually writing 1600MB, not 800.\n\nI come back again to saving WAL to another disk. Now, after\nall, I wonder: Doesn't the server wait anyway until WAL was\nwritten to disk?\n\nSo, if true, does it should not really matter if WAL is\nwritten to another disk then or not (besides some savings by\n2x hd cache and less hd head moves).\n\n Felix\n\n\n",
"msg_date": "Mon, 18 Jan 2010 17:13:15 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "[email protected]:\n\n> I'll try to execute these tests on a SSD\n> and/or Raid system.\n\nFYI:\n\nOn a recent but mid-range performing SSD (128 MB size,\nserially writing 80-140 MB, 100 MB average) the situation\nwas worse for some reason. No difference by fsync=on/off.\n\n Felix\n\n",
"msg_date": "Mon, 18 Jan 2010 17:38:44 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nOn Jan 18, 2010, at 3:20 AM, [email protected] wrote:\n\n> Hello Pierre,\n> \n> thank You for these useful test commands.\n> \n> Here is what I did:\n> \n> \n> Pierre Frédéric Caillaud:\n> \n>> \tTry this :\n>> \n>> CREATE TABLE test AS SELECT * FROM yourtable;\n>> \n>> \tThis will test write speed, and TOAST compression speed.\n>> \tThen try this:\n> \n> (1)\n> \n> Setting:\n> \n> * pg_xlog sym'linked to another disk (to \"system disk\")\n> * having approx 11.1 GB in 'yourtable' on \"data disk\"\n> * executed SQL by pgAdmin III (as above, no transaction)\n> \n> Speed:\n> \n> * 754 s (14.5 MB/s)\n> \n> \n>> CREATE TABLE test (LIKE yourtable);\n>> COMMIT;\n>> INSERT INTO test SELECT * FROM yourtable;\n>> \n>> \tThis does the same thing but also writes WAL.\n>> \tI wonder what results you'll get.\n> \n> (2)\n> \n> Setting: like (1), and 'test' table removed first\n> Speed: 752 s (so, the same since pg_xlog sym'linked)\n> \n> \n> (3)\n> \n> Setting: like (2), but removed symlink of pg_xlog, so\n> having it again on \"data disk\" where big data is\n> \n> Speed: 801 s (so ~1 minute longer)\n> \n> BTW: I expected longer duration for scenario (3).\n> \n> \n> \n> IMHO: As neither the CPUs nor the disk throughput nor the\n> postgres.exe task's CPU consumption was at its limits: I\n> wonder what is the problem here. Maybe it is not postgresql\n> related at all. I'll try to execute these tests on a SSD\n> and/or Raid system.\n> \n> Felix\n> \n\nYou are CPU bound.\n\n30% of 4 cores is greater than 25%. 25% is one core fully used. The postgres compression of data in TOAST is probably the problem. I'm assuming its doing Gzip, and at the default compression level, which on random data will be in the 15MB/sec range. I don't know if TOAST will do compression at a lower compression level. Is your data typically random or incompressible? If it is compressible then your test should be changed to reflect that.\n\n\nIf I am wrong, you are I/O bound -- this will show up in windows Performance Monitor as \"Disk Time (%)\" -- which you can get on a per device or total basis, along with i/o per second (read and/or write) and bytes/sec metrics.\n\nTo prove that you are CPU bound, split your test in half, and run the two halves at the same time. If you are CPU bound, then your bytes/sec performance will go up significantly, along with CPU usage.\n\nIf you are I/O bound, it will stay the same or get worse.\n\n-Scott\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 18 Jan 2010 15:37:12 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "[email protected]:\n\n> I'll try to execute these tests on a SSD\n> and/or Raid system.\n\nFYI:\n\nOn a sata raid-0 (mid range hardware) and recent 2x 1.5 TB\ndisks with a write performance of 100 MB/s (worst, to 200\nMB/s max), I get a performance of 18.2 MB/s. Before, with\nother disk 43 MB/s (worst to 70 MB/s max) postgres came to\n14-16 MB/s.\n\nSo, I conclude finally:\n\n(1) Postgresql write throughput (slowly) scales with the\nharddisk speed.\n\n(2) The throughput (not counting WAL doubling data) in\npostgresql is 20-25% of the disk thoughput.\n\n\nI want to thank you all for the very good support!\n\n Felix\n\n\n",
"msg_date": "Tue, 19 Jan 2010 11:16:07 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Scott Carey:\n\n> You are CPU bound.\n> \n> 30% of 4 cores is greater than 25%. 25% is one core fully\n> used.\n\nI have measured the cores separately. Some of them reached\n30%. I am not CPU bound here.\n\n> The postgres compression of data in TOAST is\n> probably the problem. I'm assuming its doing Gzip, and at\n> the default compression level, which on random data will\n> be in the 15MB/sec range. I don't know if TOAST will do\n> compression at a lower compression level.\n\nHm. I use 'bytea' (see original posting) and SET STORAGE\nEXTERNAL for this column which switches of the compression\nAFAIK. I was doing this to measure the raw performance and\nmay not be able to use compression later in real scenario.\n\nBTW: In the initial tests I used 200 blocks of 4 MB bytea\nwhich is the real scenario; later on I was using 10 times\n80MB each just to reduce the number of INSERT commands and\nto make it easier to find the performance problem.\n\n\n> Is your data typically random or incompressible? If it is\n> compressible then your test should be changed to reflect\n> that.\n\nUnfortunatelly I can't say much yet. Of course, in case\ncompression makes sense and fits the CPU performance I will\nuse it.\n\n\n> If I am wrong, you are I/O bound\n\nYes. This is the first half of what we found out now.\n\n> -- this will show up in\n> windows Performance Monitor as \"Disk Time (%)\" -- which\n> you can get on a per device or total basis, along with i/o\n> per second (read and/or write) and bytes/sec metrics.\n\nYes, I am using this tool.\n\nHowever, the deeper question is (sounds ridiculous): Why am\nI I/O bound *this much* here. To recall: The write\nperformance in pg is about 20-25% of the worst case serial\nwrite performance of the disk (and only about 8-10% of the\nbest disk perf) even though pg_xlog (WAL) is moved to\nanother disk, only 10 simple INSERT commands, a simple table\nof 5 columns (4 unused, one bytea) and one index for OID, no\ncompression since STORAGE EXTERNAL, ntfs tweaks (noatime\netc), ...\n\n\n> To prove that you are CPU bound, split your test in half,\n> and run the two halves at the same time. If you are CPU\n> bound, then your bytes/sec performance will go up\n> significantly, along with CPU usage.\n\nDone already (see earlier posting). I am not CPU bound.\nSpeed was the same.\n\n\nThank You for the detailed reply.\n\n Felix\n\n\n",
"msg_date": "Tue, 19 Jan 2010 11:50:07 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Pierre Frédéric Caillaud:\n\n> I don't remember if you used TOAST compression or not.\n\nI use 'bytea' and SET STORAGE EXTERNAL for this column.\nAFAIK this switches off the compression.\n\n\n",
"msg_date": "Tue, 19 Jan 2010 11:55:26 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On 01/19/10 11:16, [email protected] wrote:\n> [email protected]:\n>\n>> I'll try to execute these tests on a SSD\n>> and/or Raid system.\n>\n> FYI:\n>\n> On a sata raid-0 (mid range hardware) and recent 2x 1.5 TB\n> disks with a write performance of 100 MB/s (worst, to 200\n> MB/s max), I get a performance of 18.2 MB/s. Before, with\n> other disk 43 MB/s (worst to 70 MB/s max) postgres came to\n> 14-16 MB/s.\n\n[I just skimmed this thread - did you increase the number of WAL logs to \nsomething very large, like 128?]\n\n> So, I conclude finally:\n>\n> (1) Postgresql write throughput (slowly) scales with the\n> harddisk speed.\n>\n> (2) The throughput (not counting WAL doubling data) in\n> postgresql is 20-25% of the disk thoughput.\n\nAnd this is one of the more often forgot reasons why storing large \nobjects in a database rather than in the file systems is a bad idea :)\n\n\n",
"msg_date": "Tue, 19 Jan 2010 12:52:24 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On 19/01/10 10:50, [email protected] wrote:\n> However, the deeper question is (sounds ridiculous): Why am\n> I I/O bound *this much* here. To recall: The write\n> performance in pg is about 20-25% of the worst case serial\n> write performance of the disk (and only about 8-10% of the\n> best disk perf) even though pg_xlog (WAL) is moved to\n> another disk, only 10 simple INSERT commands, a simple table\n> of 5 columns (4 unused, one bytea) and one index for OID, no\n> compression since STORAGE EXTERNAL, ntfs tweaks (noatime\n> etc), ...\n\nI'm no Windows expert, but the sysinternals tools (since bought by \nMicrosoft) have always proved useful to me.\n\nDiskmon should show you what's happening on your machine:\nhttp://technet.microsoft.com/en-us/sysinternals/bb896646.aspx\n\nBe aware that this will generate a *lot* of data very quickly and you'll \nneed to spend a little time analysing it. Try it without PG running to \nsee what your system is up to when \"idle\" first to get a baseline.\n\nUnfortunately it doesn't show disk seek times (which is probably what \nyou want to measure) but it should let you decode what reads/writes are \ntaking place when. If two consecutive disk accesses aren't adjacent then \nthat implies a seek of course. Worst case you get two or more processes \neach accessing different parts of the disk in an interleaved arrangement.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 19 Jan 2010 12:16:54 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Ivan Voras:\n\n> [I just skimmed this thread - did you increase the number of WAL logs to \n> something very large, like 128?]\n\nYes, I tried even more.\n\nI will be writing data quite constantly in the real scenario\nlater. So I wonder if increasing WAL logs will have a\npositive effect or not: AFAIK when I increase it, the\nduration after the max is hit will be longer then (which is\nnot acceptable in my case).\n\nCould anyone confirm if I got it right?\n\n Felix\n\n",
"msg_date": "Tue, 19 Jan 2010 14:36:30 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> wrote: \n> Scott Carey:\n> \n>> You are CPU bound.\n>> \n>> 30% of 4 cores is greater than 25%. 25% is one core fully\n>> used.\n> \n> I have measured the cores separately. Some of them reached\n> 30%. I am not CPU bound here.\n \nIf you have numbers like that when running one big query, or a\nstream of queries one-at-a-time, you are CPU bound. A single\nrequest only uses one CPU at a time although it could switch among a\nnumber of them, if the OS doesn't make an effort to keep each\nprocess with the same CPU.\n \n-Kevin\n",
"msg_date": "Tue, 19 Jan 2010 09:13:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On 01/19/10 14:36, [email protected] wrote:\n> Ivan Voras:\n>\n>> [I just skimmed this thread - did you increase the number of WAL logs to\n>> something very large, like 128?]\n>\n> Yes, I tried even more.\n>\n> I will be writing data quite constantly in the real scenario\n> later. So I wonder if increasing WAL logs will have a\n> positive effect or not: AFAIK when I increase it, the\n> duration after the max is hit will be longer then (which is\n> not acceptable in my case).\n>\n> Could anyone confirm if I got it right?\n\nIt seems so - if you are writing constantly then you will probably get \nlower but more long-term-stable performance from a smaller number of WAL \nlogs.\n\n",
"msg_date": "Tue, 19 Jan 2010 16:36:59 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nOn Jan 19, 2010, at 2:50 AM, [email protected] wrote:\n\n> Scott Carey:\n> \n>> You are CPU bound.\n>> \n>> 30% of 4 cores is greater than 25%. 25% is one core fully\n>> used.\n> \n> I have measured the cores separately. Some of them reached\n> 30%. I am not CPU bound here.\n> \n\nMeasuring the cores isn't enough. The OS switches threads between cores faster than it aggregates the usage time. On Windows, measure the individual process CPU to see if any of them are near 100%. One process can be using 100% (one full cpu, cpu bound) but switching between cores making it look like 25% on each.\n\nIf the individual postgres backend is significantly less than 100%, then you are probably not CPU bound. If this is the case and additionally the system has significant disk wait time, then you are definitely not CPU bound.\nRecord and post the perfmon log if you wish. In the \"Process\" section, select CPU time %, system time %, and user time % for all processes. In the graph, you should \nsee one (or two) processes eating up that CPU during the test run. \n\n> \n>> If I am wrong, you are I/O bound\n> \n> Yes. This is the first half of what we found out now.\n> \n\nDoes the OS report that you are actually waiting on disk? See the PerfMon \"Physical Disk\" section. Disk time % should be high if you are waiting on disk.\n\n>> -- this will show up in\n>> windows Performance Monitor as \"Disk Time (%)\" -- which\n>> you can get on a per device or total basis, along with i/o\n>> per second (read and/or write) and bytes/sec metrics.\n> \n> Yes, I am using this tool.\n> \n\nWhat does it report for disk time %? How many I/O per second?\n\n> However, the deeper question is (sounds ridiculous): Why am\n> I I/O bound *this much* here. To recall: The write\n> performance in pg is about 20-25% of the worst case serial\n> write performance of the disk (and only about 8-10% of the\n> best disk perf) even though pg_xlog (WAL) is moved to\n> another disk, only 10 simple INSERT commands, a simple table\n> of 5 columns (4 unused, one bytea) and one index for OID, no\n> compression since STORAGE EXTERNAL, ntfs tweaks (noatime\n> etc), ...\n> \n\nIts not going to be completely serial, we want to know if it is disk bound, and if so by what type of access. The disk time %, i/o per second, and MB/sec are needed to figure this out. MB/sec alone is not enough. PerfMon has tons of useful data you can extract on this -- i/o per second for writes and reads, size of writes, size of reads, time spent waiting on each disk, etc.\n\nAll the writes are not serial. Enough disk seeks interleaved will kill the sequential writes. \nYou have random writes due to the index. Try this without the index and see what happens. The index is also CPU consuming.\nYou can probably move the index to the other disk too (with tablespaces), and the random disk activity may then follow it. \nTo minimize index writes, increase shared_buffers. \n\n> \n>> To prove that you are CPU bound, split your test in half,\n>> and run the two halves at the same time. If you are CPU\n>> bound, then your bytes/sec performance will go up\n>> significantly, along with CPU usage.\n> \n> Done already (see earlier posting). I am not CPU bound.\n> Speed was the same.\n> \nIf that result correlates with the system reporting high Disk Time (%), and the MB/sec written is low, then random writes or reads are causing the slowdown. The chief suspects for that are the WAL log, and the index. 
The WAL is generally sequential itself, and not as much of a concern as the index.\n\nAnother reply references DiskMon from sysinternals. This is also highly useful. Once you have identified the bottleneck from PerfMon, you can use this to see the actual system API calls for the disk reads and writes, tracking down to \"which process on what file\". Much more than you can get from Linux easily.\n\nBulk inserts into an indexed table is always significantly slower than inserting unindexed and then indexing later. Partitioned tables combined with staging tables can help here if you need to increase insert throughput more. Also, if random writes are your problem, a battery backed caching raid controller will significantly improve performance, as will anything that can improve random write performance (high quality SSD, faster RPM disks, more disks).\n\n> \n> Thank You for the detailed reply.\n> \n> Felix\n> \n> \n\n",
"msg_date": "Tue, 19 Jan 2010 12:31:04 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
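A minimal sketch of the "load without the index, build it afterwards" pattern suggested above; the table, staging table, and index names are hypothetical and only illustrate the idea:

    -- drop the index before the bulk load (DROP INDEX IF EXISTS needs 8.2+)
    DROP INDEX IF EXISTS images_id_idx;

    -- load the bytea rows in a single transaction
    BEGIN;
    INSERT INTO images (id, data) SELECT id, data FROM images_staging;
    COMMIT;

    -- rebuild the index once, sequentially, after the load
    CREATE INDEX images_id_idx ON images (id);

How much this helps depends on how much of the write volume the index actually accounts for, which is why the simpler first step suggested above is to re-run the test with the index removed entirely.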
{
"msg_contents": "\nOn Jan 20, 2010, at 5:32 AM, [email protected] wrote:\n\n> \n>> Bulk inserts into an indexed table is always significantly\n>> slower than inserting unindexed and then indexing later.\n> \n> Agreed. However, shouldn't this be included in the disk-time\n> counters? If so, it should by near 100%.\n> \n\nWell, something is causing the system to alternate between CPU and disk bound here. \n(see below).\nIt would be useful to see what affect the index has.\n\n\n> \n> THE TESTS:\n> \n> In the attachement you'll find 2 screenshots perfmon34.png\n> and perfmon35.png (I hope 2x14 kb is o.k. for the mailing\n> list).\n> \n> To explain it a bit:\n> \n> \n> (A) The setting (to recall):\n> \n> WinXP, 4 CPU 3 Ghz, 4 GB RAM, Postgres 8.2.4, postgres\n> system data on hd D:, postgres data on separate sata raid-0\n> hd G: (serial write performance of 100-200 MB/s), pg_xlog\n> symlinked to separate hd E: (sata 10.000 rpm); using libpq\n> and PQexecParams() with $001.. notation and Format=1\n> (binary) for doing 10 times a simple INSERT command to\n> insert 80 MB of bytea data (so 800 MB in total) into a\n> simple 5 col table (1 bytea with STORAGE EXTERNAL, rest\n> unused cols, with oids, index on oid; all done locally.\n> \n> \n> (B) perfmon34.png: Results/graphs (performance monitor):\n> \nGreat data!\n\n> (1) The upper dark/gray graph touching the 100% sometimes is\n> \"disk write time %\" of the data disk G:\n> \n> (2) The yellow graph is nearly completly overpainted by (1)\n> since it is \"disk time %\".\n> \n> (3) The brown graph below (1) is \"Disk Write Byts/s\" divided\n> by 1.000.000, so around 40 MB/s average.\n> \n\nLooks like it is writing everything twice, or close to it. Alternatively the index writes occupy half, but that is unlikely.\n\n\n> (4) The read graph is \"Disk Time %\" of the WAL drive E:,\n> average approx. 30%.\n> \n\nWAL doesn't look like a bottleneck here, as other tests have shown.\nA larger wal_buffers setting might lower this more, since your record overflows the buffer for sure.\nYou might want to change your test case to write records similar size to what you expect (do you expect 80MB?) and then set wal_buffers up to the size of one checkpoint segment (16MB) if you expect larger data per transaction.\n\n> (5) Below (4) there is CPU time in total (average around\n> 20%), all 4 CPUs counted -- and please beleave me: separate\n> CPU logs shows CPUs peaks never above 40-50%. Watching the\n> several postgres processes: 0-10% CPU usage.\n\nWhat I see here is the following:\nThe system is I/O bound, or close, and then it is CPU bound. Note how the CPU spikes up by about 25% (one CPU) when the disk time drops.\n\n\n> \n> (6) The blue/cyan line is \"Disk Writes/sec\" divided by 100,\n> so around 1000 writes/s max for the data drive G:\n> \n> (7) The pink graph (Disk reads/s of data disk G:) shows\n> nearly zero activity.\n> \n> (8)\n> Duration of it all 40s, so inserting 800MB is done at a\n> speed of 20MB/s.\n> \n> (9) The other tool mentioned (DiskMon) tool had some\n> problems to list all data writes in parallel. It continued\n> to fill its list for 5 min. after the test was done. I have\n> not examined its results.\n> \nYes, that tool by default will log a LOT of data. It will be useful later if we want to figure out what \nsort of writes happen during the peaks and valleys on your chart.\n> \n> (C) My interpretation\n> \n> (1) Although the data disk G: sometimes hits 100%: All in\n> all it seems that neither the CPUs nor the data disk\n> (approx. 
65%) nor the WAL disk (approx. 30%) are at their\n> limits. See also 1000 writes/s, 40MB/s write thoughput.\n> \n\nI think it is alternating. Whatever is causing the 25% CPU jump during the 'slow' periods is a clue. Some process on the system must be increasing its time significantly in these bursts. I suspect it is postgres flushing data from its shared_buffers to the OS. 8.2 is not very efficient at its ability to write out to the OS in a constant stream, and tends to be very 'bursty' like this. I suspect that 8.3 or 8.4 would perform a lot better here, when tuned right.\n\n> (2) The screenshot also demonstrates that the whole process\n> is not done smoothly but seems to be interrupted: About 15\n> times the data disk time% changes between 100% and ~40%.\n> \n> \n> (D) perfmod35.png: Further tests (fsync=off)\n> \n> I repeated the whole thing with fsync=off. The result was\n> remarkably better (35s instead of 40s, so 22.8 MB/s). The\n> former 100% peaks of the data disk G: are now flat 100%\n> tops for approx 50% of the time.\n> \n> See attachement perfmon35.png\n> \n> \n> (E) Remaining questions\n> \n> (1)\n> It seems that I am disk bound, however it is still a bit\n> unclear what the system is waiting for during these\n> significant 'interrupts' when neither the disk nor the CPU\n> (nor postgress processes) seem to be at its limits.\n> \n\nI suspect it is the postgres checkpoint system interacting with the write volume and size of shared_buffers.\nThis interaction was significantly improved in 8.3. If you can, running a contemporary version would probably help a lot. 8.2.anything is rather old from a performance perspective.\nAdjusting the work_mem and checkpoint tuning would likely help as well. If you can get the transaction to commit before the data has hit disk the total write volume should be lower. That means larger work_mem, or smaller write chunks per transaction.\n\n> (2)\n> The write troughput is still disappointing to me: Even if we\n> would find a way to avoid those 'interrupts' of disk\n> inactivity (see E 1), we are too far beyond serial disk\n> write throughput (20 MB/s data disk + 20 MB/s other (!) WAL\n> disk: is far below 100-200 MB/s resp. 40-70 MB/s).\n> \n\nAlthough reaching 'full' sequential throughput is very hard because not all of the writes are sequential, there is a rather large gap here. \n\n> (3)\n> It would be nice to have an SQL script for the whole test. I\n> failed though to read/create 80 MB data and insert it 10\n> times in a loop.\n> \n> \n> BTY: While writing data I've tested parallel readout via\n> Gigabit Ethernet by an external linux client which performed\n> greately and slowed down the write process for about 5s\n> only!\n> \n> \n> Felix\n> \n> <perfmon34.png><perfmon35.png>\n\n",
"msg_date": "Thu, 21 Jan 2010 00:25:41 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Scott Carey wrote:\n> On Jan 20, 2010, at 5:32 AM, [email protected] wrote:\n>\n> \n>> In the attachement you'll find 2 screenshots perfmon34.png\n>> and perfmon35.png (I hope 2x14 kb is o.k. for the mailing\n>> list).\n>>\n>> \n\nI don't think they made it to the list? I didn't see it, presumably \nScott got a direct copy. I'd like to get a copy and see the graphs even \nif takes an off-list message. If it's an 8.2 checkpoint issue, I know \nexactly what shapes those take in terms of the disk I/O pattern.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 21 Jan 2010 03:35:09 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Thu, 21 Jan 2010, Greg Smith wrote:\n>>> In the attachement you'll find 2 screenshots perfmon34.png\n>>> and perfmon35.png (I hope 2x14 kb is o.k. for the mailing\n>>> list).\n>\n> I don't think they made it to the list?\n\nNo, it seems that no emails with image attachments ever make it through \nthe list server. Someone mentioned something about banning the guy who set \nthe list up from the internet or something. \nhttp://archives.postgresql.org/pgsql-performance/2008-01/msg00290.php\n\nMatthew\n\n-- \n Bashir: The point is, if you lie all the time, nobody will believe you, even\n when you're telling the truth. (RE: The boy who cried wolf)\n Garak: Are you sure that's the point, Doctor?\n Bashir: What else could it be? -- Star Trek DS9\n Garak: That you should never tell the same lie twice. -- Improbable Cause\n",
"msg_date": "Thu, 21 Jan 2010 12:13:19 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "* Matthew Wakeling:\n\n> The data needs to be written first to the WAL, in order to provide\n> crash-safety. So you're actually writing 1600MB, not 800.\n\nIn addition to that, PostgreSQL 8.4.2 seems pre-fill TOAST files (or\nall relations?) with zeros when they are written first, which adds\nanother 400 to 800 MB.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Thu, 21 Jan 2010 16:24:12 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
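One rough way to see where the on-disk bytes discussed above actually end up is to compare heap and total relation sizes before and after a test run; a minimal sketch, with a hypothetical table name:

    -- the difference between the two numbers is mostly TOAST data plus
    -- indexes for a table dominated by a large bytea column
    SELECT pg_size_pretty(pg_relation_size('bytea_test'))       AS heap_size,
           pg_size_pretty(pg_total_relation_size('bytea_test')) AS total_size;

WAL volume is not included in either number, so the extra copy Matthew describes shows up on the pg_xlog disk rather than in this query.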
{
"msg_contents": "Scott Carey:\n\n> Well, something is causing the system to alternate between\n> CPU and disk bound here. (see below).\n> It would be useful to see what affect the index has.\n\nOk, I simply deleted the index and repeated the test: I did\nnot notice any difference. This is probably so because in\nfact I am doing just 10 INSERTs.\n\n\n> > (B) perfmon34.png: Results/graphs (performance monitor):\n> > \n> Great data!\n\nBTW: I have some more screenshots but as they do not arrive\non the mailing list I keep it. The new graphs are basicly\nthe same anyway.\n\n\n> > (1) The upper dark/gray graph touching the 100% sometimes is\n> > \"disk write time %\" of the data disk G:\n> > \n> > (2) The yellow graph is nearly completly overpainted by (1)\n> > since it is \"disk time %\".\n> > \n> > (3) The brown graph below (1) is \"Disk Write Byts/s\" divided\n> > by 1.000.000, so around 40 MB/s average.\n> > \n> \n> Looks like it is writing everything twice, or close to it.\n> Alternatively the index writes occupy half, but that is\n> unlikely.\n\n'Writing twice': That is the most interesting point I\nbelieve. Why is the data disk doing 40 MB/s *not* including\nWAL, however, having 20 MB/s write thoughput in fact. Seems\nlike: 20 MB for data, 20 MB for X, 20 MB for WAL. \n\nAlthough that questions is still unanswered: I verified\nagain that I am disk bound by temporarily replacing the\nraid-0 with slower solution: a singly attached sata disk\nof the same type: This *did* slow down the test a lot\n(approx. 20%). So, yes, I am disk bound but, again, why\nthat much...\n\nAbout removing the index on OIDs: No impact (see above).\n\n\n> > (4) The read graph is \"Disk Time %\" of the WAL drive E:,\n> > average approx. 30%.\n> > \n> \n> WAL doesn't look like a bottleneck here, as other tests\n> have shown. A larger wal_buffers setting might lower this\n> more, since your record overflows the buffer for sure.\n> You might want to change your test case to write records\n> similar size to what you expect (do you expect 80MB?) 
and\n> then set wal_buffers up to the size of one checkpoint\n> segment (16MB) if you expect larger data per transaction.\n\nOk, without knowing each exact effect I changed some of the\nconfiguration values (from the defaults in 8.2.4), and did\nsome tests:\n\n(1) First, the most important 8.2.4 defaults (for you to\noverlook):\n\n#shared_buffers=32MB\n#temp_buffers=8MB\n#max_prepared_transactions=5\n#work_mem=1MB\n#maintenance_work_mem=16MB\n#max_stack_depth=2MB\n#max_fsm_pages=204800\n#max_fsm_relations=1000\n#max_files_per_process=1000\n#shared_preload_libraries=''\n#vacuum_cost_delay=0\n#vacuum_cost_page_hit=1\n#vacuum_cost_page_miss=10\n#vacuum_cost_page_dirty=20\n#vacuum_cost_limit=200\n#bgwriter_delay=200ms\n#bgwriter_lru_percent=1.0\n#bgwriter_lru_maxpages=5\n#bgwriter_all_percent=0.333\n#bgwriter_all_maxpages=5\n#fsync=on\n#full_page_writes=on\n#wal_buffers=64kB\n#checkpoint_segments=3\n#checkpoint_timeout=5min\n#checkpoint_warning=30s\n#seq_page_cost=1.0\n#random_page_cost=4.0\n#cpu_tuple_cost=0.01\n#cpu_index_tuple_cost=0.005\n#cpu_operator_cost=0.0025\n#effective_cache_size=128MB\n#default_statistics_target=10\n#constraint_exclusion=off\n#from_collapse_limit=8\n#join_collapse_limit=8\n#autovacuum=on\n#autovacuum_naptime=1min\n#autovacuum_vacuum_threshold=500\n#autovacuum_analyze_threshold=250\n#autovacuum_vacuum_scale_factor=0.2\n#autovacuum_analyze_scale_factor=0.1\n#autovacuum_freeze_max_age=200000000\n#autovacuum_vacuum_cost_delay=-1\n#autovacuum_vacuum_cost_limit=-1\n#deadlock_timeout=1s\n#max_locks_per_transaction=64\n\n\n(2) The tests:\n\nNote: The standard speed was about 800MB/40s, so 20MB/s.\n\n\na)\nWhat I changed: fsync=off\nResult: 35s, so 5s faster.\n\n\nb) like a) but:\ncheckpoint_segments=128 (was 3)\nautovacuum=off\n\nResult: 35s (no change...?!)\n\n\nc) like b) but:\ntemp_buffers=200MB (was 8)\nwal_sync_method=open_datasync (was fsync)\nwal_buffers=1024kB (was 64)\n\nResult:\nThe best ever, it took just 29s, so 800MB/29s = 27.5MB/s.\nHowever, having autovacuum=off probably means that deleted\nrows will occupy disk space? And I also fear that\ncheckpoint_segments=128 mean that at some point in the\nfuture there will be a huge delay then (?).\n\n\nd) also like b) but:\ntemp_buffers=1000MB\nwal_buffers=4096kB\ncheckpoint_segments=3\nautovacuum=on\n\nResult: Again slower 36s\n\n\n\nI am not able to interprete that in depth.\n\n\n\n\n\n> > (C) My interpretation\n> > \n> > (1) Although the data disk G: sometimes hits 100%: All in\n> > all it seems that neither the CPUs nor the data disk\n> > (approx. 65%) nor the WAL disk (approx. 30%) are at their\n> > limits. See also 1000 writes/s, 40MB/s write thoughput.\n> > \n> \n> I think it is alternating. Whatever is causing the 25%\n> CPU jump during the 'slow' periods is a clue. Some\n> process on the system must be increasing its time\n> significantly in these bursts. I suspect it is postgres\n> flushing data from its shared_buffers to the OS. 8.2 is\n> not very efficient at its ability to write out to the OS\n> in a constant stream, and tends to be very 'bursty' like\n> this. I suspect that 8.3 or 8.4 would perform a lot\n> better here, when tuned right.\n\nOk, I've managed to use 8.4 here. Unfortunatelly: There was\nnearly no improvement in speed. 
For example test 2d)\nperformed in 35s.\n\n\n> > The write troughput is still disappointing to me: Even if we\n> > would find a way to avoid those 'interrupts' of disk\n> > inactivity (see E 1), we are too far beyond serial disk\n> > write throughput (20 MB/s data disk + 20 MB/s other (!) WAL\n\n... or better 20MB/s data disk + 20MB/s unexplained writes\nto data disk + 20 MB/s WAL disk...\n\n> > disk: is far below 100-200 MB/s resp. 40-70 MB/s).\n> > \n> \n> Although reaching 'full' sequential throughput is very\n> hard because not all of the writes are sequential, there\n> is a rather large gap here. \n\nYes, it's a pitty.\n\n\nThank You again so much.\n \n Felix\n\n\n",
"msg_date": "Fri, 22 Jan 2010 21:42:03 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nOn Jan 21, 2010, at 12:35 AM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> On Jan 20, 2010, at 5:32 AM, [email protected] wrote:\n>> \n>> \n>>> In the attachement you'll find 2 screenshots perfmon34.png\n>>> and perfmon35.png (I hope 2x14 kb is o.k. for the mailing\n>>> list).\n>>> \n>>> \n> \n> I don't think they made it to the list? I didn't see it, presumably \n> Scott got a direct copy. I'd like to get a copy and see the graphs even \n> if takes an off-list message. If it's an 8.2 checkpoint issue, I know \n> exactly what shapes those take in terms of the disk I/O pattern.\n> \n\nI got the images directly from the email, from the list. \nSo .... ???\n\n\n> -- \n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n\n",
"msg_date": "Fri, 22 Jan 2010 18:51:24 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nOn Jan 21, 2010, at 12:35 AM, Greg Smith wrote:\n\n> Scott Carey wrote:\n>> On Jan 20, 2010, at 5:32 AM, [email protected] wrote:\n>> \n>> \n>>> In the attachement you'll find 2 screenshots perfmon34.png\n>>> and perfmon35.png (I hope 2x14 kb is o.k. for the mailing\n>>> list).\n>>> \n>>> \n> \n> I don't think they made it to the list? I didn't see it, presumably \n> Scott got a direct copy. I'd like to get a copy and see the graphs even \n> if takes an off-list message. If it's an 8.2 checkpoint issue, I know \n> exactly what shapes those take in terms of the disk I/O pattern.\n> \n\nSorry -- I didn't get them from the list, I was CC'd along with the list, and so my copy has the images.\n\n> -- \n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n\n",
"msg_date": "Fri, 22 Jan 2010 18:52:49 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "On Jan 22, 2010, at 12:42 PM, [email protected] wrote:\n> \n> 'Writing twice': That is the most interesting point I\n> believe. Why is the data disk doing 40 MB/s *not* including\n> WAL, however, having 20 MB/s write thoughput in fact. Seems\n> like: 20 MB for data, 20 MB for X, 20 MB for WAL. \n> \n\nThere are a few things that can do this for non-TOAST stuff. The other comment that TOAST writes all zeros first might be related too.\n\n> Although that questions is still unanswered: I verified\n> again that I am disk bound by temporarily replacing the\n> raid-0 with slower solution: a singly attached sata disk\n> of the same type: This *did* slow down the test a lot\n> (approx. 20%). So, yes, I am disk bound but, again, why\n> that much...\n> \n\nSometimes disk bound (as the graphs show). I suspect that if you artificially slow your CPU down (maybe force it into power saving mode with a utility) it will also be slower. The I/O seems to be the most significant part though.\n\n> \n> (1) First, the most important 8.2.4 defaults (for you to\n> overlook):\n> \n> #shared_buffers=32MB\n\nTry 200MB for the above\n> #temp_buffers=8MB\n\nYou tried making this larger, which helped some.\n\n> #bgwriter_delay=200ms\n> #bgwriter_lru_percent=1.0\n> #bgwriter_lru_maxpages=5\n> #bgwriter_all_percent=0.333\n> #bgwriter_all_maxpages=5\n> #checkpoint_segments=3\n> #checkpoint_timeout=5min\n> #checkpoint_warning=30s\n\nCheck out this for info on these parameters\nhttp://wiki.postgresql.org/wiki/User:Gsmith (Is there a better link Greg?)\n\n> #fsync=on\nChanging this probably helps the OS spend less time flushing to disk.\n\n> \n> (2) The tests:\n> \n> Note: The standard speed was about 800MB/40s, so 20MB/s.\n> \n> \n> a)\n> What I changed: fsync=off\n> Result: 35s, so 5s faster.\n> \n> \n> b) like a) but:\n> checkpoint_segments=128 (was 3)\n> autovacuum=off\n> \n> Result: 35s (no change...?!)\n> \n\nyes, more checkpoint_segments will help if your shared_buffers is larger, it won't do a whole lot otherwise. Generally, I like to keep these roughly equal sized as a starting point for any small to medium sized configuration. So if shared_buffers is 1GB, that takes 64 checkpoint segments to hold for heavy write scenarios.\n\n> \n> c) like b) but:\n> temp_buffers=200MB (was 8)\n> wal_sync_method=open_datasync (was fsync)\n> wal_buffers=1024kB (was 64)\n> \n> Result:\n> The best ever, it took just 29s, so 800MB/29s = 27.5MB/s.\n> However, having autovacuum=off probably means that deleted\n> rows will occupy disk space? And I also fear that\n> checkpoint_segments=128 mean that at some point in the\n> future there will be a huge delay then (?).\n\nI am curious which of the two helped most. I don't think temp_buffers should do anything (it is for temp tables afaik).\n\n> d) also like b) but:\n> temp_buffers=1000MB\n> wal_buffers=4096kB\n> checkpoint_segments=3\n> autovacuum=on\n> \n> Result: Again slower 36s\n> \n\nTry changing shared_buffers. This is where uncommitted data needs to avoid overflowing before a commit. If this was non-TOAST data, i would suspect this is the cause of any double-writing. But I don't know enough about TOAST to know if the same things happen here.\n\n\n> Ok, I've managed to use 8.4 here. Unfortunatelly: There was\n> nearly no improvement in speed. For example test 2d)\n> performed in 35s.\n> \n\nWith a very small shared_buffers the improvements to Postgres' shared_buffer / checkpoint interaction can not be utilized.\n\n\n",
"msg_date": "Fri, 22 Jan 2010 19:31:00 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
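Pulling the advice above together, a possible postgresql.conf starting point for this bulk-bytea workload might look like the fragment below; the exact values are illustrative of the "checkpoint_segments roughly sized to shared_buffers" rule of thumb rather than settings taken from the thread:

    shared_buffers      = 1GB    # large enough to absorb a whole batch of inserts
    checkpoint_segments = 64     # 64 x 16MB is roughly 1GB, matching shared_buffers
    wal_buffers         = 16MB   # up to one WAL segment, versus the tiny 64kB default

As Felix's follow-up shows, raising these mainly smooths out the bursts; it does not by itself explain or remove the apparent double write to the data disk.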
{
"msg_contents": "Scott Carey:\n\n> > (2) The tests:\n> > \n> > Note: The standard speed was about 800MB/40s, so 20MB/s.\n> > \n> > \n> > a)\n> > What I changed: fsync=off\n> > Result: 35s, so 5s faster.\n> > \n> > \n> > b) like a) but:\n> > checkpoint_segments=128 (was 3)\n> > autovacuum=off\n> > \n> > Result: 35s (no change...?!)\n> > \n> \n> yes, more checkpoint_segments will help if your\n> shared_buffers is larger, it won't do a whole lot\n> otherwise. Generally, I like to keep these roughly equal\n> sized as a starting point for any small to medium sized\n> configuration. So if shared_buffers is 1GB, that takes 64\n> checkpoint segments to hold for heavy write scenarios.\n\n(1)\n\nOk, that's what I tested: 1024 MB shared_buffers, 64\ncheckpoint segments.\n\nUnfortunatelly I could not run it on the same hardware\nanymore: The data is written to a single disk now, not raid\nanymore. So with the default shared_buffers of 8 MB (?) we\nshould expect 45s for writing the 800 MB. With the large\nshared_buffers and checkpoints (mentioned above) I got this:\n\n1. run (right after postgres server (re-)start): 28s (!)\n2. run: 44s\n3. run: 42s\n\nSo, roughly the same as with small buffers.\n\n\n(2)\nThen I switched again from 8.2.4 to 8.4.2:\n\n1. run (after server start): 25s.\n2. run: 38s\n3. run: 38s\n\nSo, 8.4 helped a bit over 8.2.\n\n\n(3) All in all\n\nBy (1) + (2) the performance bottleneck has, however,\nchanged a lot (as shown here by the performance monitor):\n\nNow, the test system is definitly disk bound. Roughly\nspeaking, at the middle of the whole test, for about 40-50%\nof the time, the 'data' disk was at 100% (and the 'WAL' at\n20%), while before and after that the 'WAL' disk had a lot\nof peaks at 100% (and 'data' disk at 30%).\n\nThe average MB/s of the 'data' disk was 40 MB/s (WAL:\n20MB/s) -- while the raw performance is 800MB/40s = 20MB/s,\nso still *half* what the disk does.\n\nSo, this remains as the last open question to me: It seems\nthe data is doubly written to the 'data' disk, although WAL\nis written to the separate 'WAL' disk.\n\n\n\n> > Ok, I've managed to use 8.4 here. Unfortunatelly: There was\n> > nearly no improvement in speed. For example test 2d)\n> > performed in 35s.\n> > \n> \n> With a very small shared_buffers the improvements to\n> Postgres' shared_buffer / checkpoint interaction can not\n> be utilized.\n\nSee above.\n\n\nThank You\n Felix\n\n\n",
"msg_date": "Mon, 25 Jan 2010 15:55:32 +0100",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "\nOn Jan 25, 2010, at 6:55 AM, [email protected] wrote:\n\n> Scott Carey:\n> \n>>> (2) The tests:\n>>> \n>>> Note: The standard speed was about 800MB/40s, so 20MB/s.\n>>> \n>>> \n>>> a)\n>>> What I changed: fsync=off\n>>> Result: 35s, so 5s faster.\n>>> \n>>> \n>>> b) like a) but:\n>>> checkpoint_segments=128 (was 3)\n>>> autovacuum=off\n>>> \n>>> Result: 35s (no change...?!)\n>>> \n>> \n>> yes, more checkpoint_segments will help if your\n>> shared_buffers is larger, it won't do a whole lot\n>> otherwise. Generally, I like to keep these roughly equal\n>> sized as a starting point for any small to medium sized\n>> configuration. So if shared_buffers is 1GB, that takes 64\n>> checkpoint segments to hold for heavy write scenarios.\n> \n> (1)\n> \n> Ok, that's what I tested: 1024 MB shared_buffers, 64\n> checkpoint segments.\n> \n> Unfortunatelly I could not run it on the same hardware\n> anymore: The data is written to a single disk now, not raid\n> anymore. So with the default shared_buffers of 8 MB (?) we\n> should expect 45s for writing the 800 MB. With the large\n> shared_buffers and checkpoints (mentioned above) I got this:\n> \n> 1. run (right after postgres server (re-)start): 28s (!)\n> 2. run: 44s\n> 3. run: 42s\n> \n> So, roughly the same as with small buffers.\n> \n> \n> (2)\n> Then I switched again from 8.2.4 to 8.4.2:\n> \n> 1. run (after server start): 25s.\n> 2. run: 38s\n> 3. run: 38s\n> \n\nIf you expect to typically only run a batch of these large inserts occasionally, hopefully the 25s performance will be what you get. \n\n> So, 8.4 helped a bit over 8.2.\n> \n> \n> (3) All in all\n> \n> By (1) + (2) the performance bottleneck has, however,\n> changed a lot (as shown here by the performance monitor):\n> \n> Now, the test system is definitly disk bound. Roughly\n> speaking, at the middle of the whole test, for about 40-50%\n> of the time, the 'data' disk was at 100% (and the 'WAL' at\n> 20%), while before and after that the 'WAL' disk had a lot\n> of peaks at 100% (and 'data' disk at 30%).\n> \n> The average MB/s of the 'data' disk was 40 MB/s (WAL:\n> 20MB/s) -- while the raw performance is 800MB/40s = 20MB/s,\n> so still *half* what the disk does.\n> \n> So, this remains as the last open question to me: It seems\n> the data is doubly written to the 'data' disk, although WAL\n> is written to the separate 'WAL' disk.\n> \n\nIt appears as though there is clear evidence that the system is writing data twice (excluding WAL). This is where my Postgres knowledge ends and someone else will have to comment. Why would it write the TOAST data twice?\n\n\n",
"msg_date": "Tue, 26 Jan 2010 13:27:24 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
},
{
"msg_contents": "Scott Carey wrote:\n>> #bgwriter_delay=200ms\n>> #bgwriter_lru_percent=1.0\n>> #bgwriter_lru_maxpages=5\n>> #bgwriter_all_percent=0.333\n>> #bgwriter_all_maxpages=5\n>> #checkpoint_segments=3\n>> #checkpoint_timeout=5min\n>> #checkpoint_warning=30s\n>> \n>\n> Check out this for info on these parameters\n> http://wiki.postgresql.org/wiki/User:Gsmith (Is there a better link Greg?)\n> \n\nNope. I started working on that back when I had some hope that it was \npossible to improve the background writer in PostgreSQL 8.2 without \ncompletely gutting it and starting over. The 8.3 development work \nproved that idea was mistaken, which meant historical trivia about how \nthe ineffective 8.2 version worked wasn't worth cleaning up to \npresentation quality anymore. Stuck it on my personal page on the wiki \njust so I didn't lose it and could point at it, never developed into a \nproper article.\n\nGenerally, my advice for people running 8.2 is to turn the background \nwriter off altogether:\n\nbgwriter_lru_maxpages=0\nbgwriter_all_maxpages=0\n\nBecause what is there by default isn't enough to really work, and if you \ncrank it up enough to do something useful it will waste a lot \nresources. It's possible with careful study to find a useful middle \nground--I know Kevin Grittner accomplished that on their 8.2 install, \nand I did it once in a way that wasn't horrible--but you're unlikely to \njust get one easily.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\n\nScott Carey wrote:\n\n\n#bgwriter_delay=200ms\n#bgwriter_lru_percent=1.0\n#bgwriter_lru_maxpages=5\n#bgwriter_all_percent=0.333\n#bgwriter_all_maxpages=5\n#checkpoint_segments=3\n#checkpoint_timeout=5min\n#checkpoint_warning=30s\n \n\n\nCheck out this for info on these parameters\nhttp://wiki.postgresql.org/wiki/User:Gsmith (Is there a better link Greg?)\n \n\n\nNope. I started working on that back when I had some hope that it was\npossible to improve the background writer in PostgreSQL 8.2 without\ncompletely gutting it and starting over. The 8.3 development work\nproved that idea was mistaken, which meant historical trivia about how\nthe ineffective 8.2 version worked wasn't worth cleaning up to\npresentation quality anymore. Stuck it on my personal page on the wiki\njust so I didn't lose it and could point at it, never developed into a\nproper article.\n\nGenerally, my advice for people running 8.2 is to turn the background\nwriter off altogether:\n\nbgwriter_lru_maxpages=0\nbgwriter_all_maxpages=0\n\nBecause what is there by default isn't enough to really work, and if\nyou crank it up enough to do something useful it will waste a lot\nresources. It's possible with careful study to find a useful middle\nground--I know Kevin Grittner accomplished that on their 8.2 install,\nand I did it once in a way that wasn't horrible--but you're unlikely to\njust get one easily.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Tue, 26 Jan 2010 17:02:15 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inserting 8MB bytea: just 25% of disk perf used?"
}
] |
[
{
"msg_contents": "Hi,\n\n=== Problem ===\n\ni have a db-table \"data_measurand\" with about 60000000 (60 Millions)\nrows and the following query takes about 20-30 seconds (with psql):\n\nmydb=# select count(*) from data_measurand;\n count \n----------\n 60846187\n(1 row)\n\n\n=== Question ===\n\n- What can i do to improve the performance for the data_measurand table?\n \n=== Background ===\n\nI created a application with django 1.1 ( http://djangoproject.com ) to\ncollect, analyze and visualize measurement data.\n\n=== My System ===\n\n= Postgres Version =\npostgres=# select version();\nversion \n---------------------------------------------------------------------\n PostgreSQL 8.3.9 on i486-pc-linux-gnu, compiled by GCC gcc-4.3.real\n(Debian 4.3.2-1.1) 4.3.2\n(1 row)\n\nI installed postgres with apt-get from debian lenny without any\nmodifications.\n\n= Debian Lenny Kernel Version =\nlenny:~# uname -a\nLinux or.ammonit.com 2.6.26-2-686-bigmem #1 SMP Wed Nov 4 21:12:12 UTC\n2009 i686 GNU/Linux\n\n= Hardware = \nmodel name\t: AMD Athlon(tm) 64 X2 Dual Core Processor 6000+\ncpu MHz\t\t: 1000.000\ncache size\t: 512 KB\nMemTotal\t: 8281516 kB (8 GB)\n\nI use a software raid and LVM for Logical Volume Management. Filesystem\nis ext3\n\n\n\n=== My Table Definitions ===\n\nmydb=# \\d data_measurand;\n Table \"public.data_measurand\"\n Column | Type |\nModifiers \n-----------------+------------------------+-------------------------------------------------------------\n id | integer | not null default\nnextval('data_measurand_id_seq'::regclass)\n entry_id | integer | not null\n sensor_id | integer | not null\n avg_value | numeric(10,4) | \n avg_count_value | integer | \n min_value | numeric(10,4) | \n max_value | numeric(10,4) | \n sigma_value | numeric(10,4) | \n unit | character varying(20) | not null\n status | integer | not null\n comment | character varying(255) | not null\nIndexes:\n \"data_measurand_pkey\" PRIMARY KEY, btree (id)\n \"data_measurand_entry_id_68e2e3fe\" UNIQUE, btree (entry_id,\nsensor_id)\n \"data_measurand_avg_count_value\" btree (avg_count_value)\n \"data_measurand_avg_value\" btree (avg_value)\n \"data_measurand_comment\" btree (comment)\n \"data_measurand_entry_id\" btree (entry_id)\n \"data_measurand_max_value\" btree (max_value)\n \"data_measurand_min_value\" btree (min_value)\n \"data_measurand_sensor_id\" btree (sensor_id)\n \"data_measurand_sigma_value\" btree (sigma_value)\n \"data_measurand_status\" btree (status)\n \"data_measurand_unit\" btree (unit)\nForeign-key constraints:\n \"entry_id_refs_id_50fa9bdf\" FOREIGN KEY (entry_id) REFERENCES\ndata_entry(id) DEFERRABLE INITIALLY DEFERRED\n \"sensor_id_refs_id_5ed84c7c\" FOREIGN KEY (sensor_id) REFERENCES\nsensor_sensor(id) DEFERRABLE INITIALLY DEFERRED\n\n\n\nmydb=# \\d data_entry;\n Table \"public.data_entry\"\n Column | Type |\nModifiers \n------------------+--------------------------+---------------------------------------------------------\n id | integer | not null default\nnextval('data_entry_id_seq'::regclass)\n project_id | integer | not null\n logger_id | integer | not null\n original_file_id | integer | not null\n datetime | timestamp with time zone | not null\nIndexes:\n \"data_entry_pkey\" PRIMARY KEY, btree (id)\n \"data_entry_logger_id_197f5d41\" UNIQUE, btree (logger_id, datetime)\n \"data_entry_datetime\" btree (datetime)\n \"data_entry_logger_id\" btree (logger_id)\n \"data_entry_original_file_id\" btree (original_file_id)\n \"data_entry_project_id\" btree (project_id)\nForeign-key constraints:\n 
\"logger_id_refs_id_5f73cf46\" FOREIGN KEY (logger_id) REFERENCES\nlogger_logger(id) DEFERRABLE INITIALLY DEFERRED\n \"original_file_id_refs_id_44e8d3b1\" FOREIGN KEY (original_file_id)\nREFERENCES data_originalfile(id) DEFERRABLE INITIALLY DEFERRED\n \"project_id_refs_id_719fb302\" FOREIGN KEY (project_id) REFERENCES\nproject_project(id) DEFERRABLE INITIALLY DEFERRED\n\n\n\nmydb=# \\d project_project;\n Table \"public.project_project\"\n Column | Type |\nModifiers \n---------------+------------------------+--------------------------------------------------------------\n id | integer | not null default\nnextval('project_project_id_seq'::regclass)\n auth_group_id | integer | not null\n name | character varying(200) | not null\n timezone | character varying(200) | \n longitude | double precision | \n latitude | double precision | \n altitude | double precision | \n comment | text | \nIndexes:\n \"project_project_pkey\" PRIMARY KEY, btree (id)\n \"project_project_auth_group_id\" btree (auth_group_id)\nForeign-key constraints:\n \"auth_group_id_refs_id_267c7fe5\" FOREIGN KEY (auth_group_id)\nREFERENCES auth_group(id) DEFERRABLE INITIALLY DEFERRED\n\n\n\nmydb=# \\d logger_logger;\n Table \"public.logger_logger\"\n Column | Type |\nModifiers \n-----------------------+--------------------------+------------------------------------------------------------\n id | integer | not null default\nnextval('logger_logger_id_seq'::regclass)\n auth_group_id | integer | not null\n project_id | integer | \n serial | character varying(50) | not null\n type | character varying(30) | not null\n comment | text | \n last_email | timestamp with time zone | \n last_checked_datetime | timestamp with time zone | \nIndexes:\n \"logger_logger_pkey\" PRIMARY KEY, btree (id)\n \"logger_logger_serial_key\" UNIQUE, btree (serial)\n \"logger_logger_auth_group_id\" btree (auth_group_id)\n \"logger_logger_last_checked_datetime\" btree (last_checked_datetime)\n \"logger_logger_project_id\" btree (project_id)\nForeign-key constraints:\n \"auth_group_id_refs_id_355ed859\" FOREIGN KEY (auth_group_id)\nREFERENCES auth_group(id) DEFERRABLE INITIALLY DEFERRED\n \"project_id_refs_id_5f4a56f3\" FOREIGN KEY (project_id) REFERENCES\nproject_project(id) DEFERRABLE INITIALLY DEFERRED\n\n\n\n\nI hope that's enough information. \n\nCheers Tom\n\n",
"msg_date": "Thu, 14 Jan 2010 15:58:44 +0100",
"msg_from": "tom <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow \"Select count(*) ...\" query on table with 60 Mio. rows"
},
{
"msg_contents": "On Thu, 14 Jan 2010, tom wrote:\n> i have a db-table \"data_measurand\" with about 60000000 (60 Millions)\n> rows and the following query takes about 20-30 seconds (with psql):\n>\n> mydb=# select count(*) from data_measurand;\n> count\n> ----------\n> 60846187\n> (1 row)\n\nSounds pretty reasonable to me. Looking at your table, the rows are maybe \n200 bytes wide? That's 12GB of data for Postgres to munch through. 30 \nseconds is really rather quick for that (400MB/s). What sort of RAID array \nis managing to give you that much?\n\n> I use a software raid and LVM for Logical Volume Management. Filesystem\n> is ext3\n\nDitch lvm.\n\n\nThis is an FAQ. Counting the rows in a table is an expensive operation in \nPostgres. It can't be answered directly from an index. If you want, you \ncan keep track of the number of rows yourself with triggers, but beware \nthat this will slow down write access to the table.\n\nMatthew\n\n-- \n Nog: Look! They've made me into an ensign!\n O'Brien: I didn't know things were going so badly.\n Nog: Frightening, isn't it?\n",
"msg_date": "Thu, 14 Jan 2010 15:11:39 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with 60 Mio.\n rows"
},
{
"msg_contents": "In response to tom :\n> Hi,\n> \n> === Problem ===\n> \n> i have a db-table \"data_measurand\" with about 60000000 (60 Millions)\n> rows and the following query takes about 20-30 seconds (with psql):\n> \n> mydb=# select count(*) from data_measurand;\n> count \n> ----------\n> 60846187\n> (1 row)\n> \n> \n> === Question ===\n> \n> - What can i do to improve the performance for the data_measurand table?\n\nShort answer: nothing.\n\nLong answer: PG has to check the visibility for each record, so it\nforces a seq.scan.\n\nBut you can get an estimation, ask pg_class (a system table), the column\nreltuples there contains an estimated row rount.\nhttp://www.postgresql.org/docs/current/static/catalog-pg-class.html\n\nIf you really needs the correct row-count you should create a TRIGGER\nand count with this trigger all INSERTs and DELETEs.\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 14 Jan 2010 16:18:14 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with 60 Mio. rows"
}
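A short sketch of the two approaches mentioned in this thread; the counter table, function, and trigger names are made up for illustration:

    -- (a) cheap estimate from the catalog, only as fresh as the last ANALYZE/VACUUM
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'data_measurand';

    -- (b) exact count maintained by a trigger
    CREATE TABLE rowcount (tablename text PRIMARY KEY, cnt bigint NOT NULL);
    INSERT INTO rowcount SELECT 'data_measurand', count(*) FROM data_measurand;

    CREATE OR REPLACE FUNCTION track_rowcount() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE rowcount SET cnt = cnt + 1 WHERE tablename = TG_TABLE_NAME;
        ELSE
            UPDATE rowcount SET cnt = cnt - 1 WHERE tablename = TG_TABLE_NAME;
        END IF;
        RETURN NULL;
    END $$ LANGUAGE plpgsql;

    CREATE TRIGGER data_measurand_rowcount
        AFTER INSERT OR DELETE ON data_measurand
        FOR EACH ROW EXECUTE PROCEDURE track_rowcount();

    SELECT cnt FROM rowcount WHERE tablename = 'data_measurand';

As Matthew warns, option (b) slows down every write (and all writers update the same counter row), so the estimate in (a) is usually the better trade-off unless an exact number is really required.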
] |
[
{
"msg_contents": "Matthew Wakeling <[email protected]> wrote:\n \n> This is an FAQ.\n \nI just added it to the wiki FAQ page:\n \nhttp://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F\n \n-Kevin\n\n",
"msg_date": "Thu, 14 Jan 2010 09:47:53 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\n\t 60 Mio. rows"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Matthew Wakeling <[email protected]> wrote:\n> \n>> This is an FAQ.\n> \n> I just added it to the wiki FAQ page:\n> \n> http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F\n\nMaybe you could add a short note why an estimation like from the \npg_class table is usually enough.\n\n",
"msg_date": "Thu, 14 Jan 2010 16:59:57 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with 60 Mio. rows"
},
{
"msg_contents": "Ivan Voras <[email protected]> wrote:\n \n> Maybe you could add a short note why an estimation like from the \n> pg_class table is usually enough.\n \nOK. Will do.\n \n-Kevin\n",
"msg_date": "Thu, 14 Jan 2010 10:03:16 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with \n\t 60 Mio. rows"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Matthew Wakeling <[email protected]> wrote:\n> \n> \n>> This is an FAQ.\n>> \n> \n> I just added it to the wiki FAQ page:\n> \n> http://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F\n> \nThe content was already there, just not linked into the main FAQ yet: \nhttp://wiki.postgresql.org/wiki/Slow_Counting\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nKevin Grittner wrote:\n\nMatthew Wakeling <[email protected]> wrote:\n \n \n\nThis is an FAQ.\n \n\n \nI just added it to the wiki FAQ page:\n \nhttp://wiki.postgresql.org/wiki/FAQ#Why_is_.22SELECT_count.28.2A.29_FROM_bigtable.3B.22_slow.3F\n \n\nThe content was already there, just not linked into the main FAQ yet: \nhttp://wiki.postgresql.org/wiki/Slow_Counting\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Thu, 14 Jan 2010 13:01:50 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\t 60\n Mio. rows"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> The content was already there, just not linked into the main FAQ\n> yet: \n> http://wiki.postgresql.org/wiki/Slow_Counting\n \nFor a question asked this frequently, it should probably be in the\nFAQ. I'll add a link from there to the more thorough write-up.\n \n-Kevin\n",
"msg_date": "Thu, 14 Jan 2010 12:12:17 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\t\n\t 60 Mio. rows"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Greg Smith <[email protected]> wrote:\n> \n> \n>> The content was already there, just not linked into the main FAQ\n>> yet: \n>> http://wiki.postgresql.org/wiki/Slow_Counting\n>> \n> \n> For a question asked this frequently, it should probably be in the\n> FAQ. I'll add a link from there to the more thorough write-up.\n> \n\nThere's a whole list of FAQs that are documented on the wiki but not in \nthe main FAQ yet leftover from before the main FAQ was hosted there. You \ncan see them all at \nhttp://wiki.postgresql.org/wiki/Frequently_Asked_Questions\n\nI just haven't had time to merge those all usefully into the main FAQ.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nKevin Grittner wrote:\n\nGreg Smith <[email protected]> wrote:\n \n \n\nThe content was already there, just not linked into the main FAQ\nyet: \nhttp://wiki.postgresql.org/wiki/Slow_Counting\n \n\n \nFor a question asked this frequently, it should probably be in the\nFAQ. I'll add a link from there to the more thorough write-up.\n \n\n\nThere's a whole list of FAQs that are documented on the wiki but not in\nthe main FAQ yet leftover from before the main FAQ was hosted there. \nYou can see them all at\nhttp://wiki.postgresql.org/wiki/Frequently_Asked_Questions \n\nI just haven't had time to merge those all usefully into the main FAQ.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Thu, 14 Jan 2010 13:25:14 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\t\t 60\n Mio. rows"
},
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> There's a whole list of FAQs that are documented on the wiki but\n> not in the main FAQ yet leftover from before the main FAQ was\n> hosted there. You can see them all at \n> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions\n> \n> I just haven't had time to merge those all usefully into the main\n> FAQ.\n \nWell, unless you object to the way I did it, there's one down. \nShould I remove it from the list of \"Other FAQs\" on the page you\ncite?\n \n(Of course, it goes without saying that you're welcome to improve\nupon anything I put in there.)\n \n-Kevin\n",
"msg_date": "Thu, 14 Jan 2010 12:33:50 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\t\t\n\t 60 Mio. rows"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Greg Smith <[email protected]> wrote:\n> \n> \n>> There's a whole list of FAQs that are documented on the wiki but\n>> not in the main FAQ yet leftover from before the main FAQ was\n>> hosted there. You can see them all at \n>> http://wiki.postgresql.org/wiki/Frequently_Asked_Questions\n>>\n>> I just haven't had time to merge those all usefully into the main\n>> FAQ.\n>> \n> \n> Well, unless you object to the way I did it, there's one down. \n> Should I remove it from the list of \"Other FAQs\" on the page you\n> cite?\n> \n\nSure; everyone should feel free to assimilate into the main FAQ and wipe \nout anything on that smaller list. Those are mainly topics where the \ndiscussion of workarounds and approaches can be much longer than \nstandard FAQ length, so I suspect many of the answers are going to be a \nvery brief summary with a link to longer discussion. If you come across \na really small one, we might even wipe out the original page once it's \nmerged in.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nKevin Grittner wrote:\n\nGreg Smith <[email protected]> wrote:\n \n \n\nThere's a whole list of FAQs that are documented on the wiki but\nnot in the main FAQ yet leftover from before the main FAQ was\nhosted there. You can see them all at \nhttp://wiki.postgresql.org/wiki/Frequently_Asked_Questions\n\nI just haven't had time to merge those all usefully into the main\nFAQ.\n \n\n \nWell, unless you object to the way I did it, there's one down. \nShould I remove it from the list of \"Other FAQs\" on the page you\ncite?\n \n\n\nSure; everyone should feel free to assimilate into the main FAQ and\nwipe out anything on that smaller list. Those are mainly topics where\nthe discussion of workarounds and approaches can be much longer than\nstandard FAQ length, so I suspect many of the answers are going to be a\nvery brief summary with a link to longer discussion. If you come\nacross a really small one, we might even wipe out the original page\nonce it's merged in.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Thu, 14 Jan 2010 13:49:58 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow \"Select count(*) ...\" query on table with\t\t\t 60\n Mio. rows"
}
] |
[
{
"msg_contents": "Hi,\n\nversion: 8.4.2\n\n\nI have a table called values:\n\ntest=*# \\d values\n Table \"public.values\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n value | real |\nIndexes:\n \"idx_id\" btree (id)\n\nThe table contains 100000 random rows and is analysed.\n\n\n\nAnd i have 2 queries, both returns the same result:\n\ntest=*# explain analyse select id, avg(value) over (partition by value) from values where id = 50 order by id;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n WindowAgg (cost=531.12..549.02 rows=1023 width=8) (actual time=2.032..4.165 rows=942 loops=1) \n -> Sort (cost=531.12..533.68 rows=1023 width=8) (actual time=2.021..2.270 rows=942 loops=1) \n Sort Key: value \n Sort Method: quicksort Memory: 53kB \n -> Bitmap Heap Scan on \"values\" (cost=24.19..479.98 rows=1023 width=8) (actual time=0.269..1.167 rows=942 loops=1) \n Recheck Cond: (id = 50) \n -> Bitmap Index Scan on idx_id (cost=0.00..23.93 rows=1023 width=0) (actual time=0.202..0.202 rows=942 loops=1) \n Index Cond: (id = 50) \n Total runtime: 4.454 ms \n(9 rows) \n\nTime: 4.859 ms\ntest=*# explain analyse select * from (select id, avg(value) over (partition by value) from values order by id) foo where id = 50;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan foo (cost=22539.64..24039.64 rows=500 width=12) (actual time=677.196..722.975 rows=942 loops=1) \n Filter: (foo.id = 50) \n -> Sort (cost=22539.64..22789.64 rows=100000 width=8) (actual time=631.991..690.411 rows=100000 loops=1) \n Sort Key: \"values\".id \n Sort Method: external merge Disk: 2528kB \n -> WindowAgg (cost=11116.32..12866.32 rows=100000 width=8) (actual time=207.462..479.330 rows=100000 loops=1) \n -> Sort (cost=11116.32..11366.32 rows=100000 width=8) (actual time=207.442..281.546 rows=100000 loops=1) \n Sort Key: \"values\".value \n Sort Method: external merge Disk: 1752kB \n -> Seq Scan on \"values\" (cost=0.00..1443.00 rows=100000 width=8) (actual time=0.010..29.742 rows=100000 loops=1) \n Total runtime: 725.362 ms \n(11 rows) \n\n\nNo question, this is a silly query, but the problem is the 2nd query: it\nis obviously not possible for the planner to put the where-condition\ninto the subquery. That's bad if i want to create a view:\n\ntest=*# create view view_values as select id, avg(value) over (partition by value) from values order by id;\nCREATE VIEW\nTime: 41.280 ms\ntest=*# commit;\nCOMMIT\nTime: 0.514 ms\ntest=# explain analyse select * from view_values where id=50;\n\nIt is the same bad plan with the Seq Scan on \"values\".\n\n\nIs this a bug or PEBKAC or something else?\n\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Thu, 14 Jan 2010 18:03:18 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad execution plan for subselects containing windowing-function"
},
{
"msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> No question, this is a silly query, but the problem is the 2nd query: it\n> is obviously not possible for the planner to put the where-condition\n> into the subquery.\n\nWell, yeah: it might change the results of the window functions.\nI see no bug here. Your second query asks for a much more complicated\ncomputation, it's not surprising it takes longer.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Jan 2010 12:15:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad execution plan for subselects containing windowing-function "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> Andreas Kretschmer <[email protected]> writes:\n> > No question, this is a silly query, but the problem is the 2nd query: it\n> > is obviously not possible for the planner to put the where-condition\n> > into the subquery.\n> \n> Well, yeah: it might change the results of the window functions.\n> I see no bug here. Your second query asks for a much more complicated\n> computation, it's not surprising it takes longer.\n\nThank you for the fast answer.\n\nBut sorry, I disagree. It is the same query with the same result. I can't see\nhow the queries should return different results.\n\nWhat have i overlooked?\n\n\ntia, Andreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Thu, 14 Jan 2010 18:30:25 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad execution plan for subselects containing\n\twindowing-function"
},
{
"msg_contents": "Andreas Kretschmer <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> I see no bug here. Your second query asks for a much more complicated\n>> computation, it's not surprising it takes longer.\n\n> But sorry, I disagree. It is the same query with the same result. I can't see\n> how the queries should return different results.\n\nIn the first query\n\nselect id, avg(value) over (partition by value) from values where id = 50 order by id;\n\nthe avg() calculations are being done over only rows with id = 50. In\nthe second query\n\nselect * from (select id, avg(value) over (partition by value) from values order by id) foo where id = 50;\n\nthey are being done over all rows. In this particular example you\nhappen to get the same result, but that's just because \"avg(foo) over\npartition by foo\" is a dumb example --- it will necessarily just yield\nidentically foo. In more realistic computations the results would be\ndifferent.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 14 Jan 2010 12:42:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad execution plan for subselects containing windowing-function "
},
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n\n> Andreas Kretschmer <[email protected]> writes:\n> > Tom Lane <[email protected]> wrote:\n> >> I see no bug here. Your second query asks for a much more complicated\n> >> computation, it's not surprising it takes longer.\n> \n> > But sorry, I disagree. It is the same query with the same result. I can't see\n> > how the queries should return different results.\n> \n> In the first query\n> \n> select id, avg(value) over (partition by value) from values where id = 50 order by id;\n> \n> the avg() calculations are being done over only rows with id = 50. In\n> the second query\n> \n> select * from (select id, avg(value) over (partition by value) from values order by id) foo where id = 50;\n> \n> they are being done over all rows. In this particular example you\n> happen to get the same result, but that's just because \"avg(foo) over\n> partition by foo\" is a dumb example --- it will necessarily just yield\n> identically foo. In more realistic computations the results would be\n> different.\n\nOkay, i believe you now ;-)\n\nI will try to find a case with different results ...\n\nThx for your fast help!\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Thu, 14 Jan 2010 19:03:49 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad execution plan for subselects containing\n\twindowing-function"
},
{
"msg_contents": "Andreas Kretschmer <[email protected]> wrote:\n> > they are being done over all rows. In this particular example you\n> > happen to get the same result, but that's just because \"avg(foo) over\n> > partition by foo\" is a dumb example --- it will necessarily just yield\n> > identically foo. In more realistic computations the results would be\n> > different.\n> \n> Okay, i believe you now ;-)\n> \n> I will try to find a case with different results ...\n\nI have got it!\n\n\ntest=# select * from values;\n id | value\n----+-------\n 1 | 10\n 2 | 20\n 3 | 30\n 4 | 40\n 5 | 50\n 6 | 60\n 7 | 70\n 8 | 80\n 9 | 90\n(9 rows)\n\nTime: 0.240 ms\ntest=*# select id, sum(value) over (order by id) from values where id = 5;\n id | sum\n----+-----\n 5 | 50\n(1 row)\n\nTime: 0.352 ms\ntest=*# select * from (select id, sum(value) over (order by id) from values) foo where id = 5;\n id | sum\n----+-----\n 5 | 150\n(1 row)\n\nTime: 0.383 ms\n\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n",
"msg_date": "Thu, 14 Jan 2010 19:31:39 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bad execution plan for subselects containing\n\twindowing-function"
}
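A workaround for the filtered-view case discussed in this thread, sketched on the thread's own test table (the function name and the RETURNS TABLE form are illustrative, not something the posters used): parameterising the query pushes the filter inside, so the window function only sees the requested rows and the index on id can be used, just like the fast form of the query.

    -- A sketch, not from the thread: apply the filter before the window function runs.
    CREATE FUNCTION values_with_avg(integer)
    RETURNS TABLE (id integer, avg_value double precision) AS $$
        SELECT v.id, avg(v.value) OVER (PARTITION BY v.value)
        FROM "values" v
        WHERE v.id = $1      -- $1 notation keeps this valid for 8.4 SQL functions
        ORDER BY v.id;
    $$ LANGUAGE sql STABLE;

    -- Usage: SELECT * FROM values_with_avg(50);

The trade-off is that the filter becomes part of the call signature instead of an arbitrary WHERE clause on a view, which is exactly the guarantee the planner lacks in the subquery form.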
] |
[
{
"msg_contents": "Hi all,\n \nI've just received this new server:\n1 x XEON 5520 Quad Core w/ HT\n8 GB RAM 1066 MHz\n16 x SATA II Seagate Barracuda 7200.12\n3ware 9650SE w/ 256MB BBU\n \nIt will run an Ubuntu 8.04 LTS Postgres 8.4 dedicated server. Its database\nwill be getting between 100 and 1000 inserts per second (those are call\ndetail records of ~300 bytes each) of around 20 clients (voip gateways).\nOther activity is mostly read-only and some non time-critical writes\ngenerally at off peak hours.\n \nSo my first choice was:\n \n2 discs in RAID 1 for OS + pg_xlog partitioned with ext2.\n12 discs in RAID 10 for postgres data, sole partition with ext3.\n2 spares\n \n \nMy second choice is:\n \n4 discs in RAID 10 for OS + pg_xlog partitioned with ext2\n10 discs in RAID 10 for postgres, ext3\n2 spares.\n \nThe bbu caché will be enabled for both raid volumes.\n \nI justified my first choice in that WAL writes are sequentially and OS\npretty much are too, so a RAID 1 probably would hold ground against a 12\ndisc RAID 10 with random writes.\n \nI don't know in advance if I will manage to gather enough time to try out\nboth setups so I wanted to know what you guys think of these 2 alternatives.\nDo you think a single RAID 1 will become a bottleneck? Feel free to suggest\na better setup I hadn't considered, it would be most welcome.\n \nPd: any clue if hdparm works to deactive the disks write cache even if they\nare behind the 3ware controller?\n \nRegards,\nFernando.\n \n\n\n\n\n\nHi \nall,\n \nI've just \nreceived this new server:\n1 x XEON 5520 Quad \nCore w/ HT\n8 GB RAM 1066 \nMHz\n16 x SATA II Seagate \nBarracuda 7200.12\n3ware 9650SE w/ \n256MB BBU\n \nIt will run an \nUbuntu 8.04 LTS Postgres 8.4 dedicated server. Its database will \nbe getting between 100 and 1000 inserts per second (those are call detail \nrecords of ~300 bytes each) of around 20 clients (voip gateways). Other activity \nis mostly read-only and some non time-critical writes generally at off peak \nhours.\n \nSo \nmy \nfirst choice was:\n \n2 discs in RAID \n1 for OS + pg_xlog partitioned with ext2.\n12 discs in RAID 10 \nfor postgres data, sole partition with ext3.\n2 \nspares\n \n \nMy second choice \nis:\n \n4 discs in RAID 10 \nfor OS + pg_xlog partitioned with ext2\n10 discs in RAID 10 \nfor postgres, ext3\n2 \nspares.\n \nThe bbu caché will \nbe enabled for both raid volumes.\n \nI justified my first \nchoice in that WAL writes are sequentially and OS pretty much are too, so a \nRAID 1 probably would hold ground against a 12 disc RAID 10 with random \nwrites.\n \nI don't know in \nadvance if I will manage to gather enough time to try out both setups so I \nwanted to know what you guys think of these 2 alternatives. Do you think a \nsingle RAID 1 will become a bottleneck? Feel free to suggest a better setup \nI hadn't considered, it would be most welcome.\n \nPd: any clue if \nhdparm works to deactive the disks write cache even if they are behind the \n3ware controller?\n \nRegards,\nFernando.",
"msg_date": "Thu, 14 Jan 2010 17:03:52 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "new server I/O setup"
},
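A rough back-of-envelope check of the write volume described above, taking the stated worst case at face value (per-row WAL overhead and checkpoint traffic are not included and would add a small multiple): 1000 records/s × ~300 bytes is roughly 0.3 MB/s of row data, far below the sequential write rate a mirrored pair of 7200 rpm drives can sustain. The real question for the RAID 1 pair is therefore fsync latency under a high commit rate rather than raw bandwidth, which is where the battery-backed write cache discussed in the replies comes in.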
{
"msg_contents": "On Thu, Jan 14, 2010 at 1:03 PM, Fernando Hevia <[email protected]> wrote:\n> Hi all,\n>\n> I've just received this new server:\n> 1 x XEON 5520 Quad Core w/ HT\n> 8 GB RAM 1066 MHz\n> 16 x SATA II Seagate Barracuda 7200.12\n> 3ware 9650SE w/ 256MB BBU\n>\n> It will run an Ubuntu 8.04 LTS Postgres 8.4 dedicated server. Its database\n> will be getting between 100 and 1000 inserts per second (those are call\n> detail records of ~300 bytes each) of around 20 clients (voip gateways).\n> Other activity is mostly read-only and some non time-critical writes\n> generally at off peak hours.\n>\n> So my first choice was:\n>\n> 2 discs in RAID 1 for OS + pg_xlog partitioned with ext2.\n> 12 discs in RAID 10 for postgres data, sole partition with ext3.\n> 2 spares\n>\n>\n> My second choice is:\n>\n> 4 discs in RAID 10 for OS + pg_xlog partitioned with ext2\n> 10 discs in RAID 10 for postgres, ext3\n> 2 spares.\n>\n> The bbu caché will be enabled for both raid volumes.\n>\n> I justified my first choice in that WAL writes are sequentially and OS\n> pretty much are too, so a RAID 1 probably would hold ground against a 12\n> disc RAID 10 with random writes.\n\nI think your first choice is right. I use the same basic setup with\n147G 15k5 SAS seagate drives and the pg_xlog / OS partition is almost\nnever close to the same level of utilization, according to iostat, as\nthe main 12 disk RAID-10 array is. We may have to buy a 16 disk array\nto keep up with load, and it would be all main data storage, and our\npg_xlog main drive pair would be just fine.\n\n> I don't know in advance if I will manage to gather enough time to try out\n> both setups so I wanted to know what you guys think of these 2\n> alternatives. Do you think a single RAID 1 will become a bottleneck? Feel\n> free to suggest a better setup I hadn't considered, it would be most\n> welcome.\n\nFor 12 disks, most likely not. Especially since your load is mostly\nsmall randomish writes, not a bunch of big multi-megabyte records or\nanything, so the random access performance on the 12 disk RAID-10\nshould be your limiting factor.\n\n> Pd: any clue if hdparm works to deactive the disks write cache even if they\n> are behind the 3ware controller?\n\nNot sure, but I'm pretty sure the 3ware card already does the right\nthing and turns off the write caching.\n",
"msg_date": "Thu, 14 Jan 2010 14:22:09 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": "Fernando Hevia wrote:\n> I justified my first choice in that WAL writes are sequentially and OS \n> pretty much are too, so a RAID 1 probably would hold ground against a \n> 12 disc RAID 10 with random writes.\n\nThe problem with this theory is that when PostgreSQL does WAL writes and \nasks to sync the data, you'll probably discover all of the open OS \nwrites that were sitting in the Linux write cache getting flushed before \nthat happens. And that could lead to horrible performance--good luck if \nthe database tries to do something after cron kicks off updatedb each \nnight for example.\n\nI think there are two viable configurations you should be considering \nyou haven't thought about:\n, but neither is quite what you're looking at:\n\n2 discs in RAID 1 for OS\n2 discs in RAID 1 for pg_xlog\n10 discs in RAID 10 for postgres, ext3\n2 spares.\n\n14 discs in RAID 10 for everything\n2 spares.\n\nImpossible to say which of the four possibilities here will work out \nbetter. I tend to lean toward the first one I listed above because it \nmakes it very easy to monitor the pg_xlog activity (and the non-database \nactivity) separately from everything else, and having no other writes \ngoing on makes it very unlikely that the pg_xlog will ever become a \nbottleneck. But if you've got 14 disks in there, it's unlikely to be a \nbottleneck anyway. The second config above will get you slightly better \nrandom I/O though, so for workloads that are really limited on that \nthere's a good reason to prefer it.\n\nAlso: the whole \"use ext2 for the pg_xlog\" idea is overrated far as I'm \nconcerned. I start with ext3, and only if I get evidence that the drive \nis a bottleneck do I ever think of reverting to unjournaled writes just \nto get a little speed boost. In practice I suspect you'll see no \nbenchmark difference, and will instead curse the decision the first time \nyour server is restarted badly and it gets stuck at fsck.\n\n> Pd: any clue if hdparm works to deactive the disks write cache even if \n> they are behind the 3ware controller?\n\nYou don't use hdparm for that sort of thing; you need to use 3ware's \ntw_cli utility. I believe that the individual drive caches are always \ndisabled, but whether the controller cache is turned on or not depends \non whether the card has a battery. The behavior here is kind of weird \nthough--it changes if you're in RAID mode vs. JBOD mode, so be careful \nto look at what all the settings are. Some of these 3ware cards default \nto extremely aggressive background scanning for bad blocks too, you \nmight have to tweak that downward too.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nFernando Hevia wrote:\n\n\n\nI\njustified my first choice in that WAL writes are sequentially and OS\npretty much are too, so a RAID 1 probably would hold ground against a\n12 disc RAID 10 with random writes.\n\n\nThe problem with this theory is that when PostgreSQL does WAL writes\nand asks to sync the data, you'll probably discover all of the open OS\nwrites that were sitting in the Linux write cache getting flushed\nbefore that happens. 
And that could lead to horrible performance--good\nluck if the database tries to do something after cron kicks off\nupdatedb each night for example.\n\nI think there are two viable configurations you should be considering\nyou haven't thought about:\n, but neither is quite what you're looking at:\n\n2 discs in RAID 1 for OS\n2 discs in RAID 1 for pg_xlog\n10 discs in RAID 10 for postgres, ext3\n2 spares.\n\n14 discs in RAID 10 for everything\n2 spares.\n\nImpossible to say which of the four possibilities here will work out\nbetter. I tend to lean toward the first one I listed above because it\nmakes it very easy to monitor the pg_xlog activity (and the\nnon-database activity) separately from everything else, and having no\nother writes going on makes it very unlikely that the pg_xlog will ever\nbecome a bottleneck. But if you've got 14 disks in there, it's\nunlikely to be a bottleneck anyway. The second config above will get\nyou slightly better random I/O though, so for workloads that are really\nlimited on that there's a good reason to prefer it.\n\nAlso: the whole \"use ext2 for the pg_xlog\" idea is overrated far as\nI'm concerned. I start with ext3, and only if I get evidence that the\ndrive is a bottleneck do I ever think of reverting to unjournaled\nwrites just to get a little speed boost. In practice I suspect you'll\nsee no benchmark difference, and will instead curse the decision the\nfirst time your server is restarted badly and it gets stuck at fsck.\n\n\nPd:\nany clue if hdparm works to deactive the disks write cache even if they\nare behind the 3ware controller?\n\n\n\nYou don't use hdparm for that sort of thing; you need to use 3ware's\ntw_cli utility. I believe that the individual drive caches are always\ndisabled, but whether the controller cache is turned on or not depends\non whether the card has a battery. The behavior here is kind of weird\nthough--it changes if you're in RAID mode vs. JBOD mode, so be careful\nto look at what all the settings are. Some of these 3ware cards\ndefault to extremely aggressive background scanning for bad blocks too,\nyou might have to tweak that downward too.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Fri, 15 Jan 2010 00:46:26 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": "On Thu, 14 Jan 2010, Scott Marlowe wrote:\n>> I've just received this new server:\n>> 1 x XEON 5520 Quad Core w/ HT\n>> 8 GB RAM 1066 MHz\n>> 16 x SATA II Seagate Barracuda 7200.12\n>> 3ware 9650SE w/ 256MB BBU\n>>\n>> 2 discs in RAID 1 for OS + pg_xlog partitioned with ext2.\n>> 12 discs in RAID 10 for postgres data, sole partition with ext3.\n>> 2 spares\n>\n> I think your first choice is right. I use the same basic setup with\n> 147G 15k5 SAS seagate drives and the pg_xlog / OS partition is almost\n> never close to the same level of utilization, according to iostat, as\n> the main 12 disk RAID-10 array is. We may have to buy a 16 disk array\n> to keep up with load, and it would be all main data storage, and our\n> pg_xlog main drive pair would be just fine.\n\nThe benefits of splitting off a couple of discs for WAL are dubious given \nthe BBU cache, given that the cache will convert the frequent fsyncs to \nsequential writes anyway. My advice would be to test the difference. If \nthe bottleneck is random writes on the 12-disc array, then it may actually \nhelp more to improve that to a 14-disc array instead.\n\nI'd also question whether you need two hot spares, with RAID-10. Obviously \nthat's a judgement call only you can make, but you could consider whether \nit is sufficient to just have a spare disc sitting on a shelf next to the \nserver rather than using up a slot in the server. Depends on how quickly \nyou can get to the server on failure, and how important the data is.\n\nMatthew\n\n-- \n In the beginning was the word, and the word was unsigned,\n and the main() {} was without form and void...",
"msg_date": "Fri, 15 Jan 2010 11:21:28 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Scott Marlowe \n> \n> I think your first choice is right. I use the same basic \n> setup with 147G 15k5 SAS seagate drives and the pg_xlog / OS \n> partition is almost never close to the same level of \n> utilization, according to iostat, as the main 12 disk RAID-10 \n> array is. We may have to buy a 16 disk array to keep up with \n> load, and it would be all main data storage, and our pg_xlog \n> main drive pair would be just fine.\n> \n\n\n> > Do you think a single RAID 1 will become a \n> bottleneck? \n> > Feel free to suggest a better setup I hadn't considered, it \n> would be \n> > most welcome.\n> \n> For 12 disks, most likely not. Especially since your load is \n> mostly small randomish writes, not a bunch of big \n> multi-megabyte records or anything, so the random access \n> performance on the 12 disk RAID-10 should be your limiting factor.\n> \n\nGood to know this setup has been tryied succesfully. \nThanks for the comments.\n\n",
"msg_date": "Fri, 15 Jan 2010 12:49:00 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Greg Smith\n> \n>> Fernando Hevia wrote: \n>> \n>> \tI justified my first choice in that WAL writes are \n>> sequentially and OS pretty much are too, so a RAID 1 probably \n>> would hold ground against a 12 disc RAID 10 with random writes.\n>> \n> \n> The problem with this theory is that when PostgreSQL does WAL \n> writes and asks to sync the data, you'll probably discover \n> all of the open OS writes that were sitting in the Linux \n> write cache getting flushed before that happens. And that \n> could lead to horrible performance--good luck if the database \n> tries to do something after cron kicks off updatedb each \n> night for example.\n> \n\nI actually hadn't considered such a scenario. It probably wont hit us\nbecause our real-time activity diminishes abruptly overnight when\nmaintainance routines kick in.\nBut in case this proves to be an issue, disabling synchronous_commit should\nhelp out, and thanks to the BBU cache the risk of lost transactions should\nbe very low. In any case I would leave it on till the issue arises. Do you\nagree?\n\nIn our business worst case situation could translate to losing a couple\nseconds worth of call records, all recoverable from secondary storage.\n\n\n> I think there are two viable configurations you should be \n> considering you haven't thought about:\n> , but neither is quite what you're looking at:\n> \n> 2 discs in RAID 1 for OS\n> 2 discs in RAID 1 for pg_xlog\n> 10 discs in RAID 10 for postgres, ext3\n> 2 spares.\n> \n> 14 discs in RAID 10 for everything\n> 2 spares.\n> \n> Impossible to say which of the four possibilities here will \n> work out better. I tend to lean toward the first one I \n> listed above because it makes it very easy to monitor the \n> pg_xlog activity (and the non-database activity) separately \n> from everything else, and having no other writes going on \n> makes it very unlikely that the pg_xlog will ever become a \n> bottleneck. But if you've got 14 disks in there, it's \n> unlikely to be a bottleneck anyway. The second config above \n> will get you slightly better random I/O though, so for \n> workloads that are really limited on that there's a good \n> reason to prefer it.\n> \n\nBeside the random writing, we have quite intensive random reads too. I need\nto maximize throughput on the RAID 10 array and it makes me feel rather\nuneasy the thought of taking 2 more disks from it.\nI did consider the 14 disks RAID 10 for all since it's very attractive for\nread I/O. But with 12 spins read I/O should be incredibly fast for us\nconsidering our current production server has a meager 4 disk raid 10.\nI still think the 2d RAID 1 + 12d RAID 10 will be the best combination for\nwrite throughput, providing the RAID 1 can keep pace with the RAID 10,\nsomething Scott already confirmed to be his experience.\n\n> Also: the whole \"use ext2 for the pg_xlog\" idea is overrated \n> far as I'm concerned. I start with ext3, and only if I get \n> evidence that the drive is a bottleneck do I ever think of \n> reverting to unjournaled writes just to get a little speed \n> boost. 
In practice I suspect you'll see no benchmark \n> difference, and will instead curse the decision the first \n> time your server is restarted badly and it gets stuck at fsck.\n> \n\nThis advice could be interpreted as \"start safe and take risks only if\nneeded\"\nI think you are right and will follow it.\n\n>> \tPd: any clue if hdparm works to deactive the disks \n>> write cache even if they are behind the 3ware controller?\n>> \t\n> \n> You don't use hdparm for that sort of thing; you need to use \n> 3ware's tw_cli utility. I believe that the individual drive \n> caches are always disabled, but whether the controller cache \n> is turned on or not depends on whether the card has a \n> battery. The behavior here is kind of weird though--it \n> changes if you're in RAID mode vs. JBOD mode, so be careful \n> to look at what all the settings are. Some of these 3ware \n> cards default to extremely aggressive background scanning for \n> bad blocks too, you might have to tweak that downward too.\n> \n\nIt has a battery and it is working in RAID mode. \nIt's also my first experience with a hardware controller. Im installing\ntw_cli at this moment.\n\nGreg, I hold your knowledge in this area in very high regard. \nYour comments are much appreciated.\n\n\nThanks,\nFernando\n\n",
"msg_date": "Fri, 15 Jan 2010 13:51:09 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: new server I/O setup"
},
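Since relaxing synchronous_commit is floated just above as an acceptable trade-off for call detail records, a minimal sketch of how narrowly it can be scoped (the role name is made up; the settings themselves are stock PostgreSQL 8.3+ behaviour):

    -- Per session: only the connections that opt in give up synchronous commit.
    SET synchronous_commit = off;

    -- Or bind it to a dedicated loader role so interactive sessions keep full durability.
    -- 'cdr_loader' is a hypothetical role name.
    ALTER ROLE cdr_loader SET synchronous_commit = off;

A crash can then lose only a short window of the most recently committed asynchronous transactions (on the order of a few times wal_writer_delay) without corrupting the database, which lines up with the "couple seconds worth of call records, recoverable from secondary storage" risk assessment above.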
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Matthew Wakeling [mailto:[email protected]] \n> Enviado el: Viernes, 15 de Enero de 2010 08:21\n> Para: Scott Marlowe\n> CC: Fernando Hevia; [email protected]\n> Asunto: Re: [PERFORM] new server I/O setup\n> \n> On Thu, 14 Jan 2010, Scott Marlowe wrote:\n> >> I've just received this new server:\n> >> 1 x XEON 5520 Quad Core w/ HT\n> >> 8 GB RAM 1066 MHz\n> >> 16 x SATA II Seagate Barracuda 7200.12 3ware 9650SE w/ 256MB BBU\n> >>\n> >> 2 discs in RAID 1 for OS + pg_xlog partitioned with ext2.\n> >> 12 discs in RAID 10 for postgres data, sole partition with ext3.\n> >> 2 spares\n> >\n> > I think your first choice is right. I use the same basic \n> setup with \n> > 147G 15k5 SAS seagate drives and the pg_xlog / OS partition \n> is almost \n> > never close to the same level of utilization, according to \n> iostat, as \n> > the main 12 disk RAID-10 array is. We may have to buy a 16 \n> disk array \n> > to keep up with load, and it would be all main data \n> storage, and our \n> > pg_xlog main drive pair would be just fine.\n> \n> The benefits of splitting off a couple of discs for WAL are \n> dubious given the BBU cache, given that the cache will \n> convert the frequent fsyncs to sequential writes anyway. My \n> advice would be to test the difference. If the bottleneck is \n> random writes on the 12-disc array, then it may actually help \n> more to improve that to a 14-disc array instead.\n\nI am new to the BBU cache benefit and I have a lot to experience and learn.\nHopefully I will have the time to tests both setups.\nI was wondering if disabling the bbu cache on the RAID 1 array would make\nany difference. All 256MB would be available for the random I/O on the RAID\n10.\n\n> \n> I'd also question whether you need two hot spares, with \n> RAID-10. Obviously that's a judgement call only you can make, \n> but you could consider whether it is sufficient to just have \n> a spare disc sitting on a shelf next to the server rather \n> than using up a slot in the server. Depends on how quickly \n> you can get to the server on failure, and how important the data is.\n> \n\nThis is something I havent been able to make my mind since its very painful\nto loose those 2 slots.\nThey could make for the dedicated pg_xlog RAID 1 Greg's suggesting.\nVery tempting, but still think I will start safe for know and see what\nhappens later.\n\nThanks for your hindsight.\n\nRegards,\nFernando.\n\n",
"msg_date": "Fri, 15 Jan 2010 14:04:23 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": "On Fri, 15 Jan 2010, Fernando Hevia wrote:\n> I was wondering if disabling the bbu cache on the RAID 1 array would make\n> any difference. All 256MB would be available for the random I/O on the RAID\n> 10.\n\nThat would be pretty disastrous, to be honest. The benefit of the cache is \nnot only that it smooths random access, but it also accelerates fsync. The \nwhole point of the WAL disc is for it to be able to accept lots of fsyncs \nvery quickly, and it can't do that without its BBU cache.\n\nMatthew\n\n-- \n Heat is work, and work's a curse. All the heat in the universe, it's\n going to cool down, because it can't increase, then there'll be no\n more work, and there'll be perfect peace. -- Michael Flanders\n",
"msg_date": "Fri, 15 Jan 2010 17:15:39 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": "\n\tNo-one has mentioned SSDs yet ?...\n",
"msg_date": "Fri, 15 Jan 2010 19:00:11 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": " \n\n> -----Mensaje original-----\n> De: Pierre Frédéric Caillaud\n> Enviado el: Viernes, 15 de Enero de 2010 15:00\n> Para: [email protected]\n> Asunto: Re: [PERFORM] new server I/O setup\n> \n> \n> \tNo-one has mentioned SSDs yet ?...\n> \n\nThe post is about an already purchased server just delivered to my office. I\nhave been following with interest posts about SSD benchmarking but no SSD\nhave been bought this oportunity and we have no budget to buy them either,\nat least not in the foreseable future.\n\n",
"msg_date": "Fri, 15 Jan 2010 16:36:00 -0300",
"msg_from": "\"Fernando Hevia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: new server I/O setup"
},
{
"msg_contents": "2010/1/15 Fernando Hevia <[email protected]>:\n>\n>\n>> -----Mensaje original-----\n>> De: Pierre Frédéric Caillaud\n>> Enviado el: Viernes, 15 de Enero de 2010 15:00\n>> Para: [email protected]\n>> Asunto: Re: [PERFORM] new server I/O setup\n>>\n>>\n>> No-one has mentioned SSDs yet ?...\n>>\n>\n> The post is about an already purchased server just delivered to my office. I\n> have been following with interest posts about SSD benchmarking but no SSD\n> have been bought this oportunity and we have no budget to buy them either,\n> at least not in the foreseable future.\n\nAnd no matter how good they look on paper, being one of the first\npeople to use and in effect test them in production can be very\nexciting. And sometimes excitement isn't what you really want from\nyour production servers.\n",
"msg_date": "Fri, 15 Jan 2010 14:42:26 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: new server I/O setup"
}
] |
[
{
"msg_contents": "My client just informed me that new hardware is available for our DB server.\n\n. Intel Core 2 Quads Quad\n. 48 GB RAM\n. 4 Disk RAID drive (RAID level TBD)\n\nI have put the ugly details of what we do with our DB below, as well as the\npostgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\nwith very large tables and the server is always busy serving a constant\nstream of single-row UPDATEs and INSERTs from parallel automated processes.\n\nThere are less than 10 users, as the server is devoted to the KB production\nsystem.\n\nMy questions:\n\n1) Which RAID level would you recommend\n2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n3) If we were to port to a *NIX flavour, which would you recommend? (which\nsupport trouble-free PG builds/makes please!)\n4) Is this the right PG version for our needs?\n\nThanks,\n\nCarlo\n\nThe details of our use:\n\n. The DB hosts is a data warehouse and a knowledgebase (KB) tracking the\nprofessional information of 1.3M individuals.\n. The KB tables related to these 130M individuals are naturally also large\n. The DB is in a perpetual state of serving TCL-scripted Extract, Transform\nand Load (ETL) processes\n. These ETL processes typically run 10 at-a-time (i.e. in parallel)\n. We would like to run more, but the server appears to be the bottleneck\n. The ETL write processes are 99% single row UPDATEs or INSERTs.\n. There are few, if any DELETEs\n. The ETL source data are \"import tables\"\n. The import tables are permanently kept in the data warehouse so that we\ncan trace the original source of any information.\n. There are 6000+ and counting\n. The import tables number from dozens to hundreds of thousands of rows.\nThey rarely require more than a pkey index.\n. Linking the KB to the source import date requires an \"audit table\" of 500M\nrows, and counting.\n. The size of the audit table makes it very difficult to manage, especially\nif we need to modify the design.\n. Because we query the audit table different ways to audit the ETL processes\ndecisions, almost every column in the audit table is indexed.\n. The maximum number of physical users is 10 and these users RARELY perform\nany kind of write\n. By contrast, the 10+ ETL processes are writing constantly\n. We find that internal stats drift, for whatever reason, causing row seq\nscans instead of index scans.\n. So far, we have never seen a situation where a seq scan has improved\nperformance, which I would attribute to the size of the tables\n. We believe our requirements are exceptional, and we would benefit\nimmensely from setting up the PG planner to always favour index-oriented\ndecisions - which seems to contradict everything that PG advice suggests as\nbest practice.\n\nCurrent non-default conf settings are:\n\nautovacuum = on\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_analyze_threshold = 250\nautovacuum_naptime = 1min\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_vacuum_threshold = 500\nbgwriter_lru_maxpages = 100\ncheckpoint_segments = 64\ncheckpoint_warning = 290\ndatestyle = 'iso, mdy'\ndefault_text_search_config = 'pg_catalog.english'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nlog_destination = 'stderr'\nlog_line_prefix = '%t '\nlogging_collector = on\nmaintenance_work_mem = 16MB\nmax_connections = 200\nmax_fsm_pages = 204800\nmax_locks_per_transaction = 128\nport = 5432\nshared_buffers = 500MB\nvacuum_cost_delay = 100\nwork_mem = 512MB\n\n\n",
"msg_date": "Thu, 14 Jan 2010 16:25:00 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "New server to improve performance on our large and busy DB - advice?\n\t(v2)"
},
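Given the points above about statistics drifting and wanting the planner to favour index scans, a sketch of the usual first-line tuning rather than forcing the planner globally (the table and column names are invented for illustration; the ALTER TABLE ... SET (autovacuum_*) form requires 8.4):

    -- Collect a larger statistics sample on the columns the audit-table queries filter on,
    -- so row estimates stop drifting between analyzes.
    ALTER TABLE audit_log ALTER COLUMN import_id SET STATISTICS 500;
    ANALYZE audit_log;

    -- Re-analyze this one hot table much more aggressively than the global defaults.
    ALTER TABLE audit_log SET (autovacuum_analyze_scale_factor = 0.02,
                               autovacuum_analyze_threshold = 1000);

    -- Gentler than enable_seqscan = off: lower the cost the planner assigns to random
    -- page reads when the working set is effectively cached.
    SET random_page_cost = 2.0;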
{
"msg_contents": "I'll bite ....\n\n\n1. In general, RAID-10 is the only suitable RAID configuration for a\ndatabase. The decision making comes in how many drives, and splitting stuff\nup into LUNs (like putting pg_xlog on its own LUN).\n\n\n2. None of the above - you're asking the wrong question really. PostgreSQL\nis open source, and is developed on Unix. The Windows version is a pretty\ngood port, as Windows posrt of OSS stuff go, but it's just that, a port.\nYour server is being dedicated to running Postgres, so the right question to\nask is \"What is the best OS for running Postgres?\".\n\nFor any given database engine, regardless of the marketing and support\nstance, there is only one true \"primary\" enterprise OS platform that most\nbig mission critical sites use, and is the best supported and most stable\nplatform for that RDBMS. For Oracle, that's HP-UX (but 10 years ago, it was\nSolaris). For PostgreSQL, it's Linux.\n\nThe biggest problem with Postgres on Windows is that it only comes in\n32-bit. RAM is the ultimate performance tweak for an RDBMS, and to make\nproper use of modern amounts of RAM, you need a 64-bit executable.\n\n\n\n3. The two choices I'd consider are both Linux:\n\n- for the conservative / supported approach, get Red Hat and buy support\nfrom them and (e.g.) Enterprise DB\n- if you plan to keep pretty current and are happy actively managing\nversions and running locally compiled builds, go with Ubuntu\n\n\n4. The general wisdom is that there are a lot of improvements from 8.3 to\n8.4, but how much benefit you'll see in your environment is another\nquestion. If you're building a new system and have to migrate anyway, it\nseems like a good opportunity to upgrade.\n\n\nCheers\nDave\n\nOn Thu, Jan 14, 2010 at 3:25 PM, Carlo Stonebanks <\[email protected]> wrote:\n\n> My client just informed me that new hardware is available for our DB\n> server.\n>\n> . Intel Core 2 Quads Quad\n> . 48 GB RAM\n> . 4 Disk RAID drive (RAID level TBD)\n>\n> I have put the ugly details of what we do with our DB below, as well as the\n> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\n> with very large tables and the server is always busy serving a constant\n> stream of single-row UPDATEs and INSERTs from parallel automated processes.\n>\n> There are less than 10 users, as the server is devoted to the KB production\n> system.\n>\n> My questions:\n>\n> 1) Which RAID level would you recommend\n> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n> 3) If we were to port to a *NIX flavour, which would you recommend? (which\n> support trouble-free PG builds/makes please!)\n> 4) Is this the right PG version for our needs?\n>\n> Thanks,\n>\n> Carlo\n>\n> The details of our use:\n>\n> . The DB hosts is a data warehouse and a knowledgebase (KB) tracking the\n> professional information of 1.3M individuals.\n> . The KB tables related to these 130M individuals are naturally also large\n> . The DB is in a perpetual state of serving TCL-scripted Extract, Transform\n> and Load (ETL) processes\n> . These ETL processes typically run 10 at-a-time (i.e. in parallel)\n> . We would like to run more, but the server appears to be the bottleneck\n> . The ETL write processes are 99% single row UPDATEs or INSERTs.\n> . There are few, if any DELETEs\n> . The ETL source data are \"import tables\"\n> . The import tables are permanently kept in the data warehouse so that we\n> can trace the original source of any information.\n> . There are 6000+ and counting\n> . 
The import tables number from dozens to hundreds of thousands of rows.\n> They rarely require more than a pkey index.\n> . Linking the KB to the source import date requires an \"audit table\" of\n> 500M\n> rows, and counting.\n> . The size of the audit table makes it very difficult to manage, especially\n> if we need to modify the design.\n> . Because we query the audit table different ways to audit the ETL\n> processes\n> decisions, almost every column in the audit table is indexed.\n> . The maximum number of physical users is 10 and these users RARELY perform\n> any kind of write\n> . By contrast, the 10+ ETL processes are writing constantly\n> . We find that internal stats drift, for whatever reason, causing row seq\n> scans instead of index scans.\n> . So far, we have never seen a situation where a seq scan has improved\n> performance, which I would attribute to the size of the tables\n> . We believe our requirements are exceptional, and we would benefit\n> immensely from setting up the PG planner to always favour index-oriented\n> decisions - which seems to contradict everything that PG advice suggests as\n> best practice.\n>\n> Current non-default conf settings are:\n>\n> autovacuum = on\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_analyze_threshold = 250\n> autovacuum_naptime = 1min\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_vacuum_threshold = 500\n> bgwriter_lru_maxpages = 100\n> checkpoint_segments = 64\n> checkpoint_warning = 290\n> datestyle = 'iso, mdy'\n> default_text_search_config = 'pg_catalog.english'\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> log_destination = 'stderr'\n> log_line_prefix = '%t '\n> logging_collector = on\n> maintenance_work_mem = 16MB\n> max_connections = 200\n> max_fsm_pages = 204800\n> max_locks_per_transaction = 128\n> port = 5432\n> shared_buffers = 500MB\n> vacuum_cost_delay = 100\n> work_mem = 512MB\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Thu, 14 Jan 2010 16:35:53 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "On 15/01/2010 6:35 AM, Dave Crooke wrote:\n> I'll bite ....\n>\n>\n> 1. In general, RAID-10 is the only suitable RAID configuration for a\n> database. The decision making comes in how many drives, and splitting\n> stuff up into LUNs (like putting pg_xlog on its own LUN).\n>\n>\n\n> The biggest problem with Postgres on Windows is that it only comes in\n> 32-bit. RAM is the ultimate performance tweak for an RDBMS, and to make\n> proper use of modern amounts of RAM, you need a 64-bit executable.\n\n.... though that's much less important for Pg than for most other \nthings, as Pg uses a one-process-per-connection model and lets the OS \nhandle much of the caching. So long as the OS can use all that RAM for \ncaching, Pg will benefit, and it's unlikely you need >2GB for any given \nclient connection or for the postmaster.\n\nIt's nice to have the flexibility to push up shared_buffers, and it'd be \ngood to avoid any overheads in running 32-bit code on win64. However, \nit's not that unreasonable to run a 32-bit Pg on a 64-bit OS and expect \ngood performance.\n\nYou can always go 64-bit once 8.5/9.0 hits and has stabilized, anyway.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 15 Jan 2010 08:46:09 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy \tDB - advice? (v2)"
},
{
"msg_contents": "El 15/01/2010 14:43, Ivan Voras escribió:\n> hi,\n>\n> You wrote a lot of information here so let's confirm in a nutshell \n> what you have and what you are looking for:\n>\n> * A database that is of small to medium size (5 - 10 GB)?\n> * Around 10 clients that perform constant write operations to the \n> database (UPDATE/INSERT)\n> * Around 10 clients that occasionally read from the database\n> * Around 6000 tables in your database\n> * A problem with tuning it all\n> * Migration to new hardware and/or OS\n>\n> Is this all correct?\n>\n> First thing that is noticeable is that you seem to have way too few \n> drives in the server - not because of disk space required but because \n> of speed. You didn't say what type of drives you have and you didn't \n> say what you would consider desirable performance levels, but off hand \n> (because of the \"10 clients perform constant writes\" part) you will \n> probably want at least 2x-4x more drives.\n>\n> > 1) Which RAID level would you recommend\n>\n> With only 4 drives, RAID 10 is the only thing usable here.\n>\n> > 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n>\n> Would not recommend Windows OS.\n>\n> > 3) If we were to port to a *NIX flavour, which would you recommend? \n> (which\n> > support trouble-free PG builds/makes please!)\n>\n> Practically any. I'm biased for FreeBSD, a nice and supported version \n> of Linux will probably be fine.\n>\n> > 4) Is this the right PG version for our needs?\n>\n> If you are starting from scratch on a new server, go for the newest \n> version you can get - 8.4.2 in this case.\n>\n> Most importantly, you didn't say what you would consider desirable \n> performance. The hardware and the setup you described will work, but \n> not necessarily fast enough.\n>\n> > . So far, we have never seen a situation where a seq scan has improved\n> > performance, which I would attribute to the size of the tables\n>\n> ... and to the small number of drives you are using.\n>\n> > . We believe our requirements are exceptional, and we would benefit\n> > immensely from setting up the PG planner to always favour \n> index-oriented decisions\n>\n> Have you tried decreasing random_page_cost in postgresql.conf? Or \n> setting (as a last resort) enable_seqscan = off?\n>\n>\n> Carlo Stonebanks wrote:\n>> My client just informed me that new hardware is available for our DB \n>> server.\n>>\n>> . Intel Core 2 Quads Quad\n>> . 48 GB RAM\n>> . 4 Disk RAID drive (RAID level TBD)\n>>\n>> I have put the ugly details of what we do with our DB below, as well \n>> as the\n>> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\n>> with very large tables and the server is always busy serving a constant\n>> stream of single-row UPDATEs and INSERTs from parallel automated \n>> processes.\n>>\n>> There are less than 10 users, as the server is devoted to the KB \n>> production\n>> system.\n>>\n>> My questions:\n>>\n>> 1) Which RAID level would you recommend\n>> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n>> 3) If we were to port to a *NIX flavour, which would you recommend? \n>> (which\n>> support trouble-free PG builds/makes please!)\n>> 4) Is this the right PG version for our needs?\n>>\n>> Thanks,\n>>\n>> Carlo\n>>\n>> The details of our use:\n>>\n>> . The DB hosts is a data warehouse and a knowledgebase (KB) tracking the\n>> professional information of 1.3M individuals.\n>> . The KB tables related to these 130M individuals are naturally also \n>> large\n>> . 
The DB is in a perpetual state of serving TCL-scripted Extract, \n>> Transform\n>> and Load (ETL) processes\n>> . These ETL processes typically run 10 at-a-time (i.e. in parallel)\n>> . We would like to run more, but the server appears to be the bottleneck\n>> . The ETL write processes are 99% single row UPDATEs or INSERTs.\n>> . There are few, if any DELETEs\n>> . The ETL source data are \"import tables\"\n>> . The import tables are permanently kept in the data warehouse so \n>> that we\n>> can trace the original source of any information.\n>> . There are 6000+ and counting\n>> . The import tables number from dozens to hundreds of thousands of rows.\n>> They rarely require more than a pkey index.\n>> . Linking the KB to the source import date requires an \"audit table\" \n>> of 500M\n>> rows, and counting.\n>> . The size of the audit table makes it very difficult to manage, \n>> especially\n>> if we need to modify the design.\n>> . Because we query the audit table different ways to audit the ETL \n>> processes\n>> decisions, almost every column in the audit table is indexed.\n>> . The maximum number of physical users is 10 and these users RARELY \n>> perform\n>> any kind of write\n>> . By contrast, the 10+ ETL processes are writing constantly\n>> . We find that internal stats drift, for whatever reason, causing row \n>> seq\n>> scans instead of index scans.\n>> . So far, we have never seen a situation where a seq scan has improved\n>> performance, which I would attribute to the size of the tables\n>> . We believe our requirements are exceptional, and we would benefit\n>> immensely from setting up the PG planner to always favour index-oriented\n>> decisions - which seems to contradict everything that PG advice \n>> suggests as\n>> best practice.\n>>\n>> Current non-default conf settings are:\n>>\n>> autovacuum = on\n>> autovacuum_analyze_scale_factor = 0.1\n>> autovacuum_analyze_threshold = 250\n>> autovacuum_naptime = 1min\n>> autovacuum_vacuum_scale_factor = 0.2\n>> autovacuum_vacuum_threshold = 500\n>> bgwriter_lru_maxpages = 100\n>> checkpoint_segments = 64\n>> checkpoint_warning = 290\n>> datestyle = 'iso, mdy'\n>> default_text_search_config = 'pg_catalog.english'\n>> lc_messages = 'C'\n>> lc_monetary = 'C'\n>> lc_numeric = 'C'\n>> lc_time = 'C'\n>> log_destination = 'stderr'\n>> log_line_prefix = '%t '\n>> logging_collector = on\n>> maintenance_work_mem = 16MB\n>> max_connections = 200\n>> max_fsm_pages = 204800\n>> max_locks_per_transaction = 128\n>> port = 5432\n>> shared_buffers = 500MB\n>> vacuum_cost_delay = 100\n>> work_mem = 512MB\n>>\n>>\n>>\n>\n>\nI have a question about that, due to all of you recommend RAID-10 for \nthe implementatio of this system. Would you give a available \narquitecture based on all these considerations?\nAbout the questions, I recommend FreeBSD too for a PostgreSQL production \nserver (and for other things too, not only Pg), but with Linux you can \nobtain a strong, reliable environment that can be more efficient that \nWindows.\n\nRegards\n\n",
"msg_date": "Fri, 15 Jan 2010 10:19:10 +0100",
"msg_from": "\"Ing. Marcos L. Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New server to improve performance on our large\n\tand busy DB - advice? (v2)"
},
{
"msg_contents": "hi,\n\nYou wrote a lot of information here so let's confirm in a nutshell what \nyou have and what you are looking for:\n\n* A database that is of small to medium size (5 - 10 GB)?\n* Around 10 clients that perform constant write operations to the \ndatabase (UPDATE/INSERT)\n* Around 10 clients that occasionally read from the database\n* Around 6000 tables in your database\n* A problem with tuning it all\n* Migration to new hardware and/or OS\n\nIs this all correct?\n\nFirst thing that is noticeable is that you seem to have way too few \ndrives in the server - not because of disk space required but because of \nspeed. You didn't say what type of drives you have and you didn't say \nwhat you would consider desirable performance levels, but off hand \n(because of the \"10 clients perform constant writes\" part) you will \nprobably want at least 2x-4x more drives.\n\n > 1) Which RAID level would you recommend\n\nWith only 4 drives, RAID 10 is the only thing usable here.\n\n > 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n\nWould not recommend Windows OS.\n\n > 3) If we were to port to a *NIX flavour, which would you recommend? \n(which\n > support trouble-free PG builds/makes please!)\n\nPractically any. I'm biased for FreeBSD, a nice and supported version of \nLinux will probably be fine.\n\n > 4) Is this the right PG version for our needs?\n\nIf you are starting from scratch on a new server, go for the newest \nversion you can get - 8.4.2 in this case.\n\nMost importantly, you didn't say what you would consider desirable \nperformance. The hardware and the setup you described will work, but not \nnecessarily fast enough.\n\n > . So far, we have never seen a situation where a seq scan has improved\n > performance, which I would attribute to the size of the tables\n\n... and to the small number of drives you are using.\n\n > . We believe our requirements are exceptional, and we would benefit\n > immensely from setting up the PG planner to always favour \nindex-oriented decisions\n\nHave you tried decreasing random_page_cost in postgresql.conf? Or \nsetting (as a last resort) enable_seqscan = off?\n\n\nCarlo Stonebanks wrote:\n> My client just informed me that new hardware is available for our DB \n> server.\n> \n> . Intel Core 2 Quads Quad\n> . 48 GB RAM\n> . 4 Disk RAID drive (RAID level TBD)\n> \n> I have put the ugly details of what we do with our DB below, as well as the\n> postgres.conf settings. But, to summarize: we have a PostgreSQL 8.3.6 DB\n> with very large tables and the server is always busy serving a constant\n> stream of single-row UPDATEs and INSERTs from parallel automated processes.\n> \n> There are less than 10 users, as the server is devoted to the KB production\n> system.\n> \n> My questions:\n> \n> 1) Which RAID level would you recommend\n> 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n> 3) If we were to port to a *NIX flavour, which would you recommend? (which\n> support trouble-free PG builds/makes please!)\n> 4) Is this the right PG version for our needs?\n> \n> Thanks,\n> \n> Carlo\n> \n> The details of our use:\n> \n> . The DB hosts is a data warehouse and a knowledgebase (KB) tracking the\n> professional information of 1.3M individuals.\n> . The KB tables related to these 130M individuals are naturally also large\n> . The DB is in a perpetual state of serving TCL-scripted Extract, Transform\n> and Load (ETL) processes\n> . These ETL processes typically run 10 at-a-time (i.e. in parallel)\n> . 
We would like to run more, but the server appears to be the bottleneck\n> . The ETL write processes are 99% single row UPDATEs or INSERTs.\n> . There are few, if any DELETEs\n> . The ETL source data are \"import tables\"\n> . The import tables are permanently kept in the data warehouse so that we\n> can trace the original source of any information.\n> . There are 6000+ and counting\n> . The import tables number from dozens to hundreds of thousands of rows.\n> They rarely require more than a pkey index.\n> . Linking the KB to the source import date requires an \"audit table\" of \n> 500M\n> rows, and counting.\n> . The size of the audit table makes it very difficult to manage, especially\n> if we need to modify the design.\n> . Because we query the audit table different ways to audit the ETL \n> processes\n> decisions, almost every column in the audit table is indexed.\n> . The maximum number of physical users is 10 and these users RARELY perform\n> any kind of write\n> . By contrast, the 10+ ETL processes are writing constantly\n> . We find that internal stats drift, for whatever reason, causing row seq\n> scans instead of index scans.\n> . So far, we have never seen a situation where a seq scan has improved\n> performance, which I would attribute to the size of the tables\n> . We believe our requirements are exceptional, and we would benefit\n> immensely from setting up the PG planner to always favour index-oriented\n> decisions - which seems to contradict everything that PG advice suggests as\n> best practice.\n> \n> Current non-default conf settings are:\n> \n> autovacuum = on\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_analyze_threshold = 250\n> autovacuum_naptime = 1min\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_vacuum_threshold = 500\n> bgwriter_lru_maxpages = 100\n> checkpoint_segments = 64\n> checkpoint_warning = 290\n> datestyle = 'iso, mdy'\n> default_text_search_config = 'pg_catalog.english'\n> lc_messages = 'C'\n> lc_monetary = 'C'\n> lc_numeric = 'C'\n> lc_time = 'C'\n> log_destination = 'stderr'\n> log_line_prefix = '%t '\n> logging_collector = on\n> maintenance_work_mem = 16MB\n> max_connections = 200\n> max_fsm_pages = 204800\n> max_locks_per_transaction = 128\n> port = 5432\n> shared_buffers = 500MB\n> vacuum_cost_delay = 100\n> work_mem = 512MB\n> \n> \n> \n\n",
"msg_date": "Fri, 15 Jan 2010 14:43:10 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice? (v2)"
},
{
"msg_contents": "On Fri, Jan 15, 2010 at 8:43 AM, Ivan Voras <[email protected]> wrote:\n> Have you tried decreasing random_page_cost in postgresql.conf? Or setting\n> (as a last resort) enable_seqscan = off?\n\nIf you need to set enable_seqscan to off to get the planner to use\nyour index, the chances that that index are actually going to improve\nperformance are extremely poor.\n\n...Robert\n",
"msg_date": "Fri, 15 Jan 2010 09:41:55 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New server to improve performance on our large and\n\tbusy DB - advice? (v2)"
},
{
"msg_contents": "On Thu, 14 Jan 2010 16:35:53 -0600\nDave Crooke <[email protected]> wrote:\n\n> For any given database engine, regardless of the marketing and support\n> stance, there is only one true \"primary\" enterprise OS platform that\n> most big mission critical sites use, and is the best supported and\n> most stable platform for that RDBMS. For Oracle, that's HP-UX (but 10\n> years ago, it was Solaris). For PostgreSQL, it's Linux.\n\nI am interested in this response and am wondering if this is just\nDave's opinion or some sort of official PostgreSQL policy. I am\nlearning PostgreSQL by running it on FreeBSD 8.0-STABLE. So far I\nhave found no problems and have even read a few posts that are critical\nof Linux's handling of fsync. I really don't want to start a Linux vs\nFreeBSD flame war (I like Linux and use that too, though not for\ndatabase use), I am just intrigued by the claim that Linux is somehow\nthe natural OS for running PostgreSQL. I think if Dave had said \"for\nPostgreSQL, it's a variant of Unix\" I wouldn't have been puzzled. So I\nsuppose the question is: what is it about Linux specifically (as\ncontrasted with other Unix-like OSes, especially Open Source ones) that\nmakes it particularly suitable for running PostgreSQL?\n\nBest,\nTony\n \n",
"msg_date": "Fri, 15 Jan 2010 16:10:40 +0000",
"msg_from": "Tony McC <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice? (v2)"
},
{
"msg_contents": "On Fri, Jan 15, 2010 at 8:10 AM, Tony McC <[email protected]> wrote:\n\n>> most stable platform for that RDBMS. For Oracle, that's HP-UX (but 10\n>> years ago, it was Solaris). For PostgreSQL, it's Linux.\n>\n> I am interested in this response and am wondering if this is just\n> Dave's opinion or some sort of official PostgreSQL policy.\n\n>I really don't want to start a Linux vs\n> FreeBSD flame war (I like Linux and use that too, though not for\n> database use), I am just intrigued by the claim that Linux is somehow\n> the natural OS for running PostgreSQL.\n\n\nI would wager that this response is a tad flame-bait-\"ish\".\n\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n",
"msg_date": "Fri, 15 Jan 2010 08:20:26 -0800",
"msg_from": "Richard Broersma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "On Fri, Jan 15, 2010 at 11:10 AM, Tony McC <[email protected]> wrote:\n> what is it about Linux specifically (as\n> contrasted with other Unix-like OSes, especially Open Source ones) that\n> makes it particularly suitable for running PostgreSQL?\n\nNothing that I know of.\n\n...Robert\n",
"msg_date": "Fri, 15 Jan 2010 11:23:02 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "\n> > 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n>\n> Would not recommend Windows OS.\n\n\tBTW, I'd be interested to know the NTFS fragmentation stats of your \ndatabase file.\n",
"msg_date": "Fri, 15 Jan 2010 17:47:07 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: New server to improve performance on our large and\n\tbusy DB - advice? (v2)"
},
{
"msg_contents": "Richard Broersma <[email protected]> writes:\n> On Fri, Jan 15, 2010 at 8:10 AM, Tony McC <[email protected]> wrote:\n>>> most stable platform for that RDBMS. For Oracle, that's HP-UX (but 10\n>>> years ago, it was Solaris). For PostgreSQL, it's Linux.\n\n>> I am interested in this response and am wondering if this is just\n>> Dave's opinion or some sort of official PostgreSQL policy.\n\n>> I really don't want to start a Linux vs\n>> FreeBSD flame war (I like Linux and use that too, though not for\n>> database use), I am just intrigued by the claim that Linux is somehow\n>> the natural OS for running PostgreSQL.\n\n> I would wager that this response is a tad flame-bait-\"ish\".\n\nIndeed. It's certainly not \"project policy\".\n\nGiven the Linux kernel hackers' apparent disinterest in fixing their\nOOM kill policy or making write barriers work well (or at all, with\nLVM), I think arguing that Linux is the best database platform requires\na certain amount of suspension of disbelief.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Jan 2010 11:54:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice? (v2)"
},
{
"msg_contents": "Tom Lane wrote:\n> Given the Linux kernel hackers' apparent disinterest in fixing their\n> OOM kill policy or making write barriers work well (or at all, with\n> LVM), I think arguing that Linux is the best database platform requires\n> a certain amount of suspension of disbelief.\n> \n\nDon't forget the general hostility toward how the database allocates \nshared memory on that list too.\n\nI was suggesting Linux as being the best in the context of consistently \nhaving up to date packages that install easily if you can use the PGDG \nyum repo, since that was a specific request. The idea that Linux is \nsomehow the preferred platform from PostgreSQL is pretty weird; it's \njust a popular one, and has plenty of drawbacks.\n\nI think it's certainly the case that you have to enter into using \nPostgreSQL with Linux with the understanding that you only use the most \nbasic and well understood parts of the OS. Filesystem other than ext3? \nProbably buggy, may get corrupted. Using the latest write-barrier code \nrather than the most basic fsync approach? Probably buggy, may get \ncorrupted. Using LVM instead of simple partitions? Probably going to \nperform badly, maybe buggy and get corrupted too. Assuming software \nRAID can replace a hardware solution with a battery-backed write cache? \nNever happen.\n\nThere's a narrow Linux setup for PostgreSQL that works well for a lot of \npeople, but some days it does feel like that's in spite of the \npriorities of the people working on the Linux kernel.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 13:28:42 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy DB - advice? (v2)"
},
{
"msg_contents": "On Fri, Jan 15, 2010 at 11:28 AM, Greg Smith <[email protected]> wrote:\n> Tom Lane wrote:\n>>\n>> Given the Linux kernel hackers' apparent disinterest in fixing their\n>> OOM kill policy or making write barriers work well (or at all, with\n>> LVM), I think arguing that Linux is the best database platform requires\n>> a certain amount of suspension of disbelief.\n>>\n>\n> Don't forget the general hostility toward how the database allocates shared\n> memory on that list too.\n>\n> I was suggesting Linux as being the best in the context of consistently\n> having up to date packages that install easily if you can use the PGDG yum\n> repo, since that was a specific request. The idea that Linux is somehow the\n> preferred platform from PostgreSQL is pretty weird; it's just a popular one,\n> and has plenty of drawbacks.\n>\n> I think it's certainly the case that you have to enter into using PostgreSQL\n> with Linux with the understanding that you only use the most basic and well\n> understood parts of the OS. Filesystem other than ext3? Probably buggy,\n> may get corrupted. Using the latest write-barrier code rather than the most\n> basic fsync approach? Probably buggy, may get corrupted. Using LVM instead\n> of simple partitions? Probably going to perform badly, maybe buggy and get\n> corrupted too. Assuming software RAID can replace a hardware solution with\n> a battery-backed write cache? Never happen.\n>\n> There's a narrow Linux setup for PostgreSQL that works well for a lot of\n> people, but some days it does feel like that's in spite of the priorities of\n> the people working on the Linux kernel.\n\nAs someone who uses Linux to run postgresql dbs, I tend to agree.\nIt's not quite alchemy or anything, but there are very real caveats to\nbe aware of when using linux as the OS for postgresql to run on top\nof. I will say that XFS seems to be a very stable file system, and we\nuse it for some of our databases with no problems at all. But most of\nour stuff sits on ext3 because it's stable and reliable and fast\nenough.\n\nEach OS has some warts when it comes to running pg on it. Could be a\nnarrower selection of hardware drivers, buggy locale support, iffy\nkernel behaviour when lots of memory is allocated. And most have a\nway to work around those issues as long as you're careful what you're\ndoing. If you're familiar with one OS and its warts, you're more\nlikely to be bitten by the warts of another OS that's new to you no\nmatter how good it is.\n\nAnd as always, test the crap outta your setup, cause the time to find\nproblems is before you put a machine into production.\n",
"msg_date": "Fri, 15 Jan 2010 14:38:53 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "Scott Marlowe <[email protected]> wrote:\n \n> I will say that XFS seems to be a very stable file system, and we\n> use it for some of our databases with no problems at all. But\n> most of our stuff sits on ext3 because it's stable and reliable\n> and fast enough.\n \nOur PostgreSQL data directories are all on xfs, with everything else\n(OS, etc) on ext3. We've been happy with it, as long as we turn off\nwrite barriers, which is only save with a RAID controller with BBU\ncache.\n \n> And as always, test the crap outta your setup, cause the time to\n> find problems is before you put a machine into production.\n \nAbsolutely.\n \n-Kevin\n",
"msg_date": "Fri, 15 Jan 2010 15:55:08 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large\n\tand busy DB - advice? (v2)"
},
{
"msg_contents": "Just opinion, and like Greg, I was suggesting it along the lines of \"it's\nthe platform most production PG instances run on, so you're following a well\ntrodden path, and any issue you encounter is likely to have been found and\nfixed by someone else\".\n\nIt's not about the general suitability of the OS as a database platform, or\nits feature set, it's about a combination of specific versions of OS,\nkernel, DB etc that are known to work reliably.\n\n\nI am curious about the write barrier and shmem issues that other folks have\nalluded to ... I am pretty new to using PG, but I've used other databases on\nLinux in production (mostly Oracle, some MySQL) which also use these kernel\nresources and never encountered problems related to them even under very\nhigh loads.\n\nI'd also like to know what OS'es the PG core folks like Tom use.\n\nCheers\nDave\n\nOn Fri, Jan 15, 2010 at 10:10 AM, Tony McC <[email protected]> wrote:\n\n> On Thu, 14 Jan 2010 16:35:53 -0600\n> Dave Crooke <[email protected]> wrote:\n>\n> > For any given database engine, regardless of the marketing and support\n> > stance, there is only one true \"primary\" enterprise OS platform that\n> > most big mission critical sites use, and is the best supported and\n> > most stable platform for that RDBMS. For Oracle, that's HP-UX (but 10\n> > years ago, it was Solaris). For PostgreSQL, it's Linux.\n>\n> I am interested in this response and am wondering if this is just\n> Dave's opinion or some sort of official PostgreSQL policy. I am\n> learning PostgreSQL by running it on FreeBSD 8.0-STABLE. So far I\n> have found no problems and have even read a few posts that are critical\n> of Linux's handling of fsync. I really don't want to start a Linux vs\n> FreeBSD flame war (I like Linux and use that too, though not for\n> database use), I am just intrigued by the claim that Linux is somehow\n> the natural OS for running PostgreSQL. I think if Dave had said \"for\n> PostgreSQL, it's a variant of Unix\" I wouldn't have been puzzled. So I\n> suppose the question is: what is it about Linux specifically (as\n> contrasted with other Unix-like OSes, especially Open Source ones) that\n> makes it particularly suitable for running PostgreSQL?\n>\n> Best,\n> Tony\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nJust opinion, and like Greg, I was suggesting it along the lines of \"it's the platform most production PG instances run on, so you're following a well trodden path, and any issue you encounter is likely to have been found and fixed by someone else\". \nIt's not about the general suitability of the OS as a database platform, or its feature set, it's about a combination of specific versions of OS, kernel, DB etc that are known to work reliably.I am curious about the write barrier and shmem issues that other folks have alluded to ... 
I am pretty new to using PG, but I've used other databases on Linux in production (mostly Oracle, some MySQL) which also use these kernel resources and never encountered problems related to them even under very high loads.\nI'd also like to know what OS'es the PG core folks like Tom use.CheersDaveOn Fri, Jan 15, 2010 at 10:10 AM, Tony McC <[email protected]> wrote:\nOn Thu, 14 Jan 2010 16:35:53 -0600\nDave Crooke <[email protected]> wrote:\n\n> For any given database engine, regardless of the marketing and support\n> stance, there is only one true \"primary\" enterprise OS platform that\n> most big mission critical sites use, and is the best supported and\n> most stable platform for that RDBMS. For Oracle, that's HP-UX (but 10\n> years ago, it was Solaris). For PostgreSQL, it's Linux.\n\nI am interested in this response and am wondering if this is just\nDave's opinion or some sort of official PostgreSQL policy. I am\nlearning PostgreSQL by running it on FreeBSD 8.0-STABLE. So far I\nhave found no problems and have even read a few posts that are critical\nof Linux's handling of fsync. I really don't want to start a Linux vs\nFreeBSD flame war (I like Linux and use that too, though not for\ndatabase use), I am just intrigued by the claim that Linux is somehow\nthe natural OS for running PostgreSQL. I think if Dave had said \"for\nPostgreSQL, it's a variant of Unix\" I wouldn't have been puzzled. So I\nsuppose the question is: what is it about Linux specifically (as\ncontrasted with other Unix-like OSes, especially Open Source ones) that\nmakes it particularly suitable for running PostgreSQL?\n\nBest,\nTony\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 15 Jan 2010 19:37:50 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "This is the second time I've heard that \"PG shared buffer on Windows doesn't\nmatter\" ... I'd like to understand the reasoning behind that claim, and why\nit differs from other DB servers.\n\n.... though that's much less important for Pg than for most other things, as\n> Pg uses a one-process-per-connection model and lets the OS handle much of\n> the caching. So long as the OS can use all that RAM for caching, Pg will\n> benefit, and it's unlikely you need >2GB for any given client connection or\n> for the postmaster.\n>\n\nAny DB software would benefit from the OS buffer cache, but there is still\nthe overhead of copying that data into the shared buffer area, and as\nnecessary unpacking it into in-memory format.\n\nOracle uses a more or less identical process and memory model to PG, and for\nsure you can't have too much SGA with it.\n\nIt's nice to have the flexibility to push up shared_buffers, and it'd be\n> good to avoid any overheads in running 32-bit code on win64. However, it's\n> not that unreasonable to run a 32-bit Pg on a 64-bit OS and expect good\n> performance.\n>\n\nMy reasoning goes like this:\n\na. there is a significant performance benefit to using a large proportion of\nmemory as in-process DB server cache instead of OS level block / filesystem\ncache\n\nb. the only way to do so on modern hardware (i.e. >>4GB) is with a 64-bit\nbinary\n\nc. therefore, a 64-bit binary is essential\n\n\nYou're the second person that's said a. is only a \"nice to have\" with PG ...\nwhat makes the difference?\n\nCheers\nDave\n\nThis is the second time I've heard that \"PG shared buffer on Windows doesn't matter\" ... I'd like to understand the reasoning behind that claim, and why it differs from other DB servers.\n.... though that's much less important for Pg than for most other things, as Pg uses a one-process-per-connection model and lets the OS handle much of the caching. So long as the OS can use all that RAM for caching, Pg will benefit, and it's unlikely you need >2GB for any given client connection or for the postmaster.\nAny DB software would benefit from the OS buffer cache, but there is still the overhead of copying that data into the shared buffer area, and as necessary unpacking it into in-memory format.Oracle uses a more or less identical process and memory model to PG, and for sure you can't have too much SGA with it.\n\nIt's nice to have the flexibility to push up shared_buffers, and it'd be good to avoid any overheads in running 32-bit code on win64. However, it's not that unreasonable to run a 32-bit Pg on a 64-bit OS and expect good performance.\nMy reasoning goes like this:a. there is a significant performance benefit to using a large proportion of memory as in-process DB server cache instead of OS level block / filesystem cacheb. the only way to do so on modern hardware (i.e. >>4GB) is with a 64-bit binary\nc. therefore, a 64-bit binary is essentialYou're the second person that's said a. is only a \"nice to have\" with PG ... what makes the difference?CheersDave",
"msg_date": "Fri, 15 Jan 2010 19:49:25 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy\n\tDB - advice? (v2)"
},
{
"msg_contents": "Dave Crooke <[email protected]> writes:\n> This is the second time I've heard that \"PG shared buffer on Windows doesn't\n> matter\" ... I'd like to understand the reasoning behind that claim, and why\n> it differs from other DB servers.\n\nAFAIK we don't really understand why, but the experimental evidence is\nthat increasing shared_buffers to really large values doesn't help much\non Windows. You can probably find more in the archives.\n\nI'm not sure that this has been retested recently, so it might be\nobsolete information, but it's what we've got.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Jan 2010 21:05:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and busy DB -\n\tadvice? (v2)"
},
{
"msg_contents": "Dave Crooke wrote:\n\n> My reasoning goes like this:\n> a. there is a significant performance benefit to using a large \n> proportion of memory as in-process DB server cache instead of OS level \n> block / filesystem cache\n> b. the only way to do so on modern hardware (i.e. >>4GB) is with a \n> 64-bit binary\n> c. therefore, a 64-bit binary is essential\n> You're the second person that's said a. is only a \"nice to have\" with \n> PG ... what makes the difference?\n\nThe PostgreSQL model presumes that it's going to be cooperating with the \noperating system cache. In a default config, all reads and writes go \nthrough the OS cache. You can get the WAL writes to be written in a way \nthat bypasses the OS cache, but even that isn't the default. This makes \nPostgreSQL's effective cache size equal to shared_buffers *plus* the OS \ncache. This is why Windows can perform OK even without having a giant \namount of dedicated RAM; it just leans on the OS more heavily instead. \nThat's not as efficient, because you're shuffling more things between \nshared_buffers and the OS than you would on a UNIX system, but it's \nstill way faster than going all the way to disk for something. On, say, \na system with 16GB of RAM, you can setup Windows to use 256MB of \nshared_buffers, and expect that you'll find at least another 14GB or so \nof data cached by the OS.\n\nThe reasons why Windows is particularly unappreciative of being \nallocated memory directly isn't well understood. But the basic property \nthat shared_buffers is not the only source, or even the largest source, \nof caching is not unique to that platform.\n\n> Oracle uses a more or less identical process and memory model to PG, \n> and for sure you can't have too much SGA with it.\n\nThe way data goes in and out of Oracle's SGA is often via direct I/O \ninstead of even touching the OS read/white cache. That's why the \nsituation is so different there. If you're on an Oracle system, and you \nneed to re-read a block that was recently evicted from the SGA, it's \nprobably going to be read from disk. In the same situation with \nPostgreSQL, it's likely you'll find it's still in the OS cache.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 15 Jan 2010 21:09:24 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server to improve performance on our large and\n\tbusy \tDB - advice? (v2)"
},
{
"msg_contents": "> * A database that is of small to medium size (5 - 10 GB)?\n> * Around 10 clients that perform constant write operations to the database \n> (UPDATE/INSERT)\n> * Around 10 clients that occasionally read from the database\n> * Around 6000 tables in your database\n> * A problem with tuning it all\n> * Migration to new hardware and/or OS\n>\n> Is this all correct?\n\nActually, the tablespace is very large, over 500GB. However, the actualy \nproduction DB is 200GB.\n\n> First thing that is noticeable is that you seem to have way too few drives \n> in the server - not because of disk space required but because of speed. \n> You didn't say what type of drives you have and you didn't say what you \n> would consider desirable performance levels, but off hand (because of the \n> \"10 clients perform constant writes\" part) you will probably want at least \n> 2x-4x more drives.\n\n> With only 4 drives, RAID 10 is the only thing usable here.\n\nWhat would be the optimum RAID level and number of disks?\n\n> > 2) Which Windows OS would you recommend? (currently 2008 x64 Server)\n>\n> Would not recommend Windows OS.\n\nWe may be stuck as my client is only considering Red Hat Linux (still \nwaiting to find out which version). If it turns out that this limitatt \ndoesn't give better than a marginal improvement, then there is no incentive \nto create more complications in what is basically a Windows shop (although \nthe project manager is a Linux advocate).\n\n> Most importantly, you didn't say what you would consider desirable \n> performance. The hardware and the setup you described will work, but not \n> necessarily fast enough.\n\nOnce again, it seems as though we are down to the number of drives...\n\n> Have you tried decreasing random_page_cost in postgresql.conf? Or setting \n> (as a last resort) enable_seqscan = off?\n\nIn critical code sections, we do - we have stored procedures and code \nsegments which save the current enable_seqscan value, set it to off (local \nto the transaction), then restore it after the code has run. Our current \n\"planner cost\" values are all default. Is this what you would choose for a \nIntel Core 2 Quads Quad with 48 GB RAM?\n\n# - Planner Cost Constants -\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\n#effective_cache_size = 128MB\n\nThanks for the help,\n\nCarlo \n\n",
"msg_date": "Wed, 20 Jan 2010 17:26:05 -0500",
"msg_from": "\"Carlo Stonebanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: New server to improve performance on our large and busy DB -\n\tadvice? (v2)"
}
] |
[
{
"msg_contents": "Hey folks,\n\nSorry for the OT - we are most of the way through a Db2 --> PG\nmigration that is some 18 months in the making so far. We've got\nmaybe another 3 to 6 months to go before we are complete, and in the\nmeantime have identified the need for connection pooling in Db2, a-la\nthe excellent pgbouncer tool we have implemented on PG\n\nWe are 100% CentOS based.\n\nAnyone know of anything?\n\n From my process list it looks like Db2 V8.1 - my DBA is away at the\nmoment so I cannot ask him :)\n\nroot 3370 1 0 2009 ? 00:18:38 /opt/IBM/db2/V8.1/bin/db2fmcd\n\nthanks,\n-Alan\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n",
"msg_date": "Fri, 15 Jan 2010 12:16:01 -0500",
"msg_from": "Alan McKay <[email protected]>",
"msg_from_op": true,
"msg_subject": "OT: Db2 connection pooling?"
},
{
"msg_contents": "Ug, sorry! As soon as I hit \"enter\" I realised this was the wrong\nlist even for OT :-)\n\nOn Fri, Jan 15, 2010 at 12:16 PM, Alan McKay <[email protected]> wrote:\n> Hey folks,\n>\n> Sorry for the OT - we are most of the way through a Db2 --> PG\n> migration that is some 18 months in the making so far. We've got\n> maybe another 3 to 6 months to go before we are complete, and in the\n> meantime have identified the need for connection pooling in Db2, a-la\n> the excellent pgbouncer tool we have implemented on PG\n>\n> We are 100% CentOS based.\n>\n> Anyone know of anything?\n>\n> From my process list it looks like Db2 V8.1 - my DBA is away at the\n> moment so I cannot ask him :)\n>\n> root 3370 1 0 2009 ? 00:18:38 /opt/IBM/db2/V8.1/bin/db2fmcd\n>\n> thanks,\n> -Alan\n>\n> --\n> “Don't eat anything you've ever seen advertised on TV”\n> - Michael Pollan, author of \"In Defense of Food\"\n>\n\n\n\n-- \n“Don't eat anything you've ever seen advertised on TV”\n - Michael Pollan, author of \"In Defense of Food\"\n",
"msg_date": "Fri, 15 Jan 2010 12:16:33 -0500",
"msg_from": "Alan McKay <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: OT: Db2 connection pooling?"
}
] |
[
{
"msg_contents": "Dear performance group:\n\nWe have just upgraded our monitoring server software and\nnow the following query for graphing the data performs\nabysmally with the default settings. Here is the results\nof the EXPLAIN ANALYZE run with nestloops enabled:\n\nSET enable_nestloop = 'on';\nEXPLAIN SELECT g.graphid FROM graphs g,graphs_items gi,items i,hosts_groups hg,rights r,users_groups ug WHERE (g.graphid/100000000000000) in (0) AND gi.graphid=g.graphid AND i.itemid=gi.itemid AND hg.hostid=i.hostid AND r.id=hg.groupid AND r.groupid=ug.usrgrpid AND ug.userid=20 AND r.permission>=2 AND NOT EXISTS( SELECT gii.graphid FROM graphs_items gii, items ii WHERE gii.graphid=g.graphid AND gii.itemid=ii.itemid AND EXISTS( SELECT hgg.groupid FROM hosts_groups hgg, rights rr, users_groups ugg WHERE ii.hostid=hgg.hostid AND rr.id=hgg.groupid AND rr.groupid=ugg.usrgrpid AND ugg.userid=20 AND rr.permission<2)) AND (g.graphid IN (2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010));\n-----\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=94.07..10304.00 rows=1 width=8) (actual time=194.557..27975.338 rows=607 loops=1)\n Join Filter: (r.groupid = ug.usrgrpid)\n -> 
Seq Scan on users_groups ug (cost=0.00..1.15 rows=1 width=8) (actual time=0.020..0.026 rows=1 loops=1)\n Filter: (userid = 20)\n -> Nested Loop (cost=94.07..10302.65 rows=16 width=16) (actual time=98.126..27965.748 rows=5728 loops=1)\n -> Nested Loop (cost=94.07..10301.76 rows=2 width=16) (actual time=98.085..27933.529 rows=928 loops=1)\n -> Nested Loop (cost=94.07..10301.20 rows=2 width=16) (actual time=98.074..27924.076 rows=837 loops=1)\n -> Nested Loop (cost=94.07..10299.07 rows=2 width=16) (actual time=98.063..27914.106 rows=837 loops=1)\n -> Nested Loop Anti Join (cost=94.07..10294.64 rows=1 width=8) (actual time=98.049..27907.702 rows=281 loops=1)\n Join Filter: (gii.graphid = g.graphid)\n -> Bitmap Heap Scan on graphs g (cost=94.07..233.17 rows=1 width=8) (actual time=0.529..1.772 rows=281 loops=1)\n Recheck Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n Filter: ((graphid / 100000000000000::bigint) = 0)\n -> Bitmap Index Scan on graphs_pkey (cost=0.00..94.07 rows=246 width=0) (actual time=0.507..0.507 rows=294 loops=1)\n Index Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n -> Nested Loop (cost=0.00..17449.43 rows=1954 width=8) (actual time=99.304..99.304 rows=0 loops=281)\n -> Index Scan 
using graphs_items_2 on graphs_items gii (cost=0.00..69.83 rows=1954 width=16) (actual time=0.013..3.399 rows=1954 loops=281)\n -> Index Scan using items_pkey on items ii (cost=0.00..8.88 rows=1 width=8) (actual time=0.046..0.046 rows=0 loops=549074)\n Index Cond: (ii.itemid = gii.itemid)\n Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n SubPlan 1\n -> Nested Loop (cost=0.00..7.83 rows=1 width=0) (actual time=0.040..0.040 rows=0 loops=549074)\n Join Filter: (rr.groupid = ugg.usrgrpid)\n -> Nested Loop (cost=0.00..6.67 rows=1 width=8) (actual time=0.037..0.037 rows=0 loops=549074)\n Join Filter: (hgg.groupid = rr.id)\n -> Index Scan using hosts_groups_1 on hosts_groups hgg (cost=0.00..4.27 rows=1 width=8) (actual time=0.003..0.005 rows=1 loops=549074)\n Index Cond: ($0 = hostid)\n -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (actual time=0.027..0.027 rows=0 loops=532214)\n Filter: (rr.permission < 2)\n -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n Filter: (ugg.userid = 20)\n SubPlan 2\n -> Nested Loop (cost=2.34..8.13 rows=1 width=8) (never executed)\n -> Nested Loop (cost=0.00..3.55 rows=1 width=8) (never executed)\n Join Filter: (rr.groupid = ugg.usrgrpid)\n -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (never executed)\n Filter: (permission < 2)\n -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n Filter: (ugg.userid = 20)\n -> Bitmap Heap Scan on hosts_groups hgg (cost=2.34..4.45 rows=11 width=16) (never executed)\n Recheck Cond: (hgg.groupid = rr.id)\n -> Bitmap Index Scan on hosts_groups_2 (cost=0.00..2.33 rows=11 width=0) (never executed)\n Index Cond: (hgg.groupid = rr.id)\n -> Index Scan using graphs_items_2 on graphs_items gi (cost=0.00..4.41 rows=2 width=16) (actual time=0.005..0.010 rows=3 loops=281)\n Index Cond: (gi.graphid = g.graphid)\n -> Index Scan using items_pkey on items i (cost=0.00..1.05 rows=1 width=16) (actual time=0.004..0.006 rows=1 loops=837)\n Index Cond: (i.itemid = gi.itemid)\n -> Index Scan using hosts_groups_1 on hosts_groups hg (cost=0.00..0.27 rows=1 width=16) (actual time=0.003..0.005 rows=1 loops=837)\n Index Cond: (hg.hostid = i.hostid)\n -> Index Scan using rights_2 on rights r (cost=0.00..0.38 rows=5 width=16) (actual time=0.004..0.015 rows=6 loops=928)\n Index Cond: (r.id = hg.groupid)\n Filter: (r.permission >= 2)\n Total runtime: 27976.516 ms\n(53 rows)\n\nAnd here is the the same plan with nestloops disabled:\n\nSET enable_nestloop = 'off';\nEXPLAIN ANALYZE SELECT g.graphid FROM graphs g,graphs_items gi,items i,hosts_groups hg,rights r,users_groups ug WHERE (g.graphid/100000000000000) in (0) AND gi.graphid=g.graphid AND i.itemid=gi.itemid AND hg.hostid=i.hostid AND r.id=hg.groupid AND r.groupid=ug.usrgrpid AND ug.userid=20 AND r.permission>=2 AND NOT EXISTS( SELECT gii.graphid FROM graphs_items gii, items ii WHERE gii.graphid=g.graphid AND gii.itemid=ii.itemid AND EXISTS( SELECT hgg.groupid FROM hosts_groups hgg, rights rr, users_groups ugg WHERE ii.hostid=hgg.hostid AND rr.id=hgg.groupid AND rr.groupid=ugg.usrgrpid AND ugg.userid=20 AND rr.permission<2)) AND (g.graphid IN 
(2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010));\n-----\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=106463.65..106466.57 rows=1 width=8) (actual time=67.513..68.637 rows=607 loops=1)\n Hash Cond: (r.id = hg.groupid)\n -> Hash Join (cost=1.16..4.05 rows=8 width=8) (actual time=0.255..0.416 rows=15 loops=1)\n Hash Cond: (r.groupid = ug.usrgrpid)\n -> Seq Scan on rights r (cost=0.00..2.39 rows=111 width=16) (actual time=0.015..0.184 rows=111 loops=1)\n Filter: (permission >= 2)\n -> Hash (cost=1.15..1.15 rows=1 width=8) (actual time=0.022..0.022 rows=1 loops=1)\n -> Seq Scan on users_groups ug (cost=0.00..1.15 rows=1 width=8) (actual time=0.010..0.015 rows=1 loops=1)\n Filter: (userid = 20)\n -> Hash (cost=106462.46..106462.46 rows=2 width=16) (actual time=67.225..67.225 rows=928 loops=1)\n -> Hash Join (cost=106457.61..106462.46 rows=2 width=16) (actual time=63.608..65.720 rows=928 loops=1)\n Hash Cond: (hg.hostid = i.hostid)\n -> Seq Scan on hosts_groups hg (cost=0.00..4.06 rows=206 width=16) (actual time=0.008..0.305 rows=206 loops=1)\n -> Hash (cost=106457.58..106457.58 rows=2 width=16) (actual time=63.565..63.565 rows=837 
loops=1)\n -> Hash Anti Join (cost=105576.85..106457.58 rows=2 width=16) (actual time=19.955..62.166 rows=837 loops=1)\n Hash Cond: (g.graphid = gii.graphid)\n -> Hash Join (cost=282.09..1162.79 rows=2 width=16) (actual time=13.021..52.860 rows=837 loops=1)\n Hash Cond: (i.itemid = gi.itemid)\n -> Seq Scan on items i (cost=0.00..831.13 rows=13213 width=16) (actual time=0.004..22.955 rows=13213 loops=1)\n -> Hash (cost=282.07..282.07 rows=2 width=16) (actual time=9.890..9.890 rows=837 loops=1)\n -> Hash Join (cost=233.18..282.07 rows=2 width=16) (actual time=1.514..8.514 rows=837 loops=1)\n Hash Cond: (gi.graphid = g.graphid)\n -> Seq Scan on graphs_items gi (cost=0.00..41.54 rows=1954 width=16) (actual time=0.005..2.713 rows=1954 loops=1)\n -> Hash (cost=233.17..233.17 rows=1 width=8) (actual time=1.489..1.489 rows=281 loops=1)\n -> Bitmap Heap Scan on graphs g (cost=94.07..233.17 rows=1 width=8) (actual time=0.526..1.056 rows=281 loops=1)\n Recheck Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n Filter: ((graphid / 100000000000000::bigint) = 0)\n -> Bitmap Index Scan on graphs_pkey (cost=0.00..94.07 rows=246 width=0) (actual time=0.504..0.504 rows=294 loops=1)\n Index Cond: (graphid = ANY 
('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n -> Hash (cost=105270.33..105270.33 rows=1954 width=8) (actual time=6.914..6.914 rows=0 loops=1)\n -> Hash Join (cost=65.97..105270.33 rows=1954 width=8) (actual time=6.912..6.912 rows=0 loops=1)\n Hash Cond: (ii.itemid = gii.itemid)\n -> Index Scan using items_pkey on items ii (cost=0.00..105110.51 rows=6606 width=8) (actual time=6.907..6.907 rows=0 loops=1)\n Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n SubPlan 1\n -> Hash Join (cost=3.56..7.86 rows=1 width=0) (never executed)\n Hash Cond: (rr.groupid = ugg.usrgrpid)\n -> Hash Join (cost=2.40..6.68 rows=1 width=8) (never executed)\n Hash Cond: (hgg.groupid = rr.id)\n -> Index Scan using hosts_groups_1 on hosts_groups hgg (cost=0.00..4.27 rows=1 width=8) (never executed)\n Index Cond: ($0 = hostid)\n -> Hash (cost=2.39..2.39 rows=1 width=16) (never executed)\n -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (never executed)\n Filter: (permission < 2)\n -> Hash (cost=1.15..1.15 rows=1 width=8) (never executed)\n -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n Filter: (userid = 20)\n SubPlan 2\n -> Hash Join (cost=3.58..8.56 rows=1 width=8) (actual time=0.062..0.062 rows=0 loops=1)\n Hash Cond: (hgg.groupid = rr.id)\n -> Seq Scan on hosts_groups hgg (cost=0.00..4.06 rows=206 width=16) (actual time=0.006..0.006 rows=1 loops=1)\n -> Hash (cost=3.56..3.56 rows=1 width=8) (actual time=0.040..0.040 rows=0 loops=1)\n -> Hash Join (cost=1.16..3.56 rows=1 width=8) (actual time=0.037..0.037 rows=0 loops=1)\n Hash Cond: (rr.groupid = ugg.usrgrpid)\n -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (actual time=0.035..0.035 rows=0 loops=1)\n Filter: (permission < 2)\n -> Hash (cost=1.15..1.15 rows=1 width=8) (never executed)\n -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n Filter: (userid = 20)\n -> Hash (cost=41.54..41.54 rows=1954 width=16) (never executed)\n -> Seq Scan on graphs_items gii (cost=0.00..41.54 rows=1954 width=16) (never executed)\n Total runtime: 69.630 ms\n(62 rows)\n\n\nIt looks like it thinks that there will only be 1 row hitting\nthe nestloop, when in fact there are 607 rows. Does anyone have\nany ideas for how to coerce PostgreSQL into using the hashjoin\nplan instead? I am running 8.4.2 and have done a database-wide\nvacuum/analyze.\n\nRegards,\nKen\n",
"msg_date": "Fri, 15 Jan 2010 16:54:16 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad plan choice nestloop vs. hashjoin"
},
{
"msg_contents": "Kenneth Marshall <[email protected]> wrote:\n \n> with the default settings\n \nDo you mean you haven't changed any settings in your postgresql.conf\nfile from their defaults?\n \n-Kevin\n",
"msg_date": "Fri, 15 Jan 2010 16:58:57 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan choice nestloop vs. hashjoin"
},
{
"msg_contents": "On Fri, Jan 15, 2010 at 04:58:57PM -0600, Kevin Grittner wrote:\n> Kenneth Marshall <[email protected]> wrote:\n> \n> > with the default settings\n> \n> Do you mean you haven't changed any settings in your postgresql.conf\n> file from their defaults?\n> \n> -Kevin\n> \nSorry, here are the differences from the default:\n\nmax_connections = 100 # (change requires restart)\nshared_buffers = 256MB # min 128kB or max_connections*16kB\nwork_mem = 16MB # min 64kB\nmaintenance_work_mem = 512MB # min 1MB\nsynchronous_commit = off # immediate fsync at commit\nwal_buffers = 256kB # min 32kB\ncheckpoint_segments = 30 # in logfile segments, min 1, 16MB each\nseq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 2.0 # same scale as above\neffective_cache_size = 12GB\nlog_min_duration_statement = 5000\n\nThe machine has 16GB of RAM and the DB is currently about 8GB. It\nis going to grow much larger as information is acquired.\n\nCheers,\nKen\n",
"msg_date": "Fri, 15 Jan 2010 18:14:40 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan choice nestloop vs. hashjoin"
},
{
"msg_contents": "Kenneth Marshall <[email protected]> writes:\n> We have just upgraded our monitoring server software and\n> now the following query for graphing the data performs\n> abysmally with the default settings. Here is the results\n> of the EXPLAIN ANALYZE run with nestloops enabled:\n\nThat plan seems a bit wacko --- I don't see a reason for it to be using\nan indexscan on items ii. Can you extract a self-contained test case?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 15 Jan 2010 19:27:27 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan choice nestloop vs. hashjoin "
},
{
"msg_contents": "Kenneth Marshall <[email protected]> writes:\n> We have just upgraded our monitoring server software and\n> now the following query for graphing the data performs\n> abysmally with the default settings. Here is the results\n> of the EXPLAIN ANALYZE run with nestloops enabled:\n\nI poked at this a bit more and now think I see where the problem is.\nThe thing that would be easiest for you to do something about is\nthe misestimation here:\n\n> -> Nested Loop Anti Join (cost=94.07..10294.64 rows=1 width=8) (actual time=98.049..27907.702 rows=281 loops=1)\n> Join Filter: (gii.graphid = g.graphid)\n> -> Bitmap Heap Scan on graphs g (cost=94.07..233.17 rows=1 width=8) (actual time=0.529..1.772 rows=281 loops=1)\n> Recheck Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581!\n ,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n> Filter: ((graphid / 100000000000000::bigint) = 0)\n> -> Bitmap Index Scan on graphs_pkey (cost=0.00..94.07 rows=246 width=0) (actual time=0.507..0.507 rows=294 loops=1)\n> Index Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835!\n ,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n\nThe estimate of the ANY condition is not too bad (246 vs 294 actual). 
But it\nhasn't got any ability to deal with the \"(graphid / 100000000000000::bigint) = 0\"\nfilter condition, and is falling back to a default selectivity estimate for\nthat, which IIRC is just 0.005 --- but actually, that condition doesn't\neliminate any rows at all. Do you need that condition in the first\nplace? Can you persuade your client-side software to eliminate it when\nit's impossible based on the ANY list? Or at least recast it to\nsomething more easily estimatable, like \"graphid < 100000000000000\"?\n\nIf you really have to have the condition just like that, I'd advise\ncreating an index on \"(graphid / 100000000000000::bigint)\". That would\ncause ANALYZE to accumulate statistics on that expression, which'd\nresult in a far better estimate.\n\nThe reason that this misestimate hammers it so hard is that the\ninside of the nestloop looks like\n\n> -> Nested Loop (cost=0.00..17449.43 rows=1954 width=8) (actual time=99.304..99.304 rows=0 loops=281)\n> -> Index Scan using graphs_items_2 on graphs_items gii (cost=0.00..69.83 rows=1954 width=16) (actual time=0.013..3.399 rows=1954 loops=281)\n> -> Index Scan using items_pkey on items ii (cost=0.00..8.88 rows=1 width=8) (actual time=0.046..0.046 rows=0 loops=549074)\n> Index Cond: (ii.itemid = gii.itemid)\n> Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n> SubPlan 1\n> -> Nested Loop (cost=0.00..7.83 rows=1 width=0) (actual time=0.040..0.040 rows=0 loops=549074)\n> Join Filter: (rr.groupid = ugg.usrgrpid)\n> -> Nested Loop (cost=0.00..6.67 rows=1 width=8) (actual time=0.037..0.037 rows=0 loops=549074)\n> Join Filter: (hgg.groupid = rr.id)\n> -> Index Scan using hosts_groups_1 on hosts_groups hgg (cost=0.00..4.27 rows=1 width=8) (actual time=0.003..0.005 rows=1 loops=549074)\n> Index Cond: ($0 = hostid)\n> -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (actual time=0.027..0.027 rows=0 loops=532214)\n> Filter: (rr.permission < 2)\n> -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n> Filter: (ugg.userid = 20)\n> SubPlan 2\n> -> Nested Loop (cost=2.34..8.13 rows=1 width=8) (never executed)\n> -> Nested Loop (cost=0.00..3.55 rows=1 width=8) (never executed)\n> Join Filter: (rr.groupid = ugg.usrgrpid)\n> -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (never executed)\n> Filter: (permission < 2)\n> -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n> Filter: (ugg.userid = 20)\n> -> Bitmap Heap Scan on hosts_groups hgg (cost=2.34..4.45 rows=11 width=16) (never executed)\n> Recheck Cond: (hgg.groupid = rr.id)\n> -> Bitmap Index Scan on hosts_groups_2 (cost=0.00..2.33 rows=11 width=0) (never executed)\n> Index Cond: (hgg.groupid = rr.id)\n\nThe alternative subplans are variant implementations of the inner EXISTS\ntest. Unfortunately, it's choosing to go with the \"retail\" lookup, not\nknowing that that's going to wind up being executed 549074 times.\nIf it had gone to the \"wholesale\" hashtable implementation, this would\nprobably have run a whole lot faster. (We have it in mind to allow the\nexecutor to switch to the other implementation on-the-fly when it\nbecomes clear that the planner misjudged the rowcount, but it's not done\nyet.)\n\nHowever, I'm curious about your statement that this used to perform\nbetter. Used to perform better on what, and what was the plan back\nthen? I don't believe that pre-8.4 PG would have done better than\n8.4 on either of these points. 
It certainly wouldn't have won on the\nEXISTS subplan, because before 8.4 that would always have been done in\nthe \"retail\" style.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 18 Jan 2010 12:13:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad plan choice nestloop vs. hashjoin "
},
{
"msg_contents": "On Mon, Jan 18, 2010 at 12:13:24PM -0500, Tom Lane wrote:\n> Kenneth Marshall <[email protected]> writes:\n> > We have just upgraded our monitoring server software and\n> > now the following query for graphing the data performs\n> > abysmally with the default settings. Here is the results\n> > of the EXPLAIN ANALYZE run with nestloops enabled:\n> \n> I poked at this a bit more and now think I see where the problem is.\n> The thing that would be easiest for you to do something about is\n> the misestimation here:\n> \n> > -> Nested Loop Anti Join (cost=94.07..10294.64 rows=1 width=8) (actual time=98.049..27907.702 rows=281 loops=1)\n> > Join Filter: (gii.graphid = g.graphid)\n> > -> Bitmap Heap Scan on graphs g (cost=94.07..233.17 rows=1 width=8) (actual time=0.529..1.772 rows=281 loops=1)\n> > Recheck Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835,581!\n> ,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n> > Filter: ((graphid / 100000000000000::bigint) = 0)\n> > -> Bitmap Index Scan on graphs_pkey (cost=0.00..94.07 rows=246 width=0) (actual time=0.507..0.507 rows=294 loops=1)\n> > Index Cond: (graphid = ANY ('{2,3,4,5,386,387,969,389,971,972,973,446,447,448,449,450,451,471,456,470,473,477,472,474,475,476,478,479,480,481,482,483,484,459,614,655,658,645,490,492,489,493,496,495,498,497,499,501,500,502,974,558,559,562,566,563,564,565,567,568,569,570,571,535,572,573,534,536,538,539,540,541,542,543,544,545,537,546,547,548,552,553,554,555,556,549,550,551,557,577,578,579,580,574,576,581,835,587,588,589,590,560,561,836,591,592,593,594,595,827,389,495,498,497,597,598,599,975,978,999,1004,604,605,606,679,616,634,635,636,637,638,618,629,630,631,632,633,671,682,669,670,678,679,680,674,672,676,673,675,677,681,682,683,683,644,652,829,681,687,698,685,686,705,706,707,708,830,945,946,710,716,712,714,713,709,718,721,720,719,723,724,747,749,750,730,731,732,733,734,735,736,737,738,739,740,741,742,743,744,745,772,774,775,755,756,757,758,759,760,761,762,763,764,765,766,767,768,769,770,777,776,977,824,823,826,825,829,832,833,835!\n> ,581,836,842,852,854,839,840,838,853,855,847,848,944,846,859,850,899,901,902,903,864,865,866,867,976,979,939,941,942,943,906,907,908,909,910,868,969,991,950,955,964,966,952,953,962,965,967,959,961,968,1001,1002,1003,986,987,988,994,995,996,1008,1006,1007,1009,1010}'::bigint[]))\n> \n> The estimate of the ANY condition is not too bad (246 vs 294 actual). 
But it\n> hasn't got any ability to deal with the \"(graphid / 100000000000000::bigint) = 0\"\n> filter condition, and is falling back to a default selectivity estimate for\n> that, which IIRC is just 0.005 --- but actually, that condition doesn't\n> eliminate any rows at all. Do you need that condition in the first\n> place? Can you persuade your client-side software to eliminate it when\n> it's impossible based on the ANY list? Or at least recast it to\n> something more easily estimatable, like \"graphid < 100000000000000\"?\n> \n> If you really have to have the condition just like that, I'd advise\n> creating an index on \"(graphid / 100000000000000::bigint)\". That would\n> cause ANALYZE to accumulate statistics on that expression, which'd\n> result in a far better estimate.\n> \n> The reason that this misestimate hammers it so hard is that the\n> inside of the nestloop looks like\n> \n> > -> Nested Loop (cost=0.00..17449.43 rows=1954 width=8) (actual time=99.304..99.304 rows=0 loops=281)\n> > -> Index Scan using graphs_items_2 on graphs_items gii (cost=0.00..69.83 rows=1954 width=16) (actual time=0.013..3.399 rows=1954 loops=281)\n> > -> Index Scan using items_pkey on items ii (cost=0.00..8.88 rows=1 width=8) (actual time=0.046..0.046 rows=0 loops=549074)\n> > Index Cond: (ii.itemid = gii.itemid)\n> > Filter: (alternatives: SubPlan 1 or hashed SubPlan 2)\n> > SubPlan 1\n> > -> Nested Loop (cost=0.00..7.83 rows=1 width=0) (actual time=0.040..0.040 rows=0 loops=549074)\n> > Join Filter: (rr.groupid = ugg.usrgrpid)\n> > -> Nested Loop (cost=0.00..6.67 rows=1 width=8) (actual time=0.037..0.037 rows=0 loops=549074)\n> > Join Filter: (hgg.groupid = rr.id)\n> > -> Index Scan using hosts_groups_1 on hosts_groups hgg (cost=0.00..4.27 rows=1 width=8) (actual time=0.003..0.005 rows=1 loops=549074)\n> > Index Cond: ($0 = hostid)\n> > -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (actual time=0.027..0.027 rows=0 loops=532214)\n> > Filter: (rr.permission < 2)\n> > -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n> > Filter: (ugg.userid = 20)\n> > SubPlan 2\n> > -> Nested Loop (cost=2.34..8.13 rows=1 width=8) (never executed)\n> > -> Nested Loop (cost=0.00..3.55 rows=1 width=8) (never executed)\n> > Join Filter: (rr.groupid = ugg.usrgrpid)\n> > -> Seq Scan on rights rr (cost=0.00..2.39 rows=1 width=16) (never executed)\n> > Filter: (permission < 2)\n> > -> Seq Scan on users_groups ugg (cost=0.00..1.15 rows=1 width=8) (never executed)\n> > Filter: (ugg.userid = 20)\n> > -> Bitmap Heap Scan on hosts_groups hgg (cost=2.34..4.45 rows=11 width=16) (never executed)\n> > Recheck Cond: (hgg.groupid = rr.id)\n> > -> Bitmap Index Scan on hosts_groups_2 (cost=0.00..2.33 rows=11 width=0) (never executed)\n> > Index Cond: (hgg.groupid = rr.id)\n> \n> The alternative subplans are variant implementations of the inner EXISTS\n> test. Unfortunately, it's choosing to go with the \"retail\" lookup, not\n> knowing that that's going to wind up being executed 549074 times.\n> If it had gone to the \"wholesale\" hashtable implementation, this would\n> probably have run a whole lot faster. (We have it in mind to allow the\n> executor to switch to the other implementation on-the-fly when it\n> becomes clear that the planner misjudged the rowcount, but it's not done\n> yet.)\n> \n> However, I'm curious about your statement that this used to perform\n> better. Used to perform better on what, and what was the plan back\n> then? 
I don't believe that pre-8.4 PG would have done better than\n> 8.4 on either of these points. It certainly wouldn't have won on the\n> EXISTS subplan, because before 8.4 that would always have been done in\n> the \"retail\" style.\n> \n> \t\t\tregards, tom lane\n> \n\nHi Tom,\n\nYour update beat me. I was just about to send you the trimmed example.\nThe performance problem was not present previously because this code\nis new in this release, a .0 release. This section of code is completely\nnew. I will post a bug report to the developers with your analysis\nand hopefully they can include the fix in the .1 release. I am going\nto create the index so I do not have to track source code changes\nto their application. The performance is still 2X slower than with\nenable_nestloop set to off, but that is great compared to 200X slower\nwithout the index:\n\nCREATE INDEX fix_bad_stat_estimate ON graphs ((graphid / 100000000000000::bigint));\n\nThank you again.\n\nRegards,\nKen\n",
"msg_date": "Mon, 18 Jan 2010 12:57:44 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad plan choice nestloop vs. hashjoin"
}
] |
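As a rough sketch of the workaround Tom suggests and Ken adopts above (the index name, database name, and the shortened ANY list are placeholders; the graphs table and the divisor expression come from the thread), the fix plus a quick verification could look like:

    # build the expression index so ANALYZE collects statistics on the expression
    psql -d monitoring -c "CREATE INDEX graphs_graphid_div_idx ON graphs ((graphid / 100000000000000::bigint));"
    psql -d monitoring -c "ANALYZE graphs;"
    # re-check the row estimate on the previously misestimated filter
    psql -d monitoring -c "EXPLAIN SELECT graphid FROM graphs WHERE graphid = ANY ('{2,3,4,5}'::bigint[]) AND (graphid / 100000000000000::bigint) = 0;"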
[
{
"msg_contents": "A few months ago the worst of the bugs in the ext4 fsync code started \nclearing up, with \nhttp://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5f3481e9a80c240f169b36ea886e2325b9aeb745 \nas a particularly painful one. That made it into the 2.6.32 kernel \nreleased last month. Some interesting benchmark news today suggests a \nversion of ext4 that might actually work for databases is showing up in \nearly packaged distributions:\n\nhttp://www.phoronix.com/scan.php?page=article&item=ubuntu_lucid_alpha2&num=3\n\nAlong with the massive performance drop that comes from working fsync. \nSee \nhttp://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=2 \nfor background about this topic from when the issue was discovered:\n\n\"[This change] is required for safe behavior with volatile write caches \non drives. You could mount with -o nobarrier and [the performance drop] \nwould go away, but a sequence like write->fsync->lose power->reboot may \nwell find your file without the data that you synced, if the drive had \nwrite caches enabled. If you know you have no write cache, or that it \nis safely battery backed, then you can mount with -o nobarrier, and not \nincur this penalty.\"\n\nThe pgbench TPS figure Phoronix has been reporting has always been a \nfictitious one resulting from unsafe write caching. With 2.6.32 \nreleased with ext4 defaulting to proper behavior on fsync, that's going \nto make for a very interesting change. On one side, we might finally be \nable to use regular drives with their caches turned on safely, taking \nadvantage of the cache for other writes while doing the right thing with \nthe database writes. On the other, anyone who believed the fictitious \nnumbers before is going to be in a rude surprise and think there's a \nmassive regression here. There's some potential for this to show \nPostgreSQL in a bad light, when people discover they really only can get \n~100 commits/second out of cheap hard drives and assume the database is \nto blame. Interesting times.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.co\n\n",
"msg_date": "Fri, 15 Jan 2010 22:05:49 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "ext4 finally doing the right thing"
},
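A minimal sketch of checking the conditions Greg describes before considering -o nobarrier (the device name and mount point are examples, not taken from the thread):

    # is the drive's volatile write cache enabled?
    hdparm -W /dev/sda
    # which barrier-related options are in effect for the database filesystem?
    grep /var/lib/postgresql /proc/mounts
    # only with no write cache, or a battery-backed one, is this safe:
    #   mount -o remount,nobarrier /var/lib/postgresql
    # otherwise the drive cache itself can be turned off instead:
    #   hdparm -W0 /dev/sda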
{
"msg_contents": "On Fri, 2010-01-15 at 22:05 -0500, Greg Smith wrote:\n> A few months ago the worst of the bugs in the ext4 fsync code started \n> clearing up, with \n> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=5f3481e9a80c240f169b36ea886e2325b9aeb745 \n> as a particularly painful one.\n\nWow, thanks for the heads-up!\n\n> On one side, we might finally be \n> able to use regular drives with their caches turned on safely, taking \n> advantage of the cache for other writes while doing the right thing with \n> the database writes.\n\nThat could be good news. What's your opinion on the practical\nperformance impact? If it doesn't need to be fsync'd, the kernel\nprobably shouldn't have written it to the disk yet anyway, right (I'm\nassuming here that the OS buffer cache is much larger than the disk\nwrite cache)?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 19 Jan 2010 15:28:34 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "Jeff Davis wrote:\n>\n>> On one side, we might finally be \n>> able to use regular drives with their caches turned on safely, taking \n>> advantage of the cache for other writes while doing the right thing with \n>> the database writes.\n>> \n>\n> That could be good news. What's your opinion on the practical\n> performance impact? If it doesn't need to be fsync'd, the kernel\n> probably shouldn't have written it to the disk yet anyway, right (I'm\n> assuming here that the OS buffer cache is much larger than the disk\n> write cache)?\n> \n\nI know they just tweaked this area recently so this may be a bit out of \ndate, but kernels starting with 2.6.22 allow you to get up to 10% of \nmemory dirty before getting really aggressive about writing things out, \nwith writes starting to go heavily at 5%. So even with a 1GB server, \nyou could easily find 100MB of data sitting in the kernel buffer cache \nahead of a database write that needs to hit disc. Once you start \nconsidering the case with modern hardware, where even my desktop has 8GB \nof RAM and most serious servers I see have 32GB, you can easily have \ngigabytes of such data queued in front of the write that now needs to \nhit the platter.\n\nThe dream is that a proper barrier implementation will then shuffle your \nimportant write to the front of that queue, without waiting for \neverything else to clear first. The exact performance impact depends on \nhow many non-database writes happen. But even on a dedicated database \ndisk, it should still help because there are plenty of non-sync'd writes \ncoming out the background writer via its routine work and the checkpoint \nwrites. And the ability to fully utilize the write cache on the \nindividual drives, on commodity hardware, without risking database \ncorruption would make life a lot easier.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n\n\n\n\n\n\nJeff Davis wrote:\n\n\nOn one side, we might finally be \nable to use regular drives with their caches turned on safely, taking \nadvantage of the cache for other writes while doing the right thing with \nthe database writes.\n \n\n\nThat could be good news. What's your opinion on the practical\nperformance impact? If it doesn't need to be fsync'd, the kernel\nprobably shouldn't have written it to the disk yet anyway, right (I'm\nassuming here that the OS buffer cache is much larger than the disk\nwrite cache)?\n \n\n\nI know they just tweaked this area recently so this may be a bit out of\ndate, but kernels starting with 2.6.22 allow you to get up to 10% of\nmemory dirty before getting really aggressive about writing things out,\nwith writes starting to go heavily at 5%. So even with a 1GB server,\nyou could easily find 100MB of data sitting in the kernel buffer cache\nahead of a database write that needs to hit disc. Once you start\nconsidering the case with modern hardware, where even my desktop has\n8GB of RAM and most serious servers I see have 32GB, you can easily\nhave gigabytes of such data queued in front of the write that now needs\nto hit the platter.\n\nThe dream is that a proper barrier implementation will then shuffle\nyour important write to the front of that queue, without waiting for\neverything else to clear first. The exact performance impact depends\non how many non-database writes happen. 
But even on a dedicated\ndatabase disk, it should still help because there are plenty of\nnon-sync'd writes coming out the background writer via its routine work\nand the checkpoint writes. And the ability to fully utilize the write\ncache on the individual drives, on commodity hardware, without risking\ndatabase corruption would make life a lot easier.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Wed, 20 Jan 2010 16:18:48 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ext4 finally doing the right thing"
},
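A quick way to see the writeback thresholds and the current dirty backlog Greg is describing, assuming a 2.6.22+ Linux kernel (paths are the standard ones; the actual percentages vary by distribution):

    # percentage of memory allowed to be dirty before background / forced writeback kicks in
    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # how much dirty data is queued right now
    grep -E 'Dirty|Writeback' /proc/meminfo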
{
"msg_contents": "That doesn't sound right. The kernel having 10% of memory dirty doesn't mean\nthere's a queue you have to jump at all. You don't get into any queue until\nthe kernel initiates write-out which will be based on the usage counters --\nbasically a lru. fsync and cousins like sync_file_range and\nposix_fadvise(DONT_NEED) in initiate write-out right away.\n\nHow many pending write-out requests for how much data the kernel should keep\nactive is another question but I imagine it has more to do with storage\nhardware than how much memory your system has. And for most hardware it's\nprobably on the order of megabytes or less.\n\ngreg\n\nOn 20 Jan 2010 21:19, \"Greg Smith\" <[email protected]> wrote:\n\nJeff Davis wrote: > > >> On one side, we might finally be >> able to use\nregular drives with their ...\nI know they just tweaked this area recently so this may be a bit out of\ndate, but kernels starting with 2.6.22 allow you to get up to 10% of memory\ndirty before getting really aggressive about writing things out, with writes\nstarting to go heavily at 5%. So even with a 1GB server, you could easily\nfind 100MB of data sitting in the kernel buffer cache ahead of a database\nwrite that needs to hit disc. Once you start considering the case with\nmodern hardware, where even my desktop has 8GB of RAM and most serious\nservers I see have 32GB, you can easily have gigabytes of such data queued\nin front of the write that now needs to hit the platter.\n\nThe dream is that a proper barrier implementation will then shuffle your\nimportant write to the front of that queue, without waiting for everything\nelse to clear first. The exact performance impact depends on how many\nnon-database writes happen. But even on a dedicated database disk, it\nshould still help because there are plenty of non-sync'd writes coming out\nthe background writer via its routine work and the checkpoint writes. And\nthe ability to fully utilize the write cache on the individual drives, on\ncommodity hardware, without risking database corruption would make life a\nlot easier.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\nThat doesn't sound right. The kernel having 10% of memory dirty doesn't mean there's a queue you have to jump at all. You don't get into any queue until the kernel initiates write-out which will be based on the usage counters -- basically a lru. fsync and cousins like sync_file_range and posix_fadvise(DONT_NEED) in initiate write-out right away. \nHow many pending write-out requests for how much data the kernel should keep active is another question but I imagine it has more to do with storage hardware than how much memory your system has. And for most hardware it's probably on the order of megabytes or less.\ngreg\nOn 20 Jan 2010 21:19, \"Greg Smith\" <[email protected]> wrote:\nJeff Davis wrote:\n>\n>\n>> On one side, we might finally be \n>> able to use regular drives with their ...\nI know they just tweaked this area recently so this may be a bit out of\ndate, but kernels starting with 2.6.22 allow you to get up to 10% of\nmemory dirty before getting really aggressive about writing things out,\nwith writes starting to go heavily at 5%. So even with a 1GB server,\nyou could easily find 100MB of data sitting in the kernel buffer cache\nahead of a database write that needs to hit disc. 
Once you start\nconsidering the case with modern hardware, where even my desktop has\n8GB of RAM and most serious servers I see have 32GB, you can easily\nhave gigabytes of such data queued in front of the write that now needs\nto hit the platter.\n\nThe dream is that a proper barrier implementation will then shuffle\nyour important write to the front of that queue, without waiting for\neverything else to clear first. The exact performance impact depends\non how many non-database writes happen. But even on a dedicated\ndatabase disk, it should still help because there are plenty of\nnon-sync'd writes coming out the background writer via its routine work\nand the checkpoint writes. And the ability to fully utilize the write\ncache on the individual drives, on commodity hardware, without risking\ndatabase corruption would make life a lot easier.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Thu, 21 Jan 2010 05:15:40 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "Greg Stark wrote:\n>\n> That doesn't sound right. The kernel having 10% of memory dirty \n> doesn't mean there's a queue you have to jump at all. You don't get \n> into any queue until the kernel initiates write-out which will be \n> based on the usage counters -- basically a lru. fsync and cousins like \n> sync_file_range and posix_fadvise(DONT_NEED) in initiate write-out \n> right away.\n>\n\nMost safe ways ext3 knows how to initiate a write-out on something that \nmust go (because it's gotten an fsync on data there) requires flushing \nevery outstanding write to that filesystem along with it. So as soon as \na single WAL write shows up, bam! The whole cache is emptied (or at \nleast everything associated with that filesystem), and the caller who \nasked for that little write is stuck waiting for everything to clear \nbefore their fsync returns success.\n\nThis particular issue absolutely killed Firefox when they switched to \nusing SQLite not too long ago; high-level discussion at \nhttp://shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/ and \nconfirmation/discussion of the issue on lkml at \nhttps://kerneltrap.org/mailarchive/linux-fsdevel/2008/5/26/1941354 . \n\nNote the comment from the first article saying \"those delays can be 30 \nseconds or more\". On multiple occasions, I've measured systems with \ndozens of disks in a high-performance RAID1+0 with battery-backed \ncontroller that could grind to a halt for 10, 20, or more seconds in \nthis situation, when running pgbench on a big database. As was the case \non the latest one I saw, if you've got 32GB of RAM and have let 3.2GB of \nrandom I/O from background writer/checkpoint writes back up because \nLinux has been lazy about getting to them, that takes a while to clear \nno matter how good the underlying hardware.\n\nWrite barriers were supposed to improve all this when added to ext3, but \nthey just never seemed to work right for many people. After reading \nthat lkml thread, among others, I know I was left not trusting anything \nbeyond the simplest path through this area of the filesystem. Slow is \nbetter than corrupted.\n\nSo the good news I was relaying is that it looks like this finally work \non ext4, giving it the behavior you described and expected, but that's \nnot actually been there until now. I was hoping someone with more free \ntime than me might be interested to go investigate further if I pointed \nthe advance out. I'm stuck with too many production systems to play \nwith new kernels at the moment, but am quite curious.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 21 Jan 2010 00:58:13 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "Both of those refer to the *drive* cache.\n\ngreg\n\nOn 21 Jan 2010 05:58, \"Greg Smith\" <[email protected]> wrote:\n\nGreg Stark wrote: > > > That doesn't sound right. The kernel having 10% of\nmemory dirty doesn't mean...\nMost safe ways ext3 knows how to initiate a write-out on something that must\ngo (because it's gotten an fsync on data there) requires flushing every\noutstanding write to that filesystem along with it. So as soon as a single\nWAL write shows up, bam! The whole cache is emptied (or at least everything\nassociated with that filesystem), and the caller who asked for that little\nwrite is stuck waiting for everything to clear before their fsync returns\nsuccess.\n\nThis particular issue absolutely killed Firefox when they switched to using\nSQLite not too long ago; high-level discussion at\nhttp://shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/ and\nconfirmation/discussion of the issue on lkml at\nhttps://kerneltrap.org/mailarchive/linux-fsdevel/2008/5/26/1941354 .\nNote the comment from the first article saying \"those delays can be 30\nseconds or more\". On multiple occasions, I've measured systems with dozens\nof disks in a high-performance RAID1+0 with battery-backed controller that\ncould grind to a halt for 10, 20, or more seconds in this situation, when\nrunning pgbench on a big database. As was the case on the latest one I saw,\nif you've got 32GB of RAM and have let 3.2GB of random I/O from background\nwriter/checkpoint writes back up because Linux has been lazy about getting\nto them, that takes a while to clear no matter how good the underlying\nhardware.\n\nWrite barriers were supposed to improve all this when added to ext3, but\nthey just never seemed to work right for many people. After reading that\nlkml thread, among others, I know I was left not trusting anything beyond\nthe simplest path through this area of the filesystem. Slow is better than\ncorrupted.\n\nSo the good news I was relaying is that it looks like this finally work on\next4, giving it the behavior you described and expected, but that's not\nactually been there until now. I was hoping someone with more free time\nthan me might be interested to go investigate further if I pointed the\nadvance out. I'm stuck with too many production systems to play with new\nkernels at the moment, but am quite curious.\n\n-- Greg Smith 2ndQuadrant Baltimore, MD PostgreSQL Training, Services\nand Support greg@2ndQu...\n\nBoth of those refer to the *drive* cache. \ngreg\nOn 21 Jan 2010 05:58, \"Greg Smith\" <[email protected]> wrote:Greg Stark wrote:\n>\n>\n> That doesn't sound right. The kernel having 10% of memory dirty doesn't mean...\nMost safe ways ext3 knows how to initiate a write-out on something that must go (because it's gotten an fsync on data there) requires flushing every outstanding write to that filesystem along with it. So as soon as a single WAL write shows up, bam! The whole cache is emptied (or at least everything associated with that filesystem), and the caller who asked for that little write is stuck waiting for everything to clear before their fsync returns success.\n\nThis particular issue absolutely killed Firefox when they switched to using SQLite not too long ago; high-level discussion at http://shaver.off.net/diary/2008/05/25/fsyncers-and-curveballs/ and confirmation/discussion of the issue on lkml at https://kerneltrap.org/mailarchive/linux-fsdevel/2008/5/26/1941354 . 
\n\nNote the comment from the first article saying \"those delays can be 30 seconds or more\". On multiple occasions, I've measured systems with dozens of disks in a high-performance RAID1+0 with battery-backed controller that could grind to a halt for 10, 20, or more seconds in this situation, when running pgbench on a big database. As was the case on the latest one I saw, if you've got 32GB of RAM and have let 3.2GB of random I/O from background writer/checkpoint writes back up because Linux has been lazy about getting to them, that takes a while to clear no matter how good the underlying hardware.\n\nWrite barriers were supposed to improve all this when added to ext3, but they just never seemed to work right for many people. After reading that lkml thread, among others, I know I was left not trusting anything beyond the simplest path through this area of the filesystem. Slow is better than corrupted.\n\nSo the good news I was relaying is that it looks like this finally work on ext4, giving it the behavior you described and expected, but that's not actually been there until now. I was hoping someone with more free time than me might be interested to go investigate further if I pointed the advance out. I'm stuck with too many production systems to play with new kernels at the moment, but am quite curious.\n\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\ngreg@2ndQu...",
"msg_date": "Thu, 21 Jan 2010 11:13:42 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "* Greg Smith <[email protected]> [100121 00:58]:\n> Greg Stark wrote:\n>>\n>> That doesn't sound right. The kernel having 10% of memory dirty \n>> doesn't mean there's a queue you have to jump at all. You don't get \n>> into any queue until the kernel initiates write-out which will be \n>> based on the usage counters -- basically a lru. fsync and cousins like \n>> sync_file_range and posix_fadvise(DONT_NEED) in initiate write-out \n>> right away.\n>>\n>\n> Most safe ways ext3 knows how to initiate a write-out on something that \n> must go (because it's gotten an fsync on data there) requires flushing \n> every outstanding write to that filesystem along with it. So as soon as \n> a single WAL write shows up, bam! The whole cache is emptied (or at \n> least everything associated with that filesystem), and the caller who \n> asked for that little write is stuck waiting for everything to clear \n> before their fsync returns success.\n\nSure, if your WAL is on the same FS as your data, you're going to get\nhit, and *especially* on ext3...\n\nBut, I think that's one of the reasons people usually recommend putting\nWAL separate. Even if it's just another partition on the same (set of)\ndisk(s), you get the benefit of not having to wait for all the dirty\next3 pages from your whole database FS to be flushed before the WAL write\ncan complete on it's own FS.\n\na.\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.",
"msg_date": "Thu, 21 Jan 2010 08:51:29 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "* Greg Smith:\n\n> Note the comment from the first article saying \"those delays can be 30\n> seconds or more\". On multiple occasions, I've measured systems with\n> dozens of disks in a high-performance RAID1+0 with battery-backed\n> controller that could grind to a halt for 10, 20, or more seconds in\n> this situation, when running pgbench on a big database.\n\nWe see that quite a bit, too (we're still on ext3, mostly 2.6.26ish\nkernels). It seems that the most egregious issues (which even trigger\nthe two minute kernel hangcheck timer) are related to CFQ. We don't\nsee it on systems we have switched to the deadline I/O scheduler. But\ndata on this is a bit sketchy.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Thu, 21 Jan 2010 14:04:25 +0000",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "Aidan Van Dyk wrote:\n> Sure, if your WAL is on the same FS as your data, you're going to get\n> hit, and *especially* on ext3...\n>\n> But, I think that's one of the reasons people usually recommend putting\n> WAL separate.\n\nSeparate disks can actually concentrate the problem. The writes to the \ndata disk by checkpoints will also have fsync behind them eventually, so \nsplitting out the WAL means you just push the big write backlog to a \nlater point. So less frequently performance dives, but sometimes \nbigger. All of the systems I was mentioning seeing >10 second pauses on \nhad a RAID-1 pair of WAL disks split from the main array.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 21 Jan 2010 09:49:05 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "* Greg Smith <[email protected]> [100121 09:49]:\n> Aidan Van Dyk wrote:\n>> Sure, if your WAL is on the same FS as your data, you're going to get\n>> hit, and *especially* on ext3...\n>>\n>> But, I think that's one of the reasons people usually recommend putting\n>> WAL separate.\n>\n> Separate disks can actually concentrate the problem. The writes to the \n> data disk by checkpoints will also have fsync behind them eventually, so \n> splitting out the WAL means you just push the big write backlog to a \n> later point. So less frequently performance dives, but sometimes \n> bigger. All of the systems I was mentioning seeing >10 second pauses on \n> had a RAID-1 pair of WAL disks split from the main array.\n\nThat's right, so with the WAL split off on it's own disk, you don't wait\non \"WAL\" for your checkpoint/data syncs, but you can build up a huge\nwait in the queue for main data (which can even block reads).\n\nHaving WAL on the main disk means that (for most ext3), you sometimes\nhave WAL writes taking longer, but the WAL fsyncs are keeping the\nbacklog \"down\" in the main data area too.\n\nNow, with ext4 moving to full barrier/fsync support, we could get to the\npoint where WAL in the main data FS can mimic the state where WAL is\nseperate, namely that WAL writes can \"jump the queue\" and be written\nwithout waiting for the data pages to be flushed down to disk, but also\nthat you'll get the big backlog of data pages to flush when\nthe first fsyncs on big data files start coming from checkpoints...\n\na.\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.",
"msg_date": "Thu, 21 Jan 2010 10:05:10 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": ">Aidan Van Dyk <[email protected]> wrote: \n> But, I think that's one of the reasons people usually recommend\n> putting WAL separate. Even if it's just another partition on the\n> same (set of) disk(s), you get the benefit of not having to wait\n> for all the dirty ext3 pages from your whole database FS to be\n> flushed before the WAL write can complete on it's own FS.\n \n[slaps forehead]\n \nI've been puzzling about why we're getting timeouts on one of two\napparently identical (large) servers. We forgot to move the pg_xlog\ndirectory to the separate mount point we created for it on the same\nRAID. I didn't think to check that until I saw your post.\n \n-Kevin\n",
"msg_date": "Thu, 21 Jan 2010 09:54:26 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "\n> Now, with ext4 moving to full barrier/fsync support, we could get to the\n> point where WAL in the main data FS can mimic the state where WAL is\n> seperate, namely that WAL writes can \"jump the queue\" and be written\n> without waiting for the data pages to be flushed down to disk, but also\n> that you'll get the big backlog of data pages to flush when\n> the first fsyncs on big data files start coming from checkpoints...\n\n\tDoes postgres write something to the logfile whenever a fsync() takes a \nsuspiciously long amount of time ?\n",
"msg_date": "Thu, 21 Jan 2010 17:36:47 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ext4 finally doing the right thing"
},
{
"msg_contents": "Pierre Frédéric Caillaud wrote:\n>\n> Does postgres write something to the logfile whenever a fsync() \n> takes a suspiciously long amount of time ?\n\nNot specifically. If you're logging statements that take a while, you \ncan see this indirectly, but commits that just take much longer than usual.\n\nIf you turn on log_checkpoints, the \"sync time\" is broken out for you, \nproblems in this area can show up there too.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 21 Jan 2010 19:13:01 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ext4 finally doing the right thing"
}
] |
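A small sketch of the logging Greg mentions for spotting slow fsyncs indirectly (the threshold value and data directory path are placeholders; both settings exist in 8.4):

    # postgresql.conf:
    #   log_checkpoints = on                 # checkpoint log lines include the sync time
    #   log_min_duration_statement = 250     # ms; commits slowed by fsync show up as slow statements
    pg_ctl reload -D /path/to/data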
[
{
"msg_contents": "El 19/01/2010 13:59, Willy-Bas Loos escribi�:\n> Hi,\n>\n> I have a query that runs for about 16 hours, it should run at least \n> weekly.\n> There are also clients connecting via a website, we don't want to keep \n> them waiting because of long DSS queries.\n>\n> We use Debian Lenny.\n> I've noticed that renicing the process really lowers the load (in \n> \"top\"), though i think we are I/O bound. Does that make any sense?\n>\n> Cheers,\n>\n> WBL\n> -- \n> \"Patriotism is the conviction that your country is superior to all \n> others because you were born in it.\" -- George Bernard Shaw\n�16 hours?\n�Which the amount of data? of 10 to 30 000 000 of records no?\n\n�Do you put the code here to see if we can help you on its optimization?\n\nAbout the question, you can give more information with iostat.\nRegards\n\n",
"msg_date": "Tue, 19 Jan 2010 09:25:28 +0100",
"msg_from": "\"Ing. Marcos L. Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: renice on an I/O bound box"
},
{
"msg_contents": "Hi,\n\nI have a query that runs for about 16 hours, it should run at least weekly.\nThere are also clients connecting via a website, we don't want to keep them\nwaiting because of long DSS queries.\n\nWe use Debian Lenny.\nI've noticed that renicing the process really lowers the load (in \"top\"),\nthough i think we are I/O bound. Does that make any sense?\n\nCheers,\n\nWBL\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nHi,I have a query that runs for about 16 hours, it should run at least weekly.There are also clients connecting via a website, we don't want to keep them waiting because of long DSS queries.We use Debian Lenny.\nI've noticed that renicing the process really lowers the load (in \"top\"), though i think we are I/O bound. Does that make any sense?Cheers,WBL-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Tue, 19 Jan 2010 13:59:41 +0100",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": false,
"msg_subject": "renice on an I/O bound box"
},
{
"msg_contents": "On Tue, Jan 19, 2010 at 3:09 PM, Jean-David Beyer <[email protected]>wrote:\n\n> Willy-Bas Loos wrote:\n>\n>\n>>\n>> On Tue, Jan 19, 2010 at 2:28 PM, Jean-David Beyer <[email protected]<mailto:\n>> [email protected]>> wrote:\n>>\n>> It could make sense.\n>>\n>> I once had a job populating a database. It was I/O bound and ran for a\n>> couple of hours. In those days, it was on a machine shared with\n>> developers of other software, and running my job interfered a lot with\n>> their work. I tried running it \"nice\" and this slowed my job down a lot\n>> so it took something like 16 hours. But it did not help the other users\n>> because my job was screwing the disk cache in memory that much longer.\n>> It worked out best for everyone to run my job at normal priority, but\n>> to\n>> start it just before quitting time so it was nearly finished by the\n>> time\n>> they came back to work.\n>>\n>> There was a better way to solve the problem (use raw IO for the dbms),\n>> but in those days they would not let me do that. I had done it before\n>> when I was working in another department and it worked just fine.\n>>\n>> It is my understanding that postgreSQL does not use raw IO because it\n>> does not help all that much. With my current machine with 8 GB RAM (6\n>> GB\n>> of which is is cache) and a relatively small database, this seems to be\n>> true.\n>>\n>>\n>> Did you mean to not send that to the list?\n>>\n>\n> I meant to send it to the list. Most of the lists I am on do that\n> automatically when I press Reply. A few do not, and I forget which is\n> which.\n\n\nOk, back on the list now.\n\n\n>\n>\n> Seems like what you are saying is that in your case, it didn't work out.\n>> The cache thing is a good one to remember.\n>>\n>> How could it make sense then?\n>>\n>\n> Because the process screws up the disk cache, and that takes time to\n> recover. At which point your process runs again and messes up the cache\n> again.\n>\n> I have a similar problem now. I run a lot of BOINC processes at the\n> lowest priority (nice 19) to eat up all the processor cycles. These\n> normally do not interfere with anything. But when I run a postgreSQL\n> process, it runs slowly because whenever postgreSQL pauses for IO, the\n> BOINC stuff runs and invalidates (in this case, the memory L1, L2, and\n> L3) cache, so when postgreSQL gets a processor, all the memory caches\n> are invalid and the machine runs at RAM speeds (533MHz) instead of cache\n> speed (3.06 GHz). So if I turn off the BOINC processes, that do not\n> compete for processor cycles, but do compete for cache slots, the\n> postgreSQL process runs faster.\n>\n\nSo to me it *doesn't* make sense, renicing the process in the case that you\ndescribe.\nProbably hard to detect too.\n\nWould things be better if the processes would compete for CPU cycles? Or\nequally bad, only you can see it reflected in the load?\n\nIs it generally bad to renice postgres processes, unless you are CPU bound?\n(and even then..)\n\nCheers,\n\nWBL\n\n\n\n>\n>\n\n> --\n> .~. 
Jean-David Beyer Registered Linux User 85642.\n> /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n> /( )\\ Shrewsbury, New Jersey http://counter.li.org\n> ^^-^^ 09:00:01 up 11 days, 10:55, 3 users, load average: 4.34, 4.42, 4.43\n>\n\n\n\n-- \n\"Patriotism is the conviction that your country is superior to all others\nbecause you were born in it.\" -- George Bernard Shaw\n\nOn Tue, Jan 19, 2010 at 3:09 PM, Jean-David Beyer <[email protected]> wrote:\nWilly-Bas Loos wrote:\n\n\n\nOn Tue, Jan 19, 2010 at 2:28 PM, Jean-David Beyer <[email protected] <mailto:[email protected]>> wrote:\n\n It could make sense.\n\n I once had a job populating a database. It was I/O bound and ran for a\n couple of hours. In those days, it was on a machine shared with\n developers of other software, and running my job interfered a lot with\n their work. I tried running it \"nice\" and this slowed my job down a lot\n so it took something like 16 hours. But it did not help the other users\n because my job was screwing the disk cache in memory that much longer.\n It worked out best for everyone to run my job at normal priority, but to\n start it just before quitting time so it was nearly finished by the time\n they came back to work.\n\n There was a better way to solve the problem (use raw IO for the dbms),\n but in those days they would not let me do that. I had done it before\n when I was working in another department and it worked just fine.\n\n It is my understanding that postgreSQL does not use raw IO because it\n does not help all that much. With my current machine with 8 GB RAM (6 GB\n of which is is cache) and a relatively small database, this seems to be\n true.\n\n\nDid you mean to not send that to the list?\n\n\nI meant to send it to the list. Most of the lists I am on do that\nautomatically when I press Reply. A few do not, and I forget which is which.Ok, back on the list now. \n\n\n\nSeems like what you are saying is that in your case, it didn't work out.\nThe cache thing is a good one to remember.\n\nHow could it make sense then?\n\n\nBecause the process screws up the disk cache, and that takes time to\nrecover. At which point your process runs again and messes up the cache\nagain.\n\nI have a similar problem now. I run a lot of BOINC processes at the\nlowest priority (nice 19) to eat up all the processor cycles. These\nnormally do not interfere with anything. But when I run a postgreSQL\nprocess, it runs slowly because whenever postgreSQL pauses for IO, the\nBOINC stuff runs and invalidates (in this case, the memory L1, L2, and\nL3) cache, so when postgreSQL gets a processor, all the memory caches\nare invalid and the machine runs at RAM speeds (533MHz) instead of cache\nspeed (3.06 GHz). So if I turn off the BOINC processes, that do not\ncompete for processor cycles, but do compete for cache slots, the\npostgreSQL process runs faster.So to me it *doesn't* make sense, renicing the process in the case that you describe.Probably hard to detect too.Would things be better if the processes would compete for CPU cycles? Or equally bad, only you can see it reflected in the load?\nIs it generally bad to renice postgres processes, unless you are CPU bound? (and even then..)Cheers,WBL \n \n\n-- \n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 09:00:01 up 11 days, 10:55, 3 users, load average: 4.34, 4.42, 4.43\n-- \"Patriotism is the conviction that your country is superior to all others because you were born in it.\" -- George Bernard Shaw",
"msg_date": "Tue, 19 Jan 2010 18:07:10 +0100",
"msg_from": "Willy-Bas Loos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: renice on an I/O bound box"
},
{
"msg_contents": "On 19-1-2010 13:59 Willy-Bas Loos wrote:\n> Hi,\n>\n> I have a query that runs for about 16 hours, it should run at least weekly.\n> There are also clients connecting via a website, we don't want to keep\n> them waiting because of long DSS queries.\n>\n> We use Debian Lenny.\n> I've noticed that renicing the process really lowers the load (in\n> \"top\"), though i think we are I/O bound. Does that make any sense?\n\nRenicing a postgresql-process can be a very bad thing for the \nthroughput. As it may also possess some locks, which are required by the \nprocesses that you think should have a higher priority. Those higher \npriority processes will be locked by the lower priority one.\n\nThen again, renicing postgresql as a whole can be useful. And if your \nabsolutely certain you want to renice a process, renicing a process \nshouldn't break anything. But it may have some unexpected side effects.\n\nAnother command to look at, if you're I/O-bound, is the 'ionice' \ncommand, which is similar to nice, but obviously intended for I/O.\nFor some I/O-bound background job, one of the 'idle' classes can be a \nnice level. But for a (single) postgres-process, I'd be careful again \nfor the same reasons as with process-nice.\n\nTo see which commands do some I/O, looking at 'iotop' may be useful, \napart from just examining the output of 'iostat' and similar commands.\n\nBest regards,\n\nArjen\n",
"msg_date": "Tue, 19 Jan 2010 21:16:09 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: renice on an I/O bound box"
},
{
"msg_contents": "On 20/01/2010 4:16 AM, Arjen van der Meijden wrote:\n\n> Another command to look at, if you're I/O-bound, is the 'ionice'\n> command, which is similar to nice, but obviously intended for I/O.\n> For some I/O-bound background job, one of the 'idle' classes can be a\n> nice level. But for a (single) postgres-process, I'd be careful again\n> for the same reasons as with process-nice.\n\nUnfortunately, due to the action of the bgwriter 'ionice' isn't as \neffective as you might hope. It's still useful, but it depends on the load.\n\nhttp://wiki.postgresql.org/wiki/Priorities#Prioritizing_I.2FO\n\n--\nCraig Ringer\n",
"msg_date": "Wed, 20 Jan 2010 09:18:35 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: renice on an I/O bound box"
}
] |
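For completeness, a sketch of renicing a single long-running backend as discussed above (the PID 12345 and database name are placeholders; per Arjen's and Craig's caveats, held locks and the bgwriter doing much of the actual write I/O limit how much this helps):

    # find the backend running the long DSS query (8.4 column names)
    psql -d yourdb -c "SELECT procpid, current_query FROM pg_stat_activity;"
    # lower its CPU priority and move its I/O to the idle class
    renice +10 -p 12345
    ionice -c 3 -p 12345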
[
{
"msg_contents": "Hello,\n\nWe are running into some performance issues with running VACUUM FULL on the\npg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if\nanybody here might be able to suggest anything to help address the issue.\nSpecifically, when running VACUUM FULL on the pg_largeobject table, it\nappears that one of our CPUs is pegged at 100% (we have 8 on this particular\nbox), and the I/O load on the machine is VERY light (10-20 I/O operations\nper second--nowhere near what our array is capable of). Our pg_largeobject\ntable is about 200 gigabytes, and I suspect that about 30-40% of the table\nare dead rows (after having run vacuumlo and deleting large numbers of large\nobjects). We've tuned vacuum_cost_delay to 0.\n\nI have read that doing a CLUSTER might be faster and less intrusive, but\ntrying that on the pg_largeobject table yields this: ERROR:\n\"pg_largeobject\" is a system catalog\n\nOne other thing: it is possible to run VACUUM FULL for a while, interrupt\nit, then run it again later and have it pick up from where it left off? If\nso, then we could just break up the VACUUM FULL into more manageable chunks\nand tackle it a few hours at a time when our users won't care. I thought I\nread that some of the FSM changes in 8.4 would make this possible, but I'm\nnot sure if that applies here.\n\nIf anybody has any info here, it would be greatly appreciated. Thanks!\n\nSam\n\nHello,We are running into some performance issues with running VACUUM FULL on the pg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if anybody here might be able to suggest anything to help address the issue. Specifically, when running VACUUM FULL on the pg_largeobject table, it appears that one of our CPUs is pegged at 100% (we have 8 on this particular box), and the I/O load on the machine is VERY light (10-20 I/O operations per second--nowhere near what our array is capable of). Our pg_largeobject table is about 200 gigabytes, and I suspect that about 30-40% of the table are dead rows (after having run vacuumlo and deleting large numbers of large objects). We've tuned vacuum_cost_delay to 0.\nI have read that doing a CLUSTER might be faster and less intrusive, but trying that on the pg_largeobject table yields this: ERROR: \"pg_largeobject\" is a system catalogOne other thing: it is possible to run VACUUM FULL for a while, interrupt it, then run it again later and have it pick up from where it left off? If so, then we could just break up the VACUUM FULL into more manageable chunks and tackle it a few hours at a time when our users won't care. I thought I read that some of the FSM changes in 8.4 would make this possible, but I'm not sure if that applies here.\nIf anybody has any info here, it would be greatly appreciated. Thanks!Sam",
"msg_date": "Tue, 19 Jan 2010 12:19:10 -0800",
"msg_from": "PG User 2010 <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance question on VACUUM FULL (Postgres 8.4.2)"
},
{
"msg_contents": "On Tue, 2010-01-19 at 12:19 -0800, PG User 2010 wrote:\n> Hello,\n> \n> We are running into some performance issues with running VACUUM FULL\n> on the pg_largeobject table in Postgres (8.4.2 under Linux), and I'm\n> wondering if anybody here might be able to suggest anything to help\n> address the issue.\n\nAre you running VACUUM (without FULL) regularly? And if so, is that\ninsufficient?\n\n> Our pg_largeobject table is about 200 gigabytes, and I suspect that\n> about 30-40% of the table are dead rows (after having run vacuumlo and\n> deleting large numbers of large objects).\n\nYou can always expect some degree of bloat. Can you give an exact number\nbefore and after the VACUUM FULL? Or is this a one-shot attempt that\nnever finished?\n\nIf large objects are being added/removed regularly, it might be better\njust to wait (and do regular VACUUMs), and the table will naturally\ncompact after the rows at the end are removed.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 19 Jan 2010 14:38:03 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance question on VACUUM FULL (Postgres 8.4.2)"
},
{
"msg_contents": "Hi Jeff,\n\nAre you running VACUUM (without FULL) regularly? And if so, is that\n> insufficient?\n>\n\nUnfortunately, we have not run vacuumlo as often as we would like, and that\nhas caused a lot of garbage blobs to get generated by our application.\n\nYou can always expect some degree of bloat. Can you give an exact number\n> before and after the VACUUM FULL? Or is this a one-shot attempt that\n> never finished?\n>\n> If large objects are being added/removed regularly, it might be better\n> just to wait (and do regular VACUUMs), and the table will naturally\n> compact after the rows at the end are removed.\n>\n\nOur vacuum full is still running after several days, so I'm unsure when it\nwill finish (it would be nice to be able to get a rough idea of % complete\nfor vacuum full, but I don't know of any way to do that). I estimate that\nthere are probably several million dead blobs taking up ~ 80 gigabytes of\nspace.\n\nI believe that once we are running vacuumlo regularly, then normal vacuums\nwill work fine and we won't have much of a wasted space issue. However,\nright now we have LOTS of dead space and that is causing operational issues\n(primarily slower + larger backups, maybe some other slight performance\nissues).\n\nSo, here are my questions (maybe I should post these to the -general or\n-admin mailing lists?):\n\n1) is there any easy way to fiddle with the vacuum process so that it is not\nCPU bound and doing very little I/O? Why would vacuum full be CPU bound\nanyway???\n\n2) is it possible to interrupt VACUUM FULL, then re-start it later on and\nhave it pick up where it was working before?\n\n3) are there any alternatives, such as CLUSTER (which doesn't seem to be\nallowed since pg_largeboject is a system table) that would work?\n\nThanks so much!\n\nSam\n\nHi Jeff,Are you \nrunning VACUUM (without FULL) regularly? And if so, is that\ninsufficient?Unfortunately, we have not run vacuumlo as often as we would like, and that has caused a lot of garbage blobs to get generated by our application.\nYou can always expect some degree of bloat. Can you give an exact number\nbefore and after the VACUUM FULL? Or is this a one-shot attempt that\nnever finished?\n\nIf large objects are being added/removed regularly, it might be better\njust to wait (and do regular VACUUMs), and the table will naturally\ncompact after the rows at the end are removed.Our vacuum full is still running after several days, so I'm unsure when it will finish (it would be nice to be able to get a rough idea of % complete for vacuum full, but I don't know of any way to do that). I estimate that there are probably several million dead blobs taking up ~ 80 gigabytes of space.\nI believe that once we are running vacuumlo regularly, then normal vacuums will work fine and we won't have much of a wasted space issue. However, right now we have LOTS of dead space and that is causing operational issues (primarily slower + larger backups, maybe some other slight performance issues).\nSo, here are my questions (maybe I should post these to the -general or -admin mailing lists?):1) is there any easy way to fiddle with the vacuum process so that it is not CPU bound and doing very little I/O? Why would vacuum full be CPU bound anyway???\n2) is it possible to interrupt VACUUM FULL, then re-start it later on and have it pick up where it was working before?3) are there any alternatives, such as CLUSTER (which doesn't seem to be allowed since pg_largeboject is a system table) that would work?\nThanks so much!Sam",
"msg_date": "Thu, 21 Jan 2010 14:43:32 -0800",
"msg_from": "PG User 2010 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance question on VACUUM FULL (Postgres 8.4.2)"
}
] |
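For the pg_largeobject cleanup discussed in this thread, a minimal sketch of the non-FULL route might look like the following. It assumes the contrib vacuumlo utility is installed and uses "mydb" as a placeholder database name; note that a plain VACUUM only makes the dead space reusable, it does not shrink the file on disk.

    -- check how much space pg_largeobject currently occupies
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

    -- from the shell: delete orphaned large objects, then vacuum the catalog
    --   vacuumlo -v mydb
    --   psql -d mydb -c "VACUUM VERBOSE pg_largeobject;"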
[
{
"msg_contents": "Hi,\n\nI'm not sure this is the right place to ask my question, so please if it is\nnot let me know where I can get an answer from.\n\nI'm using postgresql 8.4 on Linux machine with 1.5 GB RAM, and I'm issuing\nan update query with a where clause that updates approximately 100 000 rows\nin a table containing approximately 3 200 000 rows.\n\nThe update query is very simple: UPDATE TABLE1 SET FIELD1 = FIELD1 WHERE\nFIELD2 < 0.83 (the where clause is used to limit the affected rows to ~ 100\n000, and the \"SET FIELD1 = FIELD1\" is only on purpose to keep the data of\nthe table unchanged).\n\nActually this query is inside a function and this function is called from a\n.sh file (the function is called 100 times with a vacuum analyze after each\ncall for the table).\n\nSo the average execution time of the function is around 2.5 mins, meaning\nthat the update query (+ the vacuum) takes 2.5 mins to execute. So is this a\nnormal behavior? (The same function in oracle with the same environment\n(with our vacuum obviously) is executed in 11 second).\n\nNote that no index is created on FIELD2 (neither in postgresql nor in\noracle)\n\n \n\nThanks for your help.\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\nI’m not sure this is the right place to ask my\nquestion, so please if it is not let me know where I can get an answer from.\nI’m using postgresql 8.4 on Linux machine with 1.5 GB\nRAM, and I’m issuing an update query with a where clause that updates\napproximately 100 000 rows in a table containing approximately 3 200 000 rows.\nThe update query is very simple: UPDATE TABLE1 SET FIELD1 =\nFIELD1 WHERE FIELD2 < 0.83 (the where clause is used to limit the affected\nrows to ~ 100 000, and the “SET FIELD1 = FIELD1” is only on purpose\nto keep the data of the table unchanged).\nActually this query is inside a function and this function\nis called from a .sh file (the function is called 100 times with a vacuum\nanalyze after each call for the table).\nSo the average execution time of the function is around 2.5\nmins, meaning that the update query (+ the vacuum) takes 2.5 mins to execute.\nSo is this a normal behavior? (The same function in oracle with the same\nenvironment (with our vacuum obviously) is executed in 11 second).\nNote that no index is created on FIELD2 (neither in\npostgresql nor in oracle)\n \nThanks for your help.",
"msg_date": "Thu, 21 Jan 2010 17:27:02 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update query"
},
{
"msg_contents": "\"elias ghanem\" <[email protected]> wrote:\n \n> I'm not sure this is the right place to ask my question\n \nYes it is. You gave a lot of good information, but we'd have a\nbetter shot at diagnosing the issue with a bit more. Please read\nthe following and resubmit with as much of the requested information\nas you can. Note that you might need to break out the problem query\nfrom the function to run EXPLAIN ANALYZE (the output of which is one\nof the more useful diagnostic tools we have).\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Thu, 21 Jan 2010 09:43:19 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update query"
}
] |
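Kevin's request for EXPLAIN ANALYZE can be met without keeping the changes by running the UPDATE standalone inside a transaction that is rolled back; a sketch using the table and column names from the post (note that EXPLAIN ANALYZE really executes the statement, so dead rows are still produced and a vacuum afterwards is still useful):

    BEGIN;
    EXPLAIN ANALYZE
        UPDATE table1 SET field1 = field1 WHERE field2 < 0.83;
    ROLLBACK;  -- discard the change, keep the plan output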
[
{
"msg_contents": "Hi,\n\nThanks for your help, here's more details as you requested:\n\n-The version of postgres is 8.4 (by the way select pg_version() is not\nworking but let's concentrate on the query issue)\n\nHere's the full definition of the table with it's indices:\n\n-- Table: in_sortie\n\n \n\n-- DROP TABLE in_sortie;\n\n \n\nCREATE TABLE in_sortie\n\n(\n\n \"type\" character(1),\n\n site_id character varying(100),\n\n fiche_produit_id character varying(100),\n\n numero_commande character varying(100),\n\n ligne_commande integer,\n\n date_sortie date,\n\n quantite_sortie numeric(15,2),\n\n date_livraison_souhaitee date,\n\n quantite_souhaitee numeric(15,2),\n\n client_ref character varying(100),\n\n valeur numeric(15,2),\n\n type_mouvement character varying(100),\n\n etat_sortie_annulation integer,\n\n etat_sortie_prevision integer,\n\n etat_sortie_taux_service integer,\n\n date_commande date,\n\n valide character varying(1)\n\n)\n\nWITH (\n\n OIDS=FALSE\n\n)\n\nTABLESPACE \"AG_INTERFACE\";\n\n \n\n-- Index: idx_in_sortie\n\n \n\n-- DROP INDEX idx_in_sortie;\n\n \n\nCREATE INDEX idx_in_sortie\n\n ON in_sortie\n\n USING btree\n\n (site_id, fiche_produit_id);\n\n \n\n-- Index: idx_in_sortie_fp\n\n \n\n-- DROP INDEX idx_in_sortie_fp;\n\n \n\nCREATE INDEX idx_in_sortie_fp\n\n ON in_sortie\n\n USING btree\n\n (fiche_produit_id);\n\n \n\n-- Index: idx_in_sortie_site\n\n \n\n-- DROP INDEX idx_in_sortie_site;\n\n \n\nCREATE INDEX idx_in_sortie_site\n\n ON in_sortie\n\n USING btree\n\n (site_id);\n\n \n\n-Concerning the postgresql.conf file I've tried to changed the default\nvalues such as: shared_buffers and effective_cache_size. but this did not\nchange the result.\n\n \n\n-The WAL IS NOT ON DIFFERENT DISK, THEY ARE ON THE SAME DISK WHER THE DB IS\n(for the moment I don't have the possibility of moving them to another disk\nbut maybe \"just for testing\" you can tell me how I can totally disable WAL\nif possible).\n\n \n\nI'm using postgresql 8.4 on Linux machine with 1.5 GB RAM, and I'm issuing\nan update query with a where clause that updates approximately 100 000 rows\nin a table containing approximately 3 200 000 rows.\n\nThe update query is very simple: UPDATE IN_SORTIE SET VALIDE = VALIDE WHERE\nVALEUR < 0.83 (the where clause is used to limit the affected rows to ~ 100\n000, and the \"SET VALIDE = VALIDE\" is only on purpose to keep the data of\nthe table unchanged).\n\nActually this query is inside a function and this function is called from a\n.sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT -d\n$DB_NAME -U $DB_USER -c \"SELECT testupdate()\"\n\n (the function is called 100 times with a vacuum analyze after each call for\nthe table).\n\nSo the average execution time of the function is around 2.5 mins, meaning\nthat the update query (+ the vacuum) takes 2.5 mins to execute. So is this a\nnormal behavior? 
(The same function in oracle with the same environment\n(without vacuum, obviously) is executed in 11 seconds).\n\n \n\nThanks for your help.",
"msg_date": "Thu, 21 Jan 2010 18:14:38 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update query"
},
{
"msg_contents": "\"elias ghanem\" <[email protected]> wrote:\n \n> here's more details as you requested\n \nYou didn't include an EXPLAIN ANALYZE of the UPDATE statement.\n \n> -The version of postgres is 8.4 (by the way select pg_version() is\n> not working but let's concentrate on the query issue)\n \nAs far as I know, there is no pg_version() function; try\n \nSELECT version();\n \nSometimes the exact version is relevant to a performance issue, but\nthere aren't many of fixes for performance regression in 8.4 minor\nreleases, so it might not matter in this particular case.\n \n> -Concerning the postgresql.conf file I've tried to changed the\n> default values such as: shared_buffers and effective_cache_size.\n> but this did not change the result.\n \nPerhaps not, but other settings might help performance. Am I to\nunderstand that you're running an \"out of the box\" configuration,\nwith no tuning yet?\n \n> -The WAL IS NOT ON DIFFERENT DISK, THEY ARE ON THE SAME DISK WHER\n> THE DB IS (for the moment I don't have the possibility of moving\n> them to another disk but maybe \"just for testing\" you can tell me\n> how I can totally disable WAL if possible).\n \nYou can't totally disable it, as it is there primarily to ensure\ndatabase integrity. There are several ways to tune it, based on the\nnumber of WAL segments, the WAL buffers, the background writer\naggressiveness, various delays, etc. Understanding the workload is\nkey to appropriate tuning.\n \n> I'm using postgresql 8.4 on Linux machine with 1.5 GB RAM, and I'm\n> issuing an update query with a where clause that updates\n> approximately 100 000 rows in a table containing approximately\n> 3 200 000 rows.\n \nThis is not a use case where PostgreSQL shines; it is, however, a\nrather unusual use case in normal operations. I'm curious why\nyou're testing this -- if we understood the real problem behind the\ntest we might be able to provide more useful advice. \"Teaching to\nthe test\" has its limitations.\n \n> The update query is very simple: UPDATE IN_SORTIE SET VALIDE =\n> VALIDE WHERE VALEUR < 0.83 (the where clause is used to limit the\n> affected rows to ~ 100 000, and the \"SET VALIDE = VALIDE\" is only\n> on purpose to keep the data of the table unchanged).\n \nSo you want to optimize a query which does absolutely nothing to the\ndata. It's not hard to make that particular case *much* faster,\nwhich again leads one to wonder what you're *really* trying to\noptimize. If we knew that, it might open up options not applicable\nto the synthetic case.\n \n> (the function is called 100 times with a vacuum analyze after\n> each call for the table).\n> \n> So the average execution time of the function is around 2.5 mins,\n> meaning that the update query (+ the vacuum) takes 2.5 mins to\n> execute.\n \nVacuuming normally happens as a background or off-hours process, so\nas not to slow down user queries. Now, running ten million updates\nagainst a table with 3.2 million rows without vacuuming would cause\nits own set of problems; so we're back to the question of -- if you\nreally don't want to do ten million updates to a three million row\ntable to make no changes, what is it that you *do* want to do for\nwhich you're using this test to optimize? Any advice given without\nknowing that would be a shot in the dark.\n \n-Kevin\n",
"msg_date": "Thu, 21 Jan 2010 12:44:26 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update query"
},
{
"msg_contents": "elias ghanem wrote:\n\n> Actually this query is inside a function and this function is called\n> from a .sh file using the following syntax: psql -h $DB_HOST -p $DB_PORT\n> -d $DB_NAME -U $DB_USER -c \"SELECT testupdate()\"\n> \n> (the function is called 100 times with a vacuum analyze after each call\n> for the table).\n> \n> So the average execution time of the function is around 2.5 mins,\n> meaning that the update query (+ the vacuum) takes 2.5 mins to execute.\n> So is this a normal behavior? (The same function in oracle with the same\n> environment (with our vacuum obviously) is executed in 11 second).\n\nIt might be worth measuring using psql's \\timing to see how long the\nupdate and the vacuum take individually.\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 22 Jan 2010 11:22:36 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update query"
},
{
"msg_contents": "On Thu, Jan 21, 2010 at 11:14 AM, elias ghanem <[email protected]> wrote:\n> So the average execution time of the function is around 2.5 mins, meaning\n> that the update query (+ the vacuum) takes 2.5 mins to execute. So is this a\n> normal behavior? (The same function in oracle with the same environment\n> (with our vacuum obviously) is executed in 11 second).\n\nDoes Oracle get slower if you actually change something?\n\n...Robert\n",
"msg_date": "Fri, 22 Jan 2010 10:05:38 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow update query"
}
] |
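A sketch of the measurement Craig suggests, separating the UPDATE from the VACUUM with psql's \timing, using the table and column names given in this thread; each statement then reports its own elapsed time:

    \timing
    UPDATE in_sortie SET valide = valide WHERE valeur < 0.83;
    VACUUM ANALYZE in_sortie;
    \timing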
[
{
"msg_contents": "The issues we are seeing besides just saying the reports take over 26 hours,\nis that the query seems to be CPU bound. Meaning that the query consumes an\nentire CPU and quite often it is sitting with 65%-90% WAIT. Now this is not\niowait, the disks are fine, 5000-6000tps, 700K reads etc with maybe 10-13%\niowait.\n\nHowever much of the time that I see the CPU at 65%-90% Wait, there is very\nlittle disk access, so it's not the disk subsystem (in my opinion). I've\nalso moved CPU's around and the sql seems to stall regardless of what CPU\nthe job has been provided with. Memory I pulled memory to test and again,\nother than this data set consuming 10gigs of data, 700K free (will add more\nmemory), but since the disks are not a bottleneck and I don't appear to be\nswapping, I keep coming back to the data or sql.\n\nI'm providing the data that I think is requested when a performance issue is\nobserved.\n\nThere is an autovac running, the queries are run on static data so\nINSERTS/UPDATES/DELETES\n\n\nThe query seems to have gotten slower as the data set grew.\n\nRedhat\nPostgres 8.3.4\n8 cpu box\n10gig of ram\n\n\n\nNumber of rows in the table= 100million\n\n· Size of table including indices =21GB\n\n· Time to create a combined index on 2 columns (tagged boolean ,\nmakeid text) = more than 1 hr 30 minutes\n\n· Data distribution = In the 98mill records, there are 7000 unique\nmakeid's, and 21mill unique UID's. About 41mill of the records have\ntagged=true\n\n· Time to execute the following query with indices on makeid and\ntagged = 90-120 seconds. The planner uses the webid index and filters on\ntagged and then rechecks the webid index\n\n* SELECT COUNT(DISTINCT uid ) AS active_users FROM\npixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged =true*\n\n· Time to execute the the same query with a combined index on makeid\nand tagged = 60-100 seconds. The planner uses the combined index and then\nfilters tagged.\n\n*SELECT COUNT(DISTINCT uid ) AS active_users FROM pixelpool.userstats\nWHERE makeid ='bmw-ferman' AND tagged =true*\n\n* Plan*\n\n* \"Aggregate (cost=49467.00..49467.01 rows=1 width=8)\"*\n\n* \" -> Bitmap Heap Scan on userstats\n(cost=363.49..49434.06 rows=13175 width=8)\"*\n\n* \" Recheck Cond: (makeid = 'b1mw-ferman'::text)\"*\n\n* \" Filter: tagged\"*\n\n* \" -> Bitmap Index Scan on\nidx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n\n* \" Index Cond: ((makeid = 'b1mw-ferman'::text)\nAND (tagged = true))\"*\n\n\nAny assistance would be appreciated, don't worry about slapping me around I\nneed to figure this out. Otherwise I'm buying new hardware where it may not\nbe required.\n\nThanks\n\nTory\n\n*\n*\n\nThe issues we are seeing besides just saying the reports take over 26 hours, is that the query seems to be CPU bound. Meaning that the query consumes an entire CPU and quite often it is sitting with 65%-90% WAIT. Now this is not iowait, the disks are fine, 5000-6000tps, 700K reads etc with maybe 10-13% iowait.\nHowever much of the time that I see the CPU at 65%-90% Wait, there is very little disk access, so it's not the disk subsystem (in my opinion). I've also moved CPU's around and the sql seems to stall regardless of what CPU the job has been provided with. Memory I pulled memory to test and again, other than this data set consuming 10gigs of data, 700K free (will add more memory), but since the disks are not a bottleneck and I don't appear to be swapping, I keep coming back to the data or sql. 
\nI'm providing the data that I think is requested when a performance issue is observed. \nThere is an autovac running, the queries are run on static data so INSERTS/UPDATES/DELETES The query seems to have gotten slower as the data set grew.Redhat\nPostgres 8.3.48 cpu box10gig of ramNumber of rows in the table= 100million\n\n· \nSize of table including indices =21GB\n· \nTime to create a combined index on 2 columns\n(tagged boolean , makeid text) = more than 1 hr 30 minutes\n· \nData distribution = In the 98mill records, there\nare 7000 unique makeid's, and 21mill unique UID's. About 41mill of the records\nhave tagged=true\n· \nTime to execute the following query with indices\non makeid and tagged = 90-120 seconds. The planner uses the webid index and\nfilters on tagged and then rechecks the webid index \n \nSELECT COUNT(DISTINCT uid ) AS active_users FROM\npixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged\n=true\n· \nTime to execute the the same query with a combined\nindex on makeid and tagged = 60-100 seconds. The planner uses the combined\nindex and then filters tagged.\nSELECT COUNT(DISTINCT uid ) AS active_users\nFROM pixelpool.userstats WHERE makeid ='bmw-ferman' AND\ntagged =true\n \nPlan\n \n\"Aggregate (cost=49467.00..49467.01 rows=1 width=8)\"\n \n\" -> Bitmap Heap Scan on userstats \n(cost=363.49..49434.06 rows=13175 width=8)\"\n \n\" Recheck Cond: (makeid =\n'b1mw-ferman'::text)\"\n \n\" Filter: tagged\"\n \n\" -> Bitmap Index Scan\non idx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"\n \n\" \nIndex Cond: ((makeid = 'b1mw-ferman'::text) AND (tagged = true))\"Any assistance would be appreciated, don't worry about slapping me around I need to figure this out. Otherwise I'm buying new hardware where it may not be required.\nThanksTory",
"msg_date": "Thu, 21 Jan 2010 14:15:29 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Data Set Growth causing 26+hour runtime, on what we believe to be\n\tvery simple SQL"
},
{
"msg_contents": "Tory M Blue wrote:\n\n> Any assistance would be appreciated, don't worry about slapping me\n> around I need to figure this out. Otherwise I'm buying new hardware\n> where it may not be required.\n\nWhat is the reporting query that takes 26 hours? You didn't seem to\ninclude it, or any query plan information for it (EXPLAIN or EXPLAIN\nANALYZE results).\n\nWhat sort of activity is happening on the db concurrently with your\ntests? What's your max connection limit?\n\nWhat're your shared_buffers and effective_cache_size settings?\n\nCould sorts be spilling to disk? Check work_mem size and enable logging\nof tempfiles (see the manual).\n\nDoes an explicit ANALYZE of the problem table(s) help?\n\n--\nCraig Ringer\n",
"msg_date": "Fri, 22 Jan 2010 11:46:07 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be \tvery simple SQL"
},
{
"msg_contents": "On 21/01/10 22:15, Tory M Blue wrote:\n> · Data distribution = In the 98mill records, there are 7000 unique\n> makeid's, and 21mill unique UID's. About 41mill of the records have\n> tagged=true\n>\n> · Time to execute the following query with indices on makeid and\n> tagged = 90-120 seconds. The planner uses the webid index and filters on\n> tagged and then rechecks the webid index\n>\n> * SELECT COUNT(DISTINCT uid ) AS active_users FROM\n> pixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged =true*\n>\n> · Time to execute the the same query with a combined index on makeid\n> and tagged = 60-100 seconds. The planner uses the combined index and then\n> filters tagged.\n\nTwo things:\n\n1. You have got the combined index on (makeid, tagged) and not (tagged, \nmakeid) haven't you? Just checking.\n\n2. If it's mostly tagged=true you are interested in you can always use a \npartial index: CREATE INDEX ... (makeid) WHERE tagged\nThis might be a win even if you need a second index with WHERE NOT tagged.\n\n\nAlso, either I've not had enough cofee yet, or a bitmap scan is an odd \nchoice for only ~ 13000 rows out of 100 million.\n\n> * \" -> Bitmap Index Scan on\n> idx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n>\n> * \" Index Cond: ((makeid = 'b1mw-ferman'::text)\n> AND (tagged = true))\"*\n\nOtherwise, see what Craig said.\n\nI'm assuming this isn't the query that is CPU bound for a long time. \nUnless your table is horribly bloated, there's no reason for that \njudging by this plan.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 22 Jan 2010 09:42:49 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be \tvery simple SQL"
},
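A sketch of the partial index Richard suggests, using the schema and column names from the thread (the index name itself is illustrative); the reporting query can then be answered from the tagged subset of the index alone:

    CREATE INDEX userstats_makeid_tagged_idx
        ON pixelpool.userstats (makeid)
        WHERE tagged;

    SELECT COUNT(DISTINCT uid) AS active_users
      FROM pixelpool.userstats
     WHERE makeid = 'bmw-ferman'
       AND tagged;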
{
"msg_contents": "On Thu, Jan 21, 2010 at 7:46 PM, Craig Ringer\n<[email protected]>wrote:\n\n>\n> > Any assistance would be appreciated, don't worry about slapping me\n> > around I need to figure this out. Otherwise I'm buying new hardware\n> > where it may not be required.\n>\n> What is the reporting query that takes 26 hours? You didn't seem to\n> include it, or any query plan information for it (EXPLAIN or EXPLAIN\n> ANALYZE results).\n>\n\nIt's this query, run 6000 times with a diff makeid's *\n*\n\n*SELECT COUNT(DISTINCT uid ) AS active_users FROM pixelpool.userstats\nWHERE makeid ='bmw-ferman' AND tagged =true*\n\n* Plan*\n\n* \"Aggregate (cost=49467.00..49467.01 rows=1 width=8)\"*\n\n* \" -> Bitmap Heap Scan on userstats\n(cost=363.49..49434.06 rows=13175 width=8)\"*\n\n* \" Recheck Cond: (makeid = 'b1mw-ferman'::text)\"*\n\n* \" Filter: tagged\"*\n\n* \" -> Bitmap Index Scan on\nidx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n\n* \" Index Cond: ((makeid = 'b1mw-ferman'::text)\nAND (tagged = true))\"*\n\n\n> What sort of activity is happening on the db concurrently with your\n> tests? What's your max connection limit?\n>\n\n50 max and there is nothing, usually one person connected if that, otherwise\nit's a cron job that bulk inserts and than jobs later on run that generate\nthe reports off the static data. No deletes or updates happening.\n\n\n>\n> What're your shared_buffers and effective_cache_size settings?\n>\n\nshared_buffers = 1028MB (Had this set at 128 and 256 and just recently\nbumped it higher, didn't buy me anything)\nmaintenance_work_mem = 128MB\nfsync=on\nrandom_page_cost = 4.0\neffective_cache_size = 7GB\ndefault vac settings\n\n\n>\n> Could sorts be spilling to disk? Check work_mem size and enable logging\n> of tempfiles (see the manual).\n>\n\nwork_mem = 100MB # min 64kB\n\nWill do and I guess it's possible but during the queries, reports I don't\nsee a ton of writes, mostly reads\n\n>\n> Does an explicit ANALYZE of the problem table(s) help?\n>\n\nIt didn't.\n\nThanks\nTory\n\nOn Thu, Jan 21, 2010 at 7:46 PM, Craig Ringer <[email protected]> wrote:\n\n> Any assistance would be appreciated, don't worry about slapping me\n> around I need to figure this out. Otherwise I'm buying new hardware\n> where it may not be required.\n\nWhat is the reporting query that takes 26 hours? You didn't seem to\ninclude it, or any query plan information for it (EXPLAIN or EXPLAIN\nANALYZE results). It's this query, run 6000 times with a diff makeid's SELECT COUNT(DISTINCT uid ) AS active_users\nFROM pixelpool.userstats WHERE makeid ='bmw-ferman' AND\ntagged =true\n \nPlan\n \n\"Aggregate (cost=49467.00..49467.01 rows=1 width=8)\"\n \n\" -> Bitmap Heap Scan on userstats \n(cost=363.49..49434.06 rows=13175 width=8)\"\n \n\" Recheck Cond: (makeid =\n'b1mw-ferman'::text)\"\n \n\" Filter: tagged\"\n \n\" -> Bitmap Index Scan\non idx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"\n \n\" \nIndex Cond: ((makeid = 'b1mw-ferman'::text) AND (tagged = true))\"\n\nWhat sort of activity is happening on the db concurrently with your\ntests? What's your max connection limit?50 max and there is nothing, usually one person connected if that, otherwise it's a cron job that bulk inserts and than jobs later on run that generate the reports off the static data. 
No deletes or updates happening.\n \n\nWhat're your shared_buffers and effective_cache_size settings?shared_buffers = 1028MB (Had this set at 128 and 256 and just recently bumped it higher, didn't buy me anything)\nmaintenance_work_mem = 128MB \nfsync=on\nrandom_page_cost = 4.0 \neffective_cache_size = 7GB\ndefault vac settings \n\n\nCould sorts be spilling to disk? Check work_mem size and enable logging\nof tempfiles (see the manual).work_mem = 100MB # min 64kBWill do and I guess it's possible but during the queries, reports I don't see a ton of writes, mostly reads \n\n\nDoes an explicit ANALYZE of the problem table(s) help?It didn't. Thanks Tory",
"msg_date": "Fri, 22 Jan 2010 09:59:34 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Fri, Jan 22, 2010 at 1:42 AM, Richard Huxton <[email protected]> wrote:\n\n> On 21/01/10 22:15, Tory M Blue wrote:\n>\n>> · Data distribution = In the 98mill records, there are 7000 unique\n>>\n>> makeid's, and 21mill unique UID's. About 41mill of the records have\n>> tagged=true\n>>\n>> · Time to execute the following query with indices on makeid and\n>> tagged = 90-120 seconds. The planner uses the webid index and filters on\n>> tagged and then rechecks the webid index\n>>\n>> * SELECT COUNT(DISTINCT uid ) AS active_users FROM\n>> pixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged =true*\n>>\n>> · Time to execute the the same query with a combined index on\n>> makeid\n>> and tagged = 60-100 seconds. The planner uses the combined index and then\n>> filters tagged.\n>>\n>\n> Two things:\n>\n> 1. You have got the combined index on (makeid, tagged) and not (tagged,\n> makeid) haven't you? Just checking.\n>\n\nYes we do\n\n\n> 2. If it's mostly tagged=true you are interested in you can always use a\n> partial index: CREATE INDEX ... (makeid) WHERE tagged\n> This might be a win even if you need a second index with WHERE NOT tagged.\n>\n\nPartial index doesn't seem to fit here due to the fact that there are 35-40%\nMarked True.\n\nDidn't think about creating a second index for false, may give that a shot.\n\n>\n>\n> Also, either I've not had enough cofee yet, or a bitmap scan is an odd\n> choice for only ~ 13000 rows out of 100 million.\n>\n> * \" -> Bitmap Index Scan on\n>>\n>> idx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n>>\n>> * \" Index Cond: ((makeid =\n>> 'b1mw-ferman'::text)\n>> AND (tagged = true))\"*\n>>\n>\n> Otherwise, see what Craig said.\n>\n> I'm assuming this isn't the query that is CPU bound for a long time. Unless\n> your table is horribly bloated, there's no reason for that judging by this\n> plan.\n>\n\nIt is, but not always, only when there are 10K more matches. And the explain\nunfortunately is sometimes way high or way low, so the expalin is hit and\nmiss.\n\nBut the same sql that returns maybe 500 rows is pretty fast, it's the return\nof 10K+ rows that seems to stall and is CPU Bound.\n\nThanks\n\nTory\n\nOn Fri, Jan 22, 2010 at 1:42 AM, Richard Huxton <[email protected]> wrote:\nOn 21/01/10 22:15, Tory M Blue wrote:\n\n· Data distribution = In the 98mill records, there are 7000 unique\nmakeid's, and 21mill unique UID's. About 41mill of the records have\ntagged=true\n\n· Time to execute the following query with indices on makeid and\ntagged = 90-120 seconds. The planner uses the webid index and filters on\ntagged and then rechecks the webid index\n\n* SELECT COUNT(DISTINCT uid ) AS active_users FROM\npixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged =true*\n\n· Time to execute the the same query with a combined index on makeid\nand tagged = 60-100 seconds. The planner uses the combined index and then\nfilters tagged.\n\n\nTwo things:\n\n1. You have got the combined index on (makeid, tagged) and not (tagged, makeid) haven't you? Just checking.Yes we do \n\n2. If it's mostly tagged=true you are interested in you can always use a partial index: CREATE INDEX ... (makeid) WHERE tagged\nThis might be a win even if you need a second index with WHERE NOT tagged.Partial index doesn't seem to fit here due to the fact that there are 35-40% Marked True.Didn't think about creating a second index for false, may give that a shot. 
\n\n\n\nAlso, either I've not had enough cofee yet, or a bitmap scan is an odd choice for only ~ 13000 rows out of 100 million.\n\n\n* \" -> Bitmap Index Scan on\nidx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n\n* \" Index Cond: ((makeid = 'b1mw-ferman'::text)\nAND (tagged = true))\"*\n\n\nOtherwise, see what Craig said.\n\nI'm assuming this isn't the query that is CPU bound for a long time. Unless your table is horribly bloated, there's no reason for that judging by this plan.It is, but not always, only when there are 10K more matches. And the explain unfortunately is sometimes way high or way low, so the expalin is hit and miss.\nBut the same sql that returns maybe 500 rows is pretty fast, it's the return of 10K+ rows that seems to stall and is CPU Bound. Thanks Tory",
"msg_date": "Fri, 22 Jan 2010 10:03:35 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On 22/01/10 18:03, Tory M Blue wrote:\n> On Fri, Jan 22, 2010 at 1:42 AM, Richard Huxton<[email protected]> wrote:\n>\n>> On 21/01/10 22:15, Tory M Blue wrote:\n\n>> 2. If it's mostly tagged=true you are interested in you can always use a\n>> partial index: CREATE INDEX ... (makeid) WHERE tagged\n>> This might be a win even if you need a second index with WHERE NOT tagged.\n>>\n>\n> Partial index doesn't seem to fit here due to the fact that there are 35-40%\n> Marked True.\n>\n> Didn't think about creating a second index for false, may give that a shot.\n\nIf you're mostly search tagged=true, try the partial index - it'll mean \nthe planner is just scanning the index for the one term.\n\n>> Also, either I've not had enough cofee yet, or a bitmap scan is an odd\n>> choice for only ~ 13000 rows out of 100 million.\n>>\n>> * \" -> Bitmap Index Scan on\n>>>\n>>> idx_retargetuserstats_makeidtag (cost=0.00..360.20 rows=13175 width=0)\"*\n>>>\n>>> * \" Index Cond: ((makeid =\n>>> 'b1mw-ferman'::text)\n>>> AND (tagged = true))\"*\n>>>\n>>\n>> Otherwise, see what Craig said.\n>>\n>> I'm assuming this isn't the query that is CPU bound for a long time. Unless\n>> your table is horribly bloated, there's no reason for that judging by this\n>> plan.\n>\n> It is, but not always, only when there are 10K more matches. And the explain\n> unfortunately is sometimes way high or way low, so the expalin is hit and\n> miss.\n>\n> But the same sql that returns maybe 500 rows is pretty fast, it's the return\n> of 10K+ rows that seems to stall and is CPU Bound.\n\nHmm - might be able to push that cross-over point up a bit by tweaking \nvarious costs, but you've got to be careful you don't end up making all \nyour other queries worse. It'd be good to figure out what the problem is \nfirst.\n\nLooking at the query there are four stages:\n 1. Scan the index, build a bitmap of heap pages with matching rows\n 2. Scan those pages, find the rows that match\n 3. Run DISTINCT on the uids\n 4. Count them\nI wonder if it could be the DISTINCT. What happens with a count(*) or \ncount(uid) instead? Also - you might find EXPLAIN ANALYZE more useful \nthan straight EXPLAIN here. That will show actual times for each stage.\n\nOn Craig's branch of this thread, you say you call it 6000 times with \ndifferent \"makeid\"s. Any reason why you can't join to a temp table and \njust do it in one query?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 22 Jan 2010 18:25:00 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Fri, 22 Jan 2010, Tory M Blue wrote:\n> But the same sql that returns maybe 500 rows is pretty fast, it's the return\n> of 10K+ rows that seems to stall and is CPU Bound.\n\nOkay, so you have two differing cases. Show us the EXPLAIN ANALYSE for \nboth of them, and we will see what the difference is.\n\nMatthew\n\n-- \n The third years are wandering about all worried at the moment because they\n have to hand in their final projects. Please be sympathetic to them, say\n things like \"ha-ha-ha\", but in a sympathetic tone of voice \n -- Computer Science Lecturer\n",
"msg_date": "Fri, 22 Jan 2010 18:26:50 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Fri, Jan 22, 2010 at 10:59 AM, Tory M Blue <[email protected]> wrote:\n> On Thu, Jan 21, 2010 at 7:46 PM, Craig Ringer <[email protected]>\n> wrote:\n>> > Any assistance would be appreciated, don't worry about slapping me\n>> > around I need to figure this out. Otherwise I'm buying new hardware\n>> > where it may not be required.\n>>\n>> What is the reporting query that takes 26 hours? You didn't seem to\n>> include it, or any query plan information for it (EXPLAIN or EXPLAIN\n>> ANALYZE results).\n>\n> It's this query, run 6000 times with a diff makeid's\n>\n> SELECT COUNT(DISTINCT uid ) AS active_users FROM pixelpool.userstats\n> WHERE makeid ='bmw-ferman' AND tagged =true\n\nAny chance of trying this instead:\n\nselect makeid, count(distinct uid) as active_users from\npixelpool.userstats where tagged=true group by makeid\n\nAnd seeing how long it takes? If you're limiting the total number of\nmakeids then you could add\n\nand makeid in (biglistofmakeidsgoeshere)\n\nNote that a partial index of\n\ncreate index xyz on pixelpool.userstats (makeid) where tagged;\n\nmight help both the original and this query.\n",
"msg_date": "Fri, 22 Jan 2010 11:38:21 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Fri, Jan 22, 2010 at 10:26 AM, Matthew Wakeling <[email protected]> wrote:\n>\n> On Fri, 22 Jan 2010, Tory M Blue wrote:\n>>\n>> But the same sql that returns maybe 500 rows is pretty fast, it's the return\n>> of 10K+ rows that seems to stall and is CPU Bound.\n>\n> Okay, so you have two differing cases. Show us the EXPLAIN ANALYSE for both of them, and we will see what the difference is.\n>\n> Matthew\n\nOkay, understood\n\nHere is the explain plan for the query. Actual rows that the query\nreturns is 6369\n\nSLOW\n\nThis is for this SQL\n SELECT COUNT(distinct uid ) AS active_users FROM\npixelpool.userstats WHERE makeid =\n'gmps-armen-chevy' and tagged=true\n\nExplain\n\"Aggregate (cost=118883.96..118883.97 rows=1 width=8)\"\n\" -> Bitmap Heap Scan on userstats (cost=797.69..118850.46\nrows=13399 width=8)\"\n\" Recheck Cond: (makeid = 'gmps-armen-chevy'::text)\"\n\" Filter: tagged\"\n\" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n(cost=0.00..794.34 rows=33276 width=0)\"\n\" Index Cond: (makeid = 'gmps-armen-chevy'::text)\"\n\n\nExplain Analyze\n\"Aggregate (cost=118883.96..118883.97 rows=1 width=8) (actual\ntime=31219.376..31219.376 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on userstats (cost=797.69..118850.46\nrows=13399 width=8) (actual time=281.604..31190.290 rows=19799\nloops=1)\"\n\" Recheck Cond: (makeid = 'gmps-armen-chevy'::text)\"\n\" Filter: tagged\"\n\" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n(cost=0.00..794.34 rows=33276 width=0) (actual time=258.506..258.506\nrows=23242 loops=1)\"\n\" Index Cond: (makeid = 'gmps-armen-chevy'::text)\"\n\"Total runtime: 31219.536 ms\"\n\nFAST\n\n\"Explain\n\"explain SELECT a.makeid, COUNT(DISTINCT CASE WHEN tagged=true\nTHEN a.uid END)\n\"AS active_users,\n\"COUNT(DISTINCT CASE WHEN tagged=false THEN a.uid END) as unassociated\n\"FROM pixelpool.userstats a\n\"WHERE makeid ='gmps-oden' GROUP BY a.makeid\n\n\n\nExplain Analyze\n\n\"GroupAggregate (cost=802.66..119105.01 rows=1 width=23) (actual\ntime=3813.550..3813.551 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on userstats a (cost=802.66..118855.43\nrows=33276 width=23) (actual time=55.400..3807.908 rows=2606 loops=1)\"\n\" Recheck Cond: (makeid = 'gmps-oden'::text)\"\n\" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n(cost=0.00..794.34 rows=33276 width=0) (actual time=51.748..51.748\nrows=2677 loops=1)\"\n\" Index Cond: (makeid = 'gmps-oden'::text)\"\n\"Total runtime: 3813.626 ms\"\n\nThanks again\n\nTory\n",
"msg_date": "Fri, 22 Jan 2010 11:06:27 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Fri, Jan 22, 2010 at 11:06 AM, Tory M Blue <[email protected]> wrote:\n> On Fri, Jan 22, 2010 at 10:26 AM, Matthew Wakeling <[email protected]> wrote:\n>>\n>> On Fri, 22 Jan 2010, Tory M Blue wrote:\n>>>\n>>> But the same sql that returns maybe 500 rows is pretty fast, it's the return\n>>> of 10K+ rows that seems to stall and is CPU Bound.\n>>\n>> Okay, so you have two differing cases. Show us the EXPLAIN ANALYSE for both of them, and we will see what the difference is.\n>>\n>> Matthew\n>\n> Okay, understood\n>\n> Here is the explain plan for the query. Actual rows that the query\n> returns is 6369\n>\n> SLOW\n>\n> This is for this SQL\n> SELECT COUNT(distinct uid ) AS active_users FROM\n> pixelpool.userstats WHERE makeid =\n> 'gmps-armen-chevy' and tagged=true\n>\n> Explain\n> \"Aggregate (cost=118883.96..118883.97 rows=1 width=8)\"\n> \" -> Bitmap Heap Scan on userstats (cost=797.69..118850.46\n> rows=13399 width=8)\"\n> \" Recheck Cond: (makeid = 'gmps-armen-chevy'::text)\"\n> \" Filter: tagged\"\n> \" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n> (cost=0.00..794.34 rows=33276 width=0)\"\n> \" Index Cond: (makeid = 'gmps-armen-chevy'::text)\"\n>\n>\n> Explain Analyze\n> \"Aggregate (cost=118883.96..118883.97 rows=1 width=8) (actual\n> time=31219.376..31219.376 rows=1 loops=1)\"\n> \" -> Bitmap Heap Scan on userstats (cost=797.69..118850.46\n> rows=13399 width=8) (actual time=281.604..31190.290 rows=19799\n> loops=1)\"\n> \" Recheck Cond: (makeid = 'gmps-armen-chevy'::text)\"\n> \" Filter: tagged\"\n> \" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n> (cost=0.00..794.34 rows=33276 width=0) (actual time=258.506..258.506\n> rows=23242 loops=1)\"\n> \" Index Cond: (makeid = 'gmps-armen-chevy'::text)\"\n> \"Total runtime: 31219.536 ms\"\n\nSorry mis-pasted the FAST!!!\n\nExplain\n\"GroupAggregate (cost=802.66..119105.01 rows=1 width=23)\"\n\" -> Bitmap Heap Scan on userstats a (cost=802.66..118855.43\nrows=33276 width=23)\"\n\" Recheck Cond: (makeid = 'gmps-oden'::text)\"\n\" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n(cost=0.00..794.34 rows=33276 width=0)\"\n\" Index Cond: (makeid = 'gmps-oden'::text)\"\n\n\nExplain Analyze\n\"GroupAggregate (cost=802.66..119105.01 rows=1 width=23) (actual\ntime=3813.550..3813.551 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on userstats a (cost=802.66..118855.43\nrows=33276 width=23) (actual time=55.400..3807.908 rows=2606 loops=1)\"\n\" Recheck Cond: (makeid = 'gmps-oden'::text)\"\n\" -> Bitmap Index Scan on idx_retargetuserstats_makeid\n(cost=0.00..794.34 rows=33276 width=0) (actual time=51.748..51.748\nrows=2677 loops=1)\"\n\" Index Cond: (makeid = 'gmps-oden'::text)\"\n\"Total runtime: 3813.626 ms\"\n",
"msg_date": "Fri, 22 Jan 2010 11:08:02 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On 23/01/2010 1:59 AM, Tory M Blue wrote:\n\n> It's this query, run 6000 times with a diff makeid's /\n> /\n>\n> /SELECT COUNT(DISTINCT uid ) AS active_users FROM\n> pixelpool.userstats WHERE makeid ='bmw-ferman' AND tagged =true/\n>\n> / Plan/\n>\n> / \"Aggregate (cost=49467.00..49467.01 rows=1 width=8)\"/\n>\n> / \" -> Bitmap Heap Scan on userstats (cost=363.49..49434.06\n> rows=13175 width=8)\"/\n>\n> / \" Recheck Cond: (makeid = 'b1mw-ferman'::text)\"/\n>\n> / \" Filter: tagged\"/\n>\n> / \" -> Bitmap Index Scan on idx_retargetuserstats_makeidtag\n> (cost=0.00..360.20 rows=13175 width=0)\"/\n>\n> / \" Index Cond: ((makeid = 'b1mw-ferman'::text) AND (tagged\n> = true))\"/\n\nTry:\n\n- Adding a partial index on makeid, eg:\n\n CREATE INDEX userstats_makeid_where_tagged_idx\n ON userstats (makeid) WHERE (tagged);\n\n- Instead of repeating the query 6000 times in a loop, collect the data \nin one pass by joining against a temp table containing the makeids of \ninterest.\n\nSELECT COUNT(DISTINCT u.uid) AS active_users\nFROM pixelpool.userstats u\nINNER JOIN temp_makeids m ON (u.makeid = m.makeid)\nWHERE u.tagged = true;\n\n(If the 6000 repeats are really a correlated subquery part of a bigger \nquery you still haven't shown, then you might be able to avoid 6000 \nindividual passes by adjusting your outer query instead).\n\n--\nCraig Ringer\n",
"msg_date": "Sat, 23 Jan 2010 10:25:17 +0800",
"msg_from": "Craig Ringer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
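Craig's join assumes a temp_makeids table already exists; a sketch of that setup, combined with the GROUP BY form Scott posted so that all makeids are counted in a single pass. The INSERT shown simply loads every makeid and is only illustrative; in practice it would be the list that currently drives the 6000 individual report calls.

    CREATE TEMP TABLE temp_makeids (makeid text PRIMARY KEY);

    INSERT INTO temp_makeids (makeid)
        SELECT DISTINCT makeid FROM pixelpool.userstats;

    -- one scan instead of one query per makeid
    SELECT u.makeid, COUNT(DISTINCT u.uid) AS active_users
      FROM pixelpool.userstats u
      JOIN temp_makeids m ON m.makeid = u.makeid
     WHERE u.tagged
     GROUP BY u.makeid;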
{
"msg_contents": "On 22/01/10 19:06, Tory M Blue wrote:\n\n > Here is the explain plan for the query. Actual rows that the query\n > returns is 6369\n\nActually, it processes 19,799 rows (see the actual rows= below).\n\n> SLOW\n\n> \" -> Bitmap Heap Scan on userstats (cost=797.69..118850.46\n> rows=13399 width=8) (actual time=281.604..31190.290 rows=19799\n> loops=1)\"\n\n> \"Total runtime: 31219.536 ms\"\n\n> FAST\n\n> \" -> Bitmap Heap Scan on userstats a (cost=802.66..118855.43\n> rows=33276 width=23) (actual time=55.400..3807.908 rows=2606 loops=1)\"\n\n> \"Total runtime: 3813.626 ms\"\n\nOK - so the first query processes 19,799 rows in 31,219 ms (about 1.5ms \nper row)\n\nThe second processes 2,606 rows in 3,813 ms (about 1.3ms per row).\n\nYou are asking for DISTINCT user-ids, so it's seems reasonable that it \nwill take slightly longer to check a larger set of user-ids.\n\nOtherwise, both queries are the same. I'm still a little puzzled by the \nbitmap scan, but the planner probably knows more about your data than I do.\n\nThe main time is spent in the \"bitmap heap scan\" which is where it's \ngrabbing actual row data (and presumably building a hash over the uid \ncolumn). you can see how long in the \"actual time\" the first number \n(e.g. 281.604) is the time spent before it starts, and the second is the \ntotal time at finish (31190.290). If \"loops\" was greater than 1 you \nwould multiply the times by the number of loops to get a total.\n\nSo - there's nothing \"wrong\" in the sense that the second query does the \nsame as the first. Let's take a step back. What you really want is your \nreports to be faster.\n\nYou mentioned you were running this query thousands of times with a \ndifferent \"makeid\" each time. Running it once for all possible values \nand stashing the results in a temp table will probably be *much* faster. \nThe planner can just scan the whole table once and build up its results \nas it goes.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 25 Jan 2010 10:54:30 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Mon, 25 Jan 2010, Richard Huxton wrote:\n> OK - so the first query processes 19,799 rows in 31,219 ms (about 1.5ms per \n> row)\n>\n> The second processes 2,606 rows in 3,813 ms (about 1.3ms per row).\n\nAgreed. One query is faster than the other because it has to do an eighth \nthe amount of work.\n\nMatthew\n\n-- \n I wouldn't be so paranoid if you weren't all out to get me!!\n",
"msg_date": "Mon, 25 Jan 2010 11:59:41 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
},
{
"msg_contents": "On Mon, Jan 25, 2010 at 3:59 AM, Matthew Wakeling <[email protected]> wrote:\n> On Mon, 25 Jan 2010, Richard Huxton wrote:\n>>\n>> OK - so the first query processes 19,799 rows in 31,219 ms (about 1.5ms\n>> per row)\n>>\n>> The second processes 2,606 rows in 3,813 ms (about 1.3ms per row).\n>\n> Agreed. One query is faster than the other because it has to do an eighth\n> the amount of work.\n>\n> Matthew\n\nThanks guys, ya this has dropped the time by half. The process is\nmanageable now. Thanks again. For some reason we thought this method\nwould make it take more time, vs less. So again appreciate the help :)\n\nTory\n",
"msg_date": "Mon, 25 Jan 2010 14:17:14 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Data Set Growth causing 26+hour runtime, on what we\n\tbelieve to be very simple SQL"
}
] |
[
{
"msg_contents": "Does anyone know if there exists an open source implementation of the \nTPC-C benchmark for postgresql somewhere?\n",
"msg_date": "Thu, 21 Jan 2010 23:51:00 +0100",
"msg_from": "tmp <[email protected]>",
"msg_from_op": true,
"msg_subject": "TPC-C implementation for postgresql?"
},
{
"msg_contents": "tmp wrote:\n> Does anyone know if there exists an open source implementation of the \n> TPC-C benchmark for postgresql somewhere?\n>\n\nhttp://osdldbt.sourceforge.net/\nhttp://wiki.postgresql.org/wiki/DBT-2\nhttp://www.slideshare.net/markwkm/postgresql-portland-performance-practice-project-database-test-2-howto\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Thu, 21 Jan 2010 19:10:40 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPC-C implementation for postgresql?"
}
] |
[
{
"msg_contents": "Hello All,\n\nHow to identify if a table requires full vacuum? How to identify when to do\nre-index on an existing index of a table?\n\nIs there any tool for the above?\n\nThanks\nDeepak Murthy\n\nHello All,How to identify if a table requires full vacuum? How to identify when to do re-index on an existing index of a table?Is there any tool for the above?\nThanksDeepak Murthy",
"msg_date": "Fri, 22 Jan 2010 00:11:37 -0800",
"msg_from": "DM <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fragmentation/Vacuum, Analyze, Re-Index"
},
{
"msg_contents": "Is there any script/tool to identify if the table requires full vacuum? or\nto re-index an existing index table?\n\nThanks\nDeepak\n\nOn Fri, Jan 22, 2010 at 12:11 AM, DM <[email protected]> wrote:\n\n> Hello All,\n>\n> How to identify if a table requires full vacuum? How to identify when to do\n> re-index on an existing index of a table?\n>\n> Is there any tool for the above?\n>\n> Thanks\n> Deepak Murthy\n>\n>\n>\n\nIs there any script/tool to identify if the table requires full vacuum? or to re-index an existing index table?ThanksDeepakOn Fri, Jan 22, 2010 at 12:11 AM, DM <[email protected]> wrote:\nHello All,How to identify if a table requires full vacuum? How to identify when to do re-index on an existing index of a table?\nIs there any tool for the above?\nThanksDeepak Murthy",
"msg_date": "Fri, 22 Jan 2010 09:10:48 -0800",
"msg_from": "DM <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fragmentation/Vacuum, Analyze, Re-Index"
},
{
"msg_contents": "DM wrote:\n> Is there any script/tool to identify if the table requires full vacuum? \n> or to re-index an existing index table?\n> \n\nDon't know if there is a script to specifically do this, though you may \nfind this query a useful one:\n\nSELECT relname, reltuples, relpages FROM pg_class ORDER BY relpages DESC;\n\n\n(it shows what's currently using most of the disk).\n\n\nIn general though, you should never use \"VACUUM FULL\". The best bet is \nto tune autovacuum to be more aggressive, and then occasionally run CLUSTER.\n\nBest wishes,\n\nRichard\n\n\n\n> Thanks\n> Deepak\n> \n",
"msg_date": "Fri, 22 Jan 2010 19:27:27 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation/Vacuum, Analyze, Re-Index"
},
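Alongside the pg_class query above, the statistics collector gives a rough per-table dead-row count that can help decide when a VACUUM or CLUSTER is worthwhile; a sketch (the 10000-row threshold here is an arbitrary example, not from the thread):

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           round(100.0 * n_dead_tup / GREATEST(n_live_tup + n_dead_tup, 1), 1) AS pct_dead
      FROM pg_stat_user_tables
     WHERE n_dead_tup > 10000
     ORDER BY n_dead_tup DESC;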
{
"msg_contents": "On 1/22/2010 2:27 PM, Richard Neill wrote:\n> DM wrote:\n>> Is there any script/tool to identify if the table requires full\n>> vacuum? or to re-index an existing index table?\n\nhttp://pgsql.tapoueh.org/site/html/news/20080131.bloat.html\n\nThe bucardo project has released its nagios plugins for PostgreSQL and we can extract from them this nice view \nin order to check for table and index bloat into our PostgreSQL databases:\n",
"msg_date": "Sat, 23 Jan 2010 01:32:11 -0500",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fragmentation/Vacuum, Analyze, Re-Index"
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nFor the explain analyze here's the output:\n\n\"Seq Scan on in_sortie (cost=0.00..171140.19 rows=114449 width=84) (actual\ntime=15.074..28461.349 rows=99611 loops=1)\"\n\n\" Output: type, site_id, fiche_produit_id, numero_commande, ligne_commande,\ndate_sortie, quantite_sortie, date_livraison_souhaitee, quantite_souhaitee,\nclient_ref, valeur, type_mouvement, etat_sortie_annulation,\netat_sortie_prevision, etat_sortie_taux_service, date_commande, valide\"\n\n\" Filter: (valeur < 0.83)\"\n\n\"Total runtime: 104233.651 ms\"\n\n \n\n(Although the total runtime is 104233.651 ms when I run the query it takes\n2.5 mins)\n\n \n\n-Concerning the exact version of postgresql I'm using, here is the result of\nthe select version() :\n\nPostgreSQL 8.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.4.6\n20060404 (Red Hat 3.4.6-10), 32-bit\n\n \n\n- for the postgresql.conf I've attached the file.\n\n \n\n-Concerning the query, I'm sorry; it seems that I did not explain the\nproblem clearly enough. Here's a better explanation:\n\nThis update, shown below, is just one step in a long process. After\nprocessing certain rows, these rows have to be flagged so they don't get\nprocessed another time.\n\nUPDATE IN_SORTIE SET VALIDE = 'O' WHERE VALEUR < 0.83\n\nThe [SET VALIDE = 'O'] merely flags this row as already processed.\n\nThe where clause that identifies these rows is rather simple: [WHERE VALEUR\n< 0.83]. It affects around 100,000 records in a table that contains around\n3,000,000.\n\nWe are running this process on both Oracle and Postgres. I have noticed that\nthis particular UPDATE statement for the same table size and the same number\nof rows affected, takes 11 seconds on Oracle while it takes 2.5 minutes on\nPostgres.\n\nKnowing that there are no indexes on either database for this table;\n\n \n\nSo the problem can be resumed by the following: why a query like UPDATE\nIN_SORTIE SET VALIDE = 'O' WHERE VALEUR < 0.83 takes 2.5 min on Postgresql\nknowing that it is issued on a table containing around 3 000 000 records and\naffects around 1 00 000 record\n\n \n\nThanks again for your advise",
"msg_date": "Fri, 22 Jan 2010 14:42:39 +0200",
"msg_from": "\"elias ghanem\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow update query"
}
] |
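Since the real workload flags rows with VALIDE = 'O' rather than rewriting them unchanged, two adjustments are sometimes worth testing for this kind of repeated flag-update. Neither comes from the thread, and both would need verification against the actual data: leave free space on each page so the new row versions can stay on the same page (HOT), and skip rows that are already flagged.

    -- affects pages filled from now on; existing pages are not rewritten
    ALTER TABLE in_sortie SET (fillfactor = 70);

    UPDATE in_sortie
       SET valide = 'O'
     WHERE valeur < 0.83
       AND valide IS DISTINCT FROM 'O';  -- do not touch rows already flagged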
[
{
"msg_contents": "Hello,\n\n I am working on a project that will take out structured content\nfrom wikipedia\nand put it in our database. Before putting the data into the database I\nwrote a script to\nfind out the number of rows every table would be having after the data is in\nand I found\nthere is a table which will approximately have 5 crore entries after data\nharvesting.\nIs it advisable to keep so much data in one table ?\n I have read about 'partitioning' a table. An other idea I have is\nto break the table into\ndifferent tables after the no of rows in a table has reached a certain\nlimit say 10 lacs.\nFor example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\neach having 10 lac entries.\nI needed advice on whether I should go for partitioning or the approach I\nhave thought of.\n We have a HP server with 32GB ram,16 processors. The storage has\n24TB diskspace (1TB/HD).\nWe have put them on RAID-5. It will be great if we could know the parameters\nthat can be changed in the\npostgres configuration file so that the database makes maximum utilization\nof the server we have.\nFor eg parameters that would increase the speed of inserts and selects.\n\n\nThank you in advance\nRajiv Nair\n\nHello, I am working on a project that will take out structured content from wikipediaand put it in our database. Before putting the data into the database I wrote a script tofind out the number of rows every table would be having after the data is in and I found\n\nthere is a table which will approximately have 5 crore entries after data harvesting.Is it advisable to keep so much data in one table ? I have read about 'partitioning' a table. An other idea I have is to break the table into \n\ndifferent tables after the no of rows in a table has reached a certain limit say 10 lacs. For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b' each having 10 lac entries. \n\nI needed advice on whether I should go for partitioning or the approach I have thought of. We have a HP server with 32GB ram,16 processors. The storage has 24TB diskspace (1TB/HD).We have put them on RAID-5. It will be great if we could know the parameters that can be changed in the\n\npostgres configuration file so that the database makes maximum utilization of the server we have.For eg parameters that would increase the speed of inserts and selects.Thank you in advanceRajiv Nair",
"msg_date": "Mon, 25 Jan 2010 22:53:41 +0530",
"msg_from": "nair rajiv <[email protected]>",
"msg_from_op": true,
"msg_subject": "splitting data into multiple tables"
},
{
"msg_contents": "On Mon, Jan 25, 2010 at 10:53 PM, nair rajiv <[email protected]> wrote:\n\n> Hello,\n>\n> I am working on a project that will take out structured content\n> from wikipedia\n> and put it in our database. Before putting the data into the database I\n> wrote a script to\n> find out the number of rows every table would be having after the data is\n> in and I found\n> there is a table which will approximately have 5 crore entries after data\n> harvesting.\n> Is it advisable to keep so much data in one table ?\n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into\n> different tables after the no of rows in a table has reached a certain\n> limit say 10 lacs.\n> For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> each having 10 lac entries.\n> I needed advice on whether I should go for partitioning or the approach I\n> have thought of.\n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the\n> postgres configuration file so that the database makes maximum utilization\n> of the server we have.\n> For eg parameters that would increase the speed of inserts and selects.\n>\n>\n> Thank you in advance\n> Rajiv Nair\n\n\nWe have several servers that regularly run into records exceeding 50 million\nrecords on dual quad core machine with 8 GB RAM and 4 SAS 15K hard disks in\nRAID 10. If 50 million is the max amount of records that you are looking\nat, I would suggest not breaking the table. Rather, configure the database\nsettings present in postgresql.conf file to handle such loads.\n\nYou already have a powerful machine (I assume it's 16 core, not 16 physical\nprocessors), and if configured well, I hope would present no problems in\naccessing those records. For tuning PostgreSql, you can take a look at\npgtune (http://pgfoundry.org/projects/pgtune/) .\n\nTwo changes that I can suggest in your hardware would be to go in for SAS\n15K disks instead of SATA if you can do with less capacity, and goign in for\nRAID 10 instead of RAID 5.\n\n\nRegards\n\nAmitabh Kant\n\nOn Mon, Jan 25, 2010 at 10:53 PM, nair rajiv <[email protected]> wrote:\n\nHello, I am working on a project that will take out structured content from wikipediaand put it in our database. Before putting the data into the database I wrote a script tofind out the number of rows every table would be having after the data is in and I found\n\n\n\nthere is a table which will approximately have 5 crore entries after data harvesting.Is it advisable to keep so much data in one table ? I have read about 'partitioning' a table. An other idea I have is to break the table into \n\n\n\ndifferent tables after the no of rows in a table has reached a certain limit say 10 lacs. For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b' each having 10 lac entries. \n\n\n\nI needed advice on whether I should go for partitioning or the approach I have thought of. We have a HP server with 32GB ram,16 processors. The storage has 24TB diskspace (1TB/HD).We have put them on RAID-5. 
It will be great if we could know the parameters that can be changed in the\n\n\n\npostgres configuration file so that the database makes maximum utilization of the server we have.For eg parameters that would increase the speed of inserts and selects.Thank you in advanceRajiv Nair\nWe have several servers that regularly run into records exceeding 50 million records on dual quad core machine with 8 GB RAM and 4 SAS 15K hard disks in RAID 10. If 50 million is the max amount of records that you are looking at, I would suggest not breaking the table. Rather, configure the database settings present in postgresql.conf file to handle such loads. \nYou already have a powerful machine (I assume it's 16 core, not 16 physical processors), and if configured well, I hope would present no problems in accessing those records. For tuning PostgreSql, you can take a look at pgtune (http://pgfoundry.org/projects/pgtune/) .\nTwo changes that I can suggest in your hardware would be to go in for SAS 15K disks instead of SATA if you can do with less capacity, and goign in for RAID 10 instead of RAID 5.RegardsAmitabh Kant",
"msg_date": "Mon, 25 Jan 2010 23:03:23 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
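For readers following up on the pgtune suggestion above, a rough sketch of commonly suggested postgresql.conf starting points for a dedicated machine with 32GB of RAM (illustrative assumptions only, not values recommended anywhere in this thread and not pgtune's actual output, which depends on the workload type chosen; benchmark against the real workload before adopting any of them):

shared_buffers = 8GB             # about 25% of RAM is a common starting point
effective_cache_size = 24GB      # rough estimate of memory available for caching
work_mem = 32MB                  # per sort/hash node, so keep modest with many connections
maintenance_work_mem = 1GB       # speeds bulk loads, index builds and VACUUM
checkpoint_segments = 32         # larger, less frequent checkpoints during heavy inserts
wal_buffers = 8MB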
{
"msg_contents": "On Mon, Jan 25, 2010 at 10:53 PM, nair rajiv <[email protected]> wrote:\n\n> Hello,\n>\n> I am working on a project that will take out structured content\n> from wikipedia\n> and put it in our database. Before putting the data into the database I\n> wrote a script to\n> find out the number of rows every table would be having after the data is\n> in and I found\n> there is a table which will approximately have 5 crore entries after data\n> harvesting.\n> Is it advisable to keep so much data in one table ?\n>\n\nIt is not good to keep these much amount of data in a single table, again,\nit depends on your application and the database usage.\n\n\n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into\n> different tables after the no of rows in a table has reached a certain\n> limit say 10 lacs.\n> For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> each having 10 lac entries.\n>\n\nI think this wont help that much if you have a single machine. Partition the\ntable and keep the data in different nodes. Have a look at the tools like\npgpool.II\n\n\n> I needed advice on whether I should go for partitioning or the approach I\n> have thought of.\n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the\n> postgres configuration file so that the database makes maximum utilization\n> of the server we have.\n>\n\nWhat would be your total data base size? What is the IOPS? You should\npartition the db and keep the data across multiple nodes and process them in\nparallel.\n\n\n> For eg parameters that would increase the speed of inserts and selects.\n>\n>\n>\npgfoundry.org/projects/*pgtune*/ - have a look at check the docs\n\n\n\n\n> Thank you in advance\n> Rajiv Nair\n\nOn Mon, Jan 25, 2010 at 10:53 PM, nair rajiv <[email protected]> wrote:\nHello, I am working on a project that will take out structured content from wikipediaand put it in our database. Before putting the data into the database I wrote a script tofind out the number of rows every table would be having after the data is in and I found\n\n\nthere is a table which will approximately have 5 crore entries after data harvesting.Is it advisable to keep so much data in one table ?It is not good to keep these much amount of data in a single table, again, it depends on your application and the database usage.\n I have read about 'partitioning' a table. An other idea I have is to break the table into \n\n\ndifferent tables after the no of rows in a table has reached a certain limit say 10 lacs. For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b' each having 10 lac entries. \nI think this wont help that much if you have a single machine. Partition the table and keep the data in different nodes. Have a look at the tools like pgpool.II \n\n\nI needed advice on whether I should go for partitioning or the approach I have thought of. We have a HP server with 32GB ram,16 processors. The storage has 24TB diskspace (1TB/HD).We have put them on RAID-5. It will be great if we could know the parameters that can be changed in the\n\n\npostgres configuration file so that the database makes maximum utilization of the server we have.What would be your total data base size? What is the IOPS? 
You should partition the db and keep the data across multiple nodes and process them in parallel.\n For eg parameters that would increase the speed of inserts and selects.\npgfoundry.org/projects/pgtune/ - have a look at check the docs \nThank you in advanceRajiv Nair",
"msg_date": "Mon, 25 Jan 2010 23:09:17 +0530",
"msg_from": "Viji V Nair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "nair rajiv <[email protected]> wrote:\n \n> I found there is a table which will approximately have 5 crore\n> entries after data harvesting.\n> Is it advisable to keep so much data in one table ?\n \nThat's 50,000,000 rows, right? At this site, you're looking at a\nnon-partitioned table with more than seven times that if you go to a\ncase and click the \"Court Record Events\" button:\n \nhttp://wcca.wicourts.gov/\n \n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into different tables after the no of rows in\n> a table has reached a certain limit say 10 lacs.\n> For example, dividing a table 'datatable' to 'datatable_a',\n> 'datatable_b' each having 10 lac entries.\n> I needed advice on whether I should go for partitioning or the\n> approach I have thought of.\n \nIt can help, and it can hurt. It depends on the nature of the data\nand how it is used. To get a meaningful answer, I think we'd need\nto know a bit more about it.\n \n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the postgres configuration file\n> so that the database makes maximum utilization of the server we\n> have.\n \nAgain, it depends a bit on the nature of the queries. For ideas on\nwhere to start, you might want to look here:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nIf you get any particular queries which aren't performing as well as\nyou think they should, you can post here with details. See this for\ninformation to include:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n",
"msg_date": "Mon, 25 Jan 2010 11:47:22 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "Kevin Grittner wrote:\n> nair rajiv <[email protected]> wrote:\n> \n>> I found there is a table which will approximately have 5 crore\n>> entries after data harvesting.\n>> Is it advisable to keep so much data in one table ?\n> \n> That's 50,000,000 rows, right?\n\nYou should remember that words like lac and crore are not English words, and most English speakers around the world don't know what they mean. Thousand, million, billion and so forth are the English words that everyone knows.\n\nCraig\n",
"msg_date": "Mon, 25 Jan 2010 11:31:38 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Tue, Jan 26, 2010 at 1:01 AM, Craig James <[email protected]>wrote:\n\n> Kevin Grittner wrote:\n>\n>> nair rajiv <[email protected]> wrote:\n>>\n>>\n>>> I found there is a table which will approximately have 5 crore\n>>> entries after data harvesting.\n>>> Is it advisable to keep so much data in one table ?\n>>>\n>> That's 50,000,000 rows, right?\n>>\n>\n> You should remember that words like lac and crore are not English words,\n> and most English speakers around the world don't know what they mean.\n> Thousand, million, billion and so forth are the English words that everyone\n> knows.\n>\n\n\n\nOh I am Sorry. I wasn't aware of that\nI repost my query with suggested changes.\n\n\n\nHello,\n\n I am working on a project that will take out structured content\nfrom wikipedia\nand put it in our database. Before putting the data into the database I\nwrote a script to\nfind out the number of rows every table would be having after the data is in\nand I found\nthere is a table which will approximately have 50,000,000 rows after data\nharvesting.\nIs it advisable to keep so much data in one table ?\n I have read about 'partitioning' a table. An other idea I have is\nto break the table into\ndifferent tables after the no of rows in a table has reached a certain\nlimit say 10,00,000.\nFor example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\neach having 10,00,000 rows.\nI needed advice on whether I should go for partitioning or the approach I\nhave thought of.\n We have a HP server with 32GB ram,16 processors. The storage has\n24TB diskspace (1TB/HD).\nWe have put them on RAID-5. It will be great if we could know the parameters\nthat can be changed in the\npostgres configuration file so that the database makes maximum utilization\nof the server we have.\nFor eg parameters that would increase the speed of inserts and selects.\n\n\nThank you in advance\nRajiv Nair\n\n>\n> Craig\n>\n\nOn Tue, Jan 26, 2010 at 1:01 AM, Craig James <[email protected]> wrote:\nKevin Grittner wrote:\n\nnair rajiv <[email protected]> wrote:\n \n\nI found there is a table which will approximately have 5 crore\nentries after data harvesting.\nIs it advisable to keep so much data in one table ?\n\n That's 50,000,000 rows, right?\n\n\nYou should remember that words like lac and crore are not English words, and most English speakers around the world don't know what they mean. Thousand, million, billion and so forth are the English words that everyone knows.\nOh I am Sorry. I wasn't aware of that I repost my query with suggested changes.Hello, I am working on a project that will take out structured content from wikipedia\nand put it in our database. Before putting the data into the database I wrote a script tofind out the number of rows every table would be having after the data is in and I found\n\nthere is a table which will approximately have 50,000,000 rows after data harvesting.Is it advisable to keep so much data in one table ? I have read about 'partitioning' a table. An other idea I have is to break the table into \n\n\ndifferent tables after the no of rows in a table has reached a certain limit say 10,00,000. For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b' each having 10,00,000 rows. \n\n\nI needed advice on whether I should go for partitioning or the approach I have thought of. We have a HP server with 32GB ram,16 processors. The storage has 24TB diskspace (1TB/HD).We have put them on RAID-5. 
It will be great if we could know the parameters that can be changed in the\n\n\npostgres configuration file so that the database makes maximum utilization of the server we have.For eg parameters that would increase the speed of inserts and selects.Thank you in advanceRajiv Nair \n\n\nCraig",
"msg_date": "Tue, 26 Jan 2010 06:09:48 +0530",
"msg_from": "nair rajiv <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Tuesday 26 January 2010 01:39:48 nair rajiv wrote:\n> On Tue, Jan 26, 2010 at 1:01 AM, Craig James \n<[email protected]>wrote:\n> I am working on a project that will take out structured content\n> from wikipedia\n> and put it in our database. Before putting the data into the database I\n> wrote a script to\n> find out the number of rows every table would be having after the data is\n> in and I found\n> there is a table which will approximately have 50,000,000 rows after data\n> harvesting.\n> Is it advisable to keep so much data in one table ?\nDepends on your access patterns. I.e. how many rows are you accessing at the \nsame time - do those have some common locality and such.\n\n\n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into\n> different tables after the no of rows in a table has reached a certain\n> limit say 10,00,000.\n> For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> each having 10,00,000 rows.\n> I needed advice on whether I should go for partitioning or the approach I\n> have thought of.\nYour approach is pretty close to partitioning - except that partitioning makes \nthat mostly invisible to the outside so it is imho preferrable.\n\n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the\n> postgres configuration file so that the database makes maximum utilization\n> of the server we have.\n> For eg parameters that would increase the speed of inserts and selects.\nNot using RAID-5 possibly would be a good start - many people (me included) \nexperienced bad write performance on it. It depends a great deal on the \ncontroller/implementation though.\nRAID-10 is normally to be considered more advantageous despite its lower \nusable space ratio.\nDid you create one big RAID-5 out of all disks? Thats not a good idea, because \nits pretty likely that another disk fails while you restore a previously \nfailed disk. Unfortunately in that configuration that means you have lost your \ncomplete data (in the most common implementations at least).\n\nAndres\n\nPS: Your lines are strangely wrapped...\n",
"msg_date": "Tue, 26 Jan 2010 01:49:09 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Tue, Jan 26, 2010 at 6:19 AM, Andres Freund <[email protected]> wrote:\n\n> On Tuesday 26 January 2010 01:39:48 nair rajiv wrote:\n> > On Tue, Jan 26, 2010 at 1:01 AM, Craig James\n> <[email protected]>wrote:\n> > I am working on a project that will take out structured content\n> > from wikipedia\n> > and put it in our database. Before putting the data into the database I\n> > wrote a script to\n> > find out the number of rows every table would be having after the data is\n> > in and I found\n> > there is a table which will approximately have 50,000,000 rows after data\n> > harvesting.\n> > Is it advisable to keep so much data in one table ?\n> Depends on your access patterns. I.e. how many rows are you accessing at\n> the\n> same time - do those have some common locality and such.\n>\n\n I'll give a brief idea of how this table is. The important columns\nare\nsubject, predicate and object. So given a predicate and object one should\nbe able to get all the subjects, given subject and a predicate one should\nbe able to retrieve all the objects. I have created an indexes on these\nthree\ncolumns.\n\n>\n>\n> > I have read about 'partitioning' a table. An other idea I have\n> is\n> > to break the table into\n> > different tables after the no of rows in a table has reached a certain\n> > limit say 10,00,000.\n> > For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> > each having 10,00,000 rows.\n> > I needed advice on whether I should go for partitioning or the approach I\n> > have thought of.\n> Your approach is pretty close to partitioning - except that partitioning\n> makes\n> that mostly invisible to the outside so it is imho preferrable.\n>\n> > We have a HP server with 32GB ram,16 processors. The storage\n> has\n> > 24TB diskspace (1TB/HD).\n> > We have put them on RAID-5. It will be great if we could know the\n> > parameters that can be changed in the\n> > postgres configuration file so that the database makes maximum\n> utilization\n> > of the server we have.\n> > For eg parameters that would increase the speed of inserts and selects.\n> Not using RAID-5 possibly would be a good start - many people (me included)\n> experienced bad write performance on it. It depends a great deal on the\n> controller/implementation though.\n> RAID-10 is normally to be considered more advantageous despite its lower\n> usable space ratio.\n> Did you create one big RAID-5 out of all disks? Thats not a good idea,\n> because\n> its pretty likely that another disk fails while you restore a previously\n> failed disk. Unfortunately in that configuration that means you have lost\n> your\n> complete data (in the most common implementations at least).\n>\n\nNo, I am using only 12TB i.e 12 HDs of the 24TB I have\n\n>\n> Andres\n>\n> PS: Your lines are strangely wrapped...\n>\n\nOn Tue, Jan 26, 2010 at 6:19 AM, Andres Freund <[email protected]> wrote:\nOn Tuesday 26 January 2010 01:39:48 nair rajiv wrote:\n> On Tue, Jan 26, 2010 at 1:01 AM, Craig James\n<[email protected]>wrote:\n> I am working on a project that will take out structured content\n> from wikipedia\n> and put it in our database. Before putting the data into the database I\n> wrote a script to\n> find out the number of rows every table would be having after the data is\n> in and I found\n> there is a table which will approximately have 50,000,000 rows after data\n> harvesting.\n> Is it advisable to keep so much data in one table ?\nDepends on your access patterns. I.e. 
how many rows are you accessing at the\nsame time - do those have some common locality and such. I'll give a brief idea of how this table is. The important columns aresubject, predicate and object. So given a predicate and object one should\nbe able to get all the subjects, given subject and a predicate one shouldbe able to retrieve all the objects. I have created an indexes on these threecolumns.\n\n\n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into\n> different tables after the no of rows in a table has reached a certain\n> limit say 10,00,000.\n> For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> each having 10,00,000 rows.\n> I needed advice on whether I should go for partitioning or the approach I\n> have thought of.\nYour approach is pretty close to partitioning - except that partitioning makes\nthat mostly invisible to the outside so it is imho preferrable.\n\n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the\n> postgres configuration file so that the database makes maximum utilization\n> of the server we have.\n> For eg parameters that would increase the speed of inserts and selects.\nNot using RAID-5 possibly would be a good start - many people (me included)\nexperienced bad write performance on it. It depends a great deal on the\ncontroller/implementation though.\nRAID-10 is normally to be considered more advantageous despite its lower\nusable space ratio.\nDid you create one big RAID-5 out of all disks? Thats not a good idea, because\nits pretty likely that another disk fails while you restore a previously\nfailed disk. Unfortunately in that configuration that means you have lost your\ncomplete data (in the most common implementations at least).No, I am using only 12TB i.e 12 HDs of the 24TB I have \n\nAndres\n\nPS: Your lines are strangely wrapped...",
"msg_date": "Tue, 26 Jan 2010 09:18:54 +0530",
"msg_from": "nair rajiv <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: splitting data into multiple tables"
},
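A hedged sketch of how the two access patterns described above (predicate + object -> subjects, subject + predicate -> objects) map onto composite indexes; the table name, column names and example values are assumptions for illustration, not taken from the actual schema:

CREATE INDEX datatable_pred_obj_idx ON datatable (predicate, object);
CREATE INDEX datatable_subj_pred_idx ON datatable (subject, predicate);

-- a lookup such as "all subjects for a given predicate and object" can then
-- be answered from a single index scan instead of combining three
-- single-column indexes
SELECT subject
  FROM datatable
 WHERE predicate = 'dbpedia-owl:birthPlace'
   AND object = 'dbpedia:Berlin';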
{
"msg_contents": "On Tue, Jan 26, 2010 at 9:18 AM, nair rajiv <[email protected]> wrote:\n\n>\n>\n> On Tue, Jan 26, 2010 at 6:19 AM, Andres Freund <[email protected]> wrote:\n>\n>> On Tuesday 26 January 2010 01:39:48 nair rajiv wrote:\n>> > On Tue, Jan 26, 2010 at 1:01 AM, Craig James\n>> <[email protected]>wrote:\n>> > I am working on a project that will take out structured\n>> content\n>> > from wikipedia\n>> > and put it in our database. Before putting the data into the database I\n>> > wrote a script to\n>> > find out the number of rows every table would be having after the data\n>> is\n>> > in and I found\n>> > there is a table which will approximately have 50,000,000 rows after\n>> data\n>> > harvesting.\n>> > Is it advisable to keep so much data in one table ?\n>> Depends on your access patterns. I.e. how many rows are you accessing at\n>> the\n>> same time - do those have some common locality and such.\n>>\n>\n> I'll give a brief idea of how this table is. The important columns\n> are\n> subject, predicate and object. So given a predicate and object one should\n> be able to get all the subjects, given subject and a predicate one should\n> be able to retrieve all the objects. I have created an indexes on these\n> three\n> columns.\n>\n>>\n>>\n>> > I have read about 'partitioning' a table. An other idea I have\n>> is\n>> > to break the table into\n>> > different tables after the no of rows in a table has reached a certain\n>> > limit say 10,00,000.\n>> > For example, dividing a table 'datatable' to 'datatable_a',\n>> 'datatable_b'\n>> > each having 10,00,000 rows.\n>> > I needed advice on whether I should go for partitioning or the approach\n>> I\n>> > have thought of.\n>> Your approach is pretty close to partitioning - except that partitioning\n>> makes\n>> that mostly invisible to the outside so it is imho preferrable.\n>>\n>> > We have a HP server with 32GB ram,16 processors. The storage\n>> has\n>> > 24TB diskspace (1TB/HD).\n>> > We have put them on RAID-5. It will be great if we could know the\n>> > parameters that can be changed in the\n>> > postgres configuration file so that the database makes maximum\n>> utilization\n>> > of the server we have.\n>> > For eg parameters that would increase the speed of inserts and selects.\n>> Not using RAID-5 possibly would be a good start - many people (me\n>> included)\n>> experienced bad write performance on it. It depends a great deal on the\n>> controller/implementation though.\n>> RAID-10 is normally to be considered more advantageous despite its lower\n>> usable space ratio.\n>> Did you create one big RAID-5 out of all disks? Thats not a good idea,\n>> because\n>> its pretty likely that another disk fails while you restore a previously\n>> failed disk. Unfortunately in that configuration that means you have lost\n>> your\n>> complete data (in the most common implementations at least).\n>>\n>\n> No, I am using only 12TB i.e 12 HDs of the 24TB I have\n>\n\nA 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS. 
Now\nyou can calculate the number of disks, specifically spindles, for getting\nyour desired throughput and IOPs\n\n\n>\n>> Andres\n>>\n>> PS: Your lines are strangely wrapped...\n>>\n>\n>\n\nOn Tue, Jan 26, 2010 at 9:18 AM, nair rajiv <[email protected]> wrote:\nOn Tue, Jan 26, 2010 at 6:19 AM, Andres Freund <[email protected]> wrote:\nOn Tuesday 26 January 2010 01:39:48 nair rajiv wrote:\n> On Tue, Jan 26, 2010 at 1:01 AM, Craig James\n<[email protected]>wrote:\n> I am working on a project that will take out structured content\n> from wikipedia\n> and put it in our database. Before putting the data into the database I\n> wrote a script to\n> find out the number of rows every table would be having after the data is\n> in and I found\n> there is a table which will approximately have 50,000,000 rows after data\n> harvesting.\n> Is it advisable to keep so much data in one table ?\nDepends on your access patterns. I.e. how many rows are you accessing at the\nsame time - do those have some common locality and such. I'll give a brief idea of how this table is. The important columns aresubject, predicate and object. So given a predicate and object one should\n\nbe able to get all the subjects, given subject and a predicate one shouldbe able to retrieve all the objects. I have created an indexes on these threecolumns.\n\n\n> I have read about 'partitioning' a table. An other idea I have is\n> to break the table into\n> different tables after the no of rows in a table has reached a certain\n> limit say 10,00,000.\n> For example, dividing a table 'datatable' to 'datatable_a', 'datatable_b'\n> each having 10,00,000 rows.\n> I needed advice on whether I should go for partitioning or the approach I\n> have thought of.\nYour approach is pretty close to partitioning - except that partitioning makes\nthat mostly invisible to the outside so it is imho preferrable.\n\n> We have a HP server with 32GB ram,16 processors. The storage has\n> 24TB diskspace (1TB/HD).\n> We have put them on RAID-5. It will be great if we could know the\n> parameters that can be changed in the\n> postgres configuration file so that the database makes maximum utilization\n> of the server we have.\n> For eg parameters that would increase the speed of inserts and selects.\nNot using RAID-5 possibly would be a good start - many people (me included)\nexperienced bad write performance on it. It depends a great deal on the\ncontroller/implementation though.\nRAID-10 is normally to be considered more advantageous despite its lower\nusable space ratio.\nDid you create one big RAID-5 out of all disks? Thats not a good idea, because\nits pretty likely that another disk fails while you restore a previously\nfailed disk. Unfortunately in that configuration that means you have lost your\ncomplete data (in the most common implementations at least).No, I am using only 12TB i.e 12 HDs of the 24TB I have A 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS. Now you can calculate the number of disks, specifically spindles, for getting your desired throughput and IOPs\n \n\nAndres\n\nPS: Your lines are strangely wrapped...",
"msg_date": "Tue, 26 Jan 2010 13:28:02 +0530",
"msg_from": "Viji V Nair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Mon, 25 Jan 2010, Viji V Nair wrote:\n> I think this wont help that much if you have a single machine. Partition the\n> table and keep the data in different nodes. Have a look at the tools like\n> pgpool.II\n\nSo partitioning. You have three choices:\n\n1. Use a single table\n2. Partition the table on the same server\n3. Partition the data across multiple servers.\n\nThis is in increasing order of complexity.\n\nThere will probably be no problem at all with option 1. The only problem \narises if you run a query that performs a full sequential scan of the \nentire table, which would obviously take a while. If your queries are \nindexable, then option 1 is almost certainly the best option.\n\nOption 2 adds complexity in the Postgres server. You will need to \npartition your tables in a logical manner - that is, there needs to be \nsome difference between rows in table a compared to rows in table b. This \nmeans that the partitioning will in effect be a little like indexing. You \ndo not want to have too many partitions. The advantage is that if a query \nrequires a full sequential scan, then there is the possibility of skipping \nsome of the partitions, although there is some complexity involved in \ngetting this to work correctly. In a lot of cases, partitioning will make \nqueries slower by confusing the planner.\n\nOption 3 is only useful when you have a real performance problem with \nlong-running queries (partitioning the data across servers) or with very \nlarge numbers of queries (duplicating the data across servers). It also \nadds much complexity. It is fairly simple to run a \"filter these results \nfrom the table\" queries across multiple servers, but if that was all you \nwere doing, you may as well use an index instead. It becomes impossible to \nperform proper cross-referencing queries without some very clever software \n(because not all the data is available on the server), which will probably \nbe hard to manage and slow down the execution anyway.\n\nMy recommendation would be to stick with a single table unless you have a \nreal need to partition.\n\nMatthew\n\n-- \nNote: some countries impose serious penalties for a conspiracy to overthrow\n the political system. THIS DOES NOT FIX THE VULNERABILITY.\n\t -- http://seclists.org/vulnwatch/2003/q2/0002.html\n",
"msg_date": "Tue, 26 Jan 2010 11:41:56 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
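A minimal sketch of option 2 (partitioning within a single server) as it would look on an 8.4-era PostgreSQL, using inheritance plus CHECK constraints and constraint exclusion; the table name, columns and range boundaries are made up for illustration:

-- CREATE LANGUAGE plpgsql;  -- may be needed first on releases before 9.0

CREATE TABLE datatable (
    id        bigint NOT NULL,
    subject   text,
    predicate text,
    object    text
);

-- each child carries a non-overlapping CHECK constraint on the partition key
CREATE TABLE datatable_a (
    CHECK (id >= 0 AND id < 10000000)
) INHERITS (datatable);

CREATE TABLE datatable_b (
    CHECK (id >= 10000000 AND id < 20000000)
) INHERITS (datatable);

-- indexes are created per child table
CREATE INDEX datatable_a_pred_obj_idx ON datatable_a (predicate, object);
CREATE INDEX datatable_b_pred_obj_idx ON datatable_b (predicate, object);

-- a trigger on the parent routes inserts to the matching child
CREATE OR REPLACE FUNCTION datatable_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.id < 10000000 THEN
        INSERT INTO datatable_a VALUES (NEW.*);
    ELSIF NEW.id < 20000000 THEN
        INSERT INTO datatable_b VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'id % is outside the range covered by the partitions', NEW.id;
    END IF;
    RETURN NULL;   -- the row is stored in a child, not in the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER datatable_insert
    BEFORE INSERT ON datatable
    FOR EACH ROW EXECUTE PROCEDURE datatable_insert_trigger();

-- with constraint_exclusion = 'partition' (the 8.4 default), a query that
-- filters on id lets the planner skip children whose CHECK rules them out
SELECT count(*) FROM datatable WHERE id >= 0 AND id < 10000000;

As Matthew says, this only pays off if queries can be restricted to a small number of children; if every query touches all partitions it mostly adds planning overhead.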
{
"msg_contents": "On Mon, 25 Jan 2010, nair rajiv wrote:\n> I am working on a project that will take out structured content from \n> wikipedia and put it in our database...\n> there is a table which will approximately have 5 crore entries after data\n> harvesting.\n\nHave you asked the Wikimedia Foundation if they mind you consuming that \nmuch of their bandwidth, or even if there are copyright issues involved in \ngrabbing that much of their data?\n\n(The other problem with using the word \"crore\" is that although it may \nmean 10000000 in a few countries, it could also mean 500000.)\n\nMatthew\n\n-- \n Of course it's your fault. Everything here's your fault - it says so in your\n contract. - Quark\n",
"msg_date": "Tue, 26 Jan 2010 11:45:41 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Tue, Jan 26, 2010 at 5:15 PM, Matthew Wakeling <[email protected]>wrote:\n\n> On Mon, 25 Jan 2010, nair rajiv wrote:\n>\n>> I am working on a project that will take out structured content from\n>> wikipedia and put it in our database...\n>>\n>> there is a table which will approximately have 5 crore entries after data\n>> harvesting.\n>>\n>\n> Have you asked the Wikimedia Foundation if they mind you consuming that\n> much of their bandwidth, or even if there are copyright issues involved in\n> grabbing that much of their data?\n>\n\n\nWe are downloading the nt and owl files kept for download at\nhttp://wiki.dbpedia.org/Downloads34\n\n\n> (The other problem with using the word \"crore\" is that although it may mean\n> 10000000 in a few countries, it could also mean 500000.)\n>\n> Matthew\n>\n> --\n> Of course it's your fault. Everything here's your fault - it says so in\n> your\n> contract. - Quark\n>\n\nOn Tue, Jan 26, 2010 at 5:15 PM, Matthew Wakeling <[email protected]> wrote:\nOn Mon, 25 Jan 2010, nair rajiv wrote:\n\nI am working on a project that will take out structured content from wikipedia and put it in our database...\nthere is a table which will approximately have 5 crore entries after data\nharvesting.\n\n\nHave you asked the Wikimedia Foundation if they mind you consuming that much of their bandwidth, or even if there are copyright issues involved in grabbing that much of their data? We are downloading the nt and owl files kept for download at\nhttp://wiki.dbpedia.org/Downloads34\n\n(The other problem with using the word \"crore\" is that although it may mean 10000000 in a few countries, it could also mean 500000.)\n\nMatthew\n\n-- \nOf course it's your fault. Everything here's your fault - it says so in your\ncontract. - Quark",
"msg_date": "Tue, 26 Jan 2010 20:47:09 +0530",
"msg_from": "nair rajiv <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "Viji V Nair wrote:\n> A 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS. \n> Now you can calculate the number of disks, specifically spindles, for \n> getting your desired throughput and IOPs\n\nI think you mean 120MB/s for that first part. Regardless, presuming you \ncan provision a database just based on IOPS rarely works. It's nearly \nimpossible to estimate what you really need anyway for a database app, \ngiven that much of real-world behavior depends on the cached in memory \nvs. uncached footprint of the data you're working with. By the time you \nput a number of disks into an array, throw a controller card cache on \ntop of it, then add the OS and PostgreSQL caches on top of those, you \nare so far disconnected from the underlying drive IOPS that speaking in \nthose terms doesn't get you very far. I struggle with this every time I \ntalk with a SAN vendor. Their fixation on IOPS without considering \nthings like how sequential scans mixed into random I/O will get handled \nis really disconnected from how databases work in practice. For \nexample, I constantly end up needing to detune IOPS in favor of \nreadahead to make \"SELECT x,y,z FROM t\" run at an acceptable speed on \nbig tables.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 26 Jan 2010 12:41:06 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "On Tue, Jan 26, 2010 at 11:11 PM, Greg Smith <[email protected]> wrote:\n\n> Viji V Nair wrote:\n>\n>> A 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS. Now\n>> you can calculate the number of disks, specifically spindles, for getting\n>> your desired throughput and IOPs\n>>\n>\n> I think you mean 120MB/s for that first part. Regardless, presuming you\n> can provision a database just based on IOPS rarely works. It's nearly\n> impossible to estimate what you really need anyway for a database app, given\n> that much of real-world behavior depends on the cached in memory vs.\n> uncached footprint of the data you're working with. By the time you put a\n> number of disks into an array, throw a controller card cache on top of it,\n> then add the OS and PostgreSQL caches on top of those, you are so far\n> disconnected from the underlying drive IOPS that speaking in those terms\n> doesn't get you very far. I struggle with this every time I talk with a SAN\n> vendor. Their fixation on IOPS without considering things like how\n> sequential scans mixed into random I/O will get handled is really\n> disconnected from how databases work in practice. For example, I constantly\n> end up needing to detune IOPS in favor of readahead to make \"SELECT x,y,z\n> FROM t\" run at an acceptable speed on big tables.\n>\n>\nYes, you are right.\n\nThere are catches in the SAN controllers also. SAN vendors wont give that\nmuch information regarding their internal controller design. They will say\nthey have 4 external 4G ports, you should also check how many internal ports\nthey have and the how the controllers are operating, in Active-Active or\nActive- Standby mode.\n\n\n\n\n> --\n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n>\n\nOn Tue, Jan 26, 2010 at 11:11 PM, Greg Smith <[email protected]> wrote:\nViji V Nair wrote:\n\nA 15k rpm SAS drive will give you a throughput of 12MB and 120 IOPS. Now you can calculate the number of disks, specifically spindles, for getting your desired throughput and IOPs\n\n\nI think you mean 120MB/s for that first part. Regardless, presuming you can provision a database just based on IOPS rarely works. It's nearly impossible to estimate what you really need anyway for a database app, given that much of real-world behavior depends on the cached in memory vs. uncached footprint of the data you're working with. By the time you put a number of disks into an array, throw a controller card cache on top of it, then add the OS and PostgreSQL caches on top of those, you are so far disconnected from the underlying drive IOPS that speaking in those terms doesn't get you very far. I struggle with this every time I talk with a SAN vendor. Their fixation on IOPS without considering things like how sequential scans mixed into random I/O will get handled is really disconnected from how databases work in practice. For example, I constantly end up needing to detune IOPS in favor of readahead to make \"SELECT x,y,z FROM t\" run at an acceptable speed on big tables.\n\nYes, you are right. There are catches in the SAN controllers also. SAN vendors wont give that much information regarding their internal controller design. 
They will say they have 4 external 4G ports, you should also check how many internal ports they have and the how the controllers are operating, in Active-Active or Active- Standby mode.\n \n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com",
"msg_date": "Tue, 26 Jan 2010 23:53:25 +0530",
"msg_from": "Viji V Nair <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
},
{
"msg_contents": "Viji V Nair wrote:\n> There are catches in the SAN controllers also. SAN vendors wont give \n> that much information regarding their internal controller design. They \n> will say they have 4 external 4G ports, you should also check how many \n> internal ports they have and the how the controllers are operating, \n> in Active-Active or Active- Standby mode.\n\nRight, the SAN cache serves the same purpose as the controller cache on \ndirect-attached storage. I've never seen a Fiber Channel card that had \nits own local cache too; doubt that's even possible. So I think of them \nas basically being the same type of cache, with the primary difference \nbeing that the transfers between the host and the cache has some latency \non it with FC compared to direct storage.\n\nYou're right that people should question the internal design too of \ncourse. Some days I wonder if I'm in the wrong business--the people who \ndo SAN tuning seem to have no idea what they're doing and yet are still \nexpensive to hire. But this is off-topic for the question being asked here.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Tue, 26 Jan 2010 15:32:34 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: splitting data into multiple tables"
}
] |
[
{
"msg_contents": "One of our most-used queries performs poorly (taking over 2 seconds) and a \ntiny amount of refactoring shows it can be fast (less than 1ms) by \ntransforming the OR case (which spans two tables) into a UNION.\n\nI have created a simple test case (below) which shows the difference we \nare seeing in query plans before and after refactoring.\n\nIs it beyond the ability of the query planner to optimise this query \nwithout refactoring? Or is the appropriate index missing, and if so, what \nwould it be?\n\nPerhaps the refactored query is, in fact, different and could produce \ndifferent data in certain corner-cases; I can't see where this could be \nthough.\n\nYour suggestions are appreciated and I hope the information is useful. \nMany thanks.\n\nMark\n\n\n-- The plans below are from PostgreSQL 8.5alpha3. Also tested with\n-- similar results on PostgreSQL 8.4.2\n\n-- Data structure where a container contains multiple items\n\nCREATE TABLE container (\n id integer PRIMARY KEY,\n selected bool NOT NULL DEFAULT false\n);\n\nCREATE TABLE item (\n container_id integer NOT NULL\n REFERENCES container(id) ON DELETE CASCADE,\n n integer NOT NULL,\n selected bool NOT NULL DEFAULT false,\n PRIMARY KEY (container_id, n)\n);\n\n-- Partial indexes to find selected containers or selected items\n\nCREATE INDEX container_selected\n ON container (selected)\n WHERE selected IS true;\n\nCREATE INDEX item_selected\n ON item (selected)\n WHERE selected IS true;\n\n-- Populate the data; for a small minority of items and containers,\n-- 'selected' is true\n\nCREATE LANGUAGE plpgsql;\n\nCREATE OR REPLACE FUNCTION populate()\nRETURNS VOID\nAS $$\nDECLARE\n i integer;\n j integer;\nBEGIN\n FOR i IN 0..999 LOOP\n\n INSERT INTO container (id, selected)\n VALUES (i, RANDOM() < 0.01);\n\n FOR j IN 0..999 LOOP\n INSERT INTO item (container_id, n, selected)\n VALUES (i, j, RANDOM() < 0.001);\n END LOOP;\n\n END LOOP;\nEND\n$$ LANGUAGE plpgsql;\n\nSELECT populate();\nVACUUM ANALYZE;\n\nSELECT COUNT(*) FROM container; -- 1000\nSELECT COUNT(*) FROM container WHERE selected IS true; -- 9\nSELECT COUNT(*) FROM item; -- 1000000\nSELECT COUNT(*) FROM item WHERE selected IS true; -- 1004\n\n-- A query to find all items where the item or container is selected\n\nEXPLAIN ANALYZE\n SELECT container_id, n\n FROM item\n INNER JOIN container ON item.container_id = container.id\n WHERE item.selected IS true\n OR container.selected IS true;\n\n-- Resulting query plan\n--\n-- Hash Join (cost=28.50..92591.11 rows=10016 width=8) (actual time=372.659..1269.207 rows=9996 loops=1)\n-- Hash Cond: (item.container_id = container.id)\n-- Join Filter: ((item.selected IS TRUE) OR (container.selected IS TRUE))\n-- -> Seq Scan on item (cost=0.00..78778.68 rows=1002468 width=9) (actual time=370.590..663.764 rows=1000000 loops=1)\n-- -> Hash (cost=16.00..16.00 rows=1000 width=5) (actual time=0.805..0.805 rows=1000 loops=1)\n-- -> Seq Scan on container (cost=0.00..16.00 rows=1000 width=5) (actual time=0.007..0.296 rows=1000 loops=1)\n-- Total runtime: 1271.676 ms\n-- (7 rows)\n\n-- The refactored SQL, which queries the same data but is fast\n\nEXPLAIN ANALYZE\n SELECT container_id, n\n FROM item\n INNER JOIN container ON item.container_id = container.id\n WHERE item.selected IS true\n UNION\n SELECT container_id, n\n FROM item\n INNER JOIN container ON item.container_id = container.id\n WHERE container.selected IS true;\n\n-- Resulting query plan:\n--\n-- HashAggregate (cost=18018.43..18120.33 rows=10190 width=8) (actual 
time=22.784..26.341 rows=9996 loops=1)\n-- -> Append (cost=28.50..17967.48 rows=10190 width=8) (actual time=0.908..16.676 rows=10004 loops=1)\n-- -> Hash Join (cost=28.50..90.05 rows=1002 width=8) (actual time=0.907..3.113 rows=1004 loops=1)\n-- Hash Cond: (public.item.container_id = public.container.id)\n-- -> Index Scan using item_selected on item (cost=0.00..47.77 rows=1002 width=8) (actual time=0.036..1.425 rows=1004 loops=1)\n-- Index Cond: (selected = true)\n-- -> Hash (cost=16.00..16.00 rows=1000 width=4) (actual time=0.856..0.856 rows=1000 loops=1)\n-- -> Seq Scan on container (cost=0.00..16.00 rows=1000 width=4) (actual time=0.006..0.379 rows=1000 loops=1)\n-- -> Nested Loop (cost=0.00..17775.53 rows=9188 width=8) (actual time=0.024..9.175 rows=9000 loops=1)\n-- -> Index Scan using container_selected on container (cost=0.00..12.33 rows=9 width=4) (actual time=0.005..0.012 rows=9 loops=1)\n-- Index Cond: (selected = true)\n-- -> Index Scan using item_pkey on item (cost=0.00..1960.93 rows=1021 width=8) (actual time=0.014..0.460 rows=1000 loops=9)\n-- Index Cond: (public.item.container_id = public.container.id)\n-- Total runtime: 28.617 ms\n-- (14 rows)\n",
"msg_date": "Tue, 26 Jan 2010 16:00:40 +0000 (GMT)",
"msg_from": "Mark Hills <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor query plan across OR operator"
},
{
"msg_contents": "just create index on both columns:\nCREATE INDEX foo_i ON foo(bar1, bar2);\n\n\nHTH\n",
"msg_date": "Tue, 26 Jan 2010 16:10:36 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan across OR operator"
},
{
"msg_contents": "Mark Hills <[email protected]> writes:\n> One of our most-used queries performs poorly (taking over 2 seconds) and a \n> tiny amount of refactoring shows it can be fast (less than 1ms) by \n> transforming the OR case (which spans two tables) into a UNION.\n\nI'd suggest going with the UNION. We are unlikely to make the planner\nlook for such cases, because usually such a transformation would be a\nnet loss. It seems like rather a corner case that it's a win even on\nyour example.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Jan 2010 11:41:54 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan across OR operator "
},
{
"msg_contents": "On Tue, Jan 26, 2010 at 11:41 AM, Tom Lane <[email protected]> wrote:\n> Mark Hills <[email protected]> writes:\n>> One of our most-used queries performs poorly (taking over 2 seconds) and a\n>> tiny amount of refactoring shows it can be fast (less than 1ms) by\n>> transforming the OR case (which spans two tables) into a UNION.\n>\n> I'd suggest going with the UNION. We are unlikely to make the planner\n> look for such cases, because usually such a transformation would be a\n> net loss. It seems like rather a corner case that it's a win even on\n> your example.\n\nThis has come up for me, too. But even if we grant that it's\nworthwhile, it seems like a tricky optimization to apply in practice,\nbecause unless your row estimates are very accurate, you might easily\napply it when you would have been better off leaving it alone. And it\nseems like getting accurate estimates would be hard, since the\nconditions might be highly correlated, or not, and they're on\ndifferent tables.\n\n...Robert\n",
"msg_date": "Tue, 26 Jan 2010 14:35:07 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan across OR operator"
},
{
"msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Jan 26, 2010 at 11:41 AM, Tom Lane <[email protected]> wrote:\n>> I'd suggest going with the UNION. �We are unlikely to make the planner\n>> look for such cases, because usually such a transformation would be a\n>> net loss. �It seems like rather a corner case that it's a win even on\n>> your example.\n\n> This has come up for me, too. But even if we grant that it's\n> worthwhile, it seems like a tricky optimization to apply in practice,\n> because unless your row estimates are very accurate, you might easily\n> apply it when you would have been better off leaving it alone. And it\n> seems like getting accurate estimates would be hard, since the\n> conditions might be highly correlated, or not, and they're on\n> different tables.\n\nActually, in the type of case Mark is showing, the estimates might be\n*more* accurate since the condition gets decomposed into separate\nper-table conditions. I'm still dubious about how often it's a win\nthough.\n\nThere's another problem, which is that transforming to UNION isn't\nnecessarily a safe transformation: it only works correctly if the\nquery output columns are guaranteed unique. Otherwise it might fold\nduplicates together that would have remained distinct in the original\nquery. If your query output columns include a primary key then the\nplanner could be confident this was safe, but that reduces the scope\nof the transformation even further ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Jan 2010 15:48:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan across OR operator "
},
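For the schema in Mark's test case, one hedged way to satisfy that uniqueness requirement without paying for the final de-duplication step is to make the two branches disjoint and use UNION ALL; this is a sketch to benchmark, not a guaranteed win, since it adds an extra condition to the second branch:

SELECT item.container_id, item.n
  FROM item
  INNER JOIN container ON item.container_id = container.id
 WHERE item.selected IS TRUE
UNION ALL
SELECT item.container_id, item.n
  FROM item
  INNER JOIN container ON item.container_id = container.id
 WHERE container.selected IS TRUE
   AND item.selected IS NOT TRUE;

Because a row can match only one branch, the combined result is the same set of rows as the original OR query, and the HashAggregate that the UNION version needs for de-duplication disappears from the plan.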
{
"msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Actually, in the type of case Mark is showing, the estimates might\n> be *more* accurate since the condition gets decomposed into\n> separate per-table conditions. I'm still dubious about how often\n> it's a win though.\n> \n> There's another problem, which is that transforming to UNION isn't\n> necessarily a safe transformation: it only works correctly if the\n> query output columns are guaranteed unique. Otherwise it might\n> fold duplicates together that would have remained distinct in the\n> original query. If your query output columns include a primary\n> key then the planner could be confident this was safe, but that\n> reduces the scope of the transformation even further ...\n \nFWIW, I've seen this optimization in other products. I remember\nbeing surprised sometimes that it wasn't used where I thought it\nwould be, and I had to explicitly transform the query to UNION to\nget the performance benefit. That was probably due to the sort of\nconstraints you mention on when it is truly equivalent.\n \nPersonally, I'd put this one in the \"it would be nice if\" category. \nDoes it merit a TODO list entry, perhaps?\n \n-Kevin\n",
"msg_date": "Tue, 26 Jan 2010 15:05:02 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor query plan across OR operator"
}
] |
[
{
"msg_contents": "Dear All,\n\nJust wondering whether there is a missing scope for the query planner \n(on 8.4.2) to be cleverer than it currently is.\n\nSpecifically, I wonder whether the optimiser should know that by \nconverting a CASE condition into a WHERE condition, it can use an index.\n\nHave I found a possible enhancement, or is this simply too hard to do?\n\nBest wishes,\n\nRichard\n\n\n\nExample:\n--------\n\nIn this example, tbl_tracker has 255751 rows, with a primary key \"id\", \nwhose values lie uniformly in the range 1...1255750.\n\nIf one is trying to count multiple conditions, the following query seems \nto be the most obvious way to do it:\n\nSELECT\n SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) AS c1,\n SUM (case when id > 1210000 and id < 1220000 then 1 else 0 end) AS c2,\n SUM (case when id > 1220000 and id < 1230000 then 1 else 0 end) AS c3,\n SUM (case when id > 1230000 and id < 1240000 then 1 else 0 end) AS c4,\n SUM (case when id > 1240000 and id < 1250000 then 1 else 0 end) AS c5\nFROM tbl_tracker;\n\n\n c1 | c2 | c3 | c4 | c5\n------+------+------+------+------\n 2009 | 2018 | 2099 | 2051 | 2030\n\nTime: 361.666 ms\n\n\n\nThis can be manually optimised into a far uglier (but much much faster) \nquery:\n\nSELECT * FROM\n (SELECT COUNT (1) AS c1 FROM tbl_tracker\n WHERE id > 1200000 and id < 1210000) AS s1,\n (SELECT COUNT (1) AS c2 FROM tbl_tracker\n WHERE id > 1210000 and id < 1220000) AS s2,\n (SELECT COUNT (1) AS c3 FROM tbl_tracker\n WHERE id > 1220000 and id < 1230000) AS s3,\n (SELECT COUNT (1) AS c4 FROM tbl_tracker\n WHERE id > 1230000 and id < 1240000) AS s4,\n (SELECT COUNT (1) AS c5 FROM tbl_tracker\n WHERE id > 1240000 and id < 1250000) AS s5\n\n c1 | c2 | c3 | c4 | c5\n------+------+------+------+------\n 2009 | 2018 | 2099 | 2051 | 2030\n(1 row)\n\nTime: 21.091 ms\n\n\n\n\n\nDebugging\n---------\n\nThe simple queries are:\n\nSELECT SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) \nfrom tbl_tracker;\n\nTime: 174.804 ms\n\nExplain shows that this does a sequential scan.\n\n\n\nSELECT COUNT(1) from tbl_tracker WHERE id > 1200000 and id < 1210000;\n\nTime: 4.153 ms\n\nExplain shows that this uses the index, as expected.\n\n\n",
"msg_date": "Tue, 26 Jan 2010 17:10:26 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Should the optimiser convert a CASE into a WHERE if it can?"
},
{
"msg_contents": "Richard Neill <[email protected]> writes:\n> SELECT\n> SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) AS c1,\n> SUM (case when id > 1210000 and id < 1220000 then 1 else 0 end) AS c2,\n> ...\n> FROM tbl_tracker;\n\n> This can be manually optimised into a far uglier (but much much faster) \n> query:\n\n> SELECT * FROM\n> (SELECT COUNT (1) AS c1 FROM tbl_tracker\n> WHERE id > 1200000 and id < 1210000) AS s1,\n> (SELECT COUNT (1) AS c2 FROM tbl_tracker\n> WHERE id > 1210000 and id < 1220000) AS s2,\n> ...\n\nWe're unlikely to consider doing this, for a couple of reasons:\nit's unlikely to come up often enough to justify the cycles the planner\nwould spend looking for the case *on every query*, and it requires very\nspecial knowledge about the behavior of two specific aggregate functions,\nwhich is something the planner tends to avoid using.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Jan 2010 12:21:38 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if it can? "
},
{
"msg_contents": "On Tue, 26 Jan 2010, Richard Neill wrote:\n> SELECT SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) from \n> tbl_tracker;\n>\n> Explain shows that this does a sequential scan.\n\nI'd defer to Tom on this one, but really, for Postgres to work this out, \nit would have to peer deep into the mysterious SUM function, and realise \nthat the number zero is a noop. I suppose it would be possible, but you'd \nhave to define noops for each of the different possible functions, *and* \nmake the planner clever enough to spot the noop-matching number in the \nelse and convert the WHEN into a WHERE.\n\nIn my mind, this is quite a lot of work for the planner to do to solve \nthis one. That translates into quite a lot of work for some poor \nprogrammer to do to achieve it. If you have the money, then hire someone \nto do it!\n\nMatthew\n\n-- \n I don't want the truth. I want something I can tell parliament!\n -- Rt. Hon. Jim Hacker MP\n",
"msg_date": "Tue, 26 Jan 2010 17:23:06 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if\n it can?"
},
{
"msg_contents": "Thanks for your answers.\n\n\nDavid Wilson wrote:\n\n > Why not simply add the where clause to the original query?\n >\n > SELECT\n > SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) AS c1,\n > SUM (case when id > 1210000 and id < 1220000 then 1 else 0 end) AS c2,\n > SUM (case when id > 1220000 and id < 1230000 then 1 else 0 end) AS c3,\n > SUM (case when id > 1230000 and id < 1240000 then 1 else 0 end) AS c4,\n > SUM (case when id > 1240000 and id < 1250000 then 1 else 0 end) AS c5\n > FROM tbl_tracker WHERE (id>1200000) AND (id<1250000);\n >\n > I didn't populate any test tables, but I'd expect that to do just as\n > well without being any uglier than the original query is.\n\nYou're absolutely right, but I'm afraid this won't help. I'd simplified \nthe original example query, but in real life, I've got about 50 \ndifferent sub-ranges, which cover virtually all the id-space.\n\n----------\n\nTom Lane wrote:\n> Richard Neill <[email protected]> writes:\n>> SELECT\n>> SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) AS c1,\n>> SUM (case when id > 1210000 and id < 1220000 then 1 else 0 end) AS c2,\n>> ...\n>> FROM tbl_tracker;\n> \n>> This can be manually optimised into a far uglier (but much much faster) \n>> query:\n> \n>> SELECT * FROM\n>> (SELECT COUNT (1) AS c1 FROM tbl_tracker\n>> WHERE id > 1200000 and id < 1210000) AS s1,\n>> (SELECT COUNT (1) AS c2 FROM tbl_tracker\n>> WHERE id > 1210000 and id < 1220000) AS s2,\n>> ...\n> \n> We're unlikely to consider doing this, for a couple of reasons:\n> it's unlikely to come up often enough to justify the cycles the planner\n> would spend looking for the case *on every query*, and it requires very\n> special knowledge about the behavior of two specific aggregate functions,\n> which is something the planner tends to avoid using.\n> \n\nOK - that's all I was wondering. I thought I'd raise this in case it \nmight be helpful.\n\nI'll add a note to:\nhttp://www.postgresql.org/docs/8.4/interactive/functions-conditional.html\nto point out that this is something of a trap for the unwary\n\nRegards,\n\nRichard\n",
"msg_date": "Tue, 26 Jan 2010 17:41:32 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if\n it can?"
},
{
"msg_contents": "\nOn Jan 26, 2010, at 9:41 AM, Richard Neill wrote:\n\n> Thanks for your answers.\n> \n> \n> David Wilson wrote:\n> \n>> Why not simply add the where clause to the original query?\n>> \n>> SELECT\n>> SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) AS c1,\n>> SUM (case when id > 1210000 and id < 1220000 then 1 else 0 end) AS c2,\n>> SUM (case when id > 1220000 and id < 1230000 then 1 else 0 end) AS c3,\n>> SUM (case when id > 1230000 and id < 1240000 then 1 else 0 end) AS c4,\n>> SUM (case when id > 1240000 and id < 1250000 then 1 else 0 end) AS c5\n>> FROM tbl_tracker WHERE (id>1200000) AND (id<1250000);\n>> \n>> I didn't populate any test tables, but I'd expect that to do just as\n>> well without being any uglier than the original query is.\n> \n> You're absolutely right, but I'm afraid this won't help. I'd simplified \n> the original example query, but in real life, I've got about 50 \n> different sub-ranges, which cover virtually all the id-space.\n> \n\nWell, it probably shouldn't use the index if it covers the vast majority of the table. I wonder if it is actually faster to reformulate with WHERE or not at that point -- it might be slower.",
"msg_date": "Tue, 26 Jan 2010 13:32:21 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if\n it can?"
},
{
"msg_contents": "2010/1/26 Matthew Wakeling <[email protected]>\n\n> On Tue, 26 Jan 2010, Richard Neill wrote:\n>\n>> SELECT SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end)\n>> from tbl_tracker;\n>>\n>> Explain shows that this does a sequential scan.\n>>\n>\n> I'd defer to Tom on this one, but really, for Postgres to work this out, it\n> would have to peer deep into the mysterious SUM function, and realise that\n> the number zero is a noop. I suppose it would be possible, but you'd have to\n> define noops for each of the different possible functions, *and* make the\n> planner clever enough to spot the noop-matching number in the else and\n> convert the WHEN into a WHERE.\n>\n> Hello.\n\nHow about SELECT SUM (case when id > 1200000 and id < 1210000 then 1 end)\nfrom tbl_tracker;\nIt gives same result (may be unless there are no records at all) and\noptimizer already knows it need not to call function for null input. Such an\noptimization would cover much more cases. It would look like:\n * Check only for aggregate subselects\n * All the functions should be noop for null input\n * Add ORed constraint for every function input is not null (in this example\n(case when id > A1 and id < B1 then 1 end is not null) or (case when id > A2\nand id < B2 then 1 end is not null) or ... or (case when id > An and id < Bn\nthen 1 end is not null)\n * Know special \"case\" (case when id > A1 and id < B1 then 1 end is not\nnull) <=> (id > A1 and id < B1)\nby ORing all the \"when\" conditions case when C1 then D1 when C2 then D2 ...\nwhen Cm then Dm end is not null <=> C1 or C2 or ... or Cm.\nEvent without last part it may give bonuses even for \"select count(field)\nfrom table\" transformed into \"select count(field) from table where field is\nnot null\" and using [partial] indexes.\nAs of last \"*\", replacing COUNT with SUM(CASE()) is used often enough when\nmultiple count calculations are needed.\n\nBest regards, Vitalii Tymchyshyn\n\n2010/1/26 Matthew Wakeling <[email protected]>\nOn Tue, 26 Jan 2010, Richard Neill wrote:\n\nSELECT SUM (case when id > 1200000 and id < 1210000 then 1 else 0 end) from tbl_tracker;\n\nExplain shows that this does a sequential scan.\n\n\nI'd defer to Tom on this one, but really, for Postgres to work this out, it would have to peer deep into the mysterious SUM function, and realise that the number zero is a noop. I suppose it would be possible, but you'd have to define noops for each of the different possible functions, *and* make the planner clever enough to spot the noop-matching number in the else and convert the WHEN into a WHERE.\nHello.How about SELECT SUM (case when id > 1200000 and id < 1210000 then 1 end) from tbl_tracker;It gives same result (may be unless there are no records at all) and optimizer already knows it need not to call function for null input. Such an optimization would cover much more cases. It would look like:\n * Check only for aggregate subselects * All the functions should be noop for null input * Add ORed constraint for every function input is not null (in this example (case when id > A1 and id < B1 then 1 end is not null) or (case when id > A2 and id < B2 then 1 end is not null) or ... or (case when id > An and id < Bn then 1 end is not null)\n * Know special \"case\" (case when id > A1 and id < B1 then 1 end is not null) <=> (id > A1 and id < B1)by ORing all the \"when\" conditions case when C1 then D1 when C2 then D2 ... when Cm then Dm end is not null <=> C1 or C2 or ... 
or Cm.\nEvent without last part it may give bonuses even for \"select count(field) from table\" transformed into \"select count(field) from table where field is not null\" and using [partial] indexes. As of last \"*\", replacing COUNT with SUM(CASE()) is used often enough when multiple count calculations are needed.\nBest regards, Vitalii Tymchyshyn",
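A quick way to see the equivalence Vitalii describes (SUM is strict, so rows where the CASE yields NULL simply do not contribute) is a toy query; the values below are made up, not from the thread:

SELECT SUM(CASE WHEN x BETWEEN 1 AND 3 THEN 1 ELSE 0 END) AS with_else,
       SUM(CASE WHEN x BETWEEN 1 AND 3 THEN 1 END)        AS without_else
FROM (VALUES (1), (2), (5), (NULL::int)) AS t(x);
-- both columns return 2; the ELSE 0 just feeds SUM zeros it could have ignored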
"msg_date": "Wed, 27 Jan 2010 18:53:46 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if it\n\tcan?"
},
{
"msg_contents": "On Wed, 27 Jan 2010, Віталій Тимчишин wrote:\n> How about SELECT SUM (case when id > 1200000 and id < 1210000 then 1 end)\n> from tbl_tracker;\n\nThat is very interesting.\n\n> * All the functions should be noop for null input\n\nAlas, not true for COUNT(*), AVG(), etc.\n\nMatthew\n\n-- \n An optimist sees the glass as half full, a pessimist as half empty,\n and an engineer as having redundant storage capacity.",
"msg_date": "Wed, 27 Jan 2010 17:01:30 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if\n it \tcan?"
},
{
"msg_contents": "27 січня 2010 р. 19:01 Matthew Wakeling <[email protected]> написав:\n\n> On Wed, 27 Jan 2010, Віталій Тимчишин wrote:\n>\n>> How about SELECT SUM (case when id > 1200000 and id < 1210000 then 1 end)\n>> from tbl_tracker;\n>>\n>\n> That is very interesting.\n>\n>\n> * All the functions should be noop for null input\n>>\n>\n> Alas, not true for COUNT(*), AVG(), etc.\n>\n> select avg(b), count(b), count(*) from (values (2),(null))a(b)\ngives (2.0, 1, 2) for me, so AVG is in game. Sure, it won't work for\ncount(*), but optimizer already knows which aggregates are strict and which\nare not, so no new information is needed.\n\nBest regards, Vitalii Tymchyshyn\n\n27 січня 2010 р. 19:01 Matthew Wakeling <[email protected]> написав:\nOn Wed, 27 Jan 2010, Віталій Тимчишин wrote:\n\nHow about SELECT SUM (case when id > 1200000 and id < 1210000 then 1 end)\nfrom tbl_tracker;\n\n\nThat is very interesting.\n\n\n* All the functions should be noop for null input\n\n\nAlas, not true for COUNT(*), AVG(), etc.select avg(b), count(b), count(*) from (values (2),(null))a(b)gives (2.0, 1, 2) for me, so AVG is in game. Sure, it won't work for count(*), but optimizer already knows which aggregates are strict and which are not, so no new information is needed.\nBest regards, Vitalii Tymchyshyn",
"msg_date": "Wed, 27 Jan 2010 19:09:59 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Should the optimiser convert a CASE into a WHERE if it\n\tcan?"
}
] |
[
{
"msg_contents": "Had a quick look at a benchmark someone put together of MySQL vs PostgreSQL,\nand while PostgreSQL is generally faster, I noticed the bulk delete was very\nslow: http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\n\nIs this normal?\n\nThom\n\nHad a quick look at a benchmark someone put together of MySQL vs \nPostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk\n delete was very slow: http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\nIs this normal?Thom",
"msg_date": "Wed, 27 Jan 2010 13:28:09 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmark shows very slow bulk delete"
},
{
"msg_contents": "On 01/27/10 14:28, Thom Brown wrote:\n> Had a quick look at a benchmark someone put together of MySQL vs\n> PostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk\n> delete was very slow:\n> http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\n\nI wish that, when people got the idea to run a simplistic benchmark like \nthis, they would at least have the common sense to put the database on a \nRAM drive to avoid problems with different cylinder speeds of rotational \nmedia and fragmentation from multiple runs.\n\nHere are some typical results from a desktop SATA drive:\n\nada0\n\t512 \t# sectorsize\n\t500107862016\t# mediasize in bytes (466G)\n\t976773168 \t# mediasize in sectors\n\t969021 \t# Cylinders according to firmware.\n\t16 \t# Heads according to firmware.\n\t63 \t# Sectors according to firmware.\n\t6QG3Z026 \t# Disk ident.\n\nSeek times:\n\tFull stroke:\t 250 iter in 5.676993 sec = 22.708 msec\n\tHalf stroke:\t 250 iter in 4.284583 sec = 17.138 msec\n\tQuarter stroke:\t 500 iter in 6.805539 sec = 13.611 msec\n\tShort forward:\t 400 iter in 2.678447 sec = 6.696 msec\n\tShort backward:\t 400 iter in 2.318637 sec = 5.797 msec\n\tSeq outer:\t 2048 iter in 0.214292 sec = 0.105 msec\n\tSeq inner:\t 2048 iter in 0.203929 sec = 0.100 msec\nTransfer rates:\n\toutside: 102400 kbytes in 1.229694 sec = 83273 kbytes/sec\n\tmiddle: 102400 kbytes in 1.446570 sec = 70788 kbytes/sec\n\tinside: 102400 kbytes in 2.446670 sec = 41853 kbytes/sec\n\nThis doesn't explain the 4-orders-of-magnitude difference between MySQL \nand PostgreSQL in bulk_delete() (0.02 vs 577) but it does suggest that \nsome other results where the performance is close, might be bogus.\n\nIt's tough to benchmark anything involving rotational drives :)\n\n",
"msg_date": "Wed, 27 Jan 2010 15:23:59 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "On Wed, 27 Jan 2010, Thom Brown wrote:\n> Had a quick look at a benchmark someone put together of MySQL vs PostgreSQL,\n> and while PostgreSQL is generally faster, I noticed the bulk delete was very\n> slow: http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\n>\n> Is this normal?\n\nOn the contrary, TRUNCATE TABLE is really rather fast.\n\nSeriously, the Postgres developers, when designing the system, decided on \na database layout that was optimised for the most common cases. Bulk \ndeletion of data is not really that common an operation, unless you are \ndeleting whole categories of data, where setting up partitioning and \ndeleting whole partitions would be sensible.\n\nOther complications are that the server has to maintain concurrent \nintegrity - that is, another transaction must be able to see either none \nof the changes or all of them. As a consequence of this, Postgres needs to \ndo a sequential scan through the table and mark the rows for deletion in \nthe transaction, before flipping the transaction committed status and \ncleaning up afterwards.\n\nI'd be interested in how mysql manages to delete a whole load of rows in \n0.02 seconds. How many rows is that?\n\n(Reading in the comments, I saw this: \"The slow times for Postgresql Bulk \nModify/Bulk Delete can be explained by foreign key references to the \nupdates table.\" I'm not sure that fully explains it though, unless there \nare basically zero rows being deleted - it's hardly bulk then, is it?)\n\nMatthew\n\n-- \n People who love sausages, respect the law, and work with IT standards \n shouldn't watch any of them being made. -- Peter Gutmann\n",
"msg_date": "Wed, 27 Jan 2010 14:49:06 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "Thom Brown <[email protected]> wrote:\n \n> Had a quick look at a benchmark someone put together of MySQL vs\n> PostgreSQL, and while PostgreSQL is generally faster, I noticed\n> the bulk delete was very slow:\n> http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\n> \n> Is this normal?\n \nIt is if you don't have an index on the table which has a foreign\nkey defined which references the table in which you're doing\ndeletes. The author of the benchmark apparently didn't realize that\nMySQL automatically adds such an index to the dependent table, while\nPostgreSQL leaves it to you to decide whether to add such an index. \nFor \"insert-only\" tables, it isn't always worth the cost of\nmaintaining it.\n \nAlso, I see that the database was small enough to be fully cached,\nyet the costs weren't adjusted to the recommended values for such an\nenvironment, so PostgreSQL should *really* have beaten MySQL by more\nthan it did.\n \n-Kevin\n",
"msg_date": "Wed, 27 Jan 2010 08:54:04 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "On Wed, Jan 27, 2010 at 9:54 AM, Kevin Grittner <[email protected]\n> wrote:\n\n> It is if you don't have an index on the table which has a foreign\n> key defined which references the table in which you're doing\n> deletes. The author of the benchmark apparently didn't realize that\n> MySQL automatically adds such an index to the dependent table, while\n> PostgreSQL leaves it to you to decide whether to add such an index.\n> For \"insert-only\" tables, it isn't always worth the cost of\n> maintaining it.\n>\n>\nIt really gets to me that I have to not use some foreign keys in MySQL\nbecause I can't afford to maintain the index. I have to write super fun\n\"check constraints\" that look like\n\nDELIMITER \\\\\nCREATE TRIGGER Location_Pre_Delete BEFORE DELETE ON Locations FOR EACH ROW\nBEGIN\n DECLARE _id INT;\n SELECT id INTO _id FROM BigHistoryTable WHERE locationId = OLD.id LIMIT 1;\n IF _id IS NOT NULL THEN\n INSERT INTO BigHistoryTable\n(column_that_does_not_exist_but_says_that_you_violated_my_hacked_foreign_key)\nVALUES ('fail');\n END IF;\nEND\\\\\n\nSometimes I can't sleep at night for having written that code.\n\nOn Wed, Jan 27, 2010 at 9:54 AM, Kevin Grittner <[email protected]> wrote:\nIt is if you don't have an index on the table which has a foreign\nkey defined which references the table in which you're doing\ndeletes. The author of the benchmark apparently didn't realize that\nMySQL automatically adds such an index to the dependent table, while\nPostgreSQL leaves it to you to decide whether to add such an index.\nFor \"insert-only\" tables, it isn't always worth the cost of\nmaintaining it.It really gets to me that I have to not use some foreign keys in MySQL because I can't afford to maintain the index. I have to write super fun \"check constraints\" that look like\nDELIMITER \\\\CREATE TRIGGER Location_Pre_Delete BEFORE DELETE ON Locations FOR EACH ROWBEGIN DECLARE _id INT; SELECT id INTO _id FROM BigHistoryTable WHERE locationId = OLD.id LIMIT 1;\n IF _id IS NOT NULL THEN INSERT INTO BigHistoryTable (column_that_does_not_exist_but_says_that_you_violated_my_hacked_foreign_key) VALUES ('fail'); END IF;END\\\\\nSometimes I can't sleep at night for having written that code.",
"msg_date": "Wed, 27 Jan 2010 10:37:22 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "On Wednesday 27 January 2010 15:49:06 Matthew Wakeling wrote:\n> On Wed, 27 Jan 2010, Thom Brown wrote:\n> > Had a quick look at a benchmark someone put together of MySQL vs\n> > PostgreSQL, and while PostgreSQL is generally faster, I noticed the bulk\n> > delete was very slow:\n> > http://www.randombugs.com/linux/mysql-postgresql-benchmarks.html\n> > \n> > Is this normal?\n> \n> On the contrary, TRUNCATE TABLE is really rather fast.\n> \n> Seriously, the Postgres developers, when designing the system, decided on\n> a database layout that was optimised for the most common cases. Bulk\n> deletion of data is not really that common an operation, unless you are\n> deleting whole categories of data, where setting up partitioning and\n> deleting whole partitions would be sensible.\n> \n> Other complications are that the server has to maintain concurrent\n> integrity - that is, another transaction must be able to see either none\n> of the changes or all of them. As a consequence of this, Postgres needs to\n> do a sequential scan through the table and mark the rows for deletion in\n> the transaction, before flipping the transaction committed status and\n> cleaning up afterwards.\n> \n> I'd be interested in how mysql manages to delete a whole load of rows in\n> 0.02 seconds. How many rows is that?\nAfair mysql detects that case and converts it into some truncate equivalent.\n\nAndres\n",
"msg_date": "Wed, 27 Jan 2010 16:56:34 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "Ivan Voras wrote:\n> I wish that, when people got the idea to run a simplistic benchmark \n> like this, they would at least have the common sense to put the \n> database on a RAM drive to avoid problems with different cylinder \n> speeds of rotational media and fragmentation from multiple runs.\nHuh?\n> It's tough to benchmark anything involving rotational drives :)\nBut - how the database organises its IO to maximise the available \nbandwidth, limit\navaiodable seeks, and limit avoidable flushes is absolutely key to \nrealistic performance,\nespecially on modest everyday hardware. Not everyone has a usage that \njustifies\n'enterprise' kit - but plenty of people can benefit from something a \nstep up from\nSQLite.\n\nIf you just want to benchmark query processor efficiency then that's one \nscenario\nwhere taking physical IO out of the picture might be justified, but I \ndon't see a good\nreason to suggest that it is 'common sense' to do so for all testing, \nand while the\nhardware involved is pretty low end, its still a valid data point.\n.\n\n\n",
"msg_date": "Wed, 27 Jan 2010 21:19:13 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "Kevin Grittner wrote:\n> It is if you don't have an index on the table which has a foreign\n> key defined which references the table in which you're doing\n> deletes. The author of the benchmark apparently didn't realize that\n> MySQL automatically adds such an index to the dependent table, while\n> PostgreSQL leaves it to you to decide whether to add such an index. \n> \n\nThe author there didn't write the PostgreSQL schema; he's just using the \nosdb test kit: http://osdb.sourceforge.net/\n\nGiven that both Peter and Neil Conway have thrown work their way, I know \nthere's been some PG specific work done on that project by people who \nknow what's going on, but I'm not sure if that included a performance \ncheck. A quick glance at \nhttp://osdb.cvs.sourceforge.net/viewvc/osdb/osdb/src/callable-sql/postgres-ui/osdb-pg-ui.m4?revision=1.4&view=markup \nfinds this:\n\n 222 createIndexForeign(char* tName, char* keyName, char* keyCol,\n 223 char* fTable, char* fFields) {\n 224 snprintf(cmd, CMDBUFLEN,\n 225 \"alter table %s add constraint %s foreign key (%s) references %s (%s)\",\n 226 tName, keyName, keyCol, fTable, fFields);\n\n\nBut I don't see any obvious spot where the matching index that should go \nalong with that is created at. The code is just convoluted enough (due \nto how they abstract away support for multiple databases) that I'm not \nsure yet--maybe they call their createIndexBtree function and fix this \nin a later step. But the way the function is \nnamed--\"createIndexForeign\"--seems to suggest they believe that this \noperation will create the index, too, which as you point out is just not \ntrue.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 27 Jan 2010 16:38:44 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
},
{
"msg_contents": "James Mansion wrote:\n> Ivan Voras wrote:\n>> I wish that, when people got the idea to run a simplistic benchmark\n>> like this, they would at least have the common sense to put the\n>> database on a RAM drive to avoid problems with different cylinder\n>> speeds of rotational media and fragmentation from multiple runs.\n> Huh?\n>> It's tough to benchmark anything involving rotational drives :)\n> But - how the database organises its IO to maximise the available\n> bandwidth, limit\n> avaiodable seeks, and limit avoidable flushes is absolutely key to\n> realistic performance,\n> especially on modest everyday hardware. Not everyone has a usage that\n> justifies\n> 'enterprise' kit - but plenty of people can benefit from something a\n> step up from\n> SQLite.\n> \n> If you just want to benchmark query processor efficiency then that's one\n> scenario\n> where taking physical IO out of the picture might be justified, but I\n> don't see a good\n> reason to suggest that it is 'common sense' to do so for all testing,\n> and while the\n> hardware involved is pretty low end, its still a valid data point.\n> .\n\nYou are right, of course, for common benchmarking to see what\nperformance can be expected from some setup in some circumstances, but\nnot where the intention is to compare products.\n\nYou don't have to go the memory drive / SSD route - just make sure the\ndatabases always use the same (small) area of the disk drive.",
"msg_date": "Thu, 28 Jan 2010 11:52:09 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmark shows very slow bulk delete"
}
] |
[
{
"msg_contents": "Hi all - sorry to create additional email 'noise'\n\nBut I've been trying to post a rather long query to\n\nThe pgsql-performance user list. Dave thought\n\nThat it might have been bounced due to the length\n\nAnd suggested I send a short 'blast'\n\n \n\nIf this works I'll send a shortened version of my query later.\n\n \n\nThank you,\n\n \n\nMark Steben | Database Administrator \n <http://www.autorevenue.com> @utoRevenueR - \"Keeping Customers Close\" \n95D Ashley Ave, West Springfield, MA 01089 \n413.243.4800 x1512 (Phone) |413.732-1824 (Fax) \n <http://www.dominionenterprises.com> @utoRevenue is a registered trademark\nand a division of Dominion Enterprises \n\n\n\n \n\n\n\n\n\n\n\n\n\n\nHi all – sorry to create additional email ‘noise’\nBut I’ve been trying to post a rather long query to\nThe pgsql-performance user list. Dave thought\nThat it might have been bounced due to the length\nAnd suggested I send a short ‘blast’\n \nIf this works I’ll send a shortened version of my\nquery later.\n \nThank you,\n \nMark Steben | Database Administrator \n@utoRevenue® -\n\"Keeping Customers Close\" \n95D Ashley Ave, West\nSpringfield, MA 01089 \n413.243.4800 x1512\n(Phone) |413.732-1824\n(Fax) \n@utoRevenue is a registered trademark and a\ndivision of Dominion Enterprises",
"msg_date": "Wed, 27 Jan 2010 10:33:45 -0500",
"msg_from": "\"Mark Steben\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "test send (recommended by Dave Page)"
},
{
"msg_contents": "On Wed, 27 Jan 2010, Mark Steben wrote:\n> Subject: [PERFORM] test send (recommended by Dave Page)\n> \n> Hi all - sorry to create additional email 'noise'\n>\n> But I've been trying to post a rather long query to\n>\n> The pgsql-performance user list. Dave thought\n>\n> That it might have been bounced due to the length\n>\n> And suggested I send a short 'blast'\n>\n>\n>\n> If this works I'll send a shortened version of my query later.\n\nWhatever you do, don't try to send an email to the list with the word \n\"help\" in the subject. The mailing list software will silently throw away \nyour email. Helpful, for a \"help\" mailing list.\n\nMatthew\n\n-- \n The early bird gets the worm, but the second mouse gets the cheese.\n",
"msg_date": "Wed, 27 Jan 2010 15:36:30 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: test send (recommended by Dave Page)"
},
{
"msg_contents": "On Wed, Jan 27, 2010 at 3:33 PM, Mark Steben <[email protected]> wrote:\n> Hi all – sorry to create additional email ‘noise’\n>\n> But I’ve been trying to post a rather long query to\n>\n> The pgsql-performance user list. Dave thought\n>\n> That it might have been bounced due to the length\n>\n> And suggested I send a short ‘blast’\n>\n>\n>\n> If this works I’ll send a shortened version of my query later.\n\nI got it. Try posting large query plans etc. to pastebin or a similar\nservice to keep the mail size down.\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\n",
"msg_date": "Wed, 27 Jan 2010 15:43:53 +0000",
"msg_from": "Dave Page <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: test send (recommended by Dave Page)"
}
] |
[
{
"msg_contents": "Hello.\n\nI've always thought that PostgreSQL would propagate constraint from field1\nto field2 if condition says field1=field2, but this does not seem the case:\ndict=# explain select * from domain_list,title.domains where processed_at is\nnot null and key=groupid and key < 1000000 and groupid < 1000000;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------\n Hash Join (cost=2179918.87..4529994.61 rows=4616 width=318)\n Hash Cond: (domain_list.key = domains.groupid)\n -> Bitmap Heap Scan on domain_list (cost=26253.02..2310541.55\nrows=870759 width=123)\n Recheck Cond: (key < 1000000)\n -> Bitmap Index Scan on domain_list_new_pkey (cost=0.00..26035.33\nrows=870759 width=0)\n Index Cond: (key < 1000000)\n -> Hash (cost=2119232.34..2119232.34 rows=864201 width=195)\n -> Bitmap Heap Scan on domains (cost=16674.34..2119232.34\nrows=864201 width=195)\n Recheck Cond: (groupid < 1000000)\n Filter: (processed_at IS NOT NULL)\n -> Bitmap Index Scan on dgroup (cost=0.00..16458.29\nrows=890154 width=0)\n Index Cond: (groupid < 1000000)\n(12 rows)\n\ndict=# explain select * from domain_list,title.domains where processed_at is\nnot null and key=groupid and key < 1000000 ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------\n Hash Join (cost=2337583.04..18222634.81 rows=845372 width=318)\n Hash Cond: (domains.groupid = domain_list.key)\n -> Seq Scan on domains (cost=0.00..5423788.20 rows=158280964 width=195)\n Filter: (processed_at IS NOT NULL)\n -> Hash (cost=2310541.55..2310541.55 rows=870759 width=123)\n -> Bitmap Heap Scan on domain_list (cost=26253.02..2310541.55\nrows=870759 width=123)\n Recheck Cond: (key < 1000000)\n -> Bitmap Index Scan on domain_list_new_pkey\n (cost=0.00..26035.33 rows=870759 width=0)\n Index Cond: (key < 1000000)\n(9 rows)\n\ndict=# explain select * from domain_list,title.domains where processed_at is\nnot null and key=groupid and groupid < 1000000;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------\n Hash Join (cost=2153665.85..16943819.35 rows=862710 width=318)\n Hash Cond: (domain_list.key = domains.groupid)\n -> Seq Scan on domain_list (cost=0.00..6887257.54 rows=162753054\nwidth=123)\n -> Hash (cost=2119232.34..2119232.34 rows=864201 width=195)\n -> Bitmap Heap Scan on domains (cost=16674.34..2119232.34\nrows=864201 width=195)\n Recheck Cond: (groupid < 1000000)\n Filter: (processed_at IS NOT NULL)\n -> Bitmap Index Scan on dgroup (cost=0.00..16458.29\nrows=890154 width=0)\n Index Cond: (groupid < 1000000)\n(9 rows)\n\n\nThe first query is the fastest one, but it is equal to both 2 and 3 and I\nthought PostgreSQL can perform such propagation by itself.\n\nBest regards, Vitalii Tymchyshyn.\n\nHello.I've always thought that PostgreSQL would propagate constraint from field1 to field2 if condition says field1=field2, but this does not seem the case:dict=# explain select * from domain_list,title.domains where processed_at is not null and key=groupid and key < 1000000 and groupid < 1000000;\n QUERY PLAN --------------------------------------------------------------------------------------------------\n Hash Join (cost=2179918.87..4529994.61 rows=4616 width=318) Hash Cond: (domain_list.key = domains.groupid) -> Bitmap Heap Scan on domain_list (cost=26253.02..2310541.55 rows=870759 width=123)\n Recheck Cond: (key < 1000000) -> Bitmap Index Scan on domain_list_new_pkey 
(cost=0.00..26035.33 rows=870759 width=0) Index Cond: (key < 1000000)\n -> Hash (cost=2119232.34..2119232.34 rows=864201 width=195) -> Bitmap Heap Scan on domains (cost=16674.34..2119232.34 rows=864201 width=195) Recheck Cond: (groupid < 1000000)\n Filter: (processed_at IS NOT NULL) -> Bitmap Index Scan on dgroup (cost=0.00..16458.29 rows=890154 width=0) Index Cond: (groupid < 1000000)\n(12 rows)dict=# explain select * from domain_list,title.domains where processed_at is not null and key=groupid and key < 1000000 ; QUERY PLAN \n-------------------------------------------------------------------------------------------------------- Hash Join (cost=2337583.04..18222634.81 rows=845372 width=318) Hash Cond: (domains.groupid = domain_list.key)\n -> Seq Scan on domains (cost=0.00..5423788.20 rows=158280964 width=195) Filter: (processed_at IS NOT NULL) -> Hash (cost=2310541.55..2310541.55 rows=870759 width=123)\n -> Bitmap Heap Scan on domain_list (cost=26253.02..2310541.55 rows=870759 width=123) Recheck Cond: (key < 1000000) -> Bitmap Index Scan on domain_list_new_pkey (cost=0.00..26035.33 rows=870759 width=0)\n Index Cond: (key < 1000000)(9 rows)dict=# explain select * from domain_list,title.domains where processed_at is not null and key=groupid and groupid < 1000000;\n QUERY PLAN -------------------------------------------------------------------------------------------- Hash Join (cost=2153665.85..16943819.35 rows=862710 width=318)\n Hash Cond: (domain_list.key = domains.groupid) -> Seq Scan on domain_list (cost=0.00..6887257.54 rows=162753054 width=123) -> Hash (cost=2119232.34..2119232.34 rows=864201 width=195)\n -> Bitmap Heap Scan on domains (cost=16674.34..2119232.34 rows=864201 width=195) Recheck Cond: (groupid < 1000000) Filter: (processed_at IS NOT NULL)\n -> Bitmap Index Scan on dgroup (cost=0.00..16458.29 rows=890154 width=0) Index Cond: (groupid < 1000000)(9 rows)\nThe first query is the fastest one, but it is equal to both 2 and 3 and I thought PostgreSQL can perform such propagation by itself.Best regards, Vitalii Tymchyshyn.",
"msg_date": "Thu, 28 Jan 2010 13:21:01 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Constraint propagating for equal fields"
},
{
"msg_contents": "2010/1/28 Віталій Тимчишин <[email protected]>\n>\n> I've always thought that PostgreSQL would propagate constraint from field1 to field2 if condition says field1=field2, but this does not seem the case:\n\nversion?\n\n--\ngreg\n",
"msg_date": "Sat, 30 Jan 2010 02:30:19 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Constraint propagating for equal fields"
},
{
"msg_contents": "30 січня 2010 р. 04:30 Greg Stark <[email protected]> написав:\n\n> 2010/1/28 Віталій Тимчишин <[email protected]>\n> >\n> > I've always thought that PostgreSQL would propagate constraint from\n> field1 to field2 if condition says field1=field2, but this does not seem the\n> case:\n>\n> version?\n>\n>\nPostgreSQL 8.3.7 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1\n20070719 [FreeBSD]\n\n30 січня 2010 р. 04:30 Greg Stark <[email protected]> написав:\n2010/1/28 Віталій Тимчишин <[email protected]>\n>\n> I've always thought that PostgreSQL would propagate constraint from field1 to field2 if condition says field1=field2, but this does not seem the case:\n\nversion?\n PostgreSQL 8.3.7 on amd64-portbld-freebsd7.2, compiled by GCC cc (GCC) 4.2.1 20070719 [FreeBSD]",
"msg_date": "Mon, 1 Feb 2010 15:30:57 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Constraint propagating for equal fields"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have a server running CentOS5 with 6gb of memory that will run postgres\n8.3 exclusively.\nI would like to allocate 4gb of the memory to shared buffers for postgres.\nI have modified some kernel settings as follows:\n\nshmall 1048576 pages 4,294,967,296 bytes\nshmmax 4,294,967,295 bytes\n\nI can set the postgres config to shared_buffers = 2700MB but no higher.\nIf I try shared_buffers = 2750MB the server fails to start with a message it\ncannot allocate memory:\n\n2010-01-29 11:24:39 EST FATAL: shmat(id=1638400) failed: Cannot allocate\nmemory\n\nIs there some other setting that could be limiting the amount I can\nallocate?\n\nExcerpt from postgresql.conf:\n\n# - Memory -\n\nshared_buffers = 2750MB # min 128kB or max_connections*16kB\n # (change requires restart)\ntemp_buffers = 32MB # min 800kB\nmax_prepared_transactions = 10 # can be 0 or more\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 2MB # min 64kB\nmaintenance_work_mem = 32MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n\nAny help appreciated, Thanks\n\nRod\n\nHi All,I have a server running CentOS5 with 6gb of memory that will run postgres 8.3 exclusively.I would like to allocate 4gb of the memory to shared buffers for postgres.I have modified some kernel settings as follows:\nshmall 1048576 pages 4,294,967,296 bytesshmmax 4,294,967,295 bytesI can set the postgres config to shared_buffers = 2700MB but no higher.If I try shared_buffers = 2750MB the server fails to start with a message it cannot allocate memory:\n2010-01-29 11:24:39 EST FATAL: shmat(id=1638400) failed: Cannot allocate memoryIs there some other setting that could be limiting the amount I can allocate?Excerpt from postgresql.conf:# - Memory -\nshared_buffers = 2750MB # min 128kB or max_connections*16kB # (change requires restart)temp_buffers = 32MB # min 800kBmax_prepared_transactions = 10 # can be 0 or more\n # (change requires restart)# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 2MB # min 64kBmaintenance_work_mem = 32MB # min 1MB#max_stack_depth = 2MB # min 100kBAny help appreciated, ThanksRod",
"msg_date": "Fri, 29 Jan 2010 11:37:55 -0500",
"msg_from": "\"**Rod MacNeil\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Limited Shared Buffer Problem"
},
{
"msg_contents": "\n\n**Rod MacNeil wrote:\n> Hi All,\n> \n> I have a server running CentOS5 with 6gb of memory that will run \n> postgres 8.3 exclusively.\n> I would like to allocate 4gb of the memory to shared buffers for postgres.\n\nIt might be worth pausing at this point:\n\nThe various postgresql tuning guides usually suggest that on a dedicated \nsystem, you should give postgres about 1/4 of the RAM for shared \nbuffers, while telling it that the effective_cache_size = 1/2 RAM.\n\nPostgres will make good use of the OS cache as a file-cache - the \n\"effective_cache_size\" setting is advisory to postgres that it can \nexpect about this much data to be in RAM.\n\nAlso, If you are setting up a new system, it's probably worth going for \n8.4.2. Postgres is relatively easy to build from source.\n\nHTH,\n\nRichard\n",
"msg_date": "Fri, 29 Jan 2010 16:53:01 +0000",
"msg_from": "Richard Neill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "On Fri, Jan 29, 2010 at 9:37 AM, **Rod MacNeil\n<[email protected]> wrote:\n> Hi All,\n>\n> I have a server running CentOS5 with 6gb of memory that will run postgres\n> 8.3 exclusively.\n> I would like to allocate 4gb of the memory to shared buffers for postgres.\n> I have modified some kernel settings as follows:\n>\n> shmall 1048576 pages 4,294,967,296 bytes\n> shmmax 4,294,967,295 bytes\n>\n> I can set the postgres config to shared_buffers = 2700MB but no higher.\n> If I try shared_buffers = 2750MB the server fails to start with a message it\n> cannot allocate memory:\n\nAre you running 32 or 64 bit Centos?\n\nAlso, that's a rather high setting for shared_buffers on a 6G machine.\n Generally 2G or so should be plenty unless you have actual data sets\nthat are larger than that.\n",
"msg_date": "Fri, 29 Jan 2010 10:18:15 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "You are probably running 32bit OS. So the problem is that the OS\ncannot allocate more than 3G of memory continuous . Then the only\nsolution is to migrate to a 64bit OS.\n\n2010/1/29 **Rod MacNeil <[email protected]>:\n> Hi All,\n>\n> I have a server running CentOS5 with 6gb of memory that will run postgres\n> 8.3 exclusively.\n> I would like to allocate 4gb of the memory to shared buffers for postgres.\n> I have modified some kernel settings as follows:\n>\n> shmall 1048576 pages 4,294,967,296 bytes\n> shmmax 4,294,967,295 bytes\n>\n> I can set the postgres config to shared_buffers = 2700MB but no higher.\n> If I try shared_buffers = 2750MB the server fails to start with a message it\n> cannot allocate memory:\n>\n> 2010-01-29 11:24:39 EST FATAL: shmat(id=1638400) failed: Cannot allocate\n> memory\n>\n> Is there some other setting that could be limiting the amount I can\n> allocate?\n>\n> Excerpt from postgresql.conf:\n>\n> # - Memory -\n>\n> shared_buffers = 2750MB # min 128kB or max_connections*16kB\n> # (change requires restart)\n> temp_buffers = 32MB # min 800kB\n> max_prepared_transactions = 10 # can be 0 or more\n> # (change requires restart)\n> # Note: Increasing max_prepared_transactions costs ~600 bytes of shared\n> memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 2MB # min 64kB\n> maintenance_work_mem = 32MB # min 1MB\n> #max_stack_depth = 2MB # min 100kB\n>\n>\n> Any help appreciated, Thanks\n>\n> Rod\n>\n>\n",
"msg_date": "Fri, 29 Jan 2010 18:46:10 +0100",
"msg_from": "jose javier parra sanchez <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "2010/1/29 Richard Neill <[email protected]>:\n>\n>\n> **Rod MacNeil wrote:\n>>\n>> Hi All,\n>>\n>> I have a server running CentOS5 with 6gb of memory that will run postgres\n>> 8.3 exclusively.\n>> I would like to allocate 4gb of the memory to shared buffers for postgres.\n>\n> It might be worth pausing at this point:\n>\n> The various postgresql tuning guides usually suggest that on a dedicated\n> system, you should give postgres about 1/4 of the RAM for shared buffers,\n> while telling it that the effective_cache_size = 1/2 RAM.\n>\n> Postgres will make good use of the OS cache as a file-cache - the\n> \"effective_cache_size\" setting is advisory to postgres that it can expect\n> about this much data to be in RAM.\n\nAFAIK effective_cache_size is estimation of OS Page Cache + Estimated\nCache in shared_buffers.\n\n>\n> Also, If you are setting up a new system, it's probably worth going for\n> 8.4.2. Postgres is relatively easy to build from source.\n>\n> HTH,\n>\n> Richard\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain\n",
"msg_date": "Fri, 29 Jan 2010 19:18:33 +0100",
"msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "Richard Neill escribi�:\n>\n>\n> **Rod MacNeil wrote:\n>> Hi All,\n>>\n>> I have a server running CentOS5 with 6gb of memory that will run \n>> postgres 8.3 exclusively.\n>> I would like to allocate 4gb of the memory to shared buffers for \n>> postgres.\n>\n> It might be worth pausing at this point:\n>\n> The various postgresql tuning guides usually suggest that on a \n> dedicated system, you should give postgres about 1/4 of the RAM for \n> shared buffers, while telling it that the effective_cache_size = 1/2 RAM.\n>\n> Postgres will make good use of the OS cache as a file-cache - the \n> \"effective_cache_size\" setting is advisory to postgres that it can \n> expect about this much data to be in RAM.\n>\n> Also, If you are setting up a new system, it's probably worth going \n> for 8.4.2. Postgres is relatively easy to build from source.\n>\n> HTH,\n>\n> Richard\n>\nAll these values has to be combined with the others: shared_buffers, \nwork_mem,etc.\nMy recommendation is to go down a little the shmmax and the \nshared_buffers values.\nIs very necessary that you have these values so high?\n\nRegards\n\n\n-- \n-------------------------------------\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nIng. Marcos Lu�s Ort�z Valmaseda\nPostgreSQL System DBA && DWH -- BI Apprentice\n\nCentro de Tecnolog�as de Almacenamiento y An�lisis de Datos (CENTALAD)\nUniversidad de las Ciencias Inform�ticas\n\nLinux User # 418229\n\n-- PostgreSQL --\n\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n-- DWH + BI --\n\nhttp://www.tdwi.org\n\n-------------------------------------------\n\n",
"msg_date": "Fri, 29 Jan 2010 12:24:12 -0600",
"msg_from": "=?ISO-8859-1?Q?=22Ing_=2E_Marcos_Lu=EDs_Ort=EDz_Valmaseda?=\n\t=?ISO-8859-1?Q?=22?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "Thanx, I will try out that recommendation.\n\n\nOn Fri, Jan 29, 2010 at 11:53 AM, Richard Neill <[email protected]> wrote:\n\n>\n>\n> **Rod MacNeil wrote:\n>\n>> Hi All,\n>>\n>> I have a server running CentOS5 with 6gb of memory that will run postgres\n>> 8.3 exclusively.\n>> I would like to allocate 4gb of the memory to shared buffers for postgres.\n>>\n>\n> It might be worth pausing at this point:\n>\n> The various postgresql tuning guides usually suggest that on a dedicated\n> system, you should give postgres about 1/4 of the RAM for shared buffers,\n> while telling it that the effective_cache_size = 1/2 RAM.\n>\n> Postgres will make good use of the OS cache as a file-cache - the\n> \"effective_cache_size\" setting is advisory to postgres that it can expect\n> about this much data to be in RAM.\n>\n> Also, If you are setting up a new system, it's probably worth going for\n> 8.4.2. Postgres is relatively easy to build from source.\n>\n> HTH,\n>\n> Richard\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRod MacNeil\nSenior Software Engineer\nInteract Direct Marketing, Inc.\nwww.interactdirect.com\[email protected]\nPrimary Phone Mississauga Ontario: 905-278-4086\nAlternate Phone London Ontario: 519-438-6245, Ext 183\n\nThanx, I will try out that recommendation.On Fri, Jan 29, 2010 at 11:53 AM, Richard Neill <[email protected]> wrote:\n\n\n**Rod MacNeil wrote:\n\nHi All,\n\nI have a server running CentOS5 with 6gb of memory that will run postgres 8.3 exclusively.\nI would like to allocate 4gb of the memory to shared buffers for postgres.\n\n\nIt might be worth pausing at this point:\n\nThe various postgresql tuning guides usually suggest that on a dedicated system, you should give postgres about 1/4 of the RAM for shared buffers, while telling it that the effective_cache_size = 1/2 RAM.\n\nPostgres will make good use of the OS cache as a file-cache - the \"effective_cache_size\" setting is advisory to postgres that it can expect about this much data to be in RAM.\n\nAlso, If you are setting up a new system, it's probably worth going for 8.4.2. Postgres is relatively easy to build from source.\n\nHTH,\n\nRichard\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Rod MacNeilSenior Software EngineerInteract Direct Marketing, [email protected]\nPrimary Phone Mississauga Ontario: 905-278-4086Alternate Phone London Ontario: 519-438-6245, Ext 183",
"msg_date": "Fri, 29 Jan 2010 13:36:12 -0500",
"msg_from": "\"**Rod MacNeil\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Limited Shared Buffer Problem"
},
{
"msg_contents": "C�dric Villemain wrote:\n> AFAIK effective_cache_size is estimation of OS Page Cache + Estimated\n> Cache in shared_buffers.\n> \n\nYes, the total value you set is used as is, and should include both \npieces of memory. The planner doesn't add the shared_buffers value to \nthe total first for you, as some people might guess it would. \n\nThe only thing effective_cache_size is used for is estimating how \nexpensive an index is likely to be to use, to make decisions like when \nto do an index-based scan instead of just scanning the table itself.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Fri, 29 Jan 2010 17:19:41 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Limited Shared Buffer Problem"
}
] |
[
{
"msg_contents": "Hitting a performance issues that I'm not sure how to diagnose.\n\nSELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;\nSeq Scan on stomp_steps s (cost=0.00..793.52 rows=2902 width=4)\n(actual time=26509.919..26509.919 rows=0 loops=1)\nTotal runtime: 26509.972 ms\n\nThe inner function looks like this:\n\nCREATE FUNCTION highscores_for_steps_and_card(steps_id int, card_id\nint, count int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$\n SELECT r.id FROM stomp_round r\n WHERE ($1 IS NULL OR r.steps_id = $1) AND ($2 IS NULL OR\nr.user_card_id = $2)\n ORDER BY r.score DESC LIMIT $3\n$$\n\n Limit (cost=13.12..13.12 rows=1 width=8) (actual time=0.054..0.054\nrows=0 loops=1)\n -> Sort (cost=13.12..13.12 rows=1 width=8) (actual\ntime=0.051..0.051 rows=0 loops=1)\n Sort Key: score\n Sort Method: quicksort Memory: 17kB\n -> Bitmap Heap Scan on stomp_round r (cost=9.09..13.11\nrows=1 width=8) (actual time=0.036..0.036 rows=0 loops=1)\n Recheck Cond: ((280 = steps_id) AND (user_card_id = 591))\n -> BitmapAnd (cost=9.09..9.09 rows=1 width=0) (actual\ntime=0.032..0.032 rows=0 loops=1)\n -> Bitmap Index Scan on stomp_round_steps_id\n(cost=0.00..4.40 rows=20 width=0) (actual time=0.030..0.030 rows=0\nloops=1)\n Index Cond: (280 = steps_id)\n -> Bitmap Index Scan on stomp_round_user_card_id\n (cost=0.00..4.44 rows=25 width=0) (never executed)\n Index Cond: (user_card_id = 591)\n Total runtime: 0.153 ms\n(12 rows)\n\nstomp_steps has about 1500 rows, so it finds 1500 high scores, one for\neach stage.\n\nI expected scalability issues from this on a regular drive, since\nit'll be doing a ton of index seeking when not working out of cache,\nso I expected to need to change to an SSD at some point (when it no\nlonger easily fits in cache). However, I/O doesn't seem to be the\nbottleneck yet. If I run it several times, it consistently takes 26\nseconds. The entire database is in OS cache (find | xargs cat:\n250ms).\n\nI'm not sure why the full query (26s) is orders of magnitude slower\nthan 1500*0.150ms (225ms). It's not a very complex query, and I'd\nhope it's not being re-planned every iteration through the loop. Any\nthoughts? Using SELECT to iterate over a table like this is very\nuseful (and I don't know any practical alternative), but it's\ndifficult to profile since it doesn't play nice with EXPLAIN ANALYZE.\n\n-- \nGlenn Maynard\n",
"msg_date": "Fri, 29 Jan 2010 22:49:46 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query: table iteration (8.3)"
},
{
"msg_contents": "Glenn Maynard wrote:\n> Hitting a performance issues that I'm not sure how to diagnose.\n>\n> SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;\n> Seq Scan on stomp_steps s (cost=0.00..793.52 rows=2902 width=4)\n> (actual time=26509.919..26509.919 rows=0 loops=1)\n> Total runtime: 26509.972 ms\n> \nHello Glenn,\n\nStomp_steps is analyzed to 2902 rows but when you run the query the\nactual rows are 0. This means that the highscore function is not called\nor the number 0 is incorrect.\nSuppose that the number of rows is 2900, then 26 seconds means 100ms per\nfunction call. This still is a lot, compared to the 0.054 ms analyze\nresult below. The truth might be that you probably got that result by\nexplaining the query in the function with actual parameter values. This\nplan differs from the one that is made when the function is called from\nsql and is planned (once) without parameters, and in that case the plan\nis probably different. A way to check the plan of that query is to turn\non debug_print_plan and watch the server log. It takes a bit getting\nused. The plan starts with CONTEXT: SQL function \"functionname\" during\nstartup and is also recognized because in the opexpr (operator\nexpression) one of the operands is a parameter. Important is the total\ncost of the top plan node (the limit).\n\nI know 8.3 is mentioned in the subject, but I think that a WITH query\n(http://www.postgresql.org/docs/8.4/interactive/queries-with.html) could\nbe a good solution to your problem and may be worth trying out, if you\nhave the possibility to try out 8.4.\n\nRegards,\nYeb Havinga\n\n\n\n> The inner function looks like this:\n>\n> CREATE FUNCTION highscores_for_steps_and_card(steps_id int, card_id\n> int, count int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$\n> SELECT r.id FROM stomp_round r\n> WHERE ($1 IS NULL OR r.steps_id = $1) AND ($2 IS NULL OR\n> r.user_card_id = $2)\n> ORDER BY r.score DESC LIMIT $3\n> $$\n>\n> Limit (cost=13.12..13.12 rows=1 width=8) (actual time=0.054..0.054\n> rows=0 loops=1)\n> -> Sort (cost=13.12..13.12 rows=1 width=8) (actual\n> time=0.051..0.051 rows=0 loops=1)\n> Sort Key: score\n> Sort Method: quicksort Memory: 17kB\n> -> Bitmap Heap Scan on stomp_round r (cost=9.09..13.11\n> rows=1 width=8) (actual time=0.036..0.036 rows=0 loops=1)\n> Recheck Cond: ((280 = steps_id) AND (user_card_id = 591))\n> -> BitmapAnd (cost=9.09..9.09 rows=1 width=0) (actual\n> time=0.032..0.032 rows=0 loops=1)\n> -> Bitmap Index Scan on stomp_round_steps_id\n> (cost=0.00..4.40 rows=20 width=0) (actual time=0.030..0.030 rows=0\n> loops=1)\n> Index Cond: (280 = steps_id)\n> -> Bitmap Index Scan on stomp_round_user_card_id\n> (cost=0.00..4.44 rows=25 width=0) (never executed)\n> Index Cond: (user_card_id = 591)\n> Total runtime: 0.153 ms\n> (12 rows)\n>\n> stomp_steps has about 1500 rows, so it finds 1500 high scores, one for\n> each stage.\n>\n> I expected scalability issues from this on a regular drive, since\n> it'll be doing a ton of index seeking when not working out of cache,\n> so I expected to need to change to an SSD at some point (when it no\n> longer easily fits in cache). However, I/O doesn't seem to be the\n> bottleneck yet. If I run it several times, it consistently takes 26\n> seconds. The entire database is in OS cache (find | xargs cat:\n> 250ms).\n>\n> I'm not sure why the full query (26s) is orders of magnitude slower\n> than 1500*0.150ms (225ms). 
It's not a very complex query, and I'd\n> hope it's not being re-planned every iteration through the loop. Any\n> thoughts? Using SELECT to iterate over a table like this is very\n> useful (and I don't know any practical alternative), but it's\n> difficult to profile since it doesn't play nice with EXPLAIN ANALYZE.\n>\n> \n\n\n",
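Concretely, Yeb's debug_print_plan suggestion amounts to something like this in a psql session; the plan dump then appears in the server log, and exactly where it lands depends on the logging settings:

SET debug_print_plan = on;
SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;
-- in the server log, look for the block starting with
-- CONTEXT: SQL function "highscores_for_steps_and_card" during startup
SET debug_print_plan = off;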
"msg_date": "Mon, 01 Feb 2010 15:54:47 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Mon, Feb 1, 2010 at 6:15 AM, Yeb Havinga <[email protected]> wrote:\n> Glenn Maynard wrote:\n>> SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;\n>> Seq Scan on stomp_steps s (cost=0.00..793.52 rows=2902 width=4)\n>> (actual time=26509.919..26509.919 rows=0 loops=1)\n>> Total runtime: 26509.972 ms\n\n> Stomp_steps is analyzed to 2902 rows but when you run the query the actual\n> rows are 0. This means that the highscore function is not called or the\n> number 0 is incorrect.\n\nThis SELECT returns 0 rows: it calls the function 1500 times, and each\ntime it returns no data, because there simply aren't any results for\nthese parameters.\n\n> below. The truth might be that you probably got that result by explaining\n> the query in the function with actual parameter values. This plan differs\n> from the one that is made when the function is called from sql and is\n> planned (once) without parameters, and in that case the plan is probably\n> different.\n\nYeah. It would help a lot if EXPLAIN could show query plans of\nfunctions used by the statement and not just the top-level query.\n\n> A way to check the plan of that query is to turn on\n> debug_print_plan and watch the server log. It takes a bit getting used. The\n> plan starts with CONTEXT: SQL function \"functionname\" during startup and is\n> also recognized because in the opexpr (operator expression) one of the\n> operands is a parameter. Important is the total cost of the top plan node\n> (the limit).\n\nThanks.\n\n\"SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s\":\n\nSquinting at the output, it definitely looks like a less optimized\nplan; it's using a SEQSCAN instead of BITMAPHEAPSCAN. (I've attached\nthe output.)\n\nDoes the planner not optimize functions based on context? That seems\nlike a huge class of optimizations. The first NULLTEST can be\noptimized away, since that parameter comes from a NOT NULL source (a\nPK). The second NULLTEST can also be optimized away, since it's a\nconstant value (591). The search could be a BITMAPHEAPSCAN,\nsubstituting the s.id value for each call, instead of a SEQSCAN. (Not\nthat I'm concerned about a few cheap NULLTESTs, I'm just surprised at\nit using such a generic plan.)\n\nIf I create a new function with the constant parameters hard-coded,\nit's back to BITMAPHEAPSCAN: 175ms. This suggests a horrible\nworkaround: creating temporary functions every time I make this type\nof query, with the fixed values substituted textually. I'd really\nlove to know a less awful fix.\n\n> I know 8.3 is mentioned in the subject, but I think that a WITH query\n> (http://www.postgresql.org/docs/8.4/interactive/queries-with.html) could be\n> a good solution to your problem and may be worth trying out, if you have the\n> possibility to try out 8.4.\n\nI can't see how to apply WITH to this. Non-recursive WITH seems like\nsyntax sugar that doesn't do anything a plain SELECT can't do, and I\ndon't think what I'm doing here can be done with a regular SELECT.\n\n-- \nGlenn Maynard",
"msg_date": "Mon, 1 Feb 2010 21:52:45 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "Glenn Maynard wrote:\n> On Mon, Feb 1, 2010 at 6:15 AM, Yeb Havinga <[email protected]> wrote:\n> \n>> Stomp_steps is analyzed to 2902 rows but when you run the query the actual\n>> rows are 0. This means that the highscore function is not called or the\n>> number 0 is incorrect.\n>> \n> This SELECT returns 0 rows: it calls the function 1500 times, and each\n> time it returns no data, because there simply aren't any results for\n> these parameters.\n> \nHmm.. first posting on a pg mailing list and I make this mistake.. What \nan introduction :-[\nChecked the source and indeed for every plan node the number of tuples \nthat result from it are counted. In most cases this is the number of \nrecords that match the qualifiers (where clause/join conditions) so that \nwas in my head: actual rows = rows that match where, and without where \nI'd expected the actual rows to reflect the total number of rows in the \ntable. But with a set returning functions this number is something \ncompletely different.\n>> below. The truth might be that you probably got that result by explaining\n>> the query in the function with actual parameter values. This plan differs\n>> from the one that is made when the function is called from sql and is\n>> planned (once) without parameters, and in that case the plan is probably\n>> different.\n>> \n>\n> Yeah. It would help a lot if EXPLAIN could show query plans of\n> functions used by the statement and not just the top-level query.\n> \nLike subplans are, yes. Sounds like a great future.\n> Squinting at the output, it definitely looks like a less optimized\n> plan; it's using a SEQSCAN instead of BITMAPHEAPSCAN. (I've attached\n> the output.)\n>\n> Does the planner not optimize functions based on context?\nI believe it does for (re) binding of parameter values to prepared \nstatements, but not in the case of an sql function. To test an idea, \nthere might be a workaround where you could write a pl/pgsql function \nthat makes a string with the query and actual parameter values and \nexecutes that new query everytime. It's not as pretty as a sql function, \nbut would give an idea of how fast things would run with each loop \nreplanned. Another idea is that maybe you could 'hint' the planner at \nplanning time of the sql function by giving it some extra set commands \n(like set search_path but then set enable_seqscan = off) - I don't know \nif planning of the sql function occurs in the environment given by it's \nset commands, but its worth a try. Again, certainly not pretty.\n> I can't see how to apply WITH to this. Non-recursive WITH seems like\n> syntax sugar that doesn't do anything a plain SELECT can't do, and I\n> don't think what I'm doing here can be done with a regular SELECT.\n> \nWith indeed is not a solution because the with query is executed once, \nso it cannot take a parameter. What about a window function on a join of \nstomp_steps and stomp_round with partition by on steps_id and user_card \nis and order by score and with row_number() < your third parameter. From \nthe docs I read that window functions cannot be part of the where \nclause: an extra subselect leven is needed then to filter the correct \nrow numbers.\n\nRegards,\nYeb Havinga\n\n",
"msg_date": "Tue, 02 Feb 2010 11:06:00 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Fri, Jan 29, 2010 at 10:49 PM, Glenn Maynard <[email protected]> wrote:\n> Hitting a performance issues that I'm not sure how to diagnose.\n>\n> SELECT highscores_for_steps_and_card(s.id, 591, 1) FROM stomp_steps s;\n> Seq Scan on stomp_steps s (cost=0.00..793.52 rows=2902 width=4)\n> (actual time=26509.919..26509.919 rows=0 loops=1)\n> Total runtime: 26509.972 ms\n>\n> The inner function looks like this:\n>\n> CREATE FUNCTION highscores_for_steps_and_card(steps_id int, card_id\n> int, count int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$\n> SELECT r.id FROM stomp_round r\n> WHERE ($1 IS NULL OR r.steps_id = $1) AND ($2 IS NULL OR\n> r.user_card_id = $2)\n> ORDER BY r.score DESC LIMIT $3\n> $$\n>\n> Limit (cost=13.12..13.12 rows=1 width=8) (actual time=0.054..0.054\n> rows=0 loops=1)\n> -> Sort (cost=13.12..13.12 rows=1 width=8) (actual\n> time=0.051..0.051 rows=0 loops=1)\n> Sort Key: score\n> Sort Method: quicksort Memory: 17kB\n> -> Bitmap Heap Scan on stomp_round r (cost=9.09..13.11\n> rows=1 width=8) (actual time=0.036..0.036 rows=0 loops=1)\n> Recheck Cond: ((280 = steps_id) AND (user_card_id = 591))\n> -> BitmapAnd (cost=9.09..9.09 rows=1 width=0) (actual\n> time=0.032..0.032 rows=0 loops=1)\n> -> Bitmap Index Scan on stomp_round_steps_id\n> (cost=0.00..4.40 rows=20 width=0) (actual time=0.030..0.030 rows=0\n> loops=1)\n> Index Cond: (280 = steps_id)\n> -> Bitmap Index Scan on stomp_round_user_card_id\n> (cost=0.00..4.44 rows=25 width=0) (never executed)\n> Index Cond: (user_card_id = 591)\n> Total runtime: 0.153 ms\n> (12 rows)\n>\n> stomp_steps has about 1500 rows, so it finds 1500 high scores, one for\n> each stage.\n>\n> I expected scalability issues from this on a regular drive, since\n> it'll be doing a ton of index seeking when not working out of cache,\n> so I expected to need to change to an SSD at some point (when it no\n> longer easily fits in cache). However, I/O doesn't seem to be the\n> bottleneck yet. If I run it several times, it consistently takes 26\n> seconds. The entire database is in OS cache (find | xargs cat:\n> 250ms).\n>\n> I'm not sure why the full query (26s) is orders of magnitude slower\n> than 1500*0.150ms (225ms). It's not a very complex query, and I'd\n> hope it's not being re-planned every iteration through the loop. Any\n> thoughts? Using SELECT to iterate over a table like this is very\n> useful (and I don't know any practical alternative), but it's\n> difficult to profile since it doesn't play nice with EXPLAIN ANALYZE.\n\nI believe that the time for the seq-scan node doesn't include the time\nto generate the outputs, which is where all the function calls are.\nAs a general rule, I have found that function calls are reaaaaally\nslow, and that calling a function in a loop is almost always a bad\nidea. You didn't mention what PG version you're running, but I\nbelieve that with a sufficiently new version (8.4?) it'll actually\ninline SQL functions into the invoking query, which will probably be\nlots faster. 
If not, you can inline it manually.\n\nRewriting it as a join will likely be faster still:\n\nSELECT r.id FROM stomp_steps s, stomp_round r WHERE (s.id IS NULL OR\nr.steps_id = s.id) AND ($1 IS NULL OR r.user_card_id = $1) ORDER BY\nr.score DESC LIMIT $2\n\nYou might even break it into two cases:\n\nSELECT r.id FROM stomp_steps s, stomp_round r WHERE r.steps_id = s.id\nAND ($1 IS NULL OR r.user_card_id = $1)\nUNION ALL\nSELECT r.id FROM stomp_steps s, stomp_round r WHERE s.id IS NULL AND\n($1 IS NULL OR r.user_card_id = $1)\nORDER BY r.score DESC LIMIT $2\n\nOr if s.id can't really be NULL:\n\nSELECT r.id FROM stomp_steps s, stomp_round r WHERE r.steps_id = s.id\nAND ($1 IS NULL OR r.user_card_id = $1)\nORDER BY r.score DESC LIMIT $2\n\nThese kinds of rewrites allow the query planner progressively more\nflexibility - to use a hash or merge join, for example, instead of a\nnested loop. And they eliminate overhead. You'll have to play around\nwith it and see what works best in your particular environment, but in\ngeneral, I find it pays big dividends to avoid wrapping these kinds of\nlogic bits inside a function.\n\n...Robert\n",
"msg_date": "Wed, 3 Feb 2010 22:05:47 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Tue, Feb 2, 2010 at 5:06 AM, Yeb Havinga <[email protected]> wrote:\n> I believe it does for (re) binding of parameter values to prepared\n> statements, but not in the case of an sql function. To test an idea, there\n> might be a workaround where you could write a pl/pgsql function that makes a\n> string with the query and actual parameter values and executes that new\n> query everytime. It's not as pretty as a sql function, but would give an\n> idea of how fast things would run with each loop replanned. Another idea is\n\nThat, or just have the code generate a function on the fly, and then\ndelete it. For example:\n\nCREATE FUNCTION tmp_highscores_for_steps_and_card_PID(steps_id int)\nRETURNS SETOF INTEGER LANGUAGE SQL AS $$\n SELECT r.id FROM stomp_round r\n WHERE ($1 IS NULL OR r.steps_id = $1) AND r.user_card_id = 591\n ORDER BY r.score DESC LIMIT 1\n$$;\nSELECT tmp_highscores_for_steps_and_card_PID(s.id) FROM stomp_steps s;\nDROP FUNCTION tmp_highscores_for_steps_and_card_PID(int);\n\nAn ugly hack, but it'd unblock things, at least. (Or, I hope so. I\ndo have other variants of this, for things like \"high scores in your\ncountry\", \"your 5 most recent high scores\", etc. That's why I'm doing\nthis dynamically like this, and not just caching high scores in\nanother table.)\n\n> With indeed is not a solution because the with query is executed once, so it\n> cannot take a parameter. What about a window function on a join of\n> stomp_steps and stomp_round with partition by on steps_id and user_card is\n> and order by score and with row_number() < your third parameter. From the\n> docs I read that window functions cannot be part of the where clause: an\n> extra subselect leven is needed then to filter the correct row numbers.\n\nSomeone suggested window functions for this back when I was designing\nit, and I looked at them. I recall it being very slow, always doing a\nseq scan, and it seemed like this wasn't quite what windowing was\ndesigned for...\n\n-- \nGlenn Maynard\n",
"msg_date": "Thu, 4 Feb 2010 01:30:06 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Wed, Feb 3, 2010 at 10:05 PM, Robert Haas <[email protected]> wrote:\n> Rewriting it as a join will likely be faster still:\n>\n> SELECT r.id FROM stomp_steps s, stomp_round r WHERE (s.id IS NULL OR\n> r.steps_id = s.id) AND ($1 IS NULL OR r.user_card_id = $1) ORDER BY\n> r.score DESC LIMIT $2\n\nThat's not the same; this SELECT will only find the N highest scores,\nsince the LIMIT applies to the whole results. Mine finds the highest\nscores for each stage (steps), since the scope of the LIMIT is each\ncall of the function (eg. \"find the top score for each stage\" as\nopposed to \"find the top five scores for each stage\").\n\nThat's the only reason I used a function at all to begin with--I know\nno way to do this with a plain SELECT.\n\neg.\n\nCREATE FUNCTION test(int) RETURNS SETOF INTEGER LANGUAGE SQL AS $$\n SELECT generate_series(100 * $1, 100 * $1 + 5) LIMIT 2;\n$$;\nCREATE TABLE test_table(id integer primary key);\nINSERT INTO test_table SELECT generate_series(1, 5);\nSELECT test(t.id) FROM test_table t;\n\nIf there's a way to do this without a helper function (that can\noptimize to index scans--I'm not sure 8.4's windowing did, need to\nrecheck), I'd really like to know it.\n\n> And they eliminate overhead.\n\nI assumed that function calls within a SELECT would be inlined for\noptimization before reaching the planner--that's why I was surprised\nwhen it was falling back on a seq scan, and not optimizing for the\ncontext.\n\nI'm using 8.3. I see \"Inline simple set-returning SQL functions in\nFROM clauses\" in the 8.4 changelog; I'm not sure if that applies to\nthis, since this set-returning SQL function isn't in the FROM clause.\n\n-- \nGlenn Maynard\n",
"msg_date": "Thu, 4 Feb 2010 03:24:31 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "isn't that possible with window functions and cte ?\nrank, and limit ?\n",
"msg_date": "Thu, 4 Feb 2010 08:28:51 +0000",
"msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "2010/2/4 Grzegorz Jaśkiewicz <[email protected]>:\n> isn't that possible with window functions and cte ?\n> rank, and limit ?\n\nIt is, but again I tried that when I originally designed this and I\nthink it ended up using seq scans, or at least being slow for some\nreason or other.\n\nBut I'll be dropping this db into 8.4 soon to see if it helps\nanything, and I'll check again (and if it's still slow I'll post more\ndetails). It's been a while and I might just have been doing\nsomething wrong.\n\n-- \nGlenn Maynard\n",
"msg_date": "Thu, 4 Feb 2010 04:09:11 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 3:24 AM, Glenn Maynard <[email protected]> wrote:\n> On Wed, Feb 3, 2010 at 10:05 PM, Robert Haas <[email protected]> wrote:\n>> Rewriting it as a join will likely be faster still:\n>>\n>> SELECT r.id FROM stomp_steps s, stomp_round r WHERE (s.id IS NULL OR\n>> r.steps_id = s.id) AND ($1 IS NULL OR r.user_card_id = $1) ORDER BY\n>> r.score DESC LIMIT $2\n>\n> That's not the same; this SELECT will only find the N highest scores,\n> since the LIMIT applies to the whole results. Mine finds the highest\n> scores for each stage (steps), since the scope of the LIMIT is each\n> call of the function (eg. \"find the top score for each stage\" as\n> opposed to \"find the top five scores for each stage\").\n>\n> That's the only reason I used a function at all to begin with--I know\n> no way to do this with a plain SELECT.\n\nOh, I get it. Yeah, I don't think you can do that without LATERAL(),\nwhich we don't have, unless the window-function thing works...\n\n...Robert\n",
"msg_date": "Thu, 4 Feb 2010 16:57:21 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 4:09 AM, Glenn Maynard <[email protected]> wrote:\n> But I'll be dropping this db into 8.4 soon to see if it helps\n> anything, and I'll check again (and if it's still slow I'll post more\n> details). It's been a while and I might just have been doing\n> something wrong.\n\nWindowing doesn't want to scale for this case. I'll simplify to give\nan actual test case.\n\ncreate table test_users (id serial primary key);\ninsert into test_users (id) select generate_series(1, 1000);\ncreate table test (id serial primary key, score integer, user_id integer);\ninsert into test (user_id, score) select s.id, random() * 1000000 from\n(select generate_series(1, 1000) as id) as s, generate_series(1,\n1000);\ncreate index test_1 on test (score);\ncreate index test_2 on test (user_id, score desc);\nanalyze;\n\nThis generates a thousand users, with a thousand scores each. This\nfinds the top score for each user (ignoring the detail of duplicate\nscores; easy to deal with):\n\nSELECT sub.id FROM (\n SELECT t.id, rank() OVER (PARTITION BY t.user_id ORDER BY score\nDESC) AS rank\n FROM test t\n) AS sub WHERE rank <= 1;\n\nThis does use the test_2 index (as intended), but it's still very\nslow: 11 seconds on my system.\n\nIt seems like it's doing a *complete* scan of the index, generating\nranks for every row, and then filters out all but the first of each\nrank. That means it scales linearly with the total number of rows.\nAll it really needs to do is jump to each user in the index and pull\nout the first entry (according to the \"score desc\" part of the test_2\nindex), which would make it scale linearly with the number of users.\n\nThe function version:\n\nCREATE FUNCTION high_score_for_user(user_id int) RETURNS SETOF INTEGER\nLANGUAGE SQL AS $$\n SELECT t.id FROM test t\n WHERE t.user_id = $1\n ORDER BY t.score DESC LIMIT 1\n$$;\nSELECT high_score_for_user(u.id) FROM test_users u;\n\nruns in 100ms.\n\nI think I'm stuck with either creating temporary functions with the\nconstants already replaced, or creating an SQL function that evaluates\na brand new query as a string as Yeb suggested.\n\n-- \nGlenn Maynard\n",
"msg_date": "Thu, 4 Feb 2010 22:04:41 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "Glenn Maynard wrote:\n> The function version:\n>\n> CREATE FUNCTION high_score_for_user(user_id int) RETURNS SETOF INTEGER\n> LANGUAGE SQL AS $$\n> SELECT t.id FROM test t\n> WHERE t.user_id = $1\n> ORDER BY t.score DESC LIMIT 1\n> $$;\n> SELECT high_score_for_user(u.id) FROM test_users u;\n>\n> runs in 100ms.\n> \nHi Glenn,\n\nAbout cached plans of SQL functions: from the source of function.c\n\n00067 /*\n00068 * An SQLFunctionCache record is built during the first call,\n00069 * and linked to from the fn_extra field of the FmgrInfo struct.\n00070 *\n00071 * Note that currently this has only the lifespan of the calling \nquery.\n00072 * Someday we might want to consider caching the parse/plan \nresults longer\n00073 * than that.\n00074 */\n\nSo it is planned at every call of\n\nSELECT high_score_for_user(u.id) FROM test_users u;\n\nand the cache is used between each row of test_users. The plan is with a \nparameter, that means the optimizer could not make use of an actual \nvalue during planning. However, your test case is clever in the sense \nthat there is an index on users and score and the sql function has an \norder by that matches the index, so the planner can avoid a sort by \naccessing the test table using the index. In this particular case, that \nmeans that the plan is optimal; no unneeded tuples are processed and the \n(function) plan complexity is logaritmic on the size of the test \nrelation, you can't get it any better than that. In short: the lack of \nan actual parameter in the test case did not result in an inferior plan. \nSo using a dynamic constructed query string in pl/pgsql to 'force' \nreplanning during iteration cannot be faster than this sql function.\n\nIt is possible to make the performance if this function worse by \ndisabling indexscans:\n\nCREATE FUNCTION high_score_for_user(user_id int) RETURNS SETOF INTEGER\nLANGUAGE SQL AS $$\n SELECT t.id FROM test t\n WHERE t.user_id = $1\n ORDER BY t.score DESC LIMIT 1\n$$\nSET enable_indexscan = off;\n\nNow the query time with test_users is over a second. So maybe the \nconverse could also be true in your production setup using the same \ntechnique.\n\nregards,\nYeb Havinga\n\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 05 Feb 2010 12:17:07 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Fri, Feb 5, 2010 at 6:17 AM, Yeb Havinga <[email protected]> wrote:\n> and the cache is used between each row of test_users. The plan is with a\n> parameter, that means the optimizer could not make use of an actual value\n> during planning. However, your test case is clever in the sense that there\n> is an index on users and score and the sql function has an order by that\n> matches the index, so the planner can avoid a sort by accessing the test\n> table using the index.\n\nThat's why the index exists. The point is that the window function\ndoesn't use the index in this way, and (I think) does a complete index\nscan.\n\nIt's not just about avoiding a sort, but avoiding touching all of the\nirrelevant data in the index and just index searching for each\nuser_id. The window function appears to scan the entire index. In\nprinciple, it could skip all of the \"rank() > 1\" data with an index\nsearch, which I'd expect to help many uses of rank(); I assume that's\njust hard to implement.\n\nI'll probably be implementing the \"temporary functions\" approach\ntonight, to help Postgres optimize the function. Maybe some day,\nPostgres will be able to inline functions in this case and that won't\nbe needed...\n\n-- \nGlenn Maynard\n",
"msg_date": "Fri, 5 Feb 2010 20:35:39 -0500",
"msg_from": "Glenn Maynard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "On Fri, Feb 5, 2010 at 8:35 PM, Glenn Maynard <[email protected]> wrote:\n> On Fri, Feb 5, 2010 at 6:17 AM, Yeb Havinga <[email protected]> wrote:\n>> and the cache is used between each row of test_users. The plan is with a\n>> parameter, that means the optimizer could not make use of an actual value\n>> during planning. However, your test case is clever in the sense that there\n>> is an index on users and score and the sql function has an order by that\n>> matches the index, so the planner can avoid a sort by accessing the test\n>> table using the index.\n>\n> That's why the index exists. The point is that the window function\n> doesn't use the index in this way, and (I think) does a complete index\n> scan.\n>\n> It's not just about avoiding a sort, but avoiding touching all of the\n> irrelevant data in the index and just index searching for each\n> user_id. The window function appears to scan the entire index. In\n> principle, it could skip all of the \"rank() > 1\" data with an index\n> search, which I'd expect to help many uses of rank(); I assume that's\n> just hard to implement.\n\nYeah. The window function stuff is all pretty new, and I seem to\nrecall some discussion around the fact that it's not all as\nwell-optimized as it could be yet. Maybe someone will feel the urge\nto take a whack at that for 9.1.\n\n...Robert\n",
"msg_date": "Sat, 6 Feb 2010 01:31:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
},
{
"msg_contents": "Glenn Maynard wrote:\n> CREATE FUNCTION high_score_for_user(user_id int) RETURNS SETOF INTEGER\n> LANGUAGE SQL AS $$\n> SELECT t.id FROM test t\n> WHERE t.user_id = $1\n> ORDER BY t.score DESC LIMIT 1\n> $$;\n> SELECT high_score_for_user(u.id) FROM test_users u;\n>\n> runs in 100ms.\n> \nThough it doesn't solve your problem without changing result format, but \nwhat about\n\naap=# explain select u.id, ARRAY(select t.id from test t where \nt.user_id=u.id order by t.score desc limit 2) as high from test_users u;\n QUERY \nPLAN \n-----------------------------------------------------------------------------------------\n Seq Scan on test_users u (cost=0.00..3290.84 rows=1000 width=4)\n SubPlan 1\n -> Limit (cost=0.00..3.28 rows=2 width=8)\n -> Index Scan using test_2 on test t (cost=0.00..1637.92 \nrows=1000 width=8)\n Index Cond: (user_id = $0)\n(5 rows)\n\n id | high \n------+-----------------\n 1 | {641,896}\n 2 | {1757,1167}\n 3 | {2765,2168}\n 4 | {3209,3674}\n 5 | {4479,4993}\n\nregards,\nYeb Havinga\n\n",
"msg_date": "Tue, 23 Feb 2010 13:18:19 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query: table iteration (8.3)"
}
] |
[
{
"msg_contents": "I have a relatively straightforward query that by itself isn't that\nslow, but we have to run it up to 40 times on one webpage load, so it\nneeds to run much faster than it does. Here it is:\n\nSELECT COUNT(*) FROM users, user_groups\n WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\nuser_groups.partner_id IN\n (partner_id_1, partner_id_2);\n\nThe structure is partners have user groups which have users. In the\ntest data there are over 200,000 user groups and users but only ~3000\npartners. Anyone have any bright ideas on how to speed this query up?\n\nHere's the query plan:\n\n Aggregate (cost=12574.53..12574.54 rows=1 width=0) (actual\ntime=2909.298..2909.299 rows=1 loops=1)\n -> Hash Join (cost=217.79..12566.08 rows=3378 width=0) (actual\ntime=2909.284..2909.284 rows=0 loops=1)\n Hash Cond: (users.user_group_id = user_groups.id)\n -> Seq Scan on users (cost=0.00..11026.11 rows=206144\nwidth=4) (actual time=0.054..517.811 rows=205350 loops=1)\n Filter: (NOT deleted)\n -> Hash (cost=175.97..175.97 rows=3346 width=4) (actual\ntime=655.054..655.054 rows=200002 loops=1)\n -> Nested Loop (cost=0.27..175.97 rows=3346 width=4)\n(actual time=1.327..428.406 rows=200002 loops=1)\n -> HashAggregate (cost=0.27..0.28 rows=1\nwidth=4) (actual time=1.259..1.264 rows=2 loops=1)\n -> Result (cost=0.00..0.26 rows=1\nwidth=0) (actual time=1.181..1.240 rows=2 loops=1)\n -> Index Scan using user_groups_partner_id_idx\non user_groups (cost=0.00..133.86 rows=3346 width=8) (actual\ntime=0.049..96.992 rows=100001 loops=2)\n Index Cond: (user_groups.partner_id =\n(partner_all_subpartners(3494)))\n\n\nThe one obvious thing that everyone will point out is the sequential\nscan on users, but there actually is an index on users.deleted. When I\nforced sequential scanning off, it ran slower, so the planner wins\nagain.\n\nThanks for any help you can offer.\n",
"msg_date": "Mon, 1 Feb 2010 16:53:56 -0800 (PST)",
"msg_from": "Matt White <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow-ish Query Needs Some Love"
},
{
"msg_contents": "On 2010-02-02, Matt White <[email protected]> wrote:\n> I have a relatively straightforward query that by itself isn't that\n> slow, but we have to run it up to 40 times on one webpage load, so it\n> needs to run much faster than it does. Here it is:\n>\n> SELECT COUNT(*) FROM users, user_groups\n> WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\n> user_groups.partner_id IN\n> (partner_id_1, partner_id_2);\n>\n> The structure is partners have user groups which have users. In the\n> test data there are over 200,000 user groups and users but only ~3000\n> partners. Anyone have any bright ideas on how to speed this query up?\n\nCan you avoid running it 40 times, maybe by restructuring the\nquery (or making a view) along the lines of the following and\nadding some logic to your page?\n\nSELECT p.partner_id, ug.user_group_id, u.id, count(*)\n FROM partners p\n LEFT JOIN user_groups ug\n ON ug.partner_id=p.partner_id\n LEFT JOIN users u\n ON u.user_group_id=ug.id\n WHERE NOT u.deleted\n GROUP BY 1,2,3\n;\n\n",
"msg_date": "Tue, 2 Feb 2010 13:06:46 +0000 (UTC)",
"msg_from": "Edgardo Portal <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow-ish Query Needs Some Love"
},
{
"msg_contents": "On Feb 2, 6:06 am, Edgardo Portal <[email protected]> wrote:\n> On 2010-02-02, Matt White <[email protected]> wrote:\n>\n> > I have a relatively straightforward query that by itself isn't that\n> > slow, but we have to run it up to 40 times on one webpage load, so it\n> > needs to run much faster than it does. Here it is:\n>\n> > SELECT COUNT(*) FROM users, user_groups\n> > WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\n> > user_groups.partner_id IN\n> > (partner_id_1, partner_id_2);\n>\n> > The structure is partners have user groups which have users. In the\n> > test data there are over 200,000 user groups and users but only ~3000\n> > partners. Anyone have any bright ideas on how to speed this query up?\n>\n> Can you avoid running it 40 times, maybe by restructuring the\n> query (or making a view) along the lines of the following and\n> adding some logic to your page?\n>\n> SELECT p.partner_id, ug.user_group_id, u.id, count(*)\n> FROM partners p\n> LEFT JOIN user_groups ug\n> ON ug.partner_id=p.partner_id\n> LEFT JOIN users u\n> ON u.user_group_id=ug.id\n> WHERE NOT u.deleted\n> GROUP BY 1,2,3\n> ;\n\nThanks for the suggestion. The view didn't seem to speed things up.\nPerhaps we can reduce the number of times it's called, we'll see. Any\nadditional ideas would be helpful. Thanks.\n",
"msg_date": "Tue, 2 Feb 2010 11:03:42 -0800 (PST)",
"msg_from": "Matt White <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow-ish Query Needs Some Love"
},
{
"msg_contents": "On 2/2/2010 1:03 PM, Matt White wrote:\n> On Feb 2, 6:06 am, Edgardo Portal<[email protected]> wrote:\n>> On 2010-02-02, Matt White<[email protected]> wrote:\n>>\n>>> I have a relatively straightforward query that by itself isn't that\n>>> slow, but we have to run it up to 40 times on one webpage load, so it\n>>> needs to run much faster than it does. Here it is:\n>>\n>>> SELECT COUNT(*) FROM users, user_groups\n>>> WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\n>>> user_groups.partner_id IN\n>>> (partner_id_1, partner_id_2);\n>>\n>>> The structure is partners have user groups which have users. In the\n>>> test data there are over 200,000 user groups and users but only ~3000\n>>> partners. Anyone have any bright ideas on how to speed this query up?\n>>\n>> Can you avoid running it 40 times, maybe by restructuring the\n>> query (or making a view) along the lines of the following and\n>> adding some logic to your page?\n>>\n>> SELECT p.partner_id, ug.user_group_id, u.id, count(*)\n>> FROM partners p\n>> LEFT JOIN user_groups ug\n>> ON ug.partner_id=p.partner_id\n>> LEFT JOIN users u\n>> ON u.user_group_id=ug.id\n>> WHERE NOT u.deleted\n>> GROUP BY 1,2,3\n>> ;\n>\n> Thanks for the suggestion. The view didn't seem to speed things up.\n> Perhaps we can reduce the number of times it's called, we'll see. Any\n> additional ideas would be helpful. Thanks.\n\nI agree with Edgardo, I think the biggest time saver will be reducing \ntrips to the database.\n\nBut... do you have an index on users.user_group_id?\n\nDoes rewriting it change the plan any?\n\nSELECT COUNT(*) FROM users\ninner join user_groups on (users.user_group_id = user_groups.id)\nwhere NOT users.deleted\nAND user_groups.partner_id IN (partner_id_1, partner_id_2);\n\n\nAnd... it looks like the row guestimate is off a litte:\n\nIndex Scan using user_groups_partner_id_idx\non user_groups\n(cost=0.00..133.86 rows=3346 width=8)\n(actual time=0.049..96.992 rows=100001 loops=2)\n\n\nIt guessed 3,346 rows, but actually got 100,001. Have you run an \nanalyze on it? If so, maybe bumping up the stats might help?\n\n-Andy\n",
"msg_date": "Tue, 02 Feb 2010 14:11:34 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow-ish Query Needs Some Love"
},
{
"msg_contents": "On 2/3/2010 11:17 AM, Matt White wrote:\n> On Feb 2, 1:11 pm, [email protected] (Andy Colson) wrote:\n>> On 2/2/2010 1:03 PM, Matt White wrote:\n>>\n>>\n>>\n>>\n>>\n>>> On Feb 2, 6:06 am, Edgardo Portal<[email protected]> wrote:\n>>>> On 2010-02-02, Matt White<[email protected]> wrote:\n>>\n>>>>> I have a relatively straightforward query that by itself isn't that\n>>>>> slow, but we have to run it up to 40 times on one webpage load, so it\n>>>>> needs to run much faster than it does. Here it is:\n>>\n>>>>> SELECT COUNT(*) FROM users, user_groups\n>>>>> WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\n>>>>> user_groups.partner_id IN\n>>>>> (partner_id_1, partner_id_2);\n>>\n>>>>> The structure is partners have user groups which have users. In the\n>>>>> test data there are over 200,000 user groups and users but only ~3000\n>>>>> partners. Anyone have any bright ideas on how to speed this query up?\n>>\n>>>> Can you avoid running it 40 times, maybe by restructuring the\n>>>> query (or making a view) along the lines of the following and\n>>>> adding some logic to your page?\n>>\n>>>> SELECT p.partner_id, ug.user_group_id, u.id, count(*)\n>>>> FROM partners p\n>>>> LEFT JOIN user_groups ug\n>>>> ON ug.partner_id=p.partner_id\n>>>> LEFT JOIN users u\n>>>> ON u.user_group_id=ug.id\n>>>> WHERE NOT u.deleted\n>>>> GROUP BY 1,2,3\n>>>> ;\n>>\n>>> Thanks for the suggestion. The view didn't seem to speed things up.\n>>> Perhaps we can reduce the number of times it's called, we'll see. Any\n>>> additional ideas would be helpful. Thanks.\n>>\n>> I agree with Edgardo, I think the biggest time saver will be reducing\n>> trips to the database.\n>>\n>> But... do you have an index on users.user_group_id?\n>>\n>> Does rewriting it change the plan any?\n>>\n>> SELECT COUNT(*) FROM users\n>> inner join user_groups on (users.user_group_id = user_groups.id)\n>> where NOT users.deleted\n>> AND user_groups.partner_id IN (partner_id_1, partner_id_2);\n>>\n>> And... it looks like the row guestimate is off a litte:\n>>\n>> Index Scan using user_groups_partner_id_idx\n>> on user_groups\n>> (cost=0.00..133.86 rows=3346 width=8)\n>> (actual time=0.049..96.992 rows=100001 loops=2)\n>>\n>> It guessed 3,346 rows, but actually got 100,001. Have you run an\n>> analyze on it? If so, maybe bumping up the stats might help?\n>>\n>> -Andy\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n>\n> Andy,\n>\n> I have run analyze, see my query plan in my original post. You'll have\n> to forgive me for being a bit of a Postgres noob but what do you mean\n> by \"bumping up the stats\"?\n\nThats not what I mean. \"explain analyze select...\" is what you did, and \ncorrect. What I meant was \"analyze user_groups\".\n\nsee:\nhttp://www.postgresql.org/docs/8.4/interactive/sql-analyze.html\n\n\nan analyze will make PG look at a table, and calc stats on it, so it can \nmake better guesses. By default analyze only looks at a few rows (well \na small percent of rows) and makes guesses about the entire table based \non those rows. If it guesses wrong, sometimes you need to tell it to \nanalyze more rows (ie. a bigger percentage of the table).\n\nBy \"bumping the stats\" I was referring to this:\n\nhttp://wiki.postgresql.org/wiki/Planner_Statistics\n\nI have never had to do it, so dont know much about it. It may or may \nnot help. Just thought it was something you could try.\n\n-Andy\n\n",
"msg_date": "Wed, 03 Feb 2010 12:50:56 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow-ish Query Needs Some Love"
},
{
"msg_contents": "On Mon, Feb 1, 2010 at 7:53 PM, Matt White <[email protected]> wrote:\n> I have a relatively straightforward query that by itself isn't that\n> slow, but we have to run it up to 40 times on one webpage load, so it\n> needs to run much faster than it does. Here it is:\n>\n> SELECT COUNT(*) FROM users, user_groups\n> WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND\n> user_groups.partner_id IN\n> (partner_id_1, partner_id_2);\n>\n> The structure is partners have user groups which have users. In the\n> test data there are over 200,000 user groups and users but only ~3000\n> partners. Anyone have any bright ideas on how to speed this query up?\n>\n> Here's the query plan:\n>\n> Aggregate (cost=12574.53..12574.54 rows=1 width=0) (actual\n> time=2909.298..2909.299 rows=1 loops=1)\n> -> Hash Join (cost=217.79..12566.08 rows=3378 width=0) (actual\n> time=2909.284..2909.284 rows=0 loops=1)\n> Hash Cond: (users.user_group_id = user_groups.id)\n> -> Seq Scan on users (cost=0.00..11026.11 rows=206144\n> width=4) (actual time=0.054..517.811 rows=205350 loops=1)\n> Filter: (NOT deleted)\n> -> Hash (cost=175.97..175.97 rows=3346 width=4) (actual\n> time=655.054..655.054 rows=200002 loops=1)\n> -> Nested Loop (cost=0.27..175.97 rows=3346 width=4)\n> (actual time=1.327..428.406 rows=200002 loops=1)\n> -> HashAggregate (cost=0.27..0.28 rows=1\n> width=4) (actual time=1.259..1.264 rows=2 loops=1)\n> -> Result (cost=0.00..0.26 rows=1\n> width=0) (actual time=1.181..1.240 rows=2 loops=1)\n> -> Index Scan using user_groups_partner_id_idx\n> on user_groups (cost=0.00..133.86 rows=3346 width=8) (actual\n> time=0.049..96.992 rows=100001 loops=2)\n> Index Cond: (user_groups.partner_id =\n> (partner_all_subpartners(3494)))\n>\n>\n> The one obvious thing that everyone will point out is the sequential\n> scan on users, but there actually is an index on users.deleted. When I\n> forced sequential scanning off, it ran slower, so the planner wins\n> again.\n\nYeah, I don't think the sequential scan is hurting you. What is\nbugging me is that it doesn't look like the plan you've posted is for\nthe query you've posted. The plan shows an index condition that\nreferences partner_all_subpartners(3494), which doesn't appear in your\noriginal query, and also has two aggregates in it, where your posted\nquery only has one.\n\n...Robert\n",
"msg_date": "Thu, 4 Feb 2010 20:49:26 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow-ish Query Needs Some Love"
}
] |
[
{
"msg_contents": "hi, first, thanks u for make so good opensource db . \r\n \r\nrecently maybe half an years ago ,i begin to use pg in a big project for insurance project, belong as the project go on ,and \r\ni found some performance problem on concurrency write situation , then i do a research on concurrency write strategy on postgresql ,\r\n \r\ni found a joke ,maybe this joke concurrency strategy is the designer's pround idea, but i think it is a joke , next let me describe the problems:\r\n \r\n* joke 1: insert operation would use a excluse lock on reference row by the foreign key . a big big big performance killer , i think this is a stupid design . \r\n \r\n* joke 2: concurrency update on same row would lead to that other transaction must wait the earlier transaction complete , this would kill the concurrency performance in some long time transaction situation . a stupid design to , \r\n \r\n this joke design's reason is avoid confliction on read committed isolation , such as this situation:\r\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \r\n when concurrency write transaction on read committed isolation , the hits may result wrong . \r\n \r\n this joke design would do seriable write , but i think any stupid developer would not write this code like this stupid sample code , a good code is \r\nuse a exclusive lock to do a seriable write on this same row , but the joker think he should help us to do this , i say ,u should no kill concurrency performance and help i do this fucking stupid sample code , i would use a select .. for update to do this :\r\n \r\nselect 1 from lock_table where lockId='lock1' for update ;\r\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \r\n \r\n \r\n \r\n \r\n \r\n* joke 3: update 100000 rows on a table no any index , it cost 5-8 seconds , this is not acceptable in some bulk update situation . \r\n \r\nat last, sorry about my angry taste. \r\n \r\n \r\n \nhi, first, thanks u for make so good opensource db . \n \nrecently maybe half an years ago ,i begin to use pg in a big project for insurance project, belong as the project go on ,and \ni found some performance problem on concurrency write situation , then i do a research on concurrency write strategy on postgresql ,\n \ni found a joke ,maybe this joke concurrency strategy is the designer's pround idea, but i think it is a joke , next let me describe the problems:\n \n* joke 1: insert operation would use a excluse lock on reference row by the foreign key . a big big big performance killer , i think this is a stupid design . \n \n* joke 2: concurrency update on same row would lead to that other transaction must wait the earlier transaction complete , this would kill the concurrency performance in some long time transaction situation . a stupid design to , \n \n this joke design's reason is avoid confliction on read committed isolation , such as this situation:\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \n when concurrency write transaction on read committed isolation , the hits may result wrong . \n \n this joke design would do seriable write , but i think any stupid developer would not write this code like this stupid sample code , a good code is \nuse a exclusive lock to do a seriable write on this same row , but the joker think he should help us to do this , i say ,u should no kill concurrency performance and help i do this fucking stupid sample code , i would use a select .. 
for update to do this :\n \nselect 1 from lock_table where lockId='lock1' for update ;\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \n \n \n \n \n \n* joke 3: update 100000 rows on a table no any index , it cost 5-8 seconds , this is not acceptable in some bulk update situation . \n \nat last, sorry about my angry taste.",
"msg_date": "Tue, 02 Feb 2010 12:39:59 +0800 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "use pgsql in a big project,\n\tbut i found pg has some big problem on concurrency write operation,\n\tmaybe a joke for myself !"
},
{
"msg_contents": "2010/2/2 <[email protected]>:\n> UPDATE webpages SET hits = hits + 1 WHERE url ='some url ';\n>\n> when concurrency write transaction on read committed isolation , the hits\n> may result wrong .\n\nThat should work fine. All updates for the same url will be serialized.\n\n\nThe rest I'm pretty uncertain about what you're describing but I think\nyou may want to check about whether you need indexes on the other side\nof your foreign key constraints. If you're deleting records that are\nreferred to by your foreign keys or you're updating the primary key\nthen you'll want this index on the table with the foreign key\nconstraint as well as the mandatory one on the referenced table.\n\n-- \ngreg\n",
"msg_date": "Tue, 2 Feb 2010 16:08:53 +0000",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: use pgsql in a big project, but i found pg has some big problem\n\ton concurrency write operation, maybe a joke for myself !"
}
] |
[
{
"msg_contents": "hi, first, thanks u for make so good opensource db . \r\n\r\nrecently maybe half an years ago ,i begin to use pg in a big project for insurance project, belong as the project go on ,and \r\ni found some performance problem on concurrency write situation , then i do a research on concurrency write strategy on postgresql ,\r\n\r\ni found a joke ,maybe this joke concurrency strategy is the designer's pround idea, but i think it is a joke , next let me describe the problems:\r\n\r\n* joke 1: insert operation would use a excluse lock on reference row by the foreign key . a big big big performance killer , i think this is a stupid design . \r\n\r\n* joke 2: concurrency update on same row would lead to that other transaction must wait the earlier transaction complete , this would kill the concurrency performance in some long time transaction situation . a stupid design to , \r\n\r\nthis joke design's reason is avoid confliction on read committed isolation , such as this situation:\r\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \r\nwhen concurrency write transaction on read committed isolation , the hits may result wrong . \r\n\r\nthis joke design would do seriable write , but i think any stupid developer would not write this code like this stupid sample code , a good code is \r\nuse a exclusive lock to do a seriable write on this same row , but the joker think he should help us to do this , i say ,u should no kill concurrency performance and help i do this fucking stupid sample code , i would use a select .. for update to do this :\r\n\r\nselect 1 from lock_table where lockId='lock1' for update ;\r\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \r\n\r\n\r\n\r\n\r\n\r\n* joke 3: update 100000 rows on a table no any index , it cost 5-8 seconds , this is not acceptable in some bulk update situation . \r\n\r\n\nhi, first, thanks u for make so good opensource db . \n\nrecently maybe half an years ago ,i begin to use pg in a big project for insurance project, belong as the project go on ,and \ni found some performance problem on concurrency write situation , then i do a research on concurrency write strategy on postgresql ,\n\ni found a joke ,maybe this joke concurrency strategy is the designer's pround idea, but i think it is a joke , next let me describe the problems:\n\n* joke 1: insert operation would use a excluse lock on reference row by the foreign key . a big big big performance killer , i think this is a stupid design . \n\n* joke 2: concurrency update on same row would lead to that other transaction must wait the earlier transaction complete , this would kill the concurrency performance in some long time transaction situation . a stupid design to , \n\nthis joke design's reason is avoid confliction on read committed isolation , such as this situation:\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \nwhen concurrency write transaction on read committed isolation , the hits may result wrong . \n\nthis joke design would do seriable write , but i think any stupid developer would not write this code like this stupid sample code , a good code is \nuse a exclusive lock to do a seriable write on this same row , but the joker think he should help us to do this , i say ,u should no kill concurrency performance and help i do this fucking stupid sample code , i would use a select .. 
for update to do this :\n\nselect 1 from lock_table where lockId='lock1' for update ;\nUPDATE webpages SET hits = hits + 1 WHERE url ='some url '; \n\n\n\n\n\n* joke 3: update 100000 rows on a table no any index , it cost 5-8 seconds , this is not acceptable in some bulk update situation .",
"msg_date": "Tue, 02 Feb 2010 12:57:01 +0800 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "the jokes for pg concurrency write performance"
},
{
"msg_contents": "On Tue, 2 Feb 2010, [email protected] wrote:\n\n> hi, first, thanks u for make so good opensource db .\n\nfirst you thank the developers, then you insult them. are you asking for \nhelp or just trying to cause problems.\n\n> recently maybe half an years ago ,i begin to use pg in a big project for insurance project, belong as the project go on ,and\n> i found some performance problem on concurrency write situation , then i do a research on concurrency write strategy on postgresql ,\n>\n> i found a joke ,maybe this joke concurrency strategy is the designer's pround idea, but i think it is a joke , next let me describe the problems:\n>\n> * joke 1: insert operation would use a excluse lock on reference row by the foreign key . a big big big performance killer , i think this is a stupid design .\n\nthis I don't know enough to answer\n\n> * joke 2: concurrency update on same row would lead to that other transaction must wait the earlier transaction complete , this would kill the concurrency performance in some long time transaction situation . a stupid design to ,\n>\n> this joke design's reason is avoid confliction on read committed isolation , such as this situation:\n> UPDATE webpages SET hits = hits + 1 WHERE url ='some url ';\n> when concurrency write transaction on read committed isolation , the hits may result wrong .\n>\n> this joke design would do seriable write , but i think any stupid developer would not write this code like this stupid sample code , a good code is\n> use a exclusive lock to do a seriable write on this same row , but the joker think he should help us to do this , i say ,u should no kill concurrency performance and help i do this fucking stupid sample code , i would use a select .. for update to do this :\n>\n> select 1 from lock_table where lockId='lock1' for update ;\n> UPDATE webpages SET hits = hits + 1 WHERE url ='some url ';\n>\n\nIf you have one transaction start modifying a row, and then have another \none start, how do you not have one wait for the other? Remember that any \ntransaction can end up running for a long time and may revert at any time.\n\nWhy would you want to lock the entire table for an update as simple as you \ndescribe?\n\n> * joke 3: update 100000 rows on a table no any index , it cost 5-8 seconds , this is not acceptable in some bulk update situation .\n\nThis one is easy, you did 100000 inserts as seperate transactions, if you \ndo them all as one transaction (or better still as a copy) they would \ncomplete much faster.\n\nYou seem to be assuming incopatence on the part of the developers whenever \nyou run into a problem. If you want them to help you, I would suggest that \nyou assume that they know what they are doing (after all, if they didn't \nyou wouldn't want to use their code for anything important anyway), and \ninstead ask what the right way is to do what you are trying to do.\n\nDavid Lang\n",
"msg_date": "Mon, 1 Feb 2010 21:08:44 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "On Feb 1, 2010, at 8:57 PM, [email protected] wrote:\n\n> i found a joke ,maybe this joke concurrency strategy is the \n> designer's pround idea, but i think it is a joke , next let me \n> describe the problems:\n>\n I would suggest that the behavior that you dislike so much is really \nnot idea of the postgresql developers as much as it is of Prof. Codd \nand the ANSI-SQL committee. I wonder if a creditable relational DBMS \nexists that doesn't behave in exactly the way you've described?\n\n\n> UPDATE webpages SET hits = hits + 1 WHERE url ='some url ';\n>\n> i say ,u should no kill concurrency performance\n>\n\nOne alternative design would be to log the timestamp of your web page \nhits rather than update a hits count field.\n\nOnce you have this design, if the table becomes volumous with \nhistorical logs you have the choice the use horizontal table \npartitioning or you can roll up all of the historical logs into an \naggregating materialized view(table).\n\nRegarding all of the jokes you mentioned, I found them all to be very \nfunny indeed. :)\n\nRegards,\nRichard\n",
"msg_date": "Mon, 1 Feb 2010 21:54:07 -0800",
"msg_from": "Richard Broersma <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "2010/2/1 <[email protected]>:\n> hi, first, thanks u for make so good opensource db .\n>\n> recently maybe half an years ago ,i begin to use pg in a big project for\n> insurance project, belong as the project go on ,and\n>\n> i found some performance problem on concurrency write situation , then i do\n> a research on concurrency write strategy on postgresql ,\n>\n> i found a joke ,maybe this joke concurrency strategy is the designer's\n> pround idea, but i think it is a joke , next let me describe the problems:\n\nPlease try not to insult the people you're asking for help. Maybe a\nlittle less inflamatory language. Something like \"It seems that there\nare some issues with concurrency\" would work wonders. It's amazing\nhow much better a response you can get wihtout insulting everybody on\nthe list, eh?\n\nLet's rewrite this assertion:\n> * joke 1: insert operation would use a excluse lock on reference row by the\n> foreign key . a big big big performance killer , i think this is a stupid\n> design .\n\n\"problem #1: insert operation would use a excluse lock on reference row by the\nforeign key . a big big big performance killer. \"\n\nThen post an example of how it affects your performance. Did you go\nto the page that was pointed out to you in a previous post on how to\npost effectively about pg problems and get a useful answer? If not,\nplease do so, and re-post your questions etc without all the insults\nand hand waving.\n",
"msg_date": "Tue, 2 Feb 2010 09:12:09 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "Scott Marlowe escribi�:\n> 2010/2/1 <[email protected]>:\n\n> Let's rewrite this assertion:\n> > * joke 1: insert operation would use a excluse lock on reference row by the\n> > foreign key . a big big big performance killer , i think this is a stupid\n> > design .\n> \n> \"problem #1: insert operation would use a excluse lock on reference row by the\n> foreign key . a big big big performance killer. \"\n\nYeah, if it had been written this way I could have told him that this\nis not the case since 8.1, but since he didn't, I simply skipped his\nemails.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 2 Feb 2010 13:20:46 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "On Tue, Feb 2, 2010 at 9:20 AM, Alvaro Herrera\n<[email protected]> wrote:\n> Scott Marlowe escribió:\n>> 2010/2/1 <[email protected]>:\n>\n>> Let's rewrite this assertion:\n>> > * joke 1: insert operation would use a excluse lock on reference row by the\n>> > foreign key . a big big big performance killer , i think this is a stupid\n>> > design .\n>>\n>> \"problem #1: insert operation would use a excluse lock on reference row by the\n>> foreign key . a big big big performance killer. \"\n>\n> Yeah, if it had been written this way I could have told him that this\n> is not the case since 8.1, but since he didn't, I simply skipped his\n> emails.\n\nI wonder if having paid technical support to abuse leads to people\nthinking they can treat other people like crap and get the answer they\nwant anyway... Well, we'll see if the OP can write a non-flame-filled\ninquiry on their performance issues or not.\n",
"msg_date": "Tue, 2 Feb 2010 09:27:26 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "Scott Marlowe wrote:\n> I wonder if having paid technical support to abuse leads to people\n> thinking they can treat other people like crap and get the answer they\n> want anyway...\n\nYou have technical support somewhere you get to abuse? For me it always \nseems to be the other way around...\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 03 Feb 2010 02:35:52 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
},
{
"msg_contents": "2010/2/1 <[email protected]>:\n> * joke 1: insert operation would use a excluse lock on reference row by the\n> foreign key . a big big big performance killer , i think this is a stupid\n> design .\n>\n> * joke 2: concurrency update on same row would lead to that other\n> transaction must wait the earlier transaction complete , this would kill the\n> concurrency performance in some long time transaction situation . a stupid\n> design to ,\n\nI hear that MySQL can work wonders in performance by bypassing the\nchecks you're concerned about...don't count on the data being\nconsistent, but by golly it'll get to the client FAAAAAAAST...\n",
"msg_date": "Wed, 3 Feb 2010 02:31:30 -0600",
"msg_from": "J Sisson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: the jokes for pg concurrency write performance"
}
] |
[
{
"msg_contents": "wrote:\n> hi, first, thanks u for make so good opensource db .\n \nNot a bad start.\n \n> [insults and hand-waving]\n \nNot a good way to continue.\n \nIf there's some particular performance problem you would like to try\nto solve, please read these pages, and try again:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n \nIt would also help if you posted something where the plain-text\nformat was a bit more readable.\n \n-Kevin\n\n",
"msg_date": "Mon, 01 Feb 2010 23:06:55 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: use pgsql in a big project, but i found pg has\n\tsome big problem on concurrency write operation, maybe a joke\n\tfor myself !"
}
] |
[
{
"msg_contents": " I want say to the developer of pg, Thanks very much , you make so great project。 \r\n \r\nI think i am not a rough guy , Please forgive me. \n I want say to the developer of pg, Thanks very much , you make so great project。 \n \nI think i am not a rough guy , Please forgive me.",
"msg_date": "Tue, 02 Feb 2010 14:55:35 +0800 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Sincere apology for my insults and hand-waving in these mails:the\n\tjokes for pg concurrency write performance"
}
] |
[
{
"msg_contents": " Let's say you have one partitioned table, \"tbl_p\", partitioned according to \nthe PK \"p_pk\". I have made something similar with triggers, basing myself on \nthe manual for making partitioned tables.\nAccording to the manual, optimizer searches the CHECKs of the partitions to \ndetermine which table(s) to use (if applicable).\n\nSo if one has CHECKs of kind \"p_pk = some number\", queries like \"SELECT * \nfrom tbl_p where p_pk = 1\" will only be searched in the appropriate table. \nOne can check this with EXPLAIN. So far so good.\n\nNow, if one takes a subquery for \"1\", the optimizer evaluates it first \n(let's say to \"1\"), but then searches for it (sequentially) in every \npartition, which, for large partitions, can be very time-consuming and goes \nbeyond the point of partitioning.\n\nIs this normal, or am I missing something?\n\nKind regards,\nDavor\n\n\n\n",
"msg_date": "Tue, 2 Feb 2010 10:48:08 +0100",
"msg_from": "\"Davor J.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "queries with subquery constraints on partitioned tables not\n optimized?"
},
{
"msg_contents": "\"Davor J.\" <[email protected]> writes:\n> Now, if one takes a subquery for \"1\", the optimizer evaluates it first \n> (let's say to \"1\"), but then searches for it (sequentially) in every \n> partition, which, for large partitions, can be very time-consuming and goes \n> beyond the point of partitioning.\n\nNo, the optimizer doesn't \"evaluate it first\". Subqueries aren't ever\nassumed to reduce to constants. (If you actually do have a constant\nexpression, why don't you just leave out the word SELECT?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Feb 2010 19:14:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with subquery constraints on partitioned tables not\n\toptimized?"
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n> \"Davor J.\" <[email protected]> writes:\n>> Now, if one takes a subquery for \"1\", the optimizer evaluates it first \n>> (let's say to \"1\"), but then searches for it (sequentially) in every \n>> partition, which, for large partitions, can be very time-consuming and goes \n>> beyond the point of partitioning.\n>\n> No, the optimizer doesn't \"evaluate it first\". Subqueries aren't ever\n> assumed to reduce to constants. (If you actually do have a constant\n> expression, why don't you just leave out the word SELECT?)\n\nIt's easy to experience the same problem with a JOIN you'd want to\nhappen at the partition level that the planner will apply on the Append\nNode.\n\nI'm yet to figure out if 8.4 is smarter about this, meanwhile I'm using\narray tricks to force the push-down.\n\n WHERE ... \n AND service = ANY ((SELECT array_accum(id) FROM services WHERE x=281)\n || (SELECT array_accum(id) FROM services WHERE y=281))\n\nIt happens that I need the array concatenation more than the = ANY\noperator (as compared to IN), so I also have queries using = ANY\n('{}':int[] || (SELECT array_accum(x) ...)) to really force the planner\ninto doing the join in the partitions rather than after the Append has\ntaken place.\n\nRegards,\n-- \ndim\n\nPS: If you're interrested into complete examples, I'll be able to\nprovide for them in private.\n",
"msg_date": "Wed, 03 Feb 2010 11:23:58 +0100",
"msg_from": "Dimitri Fontaine <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with subquery constraints on partitioned tables not\n\toptimized?"
},
{
"msg_contents": "On Tue, Feb 2, 2010 at 7:14 PM, Tom Lane <[email protected]> wrote:\n\n> \"Davor J.\" <[email protected]> writes:\n> > Now, if one takes a subquery for \"1\", the optimizer evaluates it first\n> > (let's say to \"1\"), but then searches for it (sequentially) in every\n> > partition, which, for large partitions, can be very time-consuming and\n> goes\n> > beyond the point of partitioning.\n>\n> No, the optimizer doesn't \"evaluate it first\". Subqueries aren't ever\n> assumed to reduce to constants. (If you actually do have a constant\n> expression, why don't you just leave out the word SELECT?)\n>\n> regards, tom lane\n\n\nIf you don't have a constant expression then you can either explicitly loop\nin the calling code or a function or you could index the key in all the\nsubtables. The index isn't really optimal but it gets the job done.\n\nNik\n\nOn Tue, Feb 2, 2010 at 7:14 PM, Tom Lane <[email protected]> wrote:\n\"Davor J.\" <[email protected]> writes:\n> Now, if one takes a subquery for \"1\", the optimizer evaluates it first\n> (let's say to \"1\"), but then searches for it (sequentially) in every\n> partition, which, for large partitions, can be very time-consuming and goes\n> beyond the point of partitioning.\n\nNo, the optimizer doesn't \"evaluate it first\". Subqueries aren't ever\nassumed to reduce to constants. (If you actually do have a constant\nexpression, why don't you just leave out the word SELECT?)\n\n regards, tom laneIf you don't have a constant expression then you can either explicitly loop in the calling code or a function or you could index the key in all the subtables. The index isn't really optimal but it gets the job done.\nNik",
"msg_date": "Wed, 3 Feb 2010 10:54:13 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: queries with subquery constraints on partitioned tables\n\tnot optimized?"
}
] |
[
{
"msg_contents": "2010/2/1 <[email protected]>:\n\n> yet, this code is just for fun , why ? because i found this code in one\n> article:PostgreSQL Concurrency Issues .\n\nI'm not familiar with the \"code\" that you are referring to. Do you\nhave a hyper-link to it?\n\n\n> Tom Lane\n> Red Hat Database Group\n> Red Hat, Inc.\n\nAlso, I'm not sure why you mentioned Tom's name here. Does he have\nanything to do with the \"code\" that you've previously mentioned?\n\n> so many design should be better than this , But now we should talk about\n> this ,I just want talk about confick update on concurrency write\n> transaction.\n\nRegarding the update conflict, what aspect of it did you want to talk\nabout? Also, so that others (having much more experience that I do)\ncan participate in this discussion please reply-all to the email.\nThis will ensure that the [email protected] mailing is\nincluded.\n\n\n-- \nRegards,\nRichard Broersma Jr.\n\nVisit the Los Angeles PostgreSQL Users Group (LAPUG)\nhttp://pugs.postgresql.org/lapug\n",
"msg_date": "Tue, 2 Feb 2010 07:43:52 -0800",
"msg_from": "Richard Broersma <[email protected]>",
"msg_from_op": true,
"msg_subject": "\n =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_=5BPERFORM=5D_the_jokes_for_pg_concurre?=\n\t=?UTF-8?Q?ncy_write_performance?="
},
{
"msg_contents": "Richard Broersma <[email protected]> writes:\n> 2010/2/1 <[email protected]>:\n>> yet, this code is just for fun , why ? because i found this code in one\n>> article:PostgreSQL Concurrency Issues .\n\n> I'm not familiar with the \"code\" that you are referring to. Do you\n> have a hyper-link to it?\n\n>> Tom Lane\n>> Red Hat Database Group\n>> Red Hat, Inc.\n\n> Also, I'm not sure why you mentioned Tom's name here. Does he have\n> anything to do with the \"code\" that you've previously mentioned?\n\nHmmm ... I gave a talk by that name at OSCON.\n\nIn 2002.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Feb 2010 14:58:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re:\n =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_=5BPERFORM=5D_the_jokes_for_pg_concurre?=\n\t=?UTF-8?Q?ncy_write_performance?="
}
] |
[
{
"msg_contents": "Hi,\n I am running a bunch of queries within a function, creating some temp tables and populating them. When the data exceeds say, 100k the queries start getting really slow and timeout (30 min). when these are run outside of a transaction(in auto commit mode), they run in a few seconds. Any ideas on what may be going on and any postgresql.conf parameters etc that might help?\nThanks\n\n\n\n\n\n\n\n\n\n\nHi,\n I am running a bunch of queries within a function, creating\nsome temp tables and populating them. When the data exceeds say, 100k the queries\nstart getting really slow and timeout (30 min). when these are run outside of a\ntransaction(in auto commit mode), they run in a few seconds. Any ideas on what\nmay be going on and any postgresql.conf parameters etc that might help?\nThanks",
"msg_date": "Tue, 2 Feb 2010 10:17:15 -0800",
"msg_from": "Mridula Mahadevan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries within a function"
},
{
"msg_contents": "Hello\n\nlook on http://blog.endpoint.com/2008/12/why-is-my-function-slow.html\n\nRegards\nPavel Stehule\n\n2010/2/2 Mridula Mahadevan <[email protected]>:\n> Hi,\n>\n> I am running a bunch of queries within a function, creating some temp\n> tables and populating them. When the data exceeds say, 100k the queries\n> start getting really slow and timeout (30 min). when these are run outside\n> of a transaction(in auto commit mode), they run in a few seconds. Any ideas\n> on what may be going on and any postgresql.conf parameters etc that might\n> help?\n>\n> Thanks\n",
"msg_date": "Tue, 2 Feb 2010 20:07:10 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries within a function"
},
{
"msg_contents": "Mridula Mahadevan <[email protected]> writes:\n> I am running a bunch of queries within a function, creating some temp tables and populating them. When the data exceeds say, 100k the queries start getting really slow and timeout (30 min). when these are run outside of a transaction(in auto commit mode), they run in a few seconds. Any ideas on what may be going on and any postgresql.conf parameters etc that might help?\n\nI'll bet the function is caching query plans that stop being appropriate\nonce the table grows in size. You might have to resort to using\nEXECUTE, although if you're on 8.4 DISCARD PLANS ought to help too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Feb 2010 14:27:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries within a function "
},
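A hedged sketch of the EXECUTE approach suggested above (function, table and column names are made up for illustration): statements that touch the temp table are issued through EXECUTE, so they are planned at call time with whatever statistics exist then, instead of reusing a plan cached while the table was still empty.

    CREATE OR REPLACE FUNCTION refresh_summary() RETURNS void AS $$
    BEGIN
        CREATE TEMP TABLE tmp_data ON COMMIT DROP AS
            SELECT * FROM source_table;
        -- Plan-sensitive work on the temp table goes through EXECUTE.
        EXECUTE 'ANALYZE tmp_data';
        EXECUTE 'INSERT INTO summary
                     SELECT col, count(*) FROM tmp_data GROUP BY col';
    END;
    $$ LANGUAGE plpgsql;

On 8.4, DISCARD PLANS issued between calls is another way to drop the cached plans, as noted above.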
{
"msg_contents": "Mridula Mahadevan wrote:\n>\n> Hi,\n>\n> I am running a bunch of queries within a function, creating some temp \n> tables and populating them. When the data exceeds say, 100k the \n> queries start getting really slow and timeout (30 min). when these are \n> run outside of a transaction(in auto commit mode), they run in a few \n> seconds. Any ideas on what may be going on and any postgresql.conf \n> parameters etc that might help?\n>\n> Thanks\n>\nDo you put here the result of the explain command of the query?\nDo you put here the postgresql.conf parameters that you have in your box?\n\nRegards\n\n\n-- \n--------------------------------------------------------------------------------\n\"Para ser realmente grande, hay que estar con la gente, no por encima de ella.\"\n Montesquieu\nIng. Marcos Lu�s Ort�z Valmaseda\nPostgreSQL System DBA && DWH -- BI Apprentice\n\nCentro de Tecnolog�as de Almacenamiento y An�lisis de Datos (CENTALAD)\nUniversidad de las Ciencias Inform�ticas\n\nLinux User # 418229\n\n-- PostgreSQL --\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n-- DWH + BI --\nThe Data WareHousing Institute\nhttp://www.tdwi.org\nhttp://www.tdwi.org/cbip\n---------------------------------------------------------------------------------",
"msg_date": "Tue, 02 Feb 2010 14:59:07 -0500",
"msg_from": "=?ISO-8859-1?Q?=22Ing=2E_Marcos_Orti=ADz_Valmaseda=22?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries within a function"
},
{
"msg_contents": "Thanks Tom and Marcos. \n\nMore details - \n\nI have 2 temp tables \nTable a - \nCreate table a (id int primary key,\n\t\t\tpromoted int,\n\t\t\tunq_str varchar )\nTable b - \nCREATE TABLE b (\n\t\t\t\tid int primary key,\n\t\t\t\tdup_id int\n\t\t\t ) TABLESPACE tblspc_tmp;\n\nAnd this is my insert statement\n\nINSERT INTO b SELECT a2.id , (SELECT MIN(a1.id) FROM a a1\n\t\t\tWHERE a1.unq_str=a2.unq_str AND a1.promoted = 1) as dup_id\n\t\tFROM a a2\n\t\tWHERE a2.promoted = 0\n\n\nExplain - \n\n\"Seq Scan on a a2 (cost=0.00..517148464.79 rows=126735 width=12)\"\n\" Filter: (promoted = 0)\"\n\" SubPlan\"\n\" -> Aggregate (cost=4080.51..4080.52 rows=1 width=4)\"\n\" -> Seq Scan on a a1 (cost=0.00..4080.51 rows=1 width=4)\"\n\" Filter: (((unq_str)::text = ($0)::text) AND (promoted = 1))\"\n\n\n\nPostgresql.conf options - \n\n\n# RESOURCE USAGE (except WAL)\n#------------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 128MB # min 128kB or max_connections*16kB\n# (change requires restart) (Changed from 24 MB to 128 MB)\n#temp_buffers = 128MB # min 800kB\nmax_prepared_transactions = 10 # can be 0 or more (changed from 5 to 20)\n # (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 5MB # min 64kB (Changed from 1MB to 5 MB)\n#maintenance_work_mem = 16MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 153600 # min max_fsm_relations*16, 6 bytes each\n # (change requires restart)\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n # (change requires restart)\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n # (change requires restart)\n#shared_preload_libraries = '' # (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms # 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/ro\nUnd\n#fsync = on # turns forced synchronization on or off\n#synchronous_commit = on # immediate fsync at commit\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n#wal_buffers = 64kB # min 32kB\n # (change requires restart)\n#wal_writer_delay = 200ms # 1-10000 milliseconds\n\ncommit_delay = 5000 # range 0-100000, in microseconds (changed from\n0.5 to 5000)\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 10 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 5min # range 30s-1h\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n#checkpoint_warning = 30s # 0 is off\n\n# - Archiving -\n\n#archive_mode = off # allows archiving to be done\n # (change requires restart)\n#archive_command = '' # command to use to archive a logfile segment\n#archive_timeout = 0 # force a logfile segment switch after this\n # time; 0 is off\n\n\n#------------------------------------------------------------------------------\n# 
QUERY TUNING\n#------------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\nenable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\neffective_cache_size = 512MB #(Changed from 128 MB to 256 MB)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100 # range 1-1000 (changed from 10 to 100)\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOIN clauses\n\n\n\n-----Original Message-----\nFrom: \"Ing. Marcos Ortiz Valmaseda\" [mailto:[email protected]] \nSent: Tuesday, February 02, 2010 11:59 AM\nTo: Mridula Mahadevan\nCc: [email protected]\nSubject: Re: [PERFORM] Queries within a function\n\nMridula Mahadevan wrote:\n>\n> Hi,\n>\n> I am running a bunch of queries within a function, creating some temp \n> tables and populating them. When the data exceeds say, 100k the \n> queries start getting really slow and timeout (30 min). when these are \n> run outside of a transaction(in auto commit mode), they run in a few \n> seconds. Any ideas on what may be going on and any postgresql.conf \n> parameters etc that might help?\n>\n> Thanks\n>\nDo you put here the result of the explain command of the query?\nDo you put here the postgresql.conf parameters that you have in your box?\n\nRegards\n\n\n--\n--------------------------------------------------------------------------------\n\"Para ser realmente grande, hay que estar con la gente, no por encima de ella.\"\n Montesquieu Ing. Marcos Luís Ortíz Valmaseda PostgreSQL System DBA && DWH -- BI Apprentice\n\nCentro de Tecnologías de Almacenamiento y Análisis de Datos (CENTALAD) Universidad de las Ciencias Informáticas\n\nLinux User # 418229\n\n-- PostgreSQL --\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n-- DWH + BI --\nThe Data WareHousing Institute\nhttp://www.tdwi.org\nhttp://www.tdwi.org/cbip\n---------------------------------------------------------------------------------\n\n",
"msg_date": "Tue, 2 Feb 2010 12:36:37 -0800",
"msg_from": "Mridula Mahadevan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries within a function"
},
{
"msg_contents": "Tom,\n I cannot run execute because all these are temp tables with drop on auto commit within a function. This should have something to do with running it in a transaction, when I run them in autocommit mode (without a drop on autocommit) the queries return in a few seconds. \n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, February 02, 2010 11:28 AM\nTo: Mridula Mahadevan\nCc: [email protected]\nSubject: Re: [PERFORM] Queries within a function \n\nMridula Mahadevan <[email protected]> writes:\n> I am running a bunch of queries within a function, creating some temp tables and populating them. When the data exceeds say, 100k the queries start getting really slow and timeout (30 min). when these are run outside of a transaction(in auto commit mode), they run in a few seconds. Any ideas on what may be going on and any postgresql.conf parameters etc that might help?\n\nI'll bet the function is caching query plans that stop being appropriate\nonce the table grows in size. You might have to resort to using\nEXECUTE, although if you're on 8.4 DISCARD PLANS ought to help too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 2 Feb 2010 12:53:56 -0800",
"msg_from": "Mridula Mahadevan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries within a function "
},
{
"msg_contents": "Hi,\nTry using dynamic sql. Query will be faster in a function\nregards\nRam\n----- Original Message ----- \nFrom: \"Mridula Mahadevan\" <[email protected]>\nTo: \"Tom Lane\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, February 03, 2010 2:23 AM\nSubject: Re: [PERFORM] Queries within a function\n\n\nTom,\n I cannot run execute because all these are temp tables with drop on auto \ncommit within a function. This should have something to do with running it \nin a transaction, when I run them in autocommit mode (without a drop on \nautocommit) the queries return in a few seconds.\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Tuesday, February 02, 2010 11:28 AM\nTo: Mridula Mahadevan\nCc: [email protected]\nSubject: Re: [PERFORM] Queries within a function\n\nMridula Mahadevan <[email protected]> writes:\n> I am running a bunch of queries within a function, creating some temp \n> tables and populating them. When the data exceeds say, 100k the queries \n> start getting really slow and timeout (30 min). when these are run outside \n> of a transaction(in auto commit mode), they run in a few seconds. Any \n> ideas on what may be going on and any postgresql.conf parameters etc that \n> might help?\n\nI'll bet the function is caching query plans that stop being appropriate\nonce the table grows in size. You might have to resort to using\nEXECUTE, although if you're on 8.4 DISCARD PLANS ought to help too.\n\nregards, tom lane\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n",
"msg_date": "Wed, 3 Feb 2010 09:54:47 +0530",
"msg_from": "\"ramasubramanian\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries within a function "
},
{
"msg_contents": "2010/2/2 Mridula Mahadevan <[email protected]>\n\n> Hi,\n>\n> I am running a bunch of queries within a function, creating some temp\n> tables and populating them. When the data exceeds say, 100k the queries\n> start getting really slow and timeout (30 min). when these are run outside\n> of a transaction(in auto commit mode), they run in a few seconds. Any ideas\n> on what may be going on and any postgresql.conf parameters etc that might\n> help?\n>\n> Thanks\n>\nHave you tried to analyze temp tables after you've populated them? Because\nAFAIK it won't do it automatically for tables created, filled and then used\n in same transaction.\n\n2010/2/2 Mridula Mahadevan <[email protected]>\n\n\nHi,\n I am running a bunch of queries within a function, creating\nsome temp tables and populating them. When the data exceeds say, 100k the queries\nstart getting really slow and timeout (30 min). when these are run outside of a\ntransaction(in auto commit mode), they run in a few seconds. Any ideas on what\nmay be going on and any postgresql.conf parameters etc that might help?\nThanks\n\n\nHave you tried to analyze temp tables after you've populated them? Because AFAIK it won't do it automatically for tables created, filled and then used in same transaction.",
"msg_date": "Wed, 3 Feb 2010 18:10:46 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries within a function"
},
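A small sketch of the suggestion above, reusing the temp table definition posted earlier in the thread (the generate_series population is only illustrative):

    BEGIN;
    CREATE TEMP TABLE a (id int PRIMARY KEY, promoted int, unq_str varchar)
        ON COMMIT DROP;
    INSERT INTO a
        SELECT i, i % 2, md5(i::text) FROM generate_series(1, 100000) i;

    -- Autoanalyze never sees a table that is created, filled and queried
    -- inside one transaction, so collect statistics explicitly.
    ANALYZE a;

    -- Statements issued after the ANALYZE are planned with real row counts.
    SELECT count(*) FROM a WHERE promoted = 1;
    COMMIT;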
{
"msg_contents": "Thank you all, You were right on the analyze. Insert statement with an aggregated subquery had a problem on an empty table.\n\nI had to change the queries to do a simple insert then analyze on the table followed by an update with an aggregated sub query. That goes thru very fast.\n\n-mridula\n\nFrom: Віталій Тимчишин [mailto:[email protected]]\nSent: Wednesday, February 03, 2010 8:11 AM\nTo: Mridula Mahadevan\nCc: [email protected]\nSubject: Re: [PERFORM] Queries within a function\n\n\n2010/2/2 Mridula Mahadevan <[email protected]<mailto:[email protected]>>\nHi,\n I am running a bunch of queries within a function, creating some temp tables and populating them. When the data exceeds say, 100k the queries start getting really slow and timeout (30 min). when these are run outside of a transaction(in auto commit mode), they run in a few seconds. Any ideas on what may be going on and any postgresql.conf parameters etc that might help?\nThanks\nHave you tried to analyze temp tables after you've populated them? Because AFAIK it won't do it automatically for tables created, filled and then used in same transaction.\n\n\n\n\n\n\n\nThank you all, You were right on the analyze. Insert statement\nwith an aggregated subquery had a problem on an empty table.\n \nI had to change the queries to do a simple insert then analyze\non the table followed by an update with an aggregated sub query. That\ngoes thru very fast.\n \n-mridula\n \n\nFrom: Віталій Тимчишин\n[mailto:[email protected]] \nSent: Wednesday, February 03, 2010 8:11 AM\nTo: Mridula Mahadevan\nCc: [email protected]\nSubject: Re: [PERFORM] Queries within a function\n\n \n \n\n2010/2/2 Mridula Mahadevan <[email protected]>\n\n\nHi,\n I\nam running a bunch of queries within a function, creating some temp tables and\npopulating them. When the data exceeds say, 100k the queries start getting\nreally slow and timeout (30 min). when these are run outside of a\ntransaction(in auto commit mode), they run in a few seconds. Any ideas on what\nmay be going on and any postgresql.conf parameters etc that might help?\nThanks\n\n\n\nHave you tried to analyze temp tables after you've populated\nthem? Because AFAIK it won't do it automatically for tables created, filled and\nthen used in same transaction.",
"msg_date": "Wed, 3 Feb 2010 12:06:12 -0800",
"msg_from": "Mridula Mahadevan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries within a function"
}
] |
[
{
"msg_contents": "pg 8.3.9, Debian Etch, 8gb ram, quadcore xeon, megaraid (more details at end)\n~240 active databases, 800+ db connections via tcp.\n\nEverything goes along fairly well, load average from 0.5 to 4.0. Disk\nIO is writing about 12-20 MB every 4 or 5 seconds. Cache memory about\n4gb. Then under load, we see swapping and then context switch storm and\nthen oom-killer.\n\nI'm hoping to find some ideas for spreading out the load of bgwriter\nand/or autovacuum somehow or possibly reconfiguring memory to help\nalleviate the problem, or at least to avoid crashing.\n\n(Hardware/software/configuration specs are below the following dstat output).\n\nI've been able to recreate the context switch storm (without the crash)\nby running 4 simultaneous 'vacuum analyze' tasks during a pg_dump.\nDuring these times, htop shows all 8 cpu going red bar 100% for a second\nor two or three, and this is when I see the context switch storm.\nThe following stat data however is from a production workload crash.\n\nDuring the dstat output below, postgresql was protected by oom_adj -17.\nvm_overcommit_memory set to 2, but at this time vm_overcommit_ratio was\nstill at 50 (has since been changed to 90, should this be 100?). The\nmemory usage was fairly constant 4056M 91M 3906M, until the end and after\nheavier swapping it went to 4681M 984k 3305M (used/buf/cache).\n\ndstat output under light to normal load:\n---procs--- ---paging-- -dsk/total- ---system-- ----total-cpu-usage----\nrun blk new|__in_ _out_|_read _writ|_int_ _csw_|usr sys idl wai hiq siq\n 0 2 5| 0 0 | 608k 884k| 756 801 | 11 2 83 4 0 0\n 1 0 4| 0 0 | 360k 1636k|1062 1147 | 13 1 83 2 0 0\n 2 2 5| 0 0 | 664k 1404k| 880 998 | 13 2 82 4 0 0\n 0 4 4| 0 0 |2700k 6724k|1004 909 | 10 1 72 16 0 0\n 0 2 4| 0 0 | 13M 14M|1490 1496 | 13 2 72 12 0 0\n 1 1 4| 0 0 | 21M 1076k|1472 1413 | 12 2 74 11 0 0\n 0 3 5| 0 0 | 15M 1712k|1211 1192 | 10 1 76 12 0 0\n 1 0 4| 0 0 |7384k 1124k|1277 1403 | 15 2 75 9 0 0\n 0 7 4| 0 0 |8864k 9528k|1431 1270 | 11 2 63 24 0 0\n 1 3 4| 0 0 |2520k 15M|2225 3410 | 13 2 66 19 0 0\n 2 1 5| 0 0 |4388k 1720k|1823 2246 | 14 2 70 13 0 0\n 2 0 4| 0 0 |2804k 1276k|1284 1378 | 12 2 80 6 0 0\n 0 0 4| 0 0 | 224k 884k| 825 900 | 12 2 86 1 0 0\n\nunder heavy load, just before crash, swap use has been increasing for\nseveral seconds or minutes:\n---procs--- ---paging-- -dsk/total- ---system-- ----total-cpu-usage----\nrun blk new|__in_ _out_|_read _writ|_int_ _csw_|usr sys idl wai hiq siq\n 2 22 9| 124k 28k| 12M 1360k|1831 2536 | 7 4 46 44 0 0\n 4 7 8| 156k 80k| 14M 348k|1742 2625 | 5 3 53 38 0 0\n 1 14 7| 60k 232k|9028k 24M|1278 1642 | 4 3 50 42 0 0\n 0 24 7| 564k 0 | 15M 5832k|1640 2199 | 7 2 41 50 0 0\n 1 26 7| 172k 0 | 13M 1052k|1433 2121 | 5 3 54 37 0 0\n 0 15 6| 36k 0 |6912k 35M|1295 3486 | 2 3 58 37 0 0\n 3 30 2| 0 0 |9724k 13M|1373 2378 | 4 3 48 45 0 0\n 5 20 4|4096B 0 | 10M 26M|2945 87k | 0 1 44 55 0 0\n 1 29 8| 0 0 | 19M 8192B| 840 19k | 0 0 12 87 0 0\n 4 33 3| 0 0 |4096B 0 | 14 39 | 17 17 0 67 0 0\n 3 31 0| 64k 0 | 116k 0 | 580 8418 | 0 0 0 100 0 0\n 0 36 0| 0 0 |8192B 0 | 533 12k | 0 0 9 91 0 0\n 2 32 1| 0 0 | 0 0 | 519 12k | 0 0 11 89 0 0\n 2 34 1| 0 0 | 16k 0 | 28 94 | 9 0 0 91 0 0\n 1 32 0| 0 0 | 20k 0 | 467 2295 | 1 0 13 87 0 0\n 2 32 0| 0 0 | 0 0 | 811 21k | 0 0 12 87 0 0\n 4 35 3| 0 0 | 44k 0 | 582 11k | 0 0 0 100 0 0\n 3 37 0| 0 0 | 0 0 | 16 67 | 0 9 0 91 0 0\n 2 35 0| 0 0 | 0 0 | 519 8205 | 0 2 21 77 0 0\n 0 37 0| 0 0 | 0 0 | 11 60 | 0 4 12 85 0 0\n 1 35 1| 0 0 | 20k 0 | 334 2499 | 0 0 23 77 0 0\n 0 36 1| 0 0 | 80k 0 | 305 
8144 | 0 1 23 76 0 0\n 0 35 3| 0 0 | 952k 0 | 541 2537 | 0 0 16 84 0 0\n 2 35 2| 0 0 | 40k 0 | 285 8162 | 0 0 24 75 0 0\n 2 35 0| 100k 0 | 108k 0 | 550 9595 | 0 0 37 63 0 0\n 0 40 3| 0 0 | 16k 0 |1092 26k | 0 0 26 74 0 0\n 4 37 3| 0 0 | 96k 0 | 790 12k | 0 0 34 66 0 0\n 2 39 2| 0 0 | 24k 0 | 77 116 | 8 8 0 83 0 0\n 2 37 1| 0 0 | 0 0 | 354 2457 | 0 0 29 71 0 0\n 2 37 0|4096B 0 | 28k 0 |1909 57k | 0 0 27 73 0 0\n 0 39 1| 0 0 | 32k 0 |1060 25k | 0 0 12 88 0 0\n---procs--- ---paging-- -dsk/total- ---system-- ----total-cpu-usage----\nrun blk new|__in_ _out_|_read _writ|_int_ _csw_|usr sys idl wai hiq siq\n\nSPECS:\n\nPostgreSQL 8.3.9 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115 (prerelease) (Debian 4.1.1-21)\nInstalled from the debian etch-backports package.\n\nLinux 2.6.18-6-686-bigmem #1 SMP Thu Nov 5 17:30:05 UTC 2009 i686\nGNU/Linux (Debian Etch)\n\n8 MB RAM\n4 Quad Core Intel(R) Xeon(R) CPU E5440 @ 2.83GHz stepping 06\nL1 I cache: 32K, L1 D cache: 32K, L2 cache: 6144K\n\nLSI Logic SAS based MegaRAID driver (batter backed/write cache enabled)\nDell PERC 6/i\n# 8 SEAGATE Model: ST973451SS Rev: SM04 (72 GB) ANSI SCSI revision: 05\n\nRAID Configuration:\nsda RAID1 2 disks (with pg_xlog wal files on it's own partition)\nsdb RAID10 6 disks (pg base dir only)\n\nPOSTGRES:\n\n261 databases\n238 active databases (w/connection processes)\n863 connections to those 238 databases\n\npostgresql.conf:\nmax_connections = 1100\nshared_buffers = 800MB\nmax_prepared_transactions = 0\nwork_mem = 32MB\nmaintenance_work_mem = 64MB\nmax_fsm_pages = 3300000\nmax_fsm_relations = 10000\nvacuum_cost_delay = 50ms\nbgwriter_delay = 150ms\nbgwriter_lru_maxpages = 250\nbgwriter_lru_multiplier = 2.5\nwal_buffers = 8MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 5min\ncheckpoint_completion_target = 0.9\neffective_cache_size = 5000MB\ndefault_statistics_target = 100\nlog_min_duration_statement = 1000\nlog_checkpoints = on\nlog_connections = on\nlog_disconnections = on\nlog_temp_files = 0\ntrack_counts = on\nautovacuum = on\nlog_autovacuum_min_duration = 0\n\nThanks for any ideas!\nRob\n\n\n",
"msg_date": "Tue, 02 Feb 2010 13:11:27 -0600",
"msg_from": "Rob <[email protected]>",
"msg_from_op": true,
"msg_subject": "System overload / context switching / oom, 8.3"
},
{
"msg_contents": "On Tue, Feb 2, 2010 at 12:11 PM, Rob <[email protected]> wrote:\n> pg 8.3.9, Debian Etch, 8gb ram, quadcore xeon, megaraid (more details at end)\n> ~240 active databases, 800+ db connections via tcp.\n>\n> Everything goes along fairly well, load average from 0.5 to 4.0. Disk\n> IO is writing about 12-20 MB every 4 or 5 seconds. Cache memory about\n> 4gb. Then under load, we see swapping and then context switch storm and\n> then oom-killer.\n\nSNIP\n\n> postgresql.conf:\n> max_connections = 1100\n> work_mem = 32MB\n\n32MB * 1000 = 32,000MB... And that's if you max out connections and\nthey each only do 1 sort. If you're running many queries that run > 1\nsorts it'll happen a lot sooner.\n\nEither drop max connections or work_mem is what I'd do to start with.\nIf you have one or two reporting apps that need it higher, then set it\nhigher for just those connections / users.\n",
"msg_date": "Tue, 2 Feb 2010 13:25:28 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
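One way to do the per-user override mentioned above, sketched with a hypothetical role name; the global work_mem stays small and only the reporting sessions get the larger value.

    -- postgresql.conf keeps a modest default, e.g. work_mem = 4MB.

    -- New sessions of this role pick up the larger setting automatically.
    ALTER ROLE reporting_user SET work_mem = '128MB';

    -- Or raise it just for the current session before one big query.
    SET work_mem = '128MB';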
{
"msg_contents": "Rob <[email protected]> wrote:\n \n> 8gb ram\n> ~240 active databases\n> 800+ db connections via tcp.\n \n8 GB RAM divided by 800 DB connections is 10 MB per connection. You\nseriously need to find some way to use connection pooling. I'm not\nsure the best way to do that with 240 active databases.\n \n-Kevin\n",
"msg_date": "Tue, 02 Feb 2010 14:29:55 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "On 2/2/2010 1:11 PM, Rob wrote:\n>\n> Linux 2.6.18-6-686-bigmem #1 SMP Thu Nov 5 17:30:05 UTC 2009 i686\n> GNU/Linux (Debian Etch)\n>\n> 8 MB RAM\n> 4 Quad Core Intel(R) Xeon(R) CPU E5440 @ 2.83GHz stepping 06\n> L1 I cache: 32K, L1 D cache: 32K, L2 cache: 6144K\n>\n\nWell _there's_ your problem! Ya need more RAM! hee hee, I know, I \nknow, probably 8 gig, but just had to be done.\n\n-Andy\n",
"msg_date": "Tue, 02 Feb 2010 14:36:20 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "On 2/2/2010 1:11 PM, Rob wrote:\n>\n> postgresql.conf:\n> max_connections = 1100\n> shared_buffers = 800MB\n> max_prepared_transactions = 0\n> work_mem = 32MB\n> maintenance_work_mem = 64MB\n> max_fsm_pages = 3300000\n> max_fsm_relations = 10000\n> vacuum_cost_delay = 50ms\n> bgwriter_delay = 150ms\n> bgwriter_lru_maxpages = 250\n> bgwriter_lru_multiplier = 2.5\n> wal_buffers = 8MB\n> checkpoint_segments = 32\n> checkpoint_timeout = 5min\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 5000MB\n> default_statistics_target = 100\n> log_min_duration_statement = 1000\n> log_checkpoints = on\n> log_connections = on\n> log_disconnections = on\n> log_temp_files = 0\n> track_counts = on\n> autovacuum = on\n> log_autovacuum_min_duration = 0\n\n\nOk, seriously this time.\n\n > work_mem = 32MB\n > maintenance_work_mem = 64MB\n\n\nif you have lots and lots of connections, you might need to cut these down?\n\n > effective_cache_size = 5000MB\n\nI see your running a 32bit, but with bigmem support, but still, one \nprocess is limited to 4gig. You'd make better use of all that ram if \nyou switched to 64bit. And this cache, I think, would be limited to 4gig.\n\nThe oom-killer is kicking in, at some point, so somebody is using too \nmuch ram. There should be messages or logs or something, right? (I've \nnever enabled the oom stuff so dont know much about it). But the log \nmessages might be helpful.\n\nAlso, do you know what the oom max memory usage is set to? You said:\n\"oom_adj -17. vm_overcommit_memory set to 2, but at this time \nvm_overcommit_ratio was still at 50 (has since been changed to 90, \nshould this be 100?)\"\n\nbut I have no idea what that means.\n\n-Andy\n\n",
"msg_date": "Tue, 02 Feb 2010 14:47:50 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Tue, Feb 2, 2010 at 12:11 PM, Rob <[email protected]> wrote:\n> \n>> postgresql.conf:\n>> max_connections = 1100\n>> work_mem = 32MB\n>> \n>\n> 32MB * 1000 = 32,000MB... And that's if you max out connections and\n> they each only do 1 sort. If you're running many queries that run > 1\n> sorts it'll happen a lot sooner.\n>\n> Either drop max connections or work_mem is what I'd do to start with.\n> If you have one or two reporting apps that need it higher, then set it\n> higher for just those connections / users\n\nThanks much. So does dropping work_mem to the default of 1MB sound good?\n\nBy moving databases around we're getting max_connections below 600 or 700.\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Tue, Feb 2, 2010 at 12:11 PM, Rob <[email protected]> wrote:\n \n\npostgresql.conf:\nmax_connections = 1100\nwork_mem = 32MB\n \n\n\n32MB * 1000 = 32,000MB... And that's if you max out connections and\nthey each only do 1 sort. If you're running many queries that run > 1\nsorts it'll happen a lot sooner.\n\nEither drop max connections or work_mem is what I'd do to start with.\nIf you have one or two reporting apps that need it higher, then set it\nhigher for just those connections / users\n\n\nThanks much. So does dropping work_mem to the default of 1MB sound\ngood?\n\nBy moving databases around we're getting max_connections below 600 or\n700.",
"msg_date": "Tue, 02 Feb 2010 15:44:36 -0600",
"msg_from": "Rob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Rob <[email protected]> wrote:\n> \n> \n>> 8gb ram\n>> ~240 active databases\n>> 800+ db connections via tcp.\n>> \n> \n> 8 GB RAM divided by 800 DB connections is 10 MB per connection. You\n> seriously need to find some way to use connection pooling. I'm not\n> sure the best way to do that with 240 active databases.\n> \n\nBy wrangling the applications, We've got the number of connections down\nto 530 and number of active databases down to 186.\n\nThe application's poor connection management exacerbates the problem.\n\nThanks for the idea,\nRob\n\n\n\n\n\n\n\nKevin Grittner wrote:\n\nRob <[email protected]> wrote:\n \n \n\n8gb ram\n~240 active databases\n800+ db connections via tcp.\n \n\n \n8 GB RAM divided by 800 DB connections is 10 MB per connection. You\nseriously need to find some way to use connection pooling. I'm not\nsure the best way to do that with 240 active databases.\n \n\n\nBy wrangling the applications, We've got the number of connections down\nto 530 and number of active databases down to 186.\n\nThe application's poor connection management exacerbates the problem.\n\nThanks for the idea,\nRob",
"msg_date": "Tue, 02 Feb 2010 15:59:45 -0600",
"msg_from": "Rob <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "Andy Colson wrote:\n> The oom-killer is kicking in, at some point, so somebody is using too \n> much ram. There should be messages or logs or something, right? \n> (I've never enabled the oom stuff so dont know much about it). But \n> the log messages might be helpful.\n\nThey probably won't be. The information logged about what the OOM \nkiller decided to kill is rarely sufficient to tell anything interesting \nabout the true cause in a PostgreSQL context--only really helpful if \nyou've got some memory hog process it decided to kill. In this case, \nseems to be a simple situation: way too many connections for the \nwork_mem setting used for a 8GB server to support. I'd take a look at \nthe system using \"top -c\" as well, in good times and bad if possible, \njust to see if any weird memory use is showing up somewhere, perhaps \neven outside the database.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 03 Feb 2010 02:48:19 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "On Tue, 2 Feb 2010, Rob wrote:\n> pg 8.3.9, Debian Etch, 8gb ram, quadcore xeon, megaraid (more details at end)\n> ~240 active databases, 800+ db connections via tcp.\n\n> Linux 2.6.18-6-686-bigmem #1 SMP Thu Nov 5 17:30:05 UTC 2009 i686\n> GNU/Linux (Debian Etch)\n>\n> 8 MB RAM\n> 4 Quad Core Intel(R) Xeon(R) CPU E5440 @ 2.83GHz stepping 06\n\nMy advice?\n\n1. Switch to 64-bit operating system and Postgres. Debian provides that, \nand it works a charm. You have a 64-bit system, so why not use it?\n\n2. Buy more RAM. Think about it - you have 800 individual processes \nrunning on your box, and they will all want their own process space. To be \nhonest, I'm impressed that the current machine works at all. You can get \nan idea of how much RAM you might need by multiplying the number of \nconnections by (work_mem + about 3MB), and add on shared_buffers. So even \nwhen the system is idle you're currently burning 3200MB just sustaining \n800 processes - more if they are actually doing something.\n\n3. Try to reduce the number of connections to the database server.\n\n4. Think about your work_mem. Finding the correct value for you is going \nto be a matter of testing. Smaller values will result in large queries \nrunning slowly, but have the danger of driving the system to swap and OOM.\n\nMatthew\n\n-- \n A good programmer is one who looks both ways before crossing a one-way street.\n Considering the quality and quantity of one-way streets in Cambridge, it\n should be no surprise that there are so many good programmers there.\n",
"msg_date": "Wed, 3 Feb 2010 11:30:53 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "On Tue, Feb 2, 2010 at 3:47 PM, Andy Colson <[email protected]> wrote:\n>> effective_cache_size = 5000MB\n>\n> I see your running a 32bit, but with bigmem support, but still, one process\n> is limited to 4gig. You'd make better use of all that ram if you switched\n> to 64bit. And this cache, I think, would be limited to 4gig.\n\nJust to be clear, effective_cache_size does not allocate any memory of\nany kind, in any way, ever...\n\n...Robert\n",
"msg_date": "Wed, 3 Feb 2010 15:40:56 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "Andy Colson wrote:\n> > work_mem = 32MB\n> > maintenance_work_mem = 64MB\n>\n>\n> if you have lots and lots of connections, you might need to cut these \n> down?\n\ndefinitely, work_mem is the main focus.\n\nIf I understand correctly, th 64MB maintenance_work_mem is per vacuum \ntask, and on this system there are 3 autovacuums. I was wondering if \nwith this many databases, possibly decreasing the maintenance_work_mem \nsignificantly and starting up more autovacuums.\n\nYes, also moving databases to other servers in order to decrease the \nnumber of connections.\n>\n> > effective_cache_size = 5000MB\n>\n> I see your running a 32bit, but with bigmem support, but still, one \n> process is limited to 4gig. You'd make better use of all that ram if \n> you switched to 64bit. And this cache, I think, would be limited to \n> 4gig.\nAll of the cache is being used because the operating system kernel is \nbuilt with the memory extensions to access outside the 32bit range. \nThis is the cache size reported by free(1). However, there may be \nadvantages to switch to 64bit.\n\n>\n> The oom-killer is kicking in, at some point, so somebody is using too \n> much ram. There should be messages or logs or something, right? \n> (I've never enabled the oom stuff so dont know much about it). But \n> the log messages might be helpful.\n>\n> Also, do you know what the oom max memory usage is set to? You said:\n> \"oom_adj -17. vm_overcommit_memory set to 2, but at this time \n> vm_overcommit_ratio was still at 50 (has since been changed to 90, \n> should this be 100?)\"\n\nOh man. I encourage everyone to find out what /proc/<pid>/oom_adj \nmeans. You have to set this to keep the Linux \"oom-killer\" from doing a \nkill -9 on postgres postmaster. On Debian:\n\n echo -17 >> /proc/$(cat /var/run/postgresql/8.3-main.pid)/oom_adj\n\nThis is my experience with oom-killer. After putting -17 into \n/proc/pid/oom_adj, oom-killer seemed to kill one of the database \nconnection processes. Then the postmaster attempted to shut down all \nprocesses because of possible shared memory corruption. The database \nthen went into recovery mode. After stopping the database some of the \nprocesses were stuck and could not be killed. The operating system was \nrebooted and the database returned with no data loss.\n\nMy earlier experience with oom-killer: If you don't have this setting in \noom_adj, then it seems likely (certain?) that oom-killer kills the \npostmaster because of the algorithm oom-killer uses (called badness()) \nwhich adds children process scores to their parent's scores. I don't \nknow if sshd was killed but I don't think anyone could log in to the \nOS. After rebooting there was a segmentation violation when trying to \nstart the postmaster. I don't think that running pg_resetxlog with \ndefaults is a good idea. My colleague who has been investigating the \ncrash believes that we could have probably eliminated at least some of \nthe data loss with more judicious use of pg_resetxlog.\n\nThere was a discussion on the postgres lists about somehow having the \npostgres distribution include the functionality to set oom_adj on \nstartup. To my knowledge, that's not in 8.3 so I wrote a script and \ninit.d script to do this on Debian systems.\n\nAs far as vm.over_commit memory goes, there are three settings and most \nrecommend setting it to 2 for postgres. However, this does not turn off \noom-killer! 
You need to put -17 in /proc/<pid>/oom_adj whether you do \nanything about vm.over_commit memory or not We had vm_overcommit_memory \nset to 2 and oom-killer became active and killed the postmaster.\n\nKind of off-topic, but a Linux kernel parameter that's often not set on \ndatabase servers is elevator=deadline which sets up the io scheduling \nalgorithm. The algorithm can be viewed/set at runtime for example the \ndisk /dev/sdc in /sys/block/sdc/queue/scheduler.\n\nRob\n\n\n\n",
"msg_date": "Wed, 03 Feb 2010 19:27:48 -0600",
"msg_from": "Rob Lemley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
},
{
"msg_contents": "Rob Lemley wrote:\n> here was a discussion on the postgres lists about somehow having the \n> postgres distribution include the functionality to set oom_adj on \n> startup. To my knowledge, that's not in 8.3 so I wrote a script and \n> init.d script to do this on Debian systems.\n\nThat's not in anything earlier than the upcoming 9.0 because the support \ncode involved just showed up: \nhttp://archives.postgresql.org/pgsql-committers/2010-01/msg00169.php\n\nIt was always possible to do this in an init script as you describe. \nThe specific new feature added is the ability to remove client child \nprocesses from having that protection, so that they can still be killed \nnormally. Basically, limiting the protection just at the process that \nyou really need it on. The updated documentation for the new version \nhas more details about this whole topic, useful to people running older \nversions too: \nhttp://developer.postgresql.org/pgdocs/postgres/kernel-resources.html\n\n> Kind of off-topic, but a Linux kernel parameter that's often not set \n> on database servers is elevator=deadline which sets up the io \n> scheduling algorithm. The algorithm can be viewed/set at runtime for \n> example the disk /dev/sdc in /sys/block/sdc/queue/scheduler.\n\nI've never seen a real-world PostgreSQL workload where deadline worked \nbetter than CFQ, and I've seen a couple where it was significantly \nworse. Playing with that parameter needs a heavy disclaimer that you \nshould benchmark *your app* before and after changing it to make sure it \nwas actually useful. Actually, three times: return to CFQ again \nafterwards, too, just to confirm it's not a \"faster on the second run\" \neffect.\n\nThe important things to get right on Linux are read-ahead and reducing \nthe size of the write cache size--the latter being the more direct and \neffective way to improve the problem that the scheduler change happens \nto impact too. Those have dramatically more importance than sensible \nchanges to the scheduler used (with using the anticipatory one on a \nserver system or the no-op one on a desktop would be non-sensible changes).\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 03 Feb 2010 21:17:51 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System overload / context switching / oom, 8.3"
}
] |
[
{
"msg_contents": "Sorry again for previous rough messages, some good people advice me to post these problems again With sincere and friendly attitude 。\n\nI think i should do this . \n\nIn recently projects , I determine use pg in some medium or big projects , as the projects has been finished, it prove that I made a right decision. Maturity and stability of the postgresql has left us a deep iompression, of coz, there is some problems in postgresql , and finally we take some interim measures to avoid this problems \nENV: postgresql 8.4.2 , CentOS5.4, JDK6.0 \n\nproblems 1: My previous view is that the insert operation would use a exclusive lock on referenced row on FK , but now I realyzed that I am wrong , after test , pg does not take a exclusive lock on fk row, My prvous test procedure make a stupid mistake: when i check the performence problem , i remove the fks and unique constraints in one time , and the good result make me think the fk is the problem, but actually the unique constraints is the problem . \n\ni shame myself . \nafter shaming , I think i should pick out some my points:\n the unique constraints actualy kill concurrency write transaction when concurrency insert violate the unique constraints , they block each other , i test this in oracle10g, has the same behavour. I think this may be reasonable because the uqniue check must be the seriazable check . \nfor resolve this problem , i do the unique check in application as possible , but in big concurrency env , this is not good way . \n\n\nproblems 2: my mistake too , i think i misunderstanding read committed isolation , shame myself again too . \n\n\nproblems 3: \n\nAfter i do some config by this link: http://wiki.postgresql.org/wiki/SlowQueryQuestions . \n\nthe cost now just is 2-4 seconds , it is acceptable . \n\n\nthanks u very much and forgive me . \n\n \n\n\nSorry again for previous rough messages, some good people advice me to post these problems again With sincere and friendly attitude 。I think i should do this . In recently projects , I determine use pg in some medium or big projects , as the projects has been finished, it prove that I made a right decision. Maturity and stability of the postgresql has left us a deep iompression, of coz, there is some problems in postgresql , and finally we take some interim measures to avoid this problems ENV: postgresql 8.4.2 , CentOS5.4, JDK6.0 problems 1: My previous view is that the insert operation would use a exclusive lock on referenced row on FK , but now I realyzed that I am wrong , after test , pg does not take a exclusive lock on fk row, My prvous test procedure make a stupid mistake: when i check the performence problem , i remove the fks and unique constraints in one time , and the good result make me think the fk is the problem, but actually the unique constraints is the problem . i shame myself . after shaming , I think i should pick out some my points: the unique constraints actualy kill concurrency write transaction when concurrency insert violate the unique constraints , they block each other , i test this in oracle10g, has the same behavour. I think this may be reasonable because the uqniue check must be the seriazable check . for resolve this problem , i do the unique check in application as possible , but in big concurrency env , this is not good way . problems 2: my mistake too , i think i misunderstanding read committed isolation , shame myself again too . problems 3: After i do some config by this link: http://wiki.postgresql.org/wiki/SlowQueryQuestions . 
the cost now just is 2-4 seconds , it is acceptable . thanks u very much and forgive me .",
"msg_date": "Wed, 03 Feb 2010 10:28:30 +0800 ",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "some problems when i use postgresql 8.4.2 in my projects . "
},
{
"msg_contents": "2010/2/2 <[email protected]>:\n> the unique constraints actualy kill concurrency write transaction when\n> concurrency insert violate the unique constraints , they block each other ,\n> i test this in oracle10g, has the same behavour. I think this may be\n> reasonable because the uqniue check must be the seriazable check .\n> for resolve this problem , i do the unique check in application as possible\n> , but in big concurrency env , this is not good way .\n\nYou may find that your way isn't actually very reliable, and that\nmaking it reliable will be very, very much harder (and likely no\nfaster) than letting PostgreSQL do it.\n\n...Robert\n",
"msg_date": "Wed, 3 Feb 2010 09:58:18 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: some problems when i use postgresql 8.4.2 in my\n\tprojects ."
},
{
"msg_contents": "[email protected] wrote:\n> after shaming , I think i should pick out some my points:\n> the unique constraints actualy kill concurrency write transaction when\n> concurrency insert violate the unique constraints , they block each\n> other , i test this in oracle10g, has the same behavour. I think this\n> may be reasonable because the uqniue check must be the seriazable check .\n> for resolve this problem , i do the unique check in application as\n> possible , but in big concurrency env , this is not good way .\n> \n\nHow can you enforce uniqueness in the application? If you implement it\ncorrectly, you need considerably longer than letting it do PostgreSQL.\nEven if you use some kind of magic, I could not imagine, how you can\nimplement a unique constraint in the application and gaurantee\nuniqueness while at the same time be faster than the RDBMS.\n\nLeo\n",
"msg_date": "Wed, 03 Feb 2010 16:16:30 +0100",
"msg_from": "Leo Mannhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: some problems when i use postgresql 8.4.2 in my projects\n ."
},
{
"msg_contents": "\n> when concurrency insert violate the unique constraints , they block each \n> other , i test this in oracle10g, has the same behavour. I think this \n> may be reasonable because the uqniue check must be the seriazable \n> check .\n> for resolve this problem , i do the unique check in application as \n> possible , but in big concurrency env , this is not good way .\n\n\tYou probably can't do that in the application.\n\n\tAbout exclusive constraints :\n\nTransaction A : begin\nTransaction A : insert value X\nTransaction A : do some work, or just wait for client\n...\n\nMeanwhile :\n\nTransaction B : begin\nTransaction B : insert same value X\nTransaction B : locked because A hasn't committed yet so the exclusive \nconstraint can't be resolved\n\nTransaction A : commit or rollback\nTransaction B : lock is released, constraint is either OK or violated \ndepending on txn A rollback/rommit.\n\n\tAs you can see, the longer the transactions are, the more problems you \nget.\n\nSolution 1 : change design.\n\n- Why do you need this exclusive constraint ?\n- How are the unique ids generated ?\n- What kind of data do those ids represent ?\n- Can you sidestep it by using a sequence or something ?\n- Without knowing anything about your application, impossible to answer.\n\nSolution 2 : reduce the transaction time.\n\n- Optimize your queries (post here)\n- Commit as soon as possible\n- Long transactions (waiting for user input) are generally not such a good \nidea\n- Anything that makes the txn holding the locks wait more is bad \n(saturated network, slow app server, etc)\n- Optimize your xlog to make writes & commits faster\n\nSolution 3 : reduce the lock time\n\nInstead of doing :\nBEGIN\nINSERT X\n... do some stuff ...\nCOMMIT;\n\ndo :\n\nBEGIN\n... do some stuff that doesn't depend on X...\nINSERT X\n... do less stuff while holding lock ...\nCOMMIT;\n\nSolution 4 :\n\nIf you have really no control over value \"X\" and you need a quick reply \n\"is X already there ?\", you can use 2 transactions.\nOne transaction will \"reserve\" the value of X :\n\n- SELECT WHERE col = X\n\tensures row and index are in cache whilst taking no locks)\n\n- Set autocommit to 1\n- INSERT X;\n\tinserts X and commits immediately, else cause an error. Lock will not be \nheld for long, since autocommit means it commits ASAP.\n\n- Perform the rest of your (long) operations in another transaction.\n\nThis is a bit less safe since, if the second transaction fails, insert of \nX is not rolled back.\n\n",
"msg_date": "Thu, 04 Feb 2010 08:32:43 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: some problems when i use postgresql 8.4.2 in my\n projects ."
}
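The blocking described above is easy to reproduce by hand; a sketch with a throwaway table, run from psql in two sessions:

    -- Session A
    CREATE TABLE t (k int PRIMARY KEY);
    BEGIN;
    INSERT INTO t VALUES (42);   -- claims the index entry for k = 42
    -- transaction intentionally left open

    -- Session B
    BEGIN;
    INSERT INTO t VALUES (42);   -- blocks here until session A finishes
    -- If A commits, B fails with a unique violation;
    -- if A rolls back, B's insert goes through.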
] |
[
{
"msg_contents": "Hello\n\nI have a server dedicated for Postgres with the following specs:\n\nRAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n2.33GHz\nOS: FreeBSD 8.0\n\nIt runs multiple (approx 10) databases ranging from 500MB to over 24 GB in\nsize. All of them are of the same structure, and almost all of them have\nvery heavy read and writes.\n\npgtune (http://pgfoundry.org/projects/pgtune/) suggests the settings to be\nchanged as :\n\nmaintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\ncheckpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\neffective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\nwork_mem = 160MB # pg_generate_conf wizard 2010-02-03\nwal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\ncheckpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\nshared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\nmax_connections = 100 # pg_generate_conf wizard 2010-02-03\n\n\nWhile this gives me the changes for postgresql.conf, I am not sure of of\nthe chnages that I need to make in FreeBSD to support such large memory\nallocations. The last time I tried, Postgres refused to start and I had to\nfall back to the default settings.\n\nI would appreciate if somebody could point out the sysctl/loader.conf\nsettings that I need to have in FreeBSD.\n\nWith regards\n\nAmitabh Kant\n\nHello I have a server dedicated for Postgres with the following specs:RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @ 2.33GHz OS: FreeBSD 8.0It runs multiple (approx 10) databases ranging from 500MB to \nover 24 GB in size. All of them are of the same structure, and almost \nall of them have very heavy read and writes. \npgtune (http://pgfoundry.org/projects/pgtune/) suggests the settings to be changed as :maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n\ncheckpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03work_mem = 160MB # pg_generate_conf wizard 2010-02-03wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n\ncheckpoint_segments = 16 # pg_generate_conf wizard 2010-02-03shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03max_connections = 100 # pg_generate_conf wizard 2010-02-03While this gives me the changes for postgresql.conf, I am not sure of of the chnages that I need to make in FreeBSD to support such large memory allocations. The last time I tried, Postgres refused to start and I had to fall back to the default settings.\nI would appreciate if somebody could point out the sysctl/loader.conf settings that I need to have in FreeBSD.With regardsAmitabh Kant",
"msg_date": "Wed, 3 Feb 2010 20:40:57 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing Postgresql server and FreeBSD for heavy read and writes"
},
{
"msg_contents": "Forgot to add that I am using Postgres 8.4.2 from the default ports of\nFreeBSD.\n\nWith regards\n\nAmitabh Kant\n\nOn Wed, Feb 3, 2010 at 8:40 PM, Amitabh Kant <[email protected]> wrote:\n\n> Hello\n>\n> I have a server dedicated for Postgres with the following specs:\n>\n> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n> 2.33GHz\n> OS: FreeBSD 8.0\n>\n> It runs multiple (approx 10) databases ranging from 500MB to over 24 GB in\n> size. All of them are of the same structure, and almost all of them have\n> very heavy read and writes.\n>\n> pgtune (http://pgfoundry.org/projects/pgtune/) suggests the settings to be\n> changed as :\n>\n> maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n> effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n> wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n> shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n> max_connections = 100 # pg_generate_conf wizard 2010-02-03\n>\n>\n> While this gives me the changes for postgresql.conf, I am not sure of of\n> the chnages that I need to make in FreeBSD to support such large memory\n> allocations. The last time I tried, Postgres refused to start and I had to\n> fall back to the default settings.\n>\n> I would appreciate if somebody could point out the sysctl/loader.conf\n> settings that I need to have in FreeBSD.\n>\n> With regards\n>\n> Amitabh Kant\n>\n\nForgot to add that I am using Postgres 8.4.2 from the default ports of FreeBSD.With regardsAmitabh KantOn Wed, Feb 3, 2010 at 8:40 PM, Amitabh Kant <[email protected]> wrote:\nHello I have a server dedicated for Postgres with the following specs:RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @ 2.33GHz \n\nOS: FreeBSD 8.0It runs multiple (approx 10) databases ranging from 500MB to \nover 24 GB in size. All of them are of the same structure, and almost \nall of them have very heavy read and writes. \npgtune (http://pgfoundry.org/projects/pgtune/) suggests the settings to be changed as :maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n\n\ncheckpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03work_mem = 160MB # pg_generate_conf wizard 2010-02-03wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n\n\ncheckpoint_segments = 16 # pg_generate_conf wizard 2010-02-03shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03max_connections = 100 # pg_generate_conf wizard 2010-02-03While this gives me the changes for postgresql.conf, I am not sure of of the chnages that I need to make in FreeBSD to support such large memory allocations. The last time I tried, Postgres refused to start and I had to fall back to the default settings.\nI would appreciate if somebody could point out the sysctl/loader.conf settings that I need to have in FreeBSD.With regardsAmitabh Kant",
"msg_date": "Wed, 3 Feb 2010 20:42:24 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read and\n\twrites"
},
{
"msg_contents": "On Wed, 2010-02-03 at 20:42 +0530, Amitabh Kant wrote:\n> Forgot to add that I am using Postgres 8.4.2 from the default ports of\n> FreeBSD.\n\nstart with this page\nhttp://www.postgresql.org/docs/8.4/static/kernel-resources.html\n\n",
"msg_date": "Wed, 03 Feb 2010 11:19:32 -0500",
"msg_from": "Reid Thompson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: Optimizing Postgresql server and FreeBSD for\n\theavy read and writes"
},
{
"msg_contents": "On 02/03/10 16:10, Amitabh Kant wrote:\n> Hello\n>\n> I have a server dedicated for Postgres with the following specs:\n>\n> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n> 2.33GHz\n> OS: FreeBSD 8.0\n\nIf you really do have \"heavy read and write\" load on the server, nothing \nwill save you from the bottleneck of having only 4 drives in the system \n(or more accurately: adding more memory will help reads but nothing \nhelps writes except more drivers or faster (SSD) drives). If you can, \nadd another 2 drives in RAID 1 and move+symlink the pg_xlog directory to \nthe new array.\n\n> maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n> effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n> wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n> shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n> max_connections = 100 # pg_generate_conf wizard 2010-02-03\n\n> I would appreciate if somebody could point out the sysctl/loader.conf\n> settings that I need to have in FreeBSD.\n\nFirstly, you need to run a 64-bit version (\"amd64\") of FreeBSD.\n\nIn /boot/loader.conf you will probably need to increase the number of \nsysv ipc semaphores:\n\nkern.ipc.semmni=512\nkern.ipc.semmns=1024\n\nThis depends mostly on the number of connections allowed to the server. \nThe example values I gave above are more than enough but since this is a \nboot-only tunable it is expensive to modify later.\n\nIn /etc/sysctl.conf you will need to increase the shared memory sizes, \ne.g. for a 3900 MB shared_buffer:\n\nkern.ipc.shmmax=4089446400\nThis is the maximum shared memory segment size, in bytes.\n\nkern.ipc.shmall=1050000\nThis is the maximum amount of memory allowed to be used as sysv shared \nmemory, in 4 kB pages.\n\nIf the database contains many objects (tables, indexes, etc.) you may \nneed to increase the maximum number of open files and the amount of \nmemory for the directory list cache:\n\nkern.maxfiles=16384\nvfs.ufs.dirhash_maxmem=4194304\n\nIf you estimate you will have large sequential reads on the database, \nyou should increase read-ahead count:\n\nvfs.read_max=32\n\nBe sure that soft-updates is enabled on the file system you are using \nfor data. Ignore all Linux-centric discussions about problems with \njournaling and write barriers :)\n\nAll settings in /etc/sysctl.conf can be changed at runtime (individually \nor by invoking \"/etc/rc.d/sysctl restart\"), settings in loader.conf are \nboot-time only.\n\n",
"msg_date": "Wed, 03 Feb 2010 17:35:55 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read and\n writes"
},
{
"msg_contents": "On 2/3/2010 9:10 AM, Amitabh Kant wrote:\n> Hello\n>\n> I have a server dedicated for Postgres with the following specs:\n>\n> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n> 2.33GHz\n> OS: FreeBSD 8.0\n>\n> It runs multiple (approx 10) databases ranging from 500MB to over 24 GB\n> in size. All of them are of the same structure, and almost all of them\n> have very heavy read and writes.\n>\n>\n> With regards\n>\n> Amitabh Kant\n\nWhat problems are you having? Is it slow? Is there something you are \ntrying to fix, or is this just the first tune up?\n\n\n > memory allocations. The last time I tried, Postgres refused to start and\n > I had to fall back to the default settings.\n\nIts probably upset about the amount of shared mem. There is probably a \nway in bsd to set the max amount of shared memory available. A Quick \ngoogle turned up:\n\nkern.ipc.shmmax\n\nDunno if thats right. When you try to start PG, if it cannot allocate \nenough shared mem it'll spit out an error message into its log saying \nhow much it tried to allocate.\n\nCheck:\nhttp://archives.postgresql.org/pgsql-admin/2004-06/msg00155.php\n\n\n\n\n > maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n > checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n > effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n > work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n > wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n > checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n > shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n > max_connections = 100 # pg_generate_conf wizard 2010-02-03\n\nSome of these seem like too much. I'd recommend starting with one or \ntwo and see how it runs. Then increase if you're still slow.\n\nStart with effective_cache_size, shared_buffers and checkpoint_segments.\n\nWait until very last to play with work_mem and maintenance_work_mem.\n\n\n-Andy\n",
"msg_date": "Wed, 03 Feb 2010 12:41:39 -0600",
"msg_from": "Andy Colson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read and\n writes"
},
{
"msg_contents": "On Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant <[email protected]> wrote:\n> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n\nOverall these settings look sane, but this one looks like an\nexception. That is an enormous value for that parameter...\n\n...Robert\n",
"msg_date": "Wed, 3 Feb 2010 15:43:50 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "Robert Haas wrote:\n> On Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant <[email protected]> wrote:\n> \n>> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n>> \n>\n> Overall these settings look sane, but this one looks like an\n> exception. That is an enormous value for that parameter...\n> \n\nYeah, I think I need to retune the suggestions for that parameter. The \nidea behind the tuning profile used in the \"web\" and \"OLTP\" setups is \nthat you're unlikely to have all the available connections doing \nsomething involving sorting at the same time with those workloads, and \nwhen it does happen you want it to use the fastest approach possible \neven if that takes more RAM so the client waiting for a response is more \nlikely to get one on time. That's why the work_mem figure in those \nsituations is set very aggressively: total_mem / connections, so on a \n16GB server that comes out to the 160MB seen here. I'm going to adjust \nthat so that it's capped a little below (total_mem - shared_buffers) / \nconnections instead.\n\npgtune just got a major bit of refactoring recently from Matt Harrison \nto make it more Python-esque, and I'll be pushing toward an official 1.0 \nwith all the major loose ends cleaned up and an adjusted tuning model \nthat will be available before 9.0 ships. I'm seeing enough people \ninterested in it now to justify putting another block of work into \nimproving it.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n\n\n\n\n\n\nRobert Haas wrote:\n\nOn Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant <[email protected]> wrote:\n \n\nwork_mem = 160MB # pg_generate_conf wizard 2010-02-03\n \n\n\nOverall these settings look sane, but this one looks like an\nexception. That is an enormous value for that parameter...\n \n\n\nYeah, I think I need to retune the suggestions for that parameter. The\nidea behind the tuning profile used in the \"web\" and \"OLTP\" setups is\nthat you're unlikely to have all the available connections doing\nsomething involving sorting at the same time with those workloads, and\nwhen it does happen you want it to use the fastest approach possible\neven if that takes more RAM so the client waiting for a response is\nmore likely to get one on time. That's why the work_mem figure in\nthose situations is set very aggressively: total_mem / connections, so\non a 16GB server that comes out to the 160MB seen here. I'm going to\nadjust that so that it's capped a little below (total_mem -\nshared_buffers) / connections instead.\n\npgtune just got a major bit of refactoring recently from Matt Harrison\nto make it more Python-esque, and I'll be pushing toward an official\n1.0 with all the major loose ends cleaned up and an adjusted tuning\nmodel that will be available before 9.0 ships. I'm seeing enough\npeople interested in it now to justify putting another block of work\ninto improving it.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us",
"msg_date": "Wed, 03 Feb 2010 16:59:31 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy\n\tread \tand writes"
},
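A quick back-of-the-envelope restatement of the two rules Greg describes, plugged into this particular box (16GB RAM, max_connections = 100, shared_buffers = 3840MB); this is only a sketch of the arithmetic, not the exact formula pgtune ships:

-- pgtune's aggressive rule:  work_mem = total_mem / max_connections
--                                     = 16384 MB / 100              ~ 160 MB
-- Greg's adjusted cap:       work_mem < (total_mem - shared_buffers) / max_connections
--                                     = (16384 - 3840) MB / 100     ~ 125 MB
SELECT round((16384 - 3840) / 100.0, 0) AS capped_work_mem_mb;   -- 125

The 110MB value Amitabh settles on later in the thread sits comfortably under that cap.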
{
"msg_contents": "On Wed, Feb 3, 2010 at 9:49 PM, Reid Thompson <[email protected]>wrote:\n\n> On Wed, 2010-02-03 at 20:42 +0530, Amitabh Kant wrote:\n> > Forgot to add that I am using Postgres 8.4.2 from the default ports of\n> > FreeBSD.\n>\n> start with this page\n> http://www.postgresql.org/docs/8.4/static/kernel-resources.html\n>\n>\nI somehow missed the doc. Thanks Reid.\n\nWith regards\n\nAmitabh Kant\n\nOn Wed, Feb 3, 2010 at 9:49 PM, Reid Thompson <[email protected]> wrote:\nOn Wed, 2010-02-03 at 20:42 +0530, Amitabh Kant wrote:\n> Forgot to add that I am using Postgres 8.4.2 from the default ports of\n> FreeBSD.\n\nstart with this page\nhttp://www.postgresql.org/docs/8.4/static/kernel-resources.html\n\nI somehow missed the doc. Thanks Reid.With regardsAmitabh Kant",
"msg_date": "Thu, 4 Feb 2010 14:29:54 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: Optimizing Postgresql server and FreeBSD for heavy\n\tread and writes"
},
{
"msg_contents": "On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras <[email protected]> wrote:\n\n> On 02/03/10 16:10, Amitabh Kant wrote:\n>\n>> Hello\n>>\n>> I have a server dedicated for Postgres with the following specs:\n>>\n>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n>> 2.33GHz\n>> OS: FreeBSD 8.0\n>>\n>\n> If you really do have \"heavy read and write\" load on the server, nothing\n> will save you from the bottleneck of having only 4 drives in the system (or\n> more accurately: adding more memory will help reads but nothing helps writes\n> except more drivers or faster (SSD) drives). If you can, add another 2\n> drives in RAID 1 and move+symlink the pg_xlog directory to the new array.\n>\n>\nCan't do anything about this server now, but would surely keep in mind\nbefore upgrading other servers. Would you recommend the same speed\ndrives(15K SAS) for RAID 1, or would a slower drive also work here (10K SAS\nor even SATA II)?\n\n\n\n>\n> maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n>> checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n>> effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n>> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n>> wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n>> checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n>> shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n>> max_connections = 100 # pg_generate_conf wizard 2010-02-03\n>>\n>\n> I would appreciate if somebody could point out the sysctl/loader.conf\n>> settings that I need to have in FreeBSD.\n>>\n>\n> Firstly, you need to run a 64-bit version (\"amd64\") of FreeBSD.\n>\n>\nYes, its running amd64 arch.\n\n\n> In /boot/loader.conf you will probably need to increase the number of sysv\n> ipc semaphores:\n>\n> kern.ipc.semmni=512\n> kern.ipc.semmns=1024\n>\n> This depends mostly on the number of connections allowed to the server. The\n> example values I gave above are more than enough but since this is a\n> boot-only tunable it is expensive to modify later.\n>\n> In /etc/sysctl.conf you will need to increase the shared memory sizes, e.g.\n> for a 3900 MB shared_buffer:\n>\n> kern.ipc.shmmax=4089446400\n> This is the maximum shared memory segment size, in bytes.\n>\n> kern.ipc.shmall=1050000\n> This is the maximum amount of memory allowed to be used as sysv shared\n> memory, in 4 kB pages.\n>\n> If the database contains many objects (tables, indexes, etc.) you may need\n> to increase the maximum number of open files and the amount of memory for\n> the directory list cache:\n>\n> kern.maxfiles=16384\n> vfs.ufs.dirhash_maxmem=4194304\n>\n> If you estimate you will have large sequential reads on the database, you\n> should increase read-ahead count:\n>\n> vfs.read_max=32\n>\n> Be sure that soft-updates is enabled on the file system you are using for\n> data. Ignore all Linux-centric discussions about problems with journaling\n> and write barriers :)\n>\n> All settings in /etc/sysctl.conf can be changed at runtime (individually or\n> by invoking \"/etc/rc.d/sysctl restart\"), settings in loader.conf are\n> boot-time only.\n>\n\nThanks Ivan. 
That's a great explanation of the variables involved.\n\n\nWith regards\n\nAmitabh Kant",
"msg_date": "Thu, 4 Feb 2010 14:32:59 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 12:11 AM, Andy Colson <[email protected]> wrote:\n\n> On 2/3/2010 9:10 AM, Amitabh Kant wrote:\n>\n>> Hello\n>>\n>> I have a server dedicated for Postgres with the following specs:\n>>\n>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n>> 2.33GHz\n>> OS: FreeBSD 8.0\n>>\n>> It runs multiple (approx 10) databases ranging from 500MB to over 24 GB\n>> in size. All of them are of the same structure, and almost all of them\n>> have very heavy read and writes.\n>>\n>>\n>> With regards\n>>\n>> Amitabh Kant\n>>\n>\n> What problems are you having? Is it slow? Is there something you are\n> trying to fix, or is this just the first tune up?\n>\n\nThis is the first tune up. The system has worked pretty fine till now, but\nit does lag once in a while, and I would like to optimize it before it\nbecomes a bigger issue.\n\n\n>\n> > memory allocations. The last time I tried, Postgres refused to start and\n> > I had to fall back to the default settings.\n>\n> Its probably upset about the amount of shared mem. There is probably a way\n> in bsd to set the max amount of shared memory available. A Quick google\n> turned up:\n>\n> kern.ipc.shmmax\n>\n> Dunno if thats right. When you try to start PG, if it cannot allocate\n> enough shared mem it'll spit out an error message into its log saying how\n> much it tried to allocate.\n>\n> Check:\n> http://archives.postgresql.org/pgsql-admin/2004-06/msg00155.php\n>\n>\n>\n>\n>\n> > maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n> > checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n> > effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n> > work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n> > wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n> > checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n> > shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n> > max_connections = 100 # pg_generate_conf wizard 2010-02-03\n>\n> Some of these seem like too much. I'd recommend starting with one or two\n> and see how it runs. Then increase if you're still slow.\n>\n> Start with effective_cache_size, shared_buffers and checkpoint_segments.\n>\n> Wait until very last to play with work_mem and maintenance_work_mem.\n>\n>\n> -Andy\n>\n\nI would keep that in mind. Thanks Andy\n\nWith regards\n\nAmitabh\n\nOn Thu, Feb 4, 2010 at 12:11 AM, Andy Colson <[email protected]> wrote:\nOn 2/3/2010 9:10 AM, Amitabh Kant wrote:\n\nHello\n\nI have a server dedicated for Postgres with the following specs:\n\nRAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n2.33GHz\nOS: FreeBSD 8.0\n\nIt runs multiple (approx 10) databases ranging from 500MB to over 24 GB\nin size. All of them are of the same structure, and almost all of them\nhave very heavy read and writes.\n\n\nWith regards\n\nAmitabh Kant\n\n\nWhat problems are you having? Is it slow? Is there something you are trying to fix, or is this just the first tune up?This is the first tune up. The system has worked pretty fine till now, but it does lag once in a while, and I would like to optimize it before it becomes a bigger issue.\n\n\n\n> memory allocations. The last time I tried, Postgres refused to start and\n> I had to fall back to the default settings.\n\nIts probably upset about the amount of shared mem. There is probably a way in bsd to set the max amount of shared memory available. A Quick google turned up:\n\nkern.ipc.shmmax\n\nDunno if thats right. 
When you try to start PG, if it cannot allocate enough shared mem it'll spit out an error message into its log saying how much it tried to allocate.\n\nCheck:\nhttp://archives.postgresql.org/pgsql-admin/2004-06/msg00155.php\n\n\n\n\n> maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n> effective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03\n> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n> wal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n> checkpoint_segments = 16 # pg_generate_conf wizard 2010-02-03\n> shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03\n> max_connections = 100 # pg_generate_conf wizard 2010-02-03\n\nSome of these seem like too much. I'd recommend starting with one or two and see how it runs. Then increase if you're still slow.\n\nStart with effective_cache_size, shared_buffers and checkpoint_segments.\n\nWait until very last to play with work_mem and maintenance_work_mem.\n\n\n-Andy\nI would keep that in mind. Thanks AndyWith regardsAmitabh",
"msg_date": "Thu, 4 Feb 2010 14:46:31 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "On 4 February 2010 10:02, Amitabh Kant <[email protected]> wrote:\n> On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras <[email protected]> wrote:\n>>\n>> On 02/03/10 16:10, Amitabh Kant wrote:\n>>>\n>>> Hello\n>>>\n>>> I have a server dedicated for Postgres with the following specs:\n>>>\n>>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n>>> 2.33GHz\n>>> OS: FreeBSD 8.0\n>>\n>> If you really do have \"heavy read and write\" load on the server, nothing\n>> will save you from the bottleneck of having only 4 drives in the system (or\n>> more accurately: adding more memory will help reads but nothing helps writes\n>> except more drivers or faster (SSD) drives). If you can, add another 2\n>> drives in RAID 1 and move+symlink the pg_xlog directory to the new array.\n>\n> Can't do anything about this server now, but would surely keep in mind\n> before upgrading other servers. Would you recommend the same speed\n> drives(15K SAS) for RAID 1, or would a slower drive also work here (10K SAS\n> or even SATA II)?\n\nAgain, it depends on your load. It would probably be best if they are\napproximately the same speed; the location of pg_xlog will dictate\nyour write (UPDATE / INSERT / CREATE) speed.\n\nWrites to your database go like this: the data is first written to the\nWAL (this is the pg_xlog directory - the transaction log), then it is\nread and written to the \"main\" database. If the main database is very\nbusy reading, transfers from WAL to the database will be slower.\n",
"msg_date": "Thu, 4 Feb 2010 10:40:17 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 3:10 PM, Ivan Voras <[email protected]> wrote:\n\n> On 4 February 2010 10:02, Amitabh Kant <[email protected]> wrote:\n> > On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras <[email protected]> wrote:\n> >>\n> >> On 02/03/10 16:10, Amitabh Kant wrote:\n> >>>\n> >>> Hello\n> >>>\n> >>> I have a server dedicated for Postgres with the following specs:\n> >>>\n> >>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n> >>> 2.33GHz\n> >>> OS: FreeBSD 8.0\n> >>\n> >> If you really do have \"heavy read and write\" load on the server, nothing\n> >> will save you from the bottleneck of having only 4 drives in the system\n> (or\n> >> more accurately: adding more memory will help reads but nothing helps\n> writes\n> >> except more drivers or faster (SSD) drives). If you can, add another 2\n> >> drives in RAID 1 and move+symlink the pg_xlog directory to the new\n> array.\n> >\n> > Can't do anything about this server now, but would surely keep in mind\n> > before upgrading other servers. Would you recommend the same speed\n> > drives(15K SAS) for RAID 1, or would a slower drive also work here (10K\n> SAS\n> > or even SATA II)?\n>\n> Again, it depends on your load. It would probably be best if they are\n> approximately the same speed; the location of pg_xlog will dictate\n> your write (UPDATE / INSERT / CREATE) speed.\n>\n> Writes to your database go like this: the data is first written to the\n> WAL (this is the pg_xlog directory - the transaction log), then it is\n> read and written to the \"main\" database. If the main database is very\n> busy reading, transfers from WAL to the database will be slower.\n>\n\nThanks Ivan. I have to go in for upgrade of couple of more servers. I will\nbe going in for RAID 1 (OS + pg_xlog ) and RAID 10 (Pgsql data), all of them\nof same speed.\n\nWith regards\n\nAmitabh Kant\n\nOn Thu, Feb 4, 2010 at 3:10 PM, Ivan Voras <[email protected]> wrote:\nOn 4 February 2010 10:02, Amitabh Kant <[email protected]> wrote:\n> On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras <[email protected]> wrote:\n>>\n>> On 02/03/10 16:10, Amitabh Kant wrote:\n>>>\n>>> Hello\n>>>\n>>> I have a server dedicated for Postgres with the following specs:\n>>>\n>>> RAM 16GB, 146GB SAS (15K) x 4 - RAID 10 with BBU, Dual Xeon E5345 @\n>>> 2.33GHz\n>>> OS: FreeBSD 8.0\n>>\n>> If you really do have \"heavy read and write\" load on the server, nothing\n>> will save you from the bottleneck of having only 4 drives in the system (or\n>> more accurately: adding more memory will help reads but nothing helps writes\n>> except more drivers or faster (SSD) drives). If you can, add another 2\n>> drives in RAID 1 and move+symlink the pg_xlog directory to the new array.\n>\n> Can't do anything about this server now, but would surely keep in mind\n> before upgrading other servers. Would you recommend the same speed\n> drives(15K SAS) for RAID 1, or would a slower drive also work here (10K SAS\n> or even SATA II)?\n\nAgain, it depends on your load. It would probably be best if they are\napproximately the same speed; the location of pg_xlog will dictate\nyour write (UPDATE / INSERT / CREATE) speed.\n\nWrites to your database go like this: the data is first written to the\nWAL (this is the pg_xlog directory - the transaction log), then it is\nread and written to the \"main\" database. If the main database is very\nbusy reading, transfers from WAL to the database will be slower.\nThanks Ivan. I have to go in for upgrade of couple of more servers. 
I will be going in for RAID 1 (OS + pg_xlog ) and RAID 10 (Pgsql data), all of them of same speed.With regardsAmitabh Kant",
"msg_date": "Thu, 4 Feb 2010 15:22:43 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 3:29 AM, Greg Smith <[email protected]> wrote:\n\n> Robert Haas wrote:\n>\n> On Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant <[email protected]> <[email protected]> wrote:\n>\n>\n> work_mem = 160MB # pg_generate_conf wizard 2010-02-03\n>\n>\n> Overall these settings look sane, but this one looks like an\n> exception. That is an enormous value for that parameter...\n>\n>\n>\n> Yeah, I think I need to retune the suggestions for that parameter. The\n> idea behind the tuning profile used in the \"web\" and \"OLTP\" setups is that\n> you're unlikely to have all the available connections doing something\n> involving sorting at the same time with those workloads, and when it does\n> happen you want it to use the fastest approach possible even if that takes\n> more RAM so the client waiting for a response is more likely to get one on\n> time. That's why the work_mem figure in those situations is set very\n> aggressively: total_mem / connections, so on a 16GB server that comes out\n> to the 160MB seen here. I'm going to adjust that so that it's capped a\n> little below (total_mem - shared_buffers) / connections instead.\n>\n\nThanks Robert & Greg. From what others have suggested, I am going in for\nthe following changes:\n/boot/loader.conf:\n\nkern.ipc.semmni=512\nkern.ipc.semmns=1024\nkern.ipc.semmnu=512\n\n\n\n/etc/sysctl.conf:\n\nkern.ipc.shm_use_phys=1\nkern.ipc.shmmax=4089446400\nkern.ipc.shmall=1050000\nkern.maxfiles=16384\nkern.ipc.semmsl=1024\nkern.ipc.semmap=512\nvfs.ufs.dirhash_maxmem=4194304\nvfs.read_max=32\n\n\n\n/usr/local/pgsql/data/postgresql.conf:\n\nmaintenance_work_mem = 960MB # pg_generate_conf wizard\n2010-02-03\ncheckpoint_completion_target = 0.9 # pg_generate_conf wizard\n2010-02-03\neffective_cache_size = 11GB # pg_generate_conf wizard\n2010-02-03\nwork_mem = 110MB # pg_generate_conf wizard\n2010-02-03 Reduced as per Robert/Greg suggestions\nwal_buffers = 8MB # pg_generate_conf wizard\n2010-02-03\ncheckpoint_segments = 16 # pg_generate_conf wizard\n2010-02-03\nshared_buffers = 3840MB # pg_generate_conf wizard\n2010-02-03\nmax_connections = 100 # pg_generate_conf wizard\n2010-02-03\n\n\nHope this works out good in my case.\n\nWith regards\n\nAmitabh Kant\n\nOn Thu, Feb 4, 2010 at 3:29 AM, Greg Smith <[email protected]> wrote:\n\nRobert Haas wrote:\n\nOn Wed, Feb 3, 2010 at 10:10 AM, Amitabh Kant <[email protected]> wrote:\n \n\nwork_mem = 160MB # pg_generate_conf wizard 2010-02-03\n \n\nOverall these settings look sane, but this one looks like an\nexception. That is an enormous value for that parameter...\n \n\n\nYeah, I think I need to retune the suggestions for that parameter. The\nidea behind the tuning profile used in the \"web\" and \"OLTP\" setups is\nthat you're unlikely to have all the available connections doing\nsomething involving sorting at the same time with those workloads, and\nwhen it does happen you want it to use the fastest approach possible\neven if that takes more RAM so the client waiting for a response is\nmore likely to get one on time. That's why the work_mem figure in\nthose situations is set very aggressively: total_mem / connections, so\non a 16GB server that comes out to the 160MB seen here. I'm going to\nadjust that so that it's capped a little below (total_mem -\nshared_buffers) / connections instead.Thanks Robert & Greg. 
From what others have suggested, I am going in for the following changes:/boot/loader.conf:kern.ipc.semmni=512\n\nkern.ipc.semmns=1024kern.ipc.semmnu=512/etc/sysctl.conf:kern.ipc.shm_use_phys=1kern.ipc.shmmax=4089446400kern.ipc.shmall=1050000kern.maxfiles=16384kern.ipc.semmsl=1024kern.ipc.semmap=512\n\nvfs.ufs.dirhash_maxmem=4194304vfs.read_max=32/usr/local/pgsql/data/postgresql.conf:maintenance_work_mem = 960MB # pg_generate_conf wizard 2010-02-03checkpoint_completion_target = 0.9 # pg_generate_conf wizard 2010-02-03\n\neffective_cache_size = 11GB # pg_generate_conf wizard 2010-02-03work_mem = 110MB # pg_generate_conf wizard 2010-02-03 Reduced as per Robert/Greg suggestionswal_buffers = 8MB # pg_generate_conf wizard 2010-02-03\n\ncheckpoint_segments = 16 # pg_generate_conf wizard 2010-02-03shared_buffers = 3840MB # pg_generate_conf wizard 2010-02-03max_connections = 100 # pg_generate_conf wizard 2010-02-03\nHope this works out good in my case.With regardsAmitabh Kant",
"msg_date": "Thu, 4 Feb 2010 15:37:57 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy read\n\tand writes"
},
{
"msg_contents": "On Thu, 4 Feb 2010, Amitabh Kant wrote:\n> On Wed, Feb 3, 2010 at 10:05 PM, Ivan Voras <[email protected]> wrote:\n>> If you can, add another 2 drives in RAID 1 and move+symlink the pg_xlog \n>> directory to the new array.\n>\n> Can't do anything about this server now, but would surely keep in mind\n> before upgrading other servers. Would you recommend the same speed\n> drives(15K SAS) for RAID 1, or would a slower drive also work here (10K SAS\n> or even SATA II)?\n\nThe performance requirements for the WAL are significantly lower than for \nthe main database. This is for two reasons - firstly the WAL is \nwrite-only, and has no other activity. The WAL only gets read again in the \nevent of a crash. Secondly, writes to the WAL are sequential writes, which \nis the fastest mode of operation for a disc, whereas the main database \ndiscs will have to handle random access.\n\nThe main thing you need to make sure of is that the WAL is on a disc \nsystem that has a battery-backed up cache. That way, it will be able to \nhandle the high rate of fsyncs that the WAL generates, and the cache will \nconvert that into a simple sequential write. Otherwise, you will be \nlimited to one fsync every 5ms (or whatever the access speed of your WAL \ndiscs is).\n\nIf you make sure of that, then there is no reason to get expensive fast \ndiscs for the WAL at all (assuming they are expensive enough to not lie \nabout flushing writes properly).\n\nMatthew\n\n-- \nSo, given 'D' is undeclared too, with a default of zero, C++ is equal to D.\n mnw21, commenting on the \"Surely the value of C++ is zero, but C is now 1\"\n response to \"No, C++ isn't equal to D. 'C' is undeclared [...] C++ should\n really be called 1\" response to \"C++ -- shouldn't it be called D?\"\n",
"msg_date": "Thu, 4 Feb 2010 11:19:57 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing Postgresql server and FreeBSD for heavy\n\tread \tand writes"
}
] |
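For anyone wiring up the kernel settings discussed in the thread above, a minimal sanity-check sketch; the byte arithmetic assumes the 3840MB shared_buffers from pgtune and the shmmax/shmall values Ivan quoted, and the query simply reads back what the running server ended up with:

-- shared_buffers 3840 MB = 3840 * 1024 * 1024 = 4026531840 bytes of buffer cache,
-- so kern.ipc.shmmax = 4089446400 (3900 MB) leaves headroom for the rest of the
-- shared memory segment, and kern.ipc.shmall = 1050000 pages * 4 kB (~4100 MB)
-- covers it with room to spare (4089446400 bytes / 4096 = 998400 pages).
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'max_connections', 'wal_buffers');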
[
{
"msg_contents": "foreign key constraint lock behavour :\n\n\nThe referenced FK row would be added some exclusive lock , following is the case:\n\nCREATE TABLE tb_a\n(\n id character varying(255) NOT NULL,\n \"name\" character varying(255),\n b_id character varying(255) NOT NULL,\n CONSTRAINT tb_a_pkey PRIMARY KEY (id),\n CONSTRAINT fk_a_1 FOREIGN KEY (b_id)\n REFERENCES tb_b (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nCREATE TABLE tb_b\n(\n id character varying(255) NOT NULL,\n \"name\" character varying(255),\n CONSTRAINT tb_b_pkey PRIMARY KEY (id)\n)\n\nbefore these two transaction begin ,the tb_b has one rows: {id:\"b1\",name:\"b1\"}\n\n\ntransaction 1:\n\nbegin transaction;\ninsert into tb_a(id,b_id) values('a1','b1');\n\n //block here;\n\nend transaction;\n-----------------\ntransaction 2:\n\nbegin transaction;\n// if transaction 1 first run , then this statement would be lock untill transaction1 complete. \nupdate tb_b set name='changed' where id='b1';\n\nend transction;\n-----------------\n\ntransaction 3:\n\nbegin transaction;\n\ndelete tb_b where id='b1';\n\nend transaction;\n-------------\n\nresult:\nin postgresql8.4 , transaction 2 and transaction 3 would be block until transaction 1 complete. \nin oracle10g , transaction 2 would ne be block ,but transaction 3 would be block . \nin mysql5 with innoDB, same behavour with postgresql5\n\n\nmy analyze:\n\nFor the FK constraints ,this is reasonable , there is this case may happen:\n\nwhen one transaction do insert into tb_a with the fk reference to one row ('b1') on tb_b, \nsimultaneously , another transaction delete the 'b1' row, for avoid this concurrency confliction , then need to lock the 'b1' row. \n\nfrom this point ,I think i can find some magic why mysql take so better performance for bulk update or delete on concurrency transactions .\n\noracle use better level lock to avoid block when do update \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nforeign key constraint lock behavour :\n\n\nThe referenced FK row would be added some exclusive lock , following is the case:\n\nCREATE TABLE tb_a\n(\n id character varying(255) NOT NULL,\n \"name\" character varying(255),\n b_id character varying(255) NOT NULL,\n CONSTRAINT tb_a_pkey PRIMARY KEY (id),\n CONSTRAINT fk_a_1 FOREIGN KEY (b_id)\n REFERENCES tb_b (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\n\nCREATE TABLE tb_b\n(\n id character varying(255) NOT NULL,\n \"name\" character varying(255),\n CONSTRAINT tb_b_pkey PRIMARY KEY (id)\n)\n\nbefore these two transaction begin ,the tb_b has one rows: {id:\"b1\",name:\"b1\"}\n\n\ntransaction 1:\n\nbegin transaction;\ninsert into tb_a(id,b_id) values('a1','b1');\n\n //block here;\n\nend transaction;\n-----------------\ntransaction 2:\n\nbegin transaction;\n// if transaction 1 first run , then this statement would be lock untill transaction1 complete. \nupdate tb_b set name='changed' where id='b1';\n\nend transction;\n-----------------\n\ntransaction 3:\n\nbegin transaction;\n\ndelete tb_b where id='b1';\n\nend transaction;\n-------------\n\nresult:\nin postgresql8.4 , transaction 2 and transaction 3 would be block until transaction 1 complete. \nin oracle10g , transaction 2 would ne be block ,but transaction 3 would be block . 
\nin mysql5 with innoDB, same behavour with postgresql5\n\n\nmy analyze:\n\nFor the FK constraints ,this is reasonable , there is this case may happen:\n\nwhen one transaction do insert into tb_a with the fk reference to one row ('b1') on tb_b, \nsimultaneously , another transaction delete the 'b1' row, for avoid this concurrency confliction , then need to lock the 'b1' row. \n\nfrom this point ,I think i can find some magic why mysql take so better performance for bulk update or delete on concurrency transactions .\n\noracle use better level lock to avoid block when do update",
"msg_date": "Thu, 04 Feb 2010 12:05:33 +0800",
"msg_from": "wangyuxiang <[email protected]>",
"msg_from_op": true,
"msg_subject": "foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "On Thu, 4 Feb 2010, wangyuxiang wrote:\n\n> foreign key constraint lock behavour :\n>\n>\n> The referenced FK row would be added some exclusive lock , following is the case:\n>\n> CREATE TABLE tb_a\n> (\n> id character varying(255) NOT NULL,\n> \"name\" character varying(255),\n> b_id character varying(255) NOT NULL,\n> CONSTRAINT tb_a_pkey PRIMARY KEY (id),\n> CONSTRAINT fk_a_1 FOREIGN KEY (b_id)\n> REFERENCES tb_b (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n>\n> CREATE TABLE tb_b\n> (\n> id character varying(255) NOT NULL,\n> \"name\" character varying(255),\n> CONSTRAINT tb_b_pkey PRIMARY KEY (id)\n> )\n>\n> before these two transaction begin ,the tb_b has one rows: {id:\"b1\",name:\"b1\"}\n>\n>\n> transaction 1:\n>\n> begin transaction;\n> insert into tb_a(id,b_id) values('a1','b1');\n>\n> //block here;\n>\n> end transaction;\n> -----------------\n> transaction 2:\n>\n> begin transaction;\n> // if transaction 1 first run , then this statement would be lock untill transaction1 complete.\n> update tb_b set name='changed' where id='b1';\n>\n> end transction;\n> -----------------\n>\n> transaction 3:\n>\n> begin transaction;\n>\n> delete tb_b where id='b1';\n>\n> end transaction;\n> -------------\n>\n> result:\n> in postgresql8.4 , transaction 2 and transaction 3 would be block until transaction 1 complete.\n> in oracle10g , transaction 2 would ne be block ,but transaction 3 would be block .\n> in mysql5 with innoDB, same behavour with postgresql5\n>\n>\n> my analyze:\n>\n> For the FK constraints ,this is reasonable , there is this case may happen:\n>\n> when one transaction do insert into tb_a with the fk reference to one row ('b1') on tb_b,\n> simultaneously , another transaction delete the 'b1' row, for avoid this concurrency confliction , then need to lock the 'b1' row.\n>\n> from this point ,I think i can find some magic why mysql take so better performance for bulk update or delete on concurrency transactions .\n>\n> oracle use better level lock to avoid block when do update\n\nI could be wrong in this (if so I know I'll be corrected :-)\n\nbut Postgres doesn't need to lock anything for what you are describing.\n\ninstead there will be multiple versions of the 'b1' row, one version will \nbe deleted, one version that will be kept around until the first \ntransaction ends, after which a vaccum pass will remove the data.\n\nDavid Lang\n",
"msg_date": "Wed, 3 Feb 2010 21:40:54 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 12:40 AM, <[email protected]> wrote:\n> I could be wrong in this (if so I know I'll be corrected :-)\n>\n> but Postgres doesn't need to lock anything for what you are describing.\n>\n> instead there will be multiple versions of the 'b1' row, one version will be\n> deleted, one version that will be kept around until the first transaction\n> ends, after which a vaccum pass will remove the data.\n\nJust for kicks I tried this out and the behavior is as the OP\ndescribes: after a little poking around, it sees that the INSERT grabs\na share-lock on the referenced row so that a concurrent update can't\nmodify the referenced column.\n\nIt's not really clear how to get around this. If it were possible to\nlock individual columns within a tuple, then the particular update\nabove could be allowed since only the name is being changed. Does\nanyone know what happens in Oracle if the update targets the id column\nrather than the name column?\n\nAnother possibility is that instead of locking the row, you could\nrecheck that the foreign key constraint still holds at commit time.\nBut that seems like it could potentially be quite expensive.\n\n...Robert\n",
"msg_date": "Thu, 4 Feb 2010 21:11:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "Robert Haas wrote:\n> Just for kicks I tried this out and the behavior is as the OP\n> describes: after a little poking around, it sees that the INSERT grabs\n> a share-lock on the referenced row so that a concurrent update can't\n> modify the referenced column.\n> \n> It's not really clear how to get around this. If it were possible to\n> lock individual columns within a tuple, then the particular update\n> above could be allowed since only the name is being changed. Does\n> anyone know what happens in Oracle if the update targets the id column\n> rather than the name column?\n\nI have investigated what Oracle (10.2) does in this situation.\n\nFirst the original sample as posted by wangyuxiang:\n\ninsert into tb_a(id,b_id) values('a1','b1');\n\nwill place a ROW EXCLUSIVE lock on tb_a, an EXCLUSIVE lock\non the row that was inserted and a ROW SHARE lock on tb_b.\nNo lock on any row in the parent table is taken.\n\nupdate tb_b set name='changed' where id='b1';\n\nwill place a ROW EXCLUSIVE lock on tb_b and an EXCLUSIVE\nlock on the modified column.\n\nSince ROW EXCLUSIVE and ROW SHARE do not conflict, both statements\nwill succeed.\n\n\nNow to your question:\n\nupdate tb_b set id='b2' where id='b1';\n\nThis will place a ROW EXCLUSIVE lock on tb_b, an EXCLUSIVE lock\non the updated row and a SHARE lock on tb_a.\nThis last lock is only held for the duration of the UPDATE statement\nand *not* until the end of the transaction.\n\nSo this update will block, because the SHARE and the ROW EXCLUSIVE\nlock on tb_a are incompatible.\n\n\nSo it seems that Oracle handles this quite differently.\nI was particularly surprised that it uses locks that are not held\nuntil end-of-transaction.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Fri, 5 Feb 2010 10:00:09 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "On Fri, Feb 5, 2010 at 4:00 AM, Albe Laurenz <[email protected]> wrote:\n> Robert Haas wrote:\n>> Just for kicks I tried this out and the behavior is as the OP\n>> describes: after a little poking around, it sees that the INSERT grabs\n>> a share-lock on the referenced row so that a concurrent update can't\n>> modify the referenced column.\n>>\n>> It's not really clear how to get around this. If it were possible to\n>> lock individual columns within a tuple, then the particular update\n>> above could be allowed since only the name is being changed. Does\n>> anyone know what happens in Oracle if the update targets the id column\n>> rather than the name column?\n>\n> I have investigated what Oracle (10.2) does in this situation.\n>\n> First the original sample as posted by wangyuxiang:\n>\n> insert into tb_a(id,b_id) values('a1','b1');\n>\n> will place a ROW EXCLUSIVE lock on tb_a, an EXCLUSIVE lock\n> on the row that was inserted and a ROW SHARE lock on tb_b.\n> No lock on any row in the parent table is taken.\n>\n> update tb_b set name='changed' where id='b1';\n>\n> will place a ROW EXCLUSIVE lock on tb_b and an EXCLUSIVE\n> lock on the modified column.\n>\n> Since ROW EXCLUSIVE and ROW SHARE do not conflict, both statements\n> will succeed.\n>\n>\n> Now to your question:\n>\n> update tb_b set id='b2' where id='b1';\n>\n> This will place a ROW EXCLUSIVE lock on tb_b, an EXCLUSIVE lock\n> on the updated row and a SHARE lock on tb_a.\n> This last lock is only held for the duration of the UPDATE statement\n> and *not* until the end of the transaction.\n>\n> So this update will block, because the SHARE and the ROW EXCLUSIVE\n> lock on tb_a are incompatible.\n>\n>\n> So it seems that Oracle handles this quite differently.\n> I was particularly surprised that it uses locks that are not held\n> until end-of-transaction.\n\nYeah, that seems odd. I assume they know what they're doing; they're\nOracle, after all. It does sound, too, like they have column level\nlocks based on your comment about \"an EXCLUSIVE lock on the modified\ncolumn\". I doubt we're likely to implement such a thing, but who\nknows. Another interesting point is that a statement that involves\nonly tb_b can trigger a share lock on tb_a; presumably that means they\nknow they need to take a share lock on every table that references the\nupdated column, which seems like it could be fairly expensive in the\nworst case.\n\nOne idea that occurs to me is that it might be possible to add to PG\nsome tuple lock modes that are intended to cover updates that don't\ntouch indexed columns. So, say:\n\nSHARED NONINDEX - conflicts only with EXCLUSIVE locks\nSHARED - conflicts with EXCLUSIVE or EXCLUSIVE NONINDEX locks\nEXCLUSIVE NONINDEX - conflicts with any lock except SHARED NONINDEX.\nmust have this level or higher to update tuple.\nEXCLUSIVE - conflicts with any other lock. must have this to update\nany indexed column of a tuple.\n\nThen a foreign key constraint could take a SHARED NONINDEX lock on the\ntarget tuple, because any column that's the target of a foreign key\nmust be indexed; and so we don't care if the nonindexed columns get\nupdated under us. I think. Also, I believe you'd also need to\nduplicate any SHARED NONINDEX locks for any new versions of the tuple\nthat got created while the lock was held, which might be sticky.\n\n...Robert\n",
"msg_date": "Fri, 5 Feb 2010 13:17:01 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "Robert Haas wrote:\n[explanation of how Oracle locks on Updates involving foreign keys]\n> \n> Yeah, that seems odd. I assume they know what they're doing; they're\n> Oracle, after all. It does sound, too, like they have column level\n> locks based on your comment about \"an EXCLUSIVE lock on the modified\n> column\". I doubt we're likely to implement such a thing, but who\n> knows.\n\nSorry, that was a mistake. I meant \"an EXCLUSIVE lock on the modified\nrow\". Oracle works quite like PostgreSQL in locking modified rows.\n\n> Another interesting point is that a statement that involves\n> only tb_b can trigger a share lock on tb_a; presumably that means they\n> know they need to take a share lock on every table that references the\n> updated column, which seems like it could be fairly expensive in the\n> worst case.\n\nYes, that's the only way Oracle's method makes sense, by taking out\na shared lock on every table that references the updated table.\n\nIt may be expensive, but as the example shows, it also allows concurrency\nin a way that PostgreSQL doesn't, so maybe it's worth the pain.\n\nOn the other hand, Oracle has some problems that PostgreSQl doesn't.\nIf you run the following example, assuming the original setup of\nwangyuxiang:\n\nSESSION 2:\n BEGIN;\n UPDATE tb_b SET id='b2' WHERE id='b1';\n\nSESSION 1:\n INSERT INTO tb_a (id,b_id) VALUES ('a1','b1');\n\nSESSION 2:\n UPDATE tb_b SET id='b1' WHERE id='b2';\n COMMIT;\n\nit will succeed just fine on PostgreSQL (with SESSION 1 blocking until\nSESSION 2 COMMITs), but on Oracle it will cause a deadlock aborting\nSESSION 1.\n\nSo, according the the principle of preservation of difficulties, both\nimplementations have their snags, and I wouldn't say that PostgreSQL\nis worse off.\n\n> One idea that occurs to me is that it might be possible to add to PG\n> some tuple lock modes that are intended to cover updates that don't\n> touch indexed columns. So, say:\n> \n> SHARED NONINDEX - conflicts only with EXCLUSIVE locks\n> SHARED - conflicts with EXCLUSIVE or EXCLUSIVE NONINDEX locks\n> EXCLUSIVE NONINDEX - conflicts with any lock except SHARED NONINDEX.\n> must have this level or higher to update tuple.\n> EXCLUSIVE - conflicts with any other lock. must have this to update\n> any indexed column of a tuple.\n> \n> Then a foreign key constraint could take a SHARED NONINDEX lock on the\n> target tuple, because any column that's the target of a foreign key\n> must be indexed; and so we don't care if the nonindexed columns get\n> updated under us. I think. Also, I believe you'd also need to\n> duplicate any SHARED NONINDEX locks for any new versions of the tuple\n> that got created while the lock was held, which might be sticky.\n\nThat should work and improve concurrency in PostgreSQL!\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 8 Feb 2010 09:57:14 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
},
{
"msg_contents": "I wrote:\n> > One idea that occurs to me is that it might be possible to add to PG\n> > some tuple lock modes that are intended to cover updates that don't\n> > touch indexed columns. So, say:\n> > \n> > SHARED NONINDEX - conflicts only with EXCLUSIVE locks\n> > SHARED - conflicts with EXCLUSIVE or EXCLUSIVE NONINDEX locks\n> > EXCLUSIVE NONINDEX - conflicts with any lock except SHARED NONINDEX.\n> > must have this level or higher to update tuple.\n> > EXCLUSIVE - conflicts with any other lock. must have this to update\n> > any indexed column of a tuple.\n> > \n> > Then a foreign key constraint could take a SHARED NONINDEX lock on the\n> > target tuple, because any column that's the target of a foreign key\n> > must be indexed; and so we don't care if the nonindexed columns get\n> > updated under us. I think. Also, I believe you'd also need to\n> > duplicate any SHARED NONINDEX locks for any new versions of the tuple\n> > that got created while the lock was held, which might be sticky.\n> \n> That should work and improve concurrency in PostgreSQL!\n\nEven more if EXCLUSIVE NONINDEX is also used for updates that\nchange indexed columns where the index is not UNIQUE.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Tue, 9 Feb 2010 16:22:50 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: foreign key constraint lock behavour in postgresql"
}
] |
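The blocking described in the thread above is easy to reproduce by hand with the OP's two tables; a sketch of the 8.4 behaviour, where the FK check effectively runs SELECT ... FOR SHARE against the referenced row:

-- session 1: leave the transaction open after the insert
BEGIN;
INSERT INTO tb_a (id, b_id) VALUES ('a1', 'b1');
-- the RI trigger has now share-locked the 'b1' row in tb_b

-- session 2: either statement below waits until session 1 commits or rolls back
UPDATE tb_b SET name = 'changed' WHERE id = 'b1';
DELETE FROM tb_b WHERE id = 'b1';

-- from a third session, while session 2 is waiting: table-level locks in play
SELECT pid, relation::regclass, mode, granted
FROM pg_locks
WHERE relation IN ('tb_a'::regclass, 'tb_b'::regclass);

Oracle, per Laurenz's tests, only blocks when the referenced key column itself is updated, which is the gap Robert's SHARED NONINDEX idea is aimed at.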
[
{
"msg_contents": "Greetings,\n\nI have a column that is a bigint that needs to store integers up to 19\ndigits long. For the most part this works but we sometimes have\nnumbers that are greater than 9223372036854775807.\n\nI was thinking of changing this to a real or double precision field,\nbut read in the docs that the value stored is not always the value\ninserted. From the docs \" Inexact means that some values cannot be\nconverted exactly to the internal format and are stored as\napproximations, so that storing and printing back out a value may show\nslight discrepancies\".\n\nIs it known what level of precision is provided by the double data\ntype. My number will always be 19 digits long and always an integer.\n\nI looked into the numeric data type, but the docs say that it can be slow.\n\n\nAny feedback would be appreciated.\nThanks\nTory\n",
"msg_date": "Thu, 4 Feb 2010 10:15:56 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "bigint integers up to 19 digits."
},
{
"msg_contents": "Tory M Blue wrote:\n> I have a column that is a bigint that needs to store integers up to 19\n> digits long. For the most part this works but we sometimes have\n> numbers that are greater than 9223372036854775807.\n> ...\n> I was thinking of changing this to a real or double precision field,\n> but read in the docs that the value stored is not always the value\n> inserted...\n\nThey're actually less precise than the same size of integer. Real/double datatypes trade more range for less precision in the same number of bytes.\n\n> My number will always be 19 digits long and always an integer.\n> I looked into the numeric data type, but the docs say that it can be slow.\n\nIf it's *always* going to be 19 digits, couldn't you make it a text or char field? You didn't say if this is really a number. Do you do arithmetic with it? Sort it numerically? Or is it just a long identifier that happens to only used digits?\n\nCraig James\n",
"msg_date": "Thu, 04 Feb 2010 10:43:09 -0800",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bigint integers up to 19 digits."
},
{
"msg_contents": "Tory M Blue escribi�:\n\n> I looked into the numeric data type, but the docs say that it can be slow.\n\nIt is slower than values that fit in a single CPU register, sure. Is it\nslow enough that you can't use it? That's a different question. I'd\ngive it a try -- maybe it's not all that slow.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 4 Feb 2010 15:47:58 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bigint integers up to 19 digits."
},
{
"msg_contents": "On Thu, Feb 4, 2010 at 10:43 AM, Craig James <[email protected]> wrote:\n> Tory M Blue wrote:\n>>\n>> I have a column that is a bigint that needs to store integers up to 19\n>> digits long. For the most part this works but we sometimes have\n>> numbers that are greater than 9223372036854775807.\n>> ...\n>> I was thinking of changing this to a real or double precision field,\n>> but read in the docs that the value stored is not always the value\n>> inserted...\n>\n> They're actually less precise than the same size of integer. Real/double\n> datatypes trade more range for less precision in the same number of bytes.\n>\n>> My number will always be 19 digits long and always an integer.\n>> I looked into the numeric data type, but the docs say that it can be slow.\n>\n> If it's *always* going to be 19 digits, couldn't you make it a text or char\n> field? You didn't say if this is really a number. Do you do arithmetic\n> with it? Sort it numerically? Or is it just a long identifier that happens\n> to only used digits?\n\nit is an identifier and is always a number and is used in grouping and\nquerying. I thought I would lose performance if it is text vs an\ninteger/double field.\n\nTory\n",
"msg_date": "Thu, 4 Feb 2010 10:51:37 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bigint integers up to 19 digits."
},
{
"msg_contents": "Thursday, February 4, 2010, 7:51:37 PM you wrote:\n\n> it is an identifier and is always a number and is used in grouping and\n> querying. I thought I would lose performance if it is text vs an\n> integer/double field.\n\nMaybe using 'numeric(19)' instead of bigint is an alternative. I actually\ndon't know how these numbers are stored internally (some kind of BCD, or as\nbase-100?), but IMHO they should be faster than strings, although not as\nfast as 'native' types.\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n",
"msg_date": "Thu, 4 Feb 2010 20:01:19 +0100",
"msg_from": "Jochen Erwied <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bigint integers up to 19 digits."
},
{
"msg_contents": "Jochen Erwied escribi�:\n\n> Maybe using 'numeric(19)' instead of bigint is an alternative. I actually\n> don't know how these numbers are stored internally (some kind of BCD, or as\n> base-100?), but IMHO they should be faster than strings, although not as\n> fast as 'native' types.\n\nbase 10000 in the current implementation\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 4 Feb 2010 17:09:29 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bigint integers up to 19 digits."
}
] |
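Not part of the archived thread above; a minimal, hypothetical sketch of the numeric(19) suggestion so the trade-off is concrete. Table, column, and index names are invented, and whether numeric is fast enough for this workload is exactly the open question in the thread, so treat it as something to benchmark rather than a recommendation.

-- Hypothetical sketch (names invented): a 19-digit identifier stored exactly,
-- with room beyond the bigint ceiling of 9223372036854775807.
CREATE TABLE id_test (
    id_numeric numeric(19),   -- exact, groups and sorts numerically
    id_text    text           -- exact too, but sorts as a string
);

INSERT INTO id_test VALUES
    (9223372036854775808, '9223372036854775808');   -- one past the bigint max

-- Grouping works the same way it did with bigint:
SELECT id_numeric, count(*) FROM id_test GROUP BY id_numeric;

-- And a plain btree index is still possible:
CREATE INDEX id_test_numeric_idx ON id_test (id_numeric);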
[
{
"msg_contents": "Dear Postgres Community,\n\nI'm running postgres 8.3\n\nI have a table, partitioned by month\n\n-- Table: datadump\n\n-- DROP TABLE datadump;\n\nCREATE TABLE datadump\n(\n sys_timestamp timestamp without time zone,\n sys_device_id integer,\n usefields integer,\n timedate timestamp without time zone,\n digitalthermometer1 integer,\n digitalthermometer2 integer,\n digitalthermometer3 integer,\n digitalthermometer4 integer,\n digitalthermometer5 integer,\n digitalthermometer6 integer,\n tco0 integer,\n tco1 integer,\n tco2 integer,\n tco3 integer\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE disk_d;\nALTER TABLE datadump OWNER TO postgres;\nGRANT ALL ON TABLE datadump TO postgres;\n\npartitioned by timedate, example:\n\nCREATE TABLE data_dmp_part_201036\n(\n{inherits from master table}\n CONSTRAINT data_dmp_part_201036_timedate_check CHECK (timedate >= '2010-09-06 00:00:00'::timestamp without time zone AND timedate < '2010-09-13 00:00:00'::timestamp without time zone)\n)\nINHERITS (datadump)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE data_dmp_part_201036 OWNER TO postgres;\n\npartitions are will typically have from 200k to 300k rows, i have 52 partitions per year and I'm keeping around 4-5 years of history. However, they will query last 3-4 months most often.\n\nmy first, pretty obvious choice, was to create index on partitions on timedate:\nCREATE INDEX data_dmp_part_201036_idx\n ON data_dmp_part_201036\n USING btree\n (timedate);\n\n\nMost of my queries will have where conditions on timedate and sys_device_id, but a lot of them will have additional clause: where usefields is not null. Some of the queries will be limited on timedate only.\n\nI'm trying to figure out the best indexing strategy for this table. If a query will have condition on sys_device_id and/or usefields is not null, postgres won't use my index. \nI've experimented turning on and off enable_seqscan and creating different indexes and so far btree index on (usefields, sys_device_id, timedate) turn out to be the best. \nIf I create btree index only on (usefields, timedate) or (sys_device_id, timedate), planner will go for seqscan. If I turn off seqscan, postgres will use index but performance will be worse than seqscan.\n\n\nMy question finally: is btree index on (usefields, sys_device_id, timedate) really the best choice? I'm yet to examine options of creating separate indexes for timedate, usefields and sys_device_id. Possibly I should try using GiST or GIN?\n\nAny advice, please?\n\nRegards,\nfoo\n\n\n",
"msg_date": "Fri, 05 Feb 2010 13:32:37 +0100",
"msg_from": "\"Wojtek\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "index on partitioned table"
},
{
"msg_contents": "2010/2/5 Wojtek <[email protected]>\n>\n> partitions are will typically have from 200k to 300k rows, i have 52\n> partitions per year and I'm keeping around 4-5 years of history. However,\n> they will query last 3-4 months most often.\n>\nDo you mean 12 partitions a year or weekly partitions?\n\n\n> Most of my queries will have where conditions on timedate and\n> sys_device_id, but a lot of them will have additional clause: where\n> usefields is not null. Some of the queries will be limited on timedate only.\n>\n> I'm trying to figure out the best indexing strategy for this table. If a\n> query will have condition on sys_device_id and/or usefields is not null,\n> postgres won't use my index.\n> I've experimented turning on and off enable_seqscan and creating different\n> indexes and so far btree index on (usefields, sys_device_id, timedate) turn\n> out to be the best.\n> If I create btree index only on (usefields, timedate) or (sys_device_id,\n> timedate), planner will go for seqscan. If I turn off seqscan, postgres will\n> use index but performance will be worse than seqscan.\n>\n>\n> My question finally: is btree index on (usefields, sys_device_id, timedate)\n> really the best choice? I'm yet to examine options of creating separate\n> indexes for timedate, usefields and sys_device_id. Possibly I should try\n> using GiST or GIN?\n>\n\nI'd start with no indexes and then add indexes as your queries start to take\ntoo long. I'd start with single column indexes. PostgreSQL is perfectly\ncapable of bitmap anding the indexes if it has to. Multicolumn indexes are\nthe last place I'd go.\n\nI'm not sure you'll need an index on timedate. It depends on the length of\nthe timedate segments you'll be querying. If they are typically a month\nlong then you shouldn't have an index on it at all. Even if they are a week\nlong its probably not worth it.\n\nMy guess is that an index sys_device_id will be selective enough for most of\nwhat you need. What does PostgreSQL tell you about the statistics of that\ncolumn?\n\n\n> Regards,\n> foo\n\n\nRegards,\nbar\n\n2010/2/5 Wojtek <[email protected]>\n\n\npartitions are will typically have from 200k to 300k rows, i have 52 partitions per year and I'm keeping around 4-5 years of history. However, they will query last 3-4 months most often.Do you mean 12 partitions a year or weekly partitions?\n \nMost of my queries will have where conditions on timedate and sys_device_id, but a lot of them will have additional clause: where usefields is not null. Some of the queries will be limited on timedate only.\n\nI'm trying to figure out the best indexing strategy for this table. If a query will have condition on sys_device_id and/or usefields is not null, postgres won't use my index.\nI've experimented turning on and off enable_seqscan and creating different indexes and so far btree index on (usefields, sys_device_id, timedate) turn out to be the best.\nIf I create btree index only on (usefields, timedate) or (sys_device_id, timedate), planner will go for seqscan. If I turn off seqscan, postgres will use index but performance will be worse than seqscan.\n\n\nMy question finally: is btree index on (usefields, sys_device_id, timedate) really the best choice? I'm yet to examine options of creating separate indexes for timedate, usefields and sys_device_id. Possibly I should try using GiST or GIN?\nI'd start with no indexes and then add indexes as your queries start to take too long. I'd start with single column indexes. 
PostgreSQL is perfectly capable of bitmap anding the indexes if it has to. Multicolumn indexes are the last place I'd go.\nI'm not sure you'll need an index on timedate. It depends on the length of the timedate segments you'll be querying. If they are typically a month long then you shouldn't have an index on it at all. Even if they are a week long its probably not worth it.\nMy guess is that an index sys_device_id will be selective enough for most of what you need. What does PostgreSQL tell you about the statistics of that column? \n\n\nRegards,\nfooRegards,bar",
"msg_date": "Fri, 5 Feb 2010 10:08:28 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on partitioned table"
},
{
"msg_contents": "2010/2/5 Wojtek <[email protected]>:\n> Most of my queries will have where conditions on timedate and sys_device_id, but a lot of them will have additional clause: where usefields is not null. Some of the queries will be limited on timedate only.\n\nWhat about a partial index on (timedate) WHERE usefields IS NOT NULL;\nor maybe on (timedate, sys_device_id) WHERE usefields IS NOT NULL?\n\n...Robert\n",
"msg_date": "Fri, 5 Feb 2010 13:23:17 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index on partitioned table"
}
] |
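A minimal sketch (not from the thread) of the partial index Robert suggests, written against the partition definition shown in the original post. The device id and the exact date range in the test query are invented, and whether this beats plain single-column indexes is still something the poster would have to measure.

-- Sketch only: one partial index per partition, matching the common
-- "usefields is not null" clause so the index stays small.
CREATE INDEX data_dmp_part_201036_dev_td_notnull_idx
    ON data_dmp_part_201036 (sys_device_id, timedate)
    WHERE usefields IS NOT NULL;

-- A query shaped like the ones described should be able to use it
-- (device id 42 and the week-long range are made up for illustration):
EXPLAIN ANALYZE
SELECT *
FROM datadump
WHERE timedate >= '2010-09-06' AND timedate < '2010-09-13'
  AND sys_device_id = 42
  AND usefields IS NOT NULL;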
[
{
"msg_contents": "Recently I've made a number of unsubstantiated claims that the deadline \nscheduler on Linux does bad things compared to CFQ when running \nreal-world mixed I/O database tests. Unfortunately every time I do one \nof these I end up unable to release the results due to client \nconfidentiality issues. However, I do keep an eye out for people who \nrun into the same issues in public benchmarks, and I just found one: \nhttp://insights.oetiker.ch/linux/fsopbench/\n\nThe problem analyzed in the \"Deadline considered harmful\" section looks \nexactly like what I run into: deadline just does some bad things when \nthe I/O workload gets complicated. And the conclusion reached there, \n\"the deadline scheduler did not have advantages in any of our test \ncases\", has been my conclusion for every round of pgbench-based testing \nI've done too. In that case, the specific issue is that reads get \nblocked badly when checkpoint writes are doing heavier work; you can see \nthe read I/O numbers reported by \"vmstat 1\" go completely to zero for a \nsecond or more when it happens. That can happen with CFQ, too, but it \nconsistently seems more likely to occur with deadline.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 08 Feb 2010 04:45:10 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Greg Smith wrote:\n> Recently I've made a number of unsubstantiated claims that the deadline \n> scheduler on Linux does bad things compared to CFQ when running \n> real-world mixed I/O database tests. Unfortunately every time I do one \n> of these I end up unable to release the results due to client \n> confidentiality issues. However, I do keep an eye out for people who \n> run into the same issues in public benchmarks, and I just found one: \n> http://insights.oetiker.ch/linux/fsopbench/\n\nThat is interesting; particularly since I have made one quite different\nexperience in which deadline outperformed CFQ by a factor of approximately 4.\n\nSo I tried to look for differences, and I found two possible places:\n- My test case was read-only, our production system is read-mostly.\n- We did not have a RAID array, but a SAN box (with RAID inside).\n\nThe \"noop\" scheduler performed about as well as \"deadline\".\nI wonder if the two differences above could explain the different\nresult.\n\nYours,\nLaurenz Albe\n",
"msg_date": "Mon, 8 Feb 2010 15:57:25 +0100",
"msg_from": "\"Albe Laurenz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\"Albe Laurenz\" <[email protected]> wrote:\n> Greg Smith wrote:\n \n>> http://insights.oetiker.ch/linux/fsopbench/\n> \n> That is interesting; particularly since I have made one quite\n> different experience in which deadline outperformed CFQ by a\n> factor of approximately 4.\n \nI haven't benchmarked it per se, but when we started using\nPostgreSQL on Linux, the benchmarks and posts I could find\nrecommended deadline=elevator, so we went with that, and when the\nsetting was missed on a machine it was generally found fairly\nquickly because people complained that the machine wasn't performing\nto expectations; changing this to deadline corrected the problem.\n \n> So I tried to look for differences, and I found two possible\n> places:\n> - My test case was read-only, our production system is\n> read-mostly.\n \nYeah, our reads are typically several times our writes -- up to\nmaybe 10 to 1.\n \n> - We did not have a RAID array, but a SAN box (with RAID inside).\n \nNo SAN here, but if I recall correctly, this was mostly an issue on\nour larger arrays -- RAID 5 with dozens of spindles on a BBU\nhardware controller.\n \nOther differences between our environment and that of the benchmarks\ncited above:\n \n - We use SuSE Linux Enterprise Server, so we've been on *much*\n earlier kernel versions that this benchmark.\n \n - We've been using xfs, with noatime,nobarrier.\n \nI'll keep this in mind as something to try if we have problem\nperformance in line with what that page describes, though....\n \n-Kevin\n",
"msg_date": "Mon, 08 Feb 2010 09:24:56 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Kevin Grittner wrote:\n> I'll keep this in mind as something to try if we have problem\n> performance in line with what that page describes, though....\n> \n\nThat's basically what I've been trying to make clear all along: people \nshould keep an open mind, watch what happens, and not make any \nassumptions. There's no clear cut preference for one scheduler or the \nother in all situations. I've seen CFQ do much better, you and Albe \nreport situations where the opposite is true. I was just happy to see \nanother report of someone running into the same sort of issue I've been \nseeing, because I didn't have very much data to offer about why the \nstandard advice of \"always use deadline for a database app\" might not \napply to everyone.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 08 Feb 2010 10:30:13 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\n> That's basically what I've been trying to make clear all along: people\n> should keep an open mind, watch what happens, and not make any\n> assumptions. There's no clear cut preference for one scheduler or the\n> other in all situations. I've seen CFQ do much better, you and Albe\n> report situations where the opposite is true. I was just happy to see\n> another report of someone running into the same sort of issue I've been\n> seeing, because I didn't have very much data to offer about why the\n> standard advice of \"always use deadline for a database app\" might not\n> apply to everyone.\n\nDamn, you would have to make things complicated, eh?\n\nFWIW, back when deadline was first introduced Mark Wong did some tests\nand found Deadline to be the fastest of 4 on DBT2 ... but only by about\n5%. If the read vs. checkpoint analysis is correct, what was happening\nis the penalty for checkpoints on deadline was almost wiping out the\nadvantage for reads, but not quite.\n\nThose tests were also done on attached storage.\n\nSo, what this suggests is:\nreads: deadline > CFQ\nwrites: CFQ > deadline\nattached storage: deadline > CFQ\n\nMan, we'd need a lot of testing to settle this. I guess that's why\nLinux gives us the choice of 4 ...\n\n--Josh Berkus\n",
"msg_date": "Mon, 08 Feb 2010 09:49:20 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Mon, Feb 8, 2010 at 10:49 AM, Josh Berkus <[email protected]> wrote:\n>\n>> That's basically what I've been trying to make clear all along: people\n>> should keep an open mind, watch what happens, and not make any\n>> assumptions. There's no clear cut preference for one scheduler or the\n>> other in all situations. I've seen CFQ do much better, you and Albe\n>> report situations where the opposite is true. I was just happy to see\n>> another report of someone running into the same sort of issue I've been\n>> seeing, because I didn't have very much data to offer about why the\n>> standard advice of \"always use deadline for a database app\" might not\n>> apply to everyone.\n>\n> Damn, you would have to make things complicated, eh?\n>\n> FWIW, back when deadline was first introduced Mark Wong did some tests\n> and found Deadline to be the fastest of 4 on DBT2 ... but only by about\n> 5%. If the read vs. checkpoint analysis is correct, what was happening\n> is the penalty for checkpoints on deadline was almost wiping out the\n> advantage for reads, but not quite.\n>\n> Those tests were also done on attached storage.\n>\n> So, what this suggests is:\n> reads: deadline > CFQ\n> writes: CFQ > deadline\n> attached storage: deadline > CFQ\n>\n> Man, we'd need a lot of testing to settle this. I guess that's why\n> Linux gives us the choice of 4 ...\n\nJust to add to the data points. On an 8 core opteron Areca 1680 and a\n12 disk RAID-10 for data and 2 disk RAID-1 for WAL, I get noticeably\nbetter performance (approximately 15%) and lower load factors (they\ndrop from about 8 to 5 or 6) running noop over the default scheduler,\nwith RHEL 5.4 with the 2.6.18-92.el5 kernel from RHEL 5.2.\n",
"msg_date": "Mon, 8 Feb 2010 11:59:09 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Mon, Feb 8, 2010 at 9:49 AM, Josh Berkus <[email protected]> wrote:\n>\n>> That's basically what I've been trying to make clear all along: people\n>> should keep an open mind, watch what happens, and not make any\n>> assumptions. There's no clear cut preference for one scheduler or the\n>> other in all situations. I've seen CFQ do much better, you and Albe\n>> report situations where the opposite is true. I was just happy to see\n>> another report of someone running into the same sort of issue I've been\n>> seeing, because I didn't have very much data to offer about why the\n>> standard advice of \"always use deadline for a database app\" might not\n>> apply to everyone.\n>\n> Damn, you would have to make things complicated, eh?\n>\n> FWIW, back when deadline was first introduced Mark Wong did some tests\n> and found Deadline to be the fastest of 4 on DBT2 ... but only by about\n> 5%. If the read vs. checkpoint analysis is correct, what was happening\n> is the penalty for checkpoints on deadline was almost wiping out the\n> advantage for reads, but not quite.\n>\n> Those tests were also done on attached storage.\n>\n> So, what this suggests is:\n> reads: deadline > CFQ\n> writes: CFQ > deadline\n> attached storage: deadline > CFQ\n>\n> Man, we'd need a lot of testing to settle this. I guess that's why\n> Linux gives us the choice of 4 ...\n\nI wonder what the impact is from the underlying RAID configuration.\nThose DBT2 tests were also LVM striped volumes on top of single RAID0\nLUNS (no jbod option).\n\nRegards.\nMark\n",
"msg_date": "Mon, 8 Feb 2010 17:14:03 -0800",
"msg_from": "Mark Wong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\nOn Feb 8, 2010, at 9:49 AM, Josh Berkus wrote:\n\n> \n> Those tests were also done on attached storage.\n> \n> So, what this suggests is:\n> reads: deadline > CFQ\n> writes: CFQ > deadline\n> attached storage: deadline > CFQ\n> \n\n From my experience on reads:\nLarge sequential scans mixed with concurrent random reads behave very differently between the two schedulers.\nDeadline has _significantly_ higher throughput in this situation, but the random read latency is higher. CFQ will starve the sequential scan in favor of letting each concurrent read get some time. If your app is very latency sensitive on reads, that is good. If you need max throughput, getting the sequential scan out of the way instead of breaking it up into lots of small chunks is critical.\n\nI think it is this behavior that causes the delays on writes -- from the scheduler's point of view, a large set of writes is usually somewhat sequential and deadline favors throughput over latency.\n\nGenerally, my writes are large bulk writes, and I am not very latency sensitive but am very throughput sensitive. So deadline helps a great deal (combined with decently sized readahead). Other use cases will clearly have different preferences.\n\nMy experience with scheduler performace tuning is on CentOS 5.3 and 5.4. With the changes to much of the I/O layer in the latest kernels, I would not be surprised if things have changed. \n\n\n> Man, we'd need a lot of testing to settle this. I guess that's why\n> Linux gives us the choice of 4 ...\n> \n> --Josh Berkus\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Mon, 8 Feb 2010 19:50:06 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Josh Berkus wrote:\n> FWIW, back when deadline was first introduced Mark Wong did some tests\n> and found Deadline to be the fastest of 4 on DBT2 ... but only by about\n> 5%. If the read vs. checkpoint analysis is correct, what was happening\n> is the penalty for checkpoints on deadline was almost wiping out the\n> advantage for reads, but not quite.\n> \n\nWasn't that before 8.3, where the whole checkpoint spreading logic \nshowed up? That's really a whole different write pattern now than it \nwas then. 8.2 checkpoint writes were one big batch write amenable to \noptimizing for throughput. The new ones are not; the I/O is intermixed \nwith reads most of the time.\n\n> Man, we'd need a lot of testing to settle this. I guess that's why\n> Linux gives us the choice of 4 ...\n> \n\nA recent on of these I worked on started with 4096 possible I/O \nconfigurations we pruned down the most likely good candidates from. I'm \nalmost ready to schedule a week on Mark's HP performance test system in \nthe lab now, to try and nail this down in a fully public environment for \nonce.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 08 Feb 2010 23:14:15 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Hannu Krosing wrote:\n> Have you kept trace of what filesystems are in use ?\n> \n\nAlmost everything I do on Linux has been with ext3. I had a previous \ndiversion into VxFS and an upcoming one into XFS that may shed more \nlight on all this.\n\nAnd, yes, the whole I/O scheduling approach in Linux was just completely \nredesigned for a very recent kernel update. So even what we think we \nknow is already obsolete in some respects.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Mon, 08 Feb 2010 23:16:13 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Mon, 8 Feb 2010, Greg Smith wrote:\n\n> Hannu Krosing wrote:\n>> Have you kept trace of what filesystems are in use ?\n>> \n>\n> Almost everything I do on Linux has been with ext3. I had a previous \n> diversion into VxFS and an upcoming one into XFS that may shed more light on \n> all this.\n\nit would be nice if you could try ext4 when doing your tests.\n\nIt's new enough that I won't trust it for production data yet, but a lot \nof people are jumping on it as if it was just a minor update to ext3 \ninstead of an almost entirely new filesystem.\n\nDavid Lang\n\n> And, yes, the whole I/O scheduling approach in Linux was just completely \n> redesigned for a very recent kernel update. So even what we think we know is \n> already obsolete in some respects.\n>\n>\n",
"msg_date": "Mon, 8 Feb 2010 20:35:05 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\nOn Feb 8, 2010, at 11:35 PM, [email protected] wrote:\n>\n>> And, yes, the whole I/O scheduling approach in Linux was just \n>> completely redesigned for a very recent kernel update. So even \n>> what we think we know is already obsolete in some respects.\n>>\n\nI'd done some testing a while ago on the schedulers and at the time \ndeadline or noop smashed cfq. Now, it is 100% possible since then \nthat they've made vast improvements to cfq and or the VM to get better \nor similar performance. I recall a vintage of 2.6 where they severely \nmessed up the VM. Glad I didn't upgrade to that one :)\n\nHere's the old post: http://archives.postgresql.org/pgsql-performance/2008-04/msg00155.php\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Tue, 09 Feb 2010 14:14:11 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Jeff wrote:\n> I'd done some testing a while ago on the schedulers and at the time \n> deadline or noop smashed cfq. Now, it is 100% possible since then \n> that they've made vast improvements to cfq and or the VM to get better \n> or similar performance. I recall a vintage of 2.6 where they severely \n> messed up the VM. Glad I didn't upgrade to that one :)\n>\n> Here's the old post: \n> http://archives.postgresql.org/pgsql-performance/2008-04/msg00155.php\n\npgiosim doesn't really mix writes into there though, does it? The mixed \nread/write situations are the ones where the scheduler stuff gets messy.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 10 Feb 2010 01:37:04 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Tue, Feb 9, 2010 at 11:37 PM, Greg Smith <[email protected]> wrote:\n> Jeff wrote:\n>>\n>> I'd done some testing a while ago on the schedulers and at the time\n>> deadline or noop smashed cfq. Now, it is 100% possible since then that\n>> they've made vast improvements to cfq and or the VM to get better or similar\n>> performance. I recall a vintage of 2.6 where they severely messed up the\n>> VM. Glad I didn't upgrade to that one :)\n>>\n>> Here's the old post:\n>> http://archives.postgresql.org/pgsql-performance/2008-04/msg00155.php\n>\n> pgiosim doesn't really mix writes into there though, does it? The mixed\n> read/write situations are the ones where the scheduler stuff gets messy.\n\nI agree. I think the only way to really test it is by testing it\nagainst the system it's got to run under. I'd love to see someone do\na comparison of early to mid 2.6 kernels (2.6.18 like RHEL5) to very\nup to date 2.6 kernels. On fast hardware. What it does on a laptop\nisn't that interesting and I don't have a big machine idle to test it\non.\n",
"msg_date": "Wed, 10 Feb 2010 00:11:30 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\nOn Feb 10, 2010, at 1:37 AM, Greg Smith wrote:\n\n> Jeff wrote:\n>> I'd done some testing a while ago on the schedulers and at the time \n>> deadline or noop smashed cfq. Now, it is 100% possible since then \n>> that they've made vast improvements to cfq and or the VM to get \n>> better or similar performance. I recall a vintage of 2.6 where \n>> they severely messed up the VM. Glad I didn't upgrade to that one :)\n>>\n>> Here's the old post: http://archives.postgresql.org/pgsql-performance/2008-04/msg00155.php\n>\n> pgiosim doesn't really mix writes into there though, does it? The \n> mixed read/write situations are the ones where the scheduler stuff \n> gets messy.\n>\n\nIt has the abillity to rewrite blocks randomly as well - but I \nhonestly don't remember if I did that during my cfq/deadline test. \nI'd wager I didn't. Maybe I'll get some time to run some more tests \non it in the next couple days\n\n> -- \n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n>\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Wed, 10 Feb 2010 12:22:57 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "\nOn Feb 9, 2010, at 10:37 PM, Greg Smith wrote:\n\n> Jeff wrote:\n>> I'd done some testing a while ago on the schedulers and at the time \n>> deadline or noop smashed cfq. Now, it is 100% possible since then \n>> that they've made vast improvements to cfq and or the VM to get better \n>> or similar performance. I recall a vintage of 2.6 where they severely \n>> messed up the VM. Glad I didn't upgrade to that one :)\n>> \n>> Here's the old post: \n>> http://archives.postgresql.org/pgsql-performance/2008-04/msg00155.php\n> \n> pgiosim doesn't really mix writes into there though, does it? The mixed \n> read/write situations are the ones where the scheduler stuff gets messy.\n> \n\nAlso, read/write mix performance depend on the file system not just the scheduler.\nThe block device readahead parameter can have a big impact too.\n\nIf you test xfs, make sure you configure the 'allocsize' mount parameter properly as well. If there are any sequential reads or writes in there mixed with other reads/writes, that can have a big impact on how fragmented the filesystem gets.\n\nExt3 has several characteristics for writes that might favor cfq that other file systems do not. Features like delayed allocation, extents, and write barriers significantly change the pattern of writes seen by the I/O scheduler.\n\nIn short, one scheduler may be best for one filesystem, but not a good idea for others.\n\nAnd then on top of that, it all depends on what type of DB you're running. Lots of small fast mostly read queries? Large number of small writes? Large bulk writes? Large reporting queries? Different configurations and tuning is required to maximize performance on each.\n\nThere is no single rule for Postgres on Linux that I can think of other than \"never have ext3 in 'ordered' or 'journal' mode for your WAL on the same filesystem as your data\".\n\n> -- \n> Greg Smith 2ndQuadrant Baltimore, MD\n> PostgreSQL Training, Services and Support\n> [email protected] www.2ndQuadrant.com\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 10 Feb 2010 10:04:38 -0800",
"msg_from": "Scott Carey <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "Scott Marlowe wrote:\n> I'd love to see someone do a comparison of early to mid 2.6 kernels (2.6.18 like RHEL5) to very\n> up to date 2.6 kernels. On fast hardware.\n\nI'd be happy just to find fast hardware that works on every kernel from \nthe RHEL5 2.6.18 up to the latest one without issues.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 10 Feb 2010 20:46:03 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Wed, 10 Feb 2010, Greg Smith wrote:\n\n> Scott Marlowe wrote:\n>> I'd love to see someone do a comparison of early to mid 2.6 kernels (2.6.18 \n>> like RHEL5) to very\n>> up to date 2.6 kernels. On fast hardware.\n>\n> I'd be happy just to find fast hardware that works on every kernel from the \n> RHEL5 2.6.18 up to the latest one without issues.\n\nit depends on your definition of 'fast hardware'\n\nI have boxes that were very fast at the time that work on all these \nkernels, but they wouldn't be considered fast by todays's standards.\n\nremember that there is a point release about every 3 months, 2.6.33 is \nabout to be released, so this is a 3 x (33-18) = ~45 month old kernel.\n\nhardware progresses a LOT on 4 years.\n\nmost of my new hardware has no problems with the old kernels as well, but \nonce in a while I run into something that doesn't work.\n\nDavid Lang\n",
"msg_date": "Wed, 10 Feb 2010 17:52:08 -0800 (PST)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "[email protected] wrote:\n> most of my new hardware has no problems with the old kernels as well, \n> but once in a while I run into something that doesn't work.\n\nQuick survey just of what's within 20 feet of me:\n-Primary desktop: 2 years old, requires 2.6.23 or later for SATA to work\n-Server: 3 years old, requires 2.6.22 or later for the Areca card not \nto panic under load\n-Laptops: both about 2 years old, and require 2.6.28 to work at all; \nmostly wireless issues, but some power management ones that impact the \nprocessor working right too, occasional SATA ones too.\n\nI'm looking into a new primary desktop to step up to 8 HT cores; I fully \nexpect it won't boot anything older than 2.6.28 and may take an even \nnewer kernel just for basic processor and disks parts to work.\n\nWe're kind of at a worst-case point right now for this sort of thing, on \nthe tail side of the almost 3 year old RHEL5 using a 3.5 year old kernel \nas the standard for so many Linux server deployments. Until RHEL6 is \nready to go, there's little motivation for the people who make server \nhardware to get all their drivers perfect in the newer kernels. Just \nafter that ships will probably be a good time to do that sort of \ncomparison, like it was possible to easily compare RHEL4 using 2.6.9 and \nRHEL5 with 2.6.18 easily in mid to late 2007 with many bits of \nhigh-performance hardware known to work well on each.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 10 Feb 2010 21:09:22 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
},
{
"msg_contents": "On Mon, 2010-02-08 at 09:49 -0800, Josh Berkus wrote:\n> FWIW, back when deadline was first introduced Mark Wong did some tests\n> and found Deadline to be the fastest of 4 on DBT2 ... but only by about\n> 5%. If the read vs. checkpoint analysis is correct, what was happening\n> is the penalty for checkpoints on deadline was almost wiping out the\n> advantage for reads, but not quite.\n\nI also did some tests when I was putting together my Synchronized Scan\nbenchmarks:\n\nhttp://j-davis.com/postgresql/83v82_scans.html\n\nCFQ was so slow that I didn't include it in the results at all.\n\nThe tests weren't intended to compare schedulers, so I did most of the\ntests with anticipatory (at least the ones on linux; I also tested\nfreebsd). However, I have some raw data from the tests I did run with\nCFQ:\n\nhttp://j-davis.com/postgresql/results/\n\nThey will take some interpretation (again, not intended as scheduler\nbenchmarks). The server was modified to record a log message every N\npage accesses.\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Tue, 16 Feb 2010 13:34:11 -0800",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Linux I/O tuning: CFQ vs. deadline"
}
] |
[
{
"msg_contents": "I have created a index\ncreate index leadaddress_phone_idx on\nleadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n'g'::text));\n\nBut the index is not using.\n\nexplain select * from leadaddress where\nregexp_replace(phone,'[^0-9]*','','g') like '%2159438606';\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------\n Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97)\n Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n'g'::text) ~~ '%2159438606'::text)\n\nCould anyone please tell me why? I analyzed the table after index creation.\n\nI have created a index\ncreate index leadaddress_phone_idx on leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text, 'g'::text));\n \nBut the index is not using.\n \nexplain select * from leadaddress where regexp_replace(phone,'[^0-9]*','','g') like '%2159438606'; QUERY PLAN \n-------------------------------------------------------------------------------------------------------- Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97) Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text, 'g'::text) ~~ '%2159438606'::text)\n \nCould anyone please tell me why? I analyzed the table after index creation.",
"msg_date": "Tue, 9 Feb 2010 13:43:56 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "index is not using"
},
{
"msg_contents": "Le 09/02/2010 08:43, AI Rumman a �crit :\n> I have created a index\n> create index leadaddress_phone_idx on\n> leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n> 'g'::text));\n> \n> But the index is not using.\n> \n> explain select * from leadaddress where\n> regexp_replace(phone,'[^0-9]*','','g') like '%2159438606';\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------\n> Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97)\n> Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n> 'g'::text) ~~ '%2159438606'::text)\n> \n> Could anyone please tell me why? I analyzed the table after index creation.\n> \n\nThe index cannot be used if the filter is '%something' or\n'%somethingelse%'. I can only be used for 'this%'.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n",
"msg_date": "Tue, 09 Feb 2010 10:34:04 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index is not using"
},
{
"msg_contents": "I just answered this less than an hour ago... And please don't cross\npost to multiple mailing lists.\n\nOn Tue, Feb 9, 2010 at 12:43 AM, AI Rumman <[email protected]> wrote:\n> I have created a index\n> create index leadaddress_phone_idx on\n> leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n> 'g'::text));\n>\n> But the index is not using.\n>\n> explain select * from leadaddress where\n> regexp_replace(phone,'[^0-9]*','','g') like '%2159438606';\n> QUERY\n> PLAN\n> --------------------------------------------------------------------------------------------------------\n> Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97)\n> Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n> 'g'::text) ~~ '%2159438606'::text)\n>\n> Could anyone please tell me why? I analyzed the table after index creation.\n>\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n",
"msg_date": "Tue, 9 Feb 2010 02:43:58 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index is not using"
}
] |
[
{
"msg_contents": "I have created a index\ncreate index leadaddress_phone_idx on\nleadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n'g'::text));\n\nBut the index is not using.\n\nexplain select * from leadaddress where\nregexp_replace(phone,'[^0-9]*','','g') like '%2159438606';\n QUERY\nPLAN\n--------------------------------------------------------------------------------------------------------\n Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97)\n Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n'g'::text) ~~ '%2159438606'::text)\n\nCould anyone please tell me why? I analyzed the table after index creation.\n\nI have created a indexcreate index leadaddress_phone_idx on leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text, 'g'::text)); But the index is not using. explain select * from leadaddress where regexp_replace(phone,'[^0-9]*','','g') like '%2159438606';\n QUERY PLAN -------------------------------------------------------------------------------------------------------- Seq Scan on leadaddress (cost=100000000.00..100009699.81 rows=1 width=97)\n Filter: (regexp_replace((phone)::text, '[^0-9]*'::text, ''::text, 'g'::text) ~~ '%2159438606'::text) Could anyone please tell me why? I analyzed the table after index creation.",
"msg_date": "Tue, 9 Feb 2010 13:55:49 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "index is not using?"
},
{
"msg_contents": "On Tue, Feb 9, 2010 at 12:55 AM, AI Rumman <[email protected]> wrote:\n> I have created a index\n> create index leadaddress_phone_idx on\n> leadaddress(regexp_replace((phone)::text, '[^0-9]*'::text, ''::text,\n> 'g'::text));\n>\n> But the index is not using.\n\nlike '%yada'\n\nisn't capable of using an index. If it's left anchored:\n\nlike 'yada%'\n\nit can.\n",
"msg_date": "Tue, 9 Feb 2010 01:33:13 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index is not using?"
}
] |
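Both copies of this question are really asking for a suffix match ('%2159438606'), so here is a hedged sketch of one common workaround: index the reversed, digits-only phone number so the search becomes a left-anchored prefix that an index can serve. Note the assumptions: reverse() is only built in from PostgreSQL 9.1, so on the 8.x release in this thread you would have to supply an immutable reverse() yourself, and text_pattern_ops is what keeps a prefix LIKE indexable in non-C locales. This is an illustration, not a drop-in fix.

-- Sketch only, assuming an immutable reverse() is available:
CREATE INDEX leadaddress_phone_rev_idx
    ON leadaddress (reverse(regexp_replace((phone)::text, '[^0-9]*', '', 'g'))
                    text_pattern_ops);

-- '%2159438606' becomes a prefix search on the reversed value,
-- which the expression index above can satisfy:
SELECT *
FROM leadaddress
WHERE reverse(regexp_replace((phone)::text, '[^0-9]*', '', 'g'))
      LIKE reverse('2159438606') || '%';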
[
{
"msg_contents": "Due to an error during an update to the system kernel on a database\nserver backing a web application, we ran for a month (mid-December\nto mid-January) with the WAL files in the pg_xlog subdirectory on\nthe same 40-spindle array as the data -- only the OS was on a\nseparate mirrored pair of drives. When we found the problem, we\nmoved the pg_xlog directory back to its own mirrored pair of drives\nand symlinked to it. It's a pretty dramatic difference.\n \nGraph attached, this is response time for a URL request (like what a\nuser from the Internet would issue) which runs 15 queries and\nformats the results. Response time is 85 ms without putting WAL on\nits own RAID; 50 ms with it on its own RAID. This is a real-world,\nproduction web site with 1.3 TB data, millions of hits per day, and\nit's an active replication target 24/7.\n \nFrankly, I was quite surprised by this, since some of the benchmarks\npeople have published on the effects of using a separate RAID for\nthe WAL files have only shown a one or two percent difference when\nusing a hardware RAID controller with BBU cache configured for\nwrite-back.\n \nNo assistance needed; just posting a performance data point.\n \n-Kevin",
"msg_date": "Tue, 09 Feb 2010 10:14:38 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": ">\n> Frankly, I was quite surprised by this, since some of the benchmarks\n> people have published on the effects of using a separate RAID for\n> the WAL files have only shown a one or two percent difference when\n> using a hardware RAID controller with BBU cache configured for\n> write-back.\n\nHi Kevin.\n\nNice report, but just a few questions.\n\nSorry if it is obvious.. but what filesystem/OS are you using and do you\n have BBU-writeback on the main data catalog also?\n\nJesper\n\n",
"msg_date": "Tue, 09 Feb 2010 17:22:51 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Jesper Krogh <[email protected]> wrote:\n \n> Sorry if it is obvious.. but what filesystem/OS are you using and\n> do you have BBU-writeback on the main data catalog also?\n \nSorry for not providing more context.\n \nATHENA:/var/pgsql/data # uname -a\nLinux ATHENA 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC\n2009 x86_64 x86_64 x86_64 GNU/Linux\nATHENA:/var/pgsql/data # cat /etc/SuSE-release\nSUSE Linux Enterprise Server 10 (x86_64)\nVERSION = 10\nPATCHLEVEL = 2\n \nFile system is xfs noatime,nobarrier for all data; OS is on ext3. I\n*think* the pg_xlog mirrored pair is hanging off the same\nBBU-writeback controller as the big RAID, but I'd have to track down\nthe hardware tech to confirm, and he's out today. System has 16\nXeon CPUs and 64 GB RAM.\n \n-Kevin\n",
"msg_date": "Tue, 09 Feb 2010 10:33:01 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "On Tue, Feb 9, 2010 at 10:03 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Jesper Krogh <[email protected]> wrote:\n> File system is xfs noatime,nobarrier for all data; OS is on ext3. I\n> *think* the pg_xlog mirrored pair is hanging off the same\n> BBU-writeback controller as the big RAID, but I'd have to track down\n> the hardware tech to confirm, and he's out today. System has 16\n> Xeon CPUs and 64 GB RAM.\n>\n> -Kevin\n>\n>\nHi Kevin\n\nJust curious if you have a 16 physical CPU's or 16 cores on 4 CPU/8 cores\nover 2 CPU with HT.\n\nWith regards\n\nAmitabh Kant\n\nOn Tue, Feb 9, 2010 at 10:03 PM, Kevin Grittner <[email protected]> wrote:\nJesper Krogh <[email protected]> wrote:\nFile system is xfs noatime,nobarrier for all data; OS is on ext3. I\n*think* the pg_xlog mirrored pair is hanging off the same\nBBU-writeback controller as the big RAID, but I'd have to track down\nthe hardware tech to confirm, and he's out today. System has 16\nXeon CPUs and 64 GB RAM.\n\n-Kevin\nHi KevinJust curious if you have a 16 physical CPU's or 16 cores on 4 CPU/8 cores over 2 CPU with HT. With regards\n\nAmitabh Kant",
"msg_date": "Tue, 9 Feb 2010 23:07:21 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Amitabh Kant <[email protected]> wrote:\n \n> Just curious if you have a 16 physical CPU's or 16 cores on 4\n> CPU/8 cores over 2 CPU with HT.\n \nFour quad core CPUs:\n \nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 15\nmodel name : Intel(R) Xeon(R) CPU X7350 @ 2.93GHz\nstepping : 11\ncpu MHz : 2931.978\ncache size : 4096 KB\nphysical id : 3\nsiblings : 4\ncore id : 0\ncpu cores : 4\n \n-Kevin\n",
"msg_date": "Tue, 09 Feb 2010 11:44:42 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Jesper Krogh <[email protected]> wrote:\n> \n> > Sorry if it is obvious.. but what filesystem/OS are you using and\n> > do you have BBU-writeback on the main data catalog also?\n> \n> Sorry for not providing more context.\n> \n> ATHENA:/var/pgsql/data # uname -a\n> Linux ATHENA 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC\n> 2009 x86_64 x86_64 x86_64 GNU/Linux\n> ATHENA:/var/pgsql/data # cat /etc/SuSE-release\n> SUSE Linux Enterprise Server 10 (x86_64)\n> VERSION = 10\n> PATCHLEVEL = 2\n> \n> File system is xfs noatime,nobarrier for all data; OS is on ext3. I\n> *think* the pg_xlog mirrored pair is hanging off the same\n> BBU-writeback controller as the big RAID, but I'd have to track down\n> the hardware tech to confirm, and he's out today. System has 16\n> Xeon CPUs and 64 GB RAM.\n\nI would be surprised if the RAID controller had a BBU-writeback cache. \nI don't think having xlog share a BBU-writeback makes things slower, and\nif it does, I would love for someone to explain why.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Thu, 11 Feb 2010 06:29:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "On Thu, Feb 11, 2010 at 4:29 AM, Bruce Momjian <[email protected]> wrote:\n> Kevin Grittner wrote:\n>> Jesper Krogh <[email protected]> wrote:\n>>\n>> > Sorry if it is obvious.. but what filesystem/OS are you using and\n>> > do you have BBU-writeback on the main data catalog also?\n>>\n>> Sorry for not providing more context.\n>>\n>> ATHENA:/var/pgsql/data # uname -a\n>> Linux ATHENA 2.6.16.60-0.39.3-smp #1 SMP Mon May 11 11:46:34 UTC\n>> 2009 x86_64 x86_64 x86_64 GNU/Linux\n>> ATHENA:/var/pgsql/data # cat /etc/SuSE-release\n>> SUSE Linux Enterprise Server 10 (x86_64)\n>> VERSION = 10\n>> PATCHLEVEL = 2\n>>\n>> File system is xfs noatime,nobarrier for all data; OS is on ext3. I\n>> *think* the pg_xlog mirrored pair is hanging off the same\n>> BBU-writeback controller as the big RAID, but I'd have to track down\n>> the hardware tech to confirm, and he's out today. System has 16\n>> Xeon CPUs and 64 GB RAM.\n>\n> I would be surprised if the RAID controller had a BBU-writeback cache.\n> I don't think having xlog share a BBU-writeback makes things slower, and\n> if it does, I would love for someone to explain why.\n\nI believe in the past when this discussion showed up it was mainly due\nto them being on the same file system (and then not with pg_xlog\nseparate) that made the biggest difference. I recall there being a\nnoticeable performance gain from having two file systems on the same\nlogical RAID device even.\n",
"msg_date": "Thu, 11 Feb 2010 09:12:08 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "\"Kevin Grittner\" <[email protected]> wrote:\n> Jesper Krogh <[email protected]> wrote:\n> \n>> Sorry if it is obvious.. but what filesystem/OS are you using and\n>> do you have BBU-writeback on the main data catalog also?\n \n> File system is xfs noatime,nobarrier for all data; OS is on ext3. \n> I *think* the pg_xlog mirrored pair is hanging off the same\n> BBU-writeback controller as the big RAID, but I'd have to track\n> down the hardware tech to confirm\n \nAnother example of why I shouldn't trust my memory. Per the\nhardware tech:\n \n \nOS: /dev/sda is RAID1 - 2 x 2.5\" 15k SAS disk\npg_xlog: /dev/sdb is RAID1 - 2 x 2.5\" 15k SAS disk\n \nThese reside on a ServeRAID-MR10k controller with 256MB BB cache.\n \n \ndata: /dev/sdc is RAID5 - 30 x 3.5\" 15k SAS disk\n \nThese reside on the DS3200 disk subsystem with 512MB BB cache per\ncontroller and redundant drive loops.\n \n \nAt least I had the file systems and options right. ;-)\n \n-Kevin\n",
"msg_date": "Thu, 11 Feb 2010 11:48:37 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Kevin Grittner wrote:\n\n> Another example of why I shouldn't trust my memory. Per the\n> hardware tech:\n> \n> \n> OS: /dev/sda is RAID1 - 2 x 2.5\" 15k SAS disk\n> pg_xlog: /dev/sdb is RAID1 - 2 x 2.5\" 15k SAS disk\n> \n> These reside on a ServeRAID-MR10k controller with 256MB BB cache.\n> \n> \n> data: /dev/sdc is RAID5 - 30 x 3.5\" 15k SAS disk\n> \n> These reside on the DS3200 disk subsystem with 512MB BB cache per\n> controller and redundant drive loops.\n\nHmm, so maybe the performance benefit is not from it being on a separate\narray, but from it being RAID1 instead of RAID5?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 11 Feb 2010 14:57:52 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "* Alvaro Herrera <[email protected]> [100211 12:58]:\n> Hmm, so maybe the performance benefit is not from it being on a separate\n> array, but from it being RAID1 instead of RAID5?\n\nOr the cumulative effects of:\n1) Dedicated spindles/Raid1\n2) More BBU cache available (I can't imagine the OS pair writing much)\n3) not being queued behind data writes before getting to controller\n3) Not waiting for BBU cache to be available (which is shared with all data\n writes) which requires RAID5 writes to complete...\n\nReally, there's *lots* of variables here. The basics being that WAL on\nthe same FS as data, on a RAID5, even with BBU is worse than WAL on a\ndedicated set of RAID1 spindles with it's own BBU.\n\nWow!\n\n;-)\n\n\n-- \nAidan Van Dyk Create like a god,\[email protected] command like a king,\nhttp://www.highrise.ca/ work like a slave.",
"msg_date": "Thu, 11 Feb 2010 13:04:21 -0500",
"msg_from": "Aidan Van Dyk <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Aidan Van Dyk <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> wrote:\n>> Hmm, so maybe the performance benefit is not from it being on a\n>> separate array, but from it being RAID1 instead of RAID5?\n> \n> Or the cumulative effects of:\n> 1) Dedicated spindles/Raid1\n> 2) More BBU cache available (I can't imagine the OS pair writing\n> much)\n> 3) not being queued behind data writes before getting to\n> controller\n> 3) Not waiting for BBU cache to be available (which is shared with\n> all data writes) which requires RAID5 writes to complete...\n> \n> Really, there's *lots* of variables here. The basics being that\n> WAL on the same FS as data, on a RAID5, even with BBU is worse\n> than WAL on a dedicated set of RAID1 spindles with it's own BBU.\n> \n> Wow!\n \nSure, OK, but what surprised me was that a set of 15 read-only\nqueries (with pretty random reads) took almost twice as long when\nthe WAL files were on the same file system. That's with OS writes\nbeing only about 10% of reads, and *that's* with 128 GB of RAM which\nkeeps a lot of the reads from having to go to the disk. I would not\nhave expected that a read-mostly environment like this would be that\nsensitive to the WAL file placement. (OK, I *did* request the\nseparate file system for them anyway, but I thought it was going to\nbe a marginal benefit, not something this big.)\n \n-Kevin\n",
"msg_date": "Thu, 11 Feb 2010 12:19:07 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
}
] |
[
{
"msg_contents": ">From what I've read on the net, these should be very similar,\nand should generate equivalent plans, in such cases:\n\nSELECT DISTINCT x FROM mytable\nSELECT x FROM mytable GROUP BY x\n\nHowever, in my case (postgresql-server-8.1.18-2.el5_4.1),\nthey generated different results with quite different\nexecution times (73ms vs 40ms for DISTINCT and GROUP BY\nrespectively):\n\ntts_server_db=# EXPLAIN ANALYZE select userdata from tagrecord where clientRmaInId = 'CPC-RMA-00110' group by userdata;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=775.68..775.69 rows=1 width=146) (actual time=40.058..40.058 rows=0 loops=1)\n -> Bitmap Heap Scan on tagrecord (cost=4.00..774.96 rows=286 width=146) (actual time=40.055..40.055 rows=0 loops=1)\n Recheck Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n -> Bitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00 rows=286 width=0) (actual time=40.050..40.050 rows=0 loops=1)\n Index Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n Total runtime: 40.121 ms\n\ntts_server_db=# EXPLAIN ANALYZE select distinct userdata from tagrecord where clientRmaInId = 'CPC-RMA-00109';\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=786.63..788.06 rows=1 width=146) (actual time=73.018..73.018 rows=0 loops=1)\n -> Sort (cost=786.63..787.34 rows=286 width=146) (actual time=73.016..73.016 rows=0 loops=1)\n Sort Key: userdata\n -> Bitmap Heap Scan on tagrecord (cost=4.00..774.96 rows=286 width=146) (actual time=72.940..72.940 rows=0 loops=1)\n Recheck Cond: ((clientrmainid)::text = 'CPC-RMA-00109'::text)\n -> Bitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00 rows=286 width=0) (actual time=72.936..72.936 rows=0 loops=1)\n Index Cond: ((clientrmainid)::text = 'CPC-RMA-00109'::text)\n Total runtime: 73.144 ms\n\nWhat gives?\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Tue, 09 Feb 2010 16:46:16 -0500",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "DISTINCT vs. GROUP BY"
},
{
"msg_contents": "On 9 February 2010 21:46, Dimi Paun <[email protected]> wrote:\n> >From what I've read on the net, these should be very similar,\n> and should generate equivalent plans, in such cases:\n>\n> SELECT DISTINCT x FROM mytable\n> SELECT x FROM mytable GROUP BY x\n>\n> However, in my case (postgresql-server-8.1.18-2.el5_4.1),\n> they generated different results with quite different\n> execution times (73ms vs 40ms for DISTINCT and GROUP BY\n> respectively):\n>\n> tts_server_db=# EXPLAIN ANALYZE select userdata from tagrecord where clientRmaInId = 'CPC-RMA-00110' group by userdata;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=775.68..775.69 rows=1 width=146) (actual time=40.058..40.058 rows=0 loops=1)\n> -> Bitmap Heap Scan on tagrecord (cost=4.00..774.96 rows=286 width=146) (actual time=40.055..40.055 rows=0 loops=1)\n> Recheck Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n> -> Bitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00 rows=286 width=0) (actual time=40.050..40.050 rows=0 loops=1)\n> Index Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n> Total runtime: 40.121 ms\n>\n> tts_server_db=# EXPLAIN ANALYZE select distinct userdata from tagrecord where clientRmaInId = 'CPC-RMA-00109';\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=786.63..788.06 rows=1 width=146) (actual time=73.018..73.018 rows=0 loops=1)\n> -> Sort (cost=786.63..787.34 rows=286 width=146) (actual time=73.016..73.016 rows=0 loops=1)\n> Sort Key: userdata\n> -> Bitmap Heap Scan on tagrecord (cost=4.00..774.96 rows=286 width=146) (actual time=72.940..72.940 rows=0 loops=1)\n> Recheck Cond: ((clientrmainid)::text = 'CPC-RMA-00109'::text)\n> -> Bitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00 rows=286 width=0) (actual time=72.936..72.936 rows=0 loops=1)\n> Index Cond: ((clientrmainid)::text = 'CPC-RMA-00109'::text)\n> Total runtime: 73.144 ms\n>\n> What gives?\n>\nFirstly, the 2 queries aren't equal. They're matching against\ndifferent clientrmainid values.\n\nAlso, look at the bitmap index scan for each:\n\nBitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00\nrows=286 width=0) (actual time=40.050..40.050 rows=0 loops=1)\n\nBitmap Index Scan on idx_tagdata_clientrmainid (cost=0.00..4.00\nrows=286 width=0) (actual time=72.936..72.936 rows=0 loops=1)\n\nThat's where the difference is. An identical scan takes longer in one\nthan the other, either due to the index scan looking for different\nvalues in each case, or at the time you were running it, another\nprocess was using more resources. You'd have to run these several\ntimes to get an idea of average times.\n\nHave you run ANALYZE on the table beforehand to make sure your stats\nare up to date?\n\nRegards\n\nThom\n",
"msg_date": "Tue, 9 Feb 2010 22:22:17 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DISTINCT vs. GROUP BY"
},
{
"msg_contents": "Dimi Paun <[email protected]> writes:\n>> From what I've read on the net, these should be very similar,\n> and should generate equivalent plans, in such cases:\n\n> SELECT DISTINCT x FROM mytable\n> SELECT x FROM mytable GROUP BY x\n\n> However, in my case (postgresql-server-8.1.18-2.el5_4.1),\n> they generated different results with quite different\n> execution times (73ms vs 40ms for DISTINCT and GROUP BY\n> respectively):\n\nThe results certainly ought to be the same (although perhaps not with\nthe same ordering) --- if they aren't, please provide a reproducible\ntest case.\n\nAs for efficiency, though, 8.1 didn't understand how to use hash\naggregation for DISTINCT. Less-obsolete versions do know how to do\nthat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Feb 2010 17:38:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DISTINCT vs. GROUP BY "
},
{
"msg_contents": "On Tue, 2010-02-09 at 17:38 -0500, Tom Lane wrote:\n> The results certainly ought to be the same (although perhaps not with\n> the same ordering) --- if they aren't, please provide a reproducible\n> test case.\n\nThe results are the same, this is not a problem.\n\n> As for efficiency, though, 8.1 didn't understand how to use hash\n> aggregation for DISTINCT. Less-obsolete versions do know how to do\n> that.\n\nIndeed, this seem to be the issue:\n\ntts_server_db=# EXPLAIN ANALYZE select userdata from tagrecord where clientRmaInId = 'CPC-RMA-00110' group by userdata;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=253.34..253.50 rows=16 width=15) (actual time=0.094..0.094 rows=0 loops=1)\n -> Index Scan using idx_tagdata_clientrmainid on tagrecord (cost=0.00..252.85 rows=195 width=15) (actual time=0.091..0.091 rows=0 loops=1)\n Index Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n Total runtime: 0.146 ms\n(4 rows)\n\ntts_server_db=# EXPLAIN ANALYZE select distinct userdata from tagrecord where clientRmaInId = 'CPC-RMA-00110';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=260.27..261.25 rows=16 width=15) (actual time=0.115..0.115 rows=0 loops=1)\n -> Sort (cost=260.27..260.76 rows=195 width=15) (actual time=0.113..0.113 rows=0 loops=1)\n Sort Key: userdata\n -> Index Scan using idx_tagdata_clientrmainid on tagrecord (cost=0.00..252.85 rows=195 width=15) (actual time=0.105..0.105 rows=0 loops=1)\n Index Cond: ((clientrmainid)::text = 'CPC-RMA-00110'::text)\n Total runtime: 0.151 ms\n(6 rows)\n\nFor now we are stuck with 8.1, so the easiest fix for us is to use GROUP BY.\nSince this is fixed in later versions, I guess there's not much to see here... :)\n\nThanks for the quick reply!\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n",
"msg_date": "Tue, 09 Feb 2010 20:43:47 -0500",
"msg_from": "Dimi Paun <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: DISTINCT vs. GROUP BY"
}
] |
[
{
"msg_contents": "I've got a very slow query, which I can make faster by doing something\nseemingly trivial. \nThe query has been trouble for years (always slow, sometimes taking hours):\n\n 512,600ms Original, filter on articles.indexed (622 results)\n 7,500ms Remove \"AND articles.indexed\" (726 results, undesirable).\n 7,675ms Extra join for \"AND articles.indexed\" (622 results, same as\noriginal).\n\nHardware is Postgres 8.3 on a Sunfire X4240 under Debian Lenny, with a\nfresh ANALYZE. What I don't understand is why the improvement? Is the\nsecond way of doing things actually superior, or is this just a query\nplanner edge case?\n\n\nOriginal (512,600ms)\n---------------------\nEXPLAIN ANALYZE\nSELECT contexts.context_key FROM contexts\nJOIN articles ON (articles.context_key=contexts.context_key)\nJOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)\nWHERE contexts.context_key IN\n (SELECT context_key FROM article_words JOIN words using (word_key)\nWHERE word = 'insider'\n INTERSECT\n SELECT context_key FROM article_words JOIN words using (word_key)\nWHERE word = 'trading')\nAND contexts.context_key IN\n (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories ON\n(a.ancestor_key = bp_categories.context_key)\n WHERE lower(bp_categories.category) = 'law')\nAND articles.indexed;\n\nExtra join (7,675ms)\n---------------------------------------------\nEXPLAIN ANALYZE\nSELECT contexts.context_key FROM contexts JOIN articles using (context_key)\nWHERE contexts.context_key IN\n(\nSELECT contexts.context_key FROM contexts\nJOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)\nWHERE contexts.context_key IN\n (SELECT context_key FROM article_words JOIN words using (word_key)\nWHERE word = 'insider'\n INTERSECT\n SELECT context_key FROM article_words JOIN words using (word_key)\nWHERE word = 'trading')\nAND contexts.context_key IN\n (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories ON\n(a.ancestor_key = bp_categories.context_key)\n WHERE lower(bp_categories.category) = 'law')\n)\nAND articles.indexed;\n\n\n\n# select indexed,count(*) from articles group by indexed;\n indexed | count\n---------+--------\n t | 354605\n f | 513552\n\n\nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=131663.39..140005.40 rows=4769 width=4) (actual\ntime=511893.241..512599.348 rows=622 loops=1)\n Hash Cond: (matview_82034.context_key = articles.context_key)\n -> Seq Scan on matview_82034 (cost=0.00..6596.00 rows=465600\nwidth=4) (actual time=0.019..463.278 rows=438220 loops=1)\n -> Hash (cost=131663.38..131663.38 rows=1 width=16) (actual\ntime=511659.671..511659.671 rows=622 loops=1)\n -> Nested Loop IN Join (cost=46757.70..131663.38 rows=1\nwidth=16) (actual time=1142.789..511656.211 rows=622 loops=1)\n Join Filter: (a.context_key = articles.context_key)\n -> Nested Loop (cost=46757.70..46789.06 rows=2\nwidth=12) (actual time=688.057..839.297 rows=1472 loops=1)\n -> Nested Loop (cost=46757.70..46780.26 rows=2\nwidth=8) (actual time=688.022..799.945 rows=1472 loops=1)\n -> Subquery Scan \"IN_subquery\" \n(cost=46757.70..46757.97 rows=5 width=4) (actual time=687.963..743.587\nrows=1652 loops=1)\n -> SetOp Intersect \n(cost=46757.70..46757.93 rows=5 width=4) (actual time=687.961..738.955\nrows=1652 loops=1)\n -> Sort \n(cost=46757.70..46757.81 rows=46 width=4) (actual 
time=687.943..709.972\nrows=19527 loops=1)\n Sort Key: \"*SELECT*\n1\".context_key\n Sort Method: quicksort \nMemory: 1684kB\n -> Append \n(cost=0.00..46756.43 rows=46 width=4) (actual time=8.385..657.839\nrows=19527 loops=1)\n -> Subquery Scan\n\"*SELECT* 1\" (cost=0.00..23378.21 rows=23 width=4) (actual\ntime=8.383..215.613 rows=4002 loops=1)\n -> Nested\nLoop (cost=0.00..23377.98 rows=23 width=4) (actual time=8.380..207.499\nrows=4002 loops=1)\n -> Index\nScan using words_word on words (cost=0.00..5.47 rows=1 width=4) (actual\ntime=0.102..0.105 rows=1 loops=1)\n \nIndex Cond: ((word)::text = 'insider'::text)\n -> Index\nScan using article_words_wc on article_words (cost=0.00..23219.17\nrows=12268 width=8) (actual time=8.272..199.224 rows=4002 loops=1)\n \nIndex Cond: (public.article_words.word_key = public.words.word_key)\n -> Subquery Scan\n\"*SELECT* 2\" (cost=0.00..23378.21 rows=23 width=4) (actual\ntime=5.397..404.164 rows=15525 loops=1)\n -> Nested\nLoop (cost=0.00..23377.98 rows=23 width=4) (actual time=5.394..372.883\nrows=15525 loops=1)\n -> Index\nScan using words_word on words (cost=0.00..5.47 rows=1 width=4) (actual\ntime=0.054..0.056 rows=1 loops=1)\n \nIndex Cond: ((word)::text = 'trading'::text)\n -> Index\nScan using article_words_wc on article_words (cost=0.00..23219.17\nrows=12268 width=8) (actual time=5.331..341.535 rows=15525 loops=1)\n \nIndex Cond: (public.article_words.word_key = public.words.word_key)\n -> Index Scan using article_key_idx on\narticles (cost=0.00..4.44 rows=1 width=4) (actual time=0.026..0.029\nrows=1 loops=1652)\n Index Cond: (articles.context_key =\n\"IN_subquery\".context_key)\n Filter: articles.indexed\n -> Index Scan using contexts_pkey on contexts \n(cost=0.00..4.39 rows=1 width=4) (actual time=0.018..0.021 rows=1\nloops=1472)\n Index Cond: (contexts.context_key =\narticles.context_key)\n -> Nested Loop (cost=0.00..111539.51 rows=1261757\nwidth=4) (actual time=0.019..306.679 rows=39189 loops=1472)\n -> Seq Scan on bp_categories (cost=0.00..1315.59\nrows=16613 width=4) (actual time=0.008..57.960 rows=14529 loops=1472)\n Filter: (lower(category) = 'law'::text)\n -> Index Scan using virtual_ancestor_key_idx on\nvirtual_ancestors a (cost=0.00..5.18 rows=116 width=8) (actual\ntime=0.005..0.010 rows=3 loops=21386112)\n Index Cond: (a.ancestor_key =\nbp_categories.context_key)\n Total runtime: 512600.354 ms\n(37 rows)\n\n",
"msg_date": "Tue, 09 Feb 2010 20:04:18 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "512,\n\t600ms query becomes 7500ms... but why? Postgres 8.3 query planner\n\tquirk?"
},
{
"msg_contents": "Or, if you want to actually read that query plan, try:\nhttp://explain.depesz.com/s/qYq\n\n",
"msg_date": "Wed, 10 Feb 2010 00:29:53 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? Postgres 8.3 query\n\tplanner quirk?"
},
{
"msg_contents": "2010/2/10 Bryce Nesbitt <[email protected]>:\n> Or, if you want to actually read that query plan, try:\n> http://explain.depesz.com/s/qYq\n>\n\nhello,\n\ncheck your work_mem sesttings. Hash join is very slow in your case.\n\nPavel\n\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Wed, 10 Feb 2010 10:11:51 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "On Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt <[email protected]> wrote:\n> Or, if you want to actually read that query plan, try:\n> http://explain.depesz.com/s/qYq\n\nMuch better, though I prefer a text attachment... anyhow, I think the\nroot of the problem may be that both of the subquery scans under the\nappend node are seeing hundreds of times more rows than they're\nexpecting, which is causing the planner to choose nested loops higher\nup that it otherwise might have preferred to implement in some other\nway. I'm not quite sure why, though.\n\n...Robert\n",
"msg_date": "Wed, 10 Feb 2010 15:31:58 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "That sure looks like the source of the problem to me too. I've seen similar behavior in queries not very different from that. It's hard to guess what the problem is exactly without having more knowledge of the data distribution in article_words though.\n\nGiven the results of analyze, I'd try to run the deepest subquery and try to see if I could get the estimate to match reality, either by altering statistics targets, or tweaking the query to give more information to the planner. \n\nFor example, i'd check if the number of expected rows from \n\nSELECT context_key FROM article_words JOIN words using (word_key) WHERE word = 'insider'\n\nis much less accurate than the estimate for\n\nSELECT context_key FROM article_words WHERE word_key = (whatever the actual word_key for insider is)\n\n\n>>> Robert Haas <[email protected]> 02/10/10 2:31 PM >>>\nOn Wed, Feb 10, 2010 at 3:29 AM, Bryce Nesbitt <[email protected]> wrote:\n> Or, if you want to actually read that query plan, try:\n> http://explain.depesz.com/s/qYq \n\nMuch better, though I prefer a text attachment... anyhow, I think the\nroot of the problem may be that both of the subquery scans under the\nappend node are seeing hundreds of times more rows than they're\nexpecting, which is causing the planner to choose nested loops higher\nup that it otherwise might have preferred to implement in some other\nway. I'm not quite sure why, though.\n\n...Robert\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n",
"msg_date": "Wed, 10 Feb 2010 17:18:36 -0600",
"msg_from": "\"Jorge Montero\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why?\n\tPostgres 8.3 query planner quirk?"
},
{
"msg_contents": "If you guys succeed in making this class of query perform, you'll have\nbeat out the professional consulting firm we hired, which was all but\nuseless! The query is usually slow, but particular combinations of\nwords seem to make it obscenely slow.\n\nThe query plans are now attached (sorry I did not start there: many\nlists reject attachments). Or you can click on \"text\" at the query\nplanner analysis site http://explain.depesz.com/s/qYq\n\n\n_Here's typical server load:_\nTasks: 166 total, 1 running, 165 sleeping, 0 stopped, 0 zombie\nCpu(s): 1.2%us, 0.9%sy, 0.0%ni, 86.5%id, 11.2%wa, 0.0%hi, 0.1%si, \n0.0%st\nMem: 32966936k total, 32873860k used, 93076k free, 2080k buffers\nSwap: 33554424k total, 472k used, 33553952k free, 30572904k cached\n\n_\nConfigurations modified from Postgres 8.3 default are:_\nlisten_addresses = '10.100.2.11, 10.101.2.11' # what IP address(es) to\nlisten on;\nport = 5432 # (change requires restart)\nmax_connections = 400 # (change requires restart)\nshared_buffers = 4096MB # min 128kB or max_connections*16kB\nwork_mem = 16MB # min 64kB\nmax_fsm_pages = 500000 # default:20000\nmin:max_fsm_relations*16,6 bytes see:MAIN-5740\nmax_fsm_relations = 2700 # min 100, ~70 bytes each\ncheckpoint_segments = 20 # in logfile segments, min 1,\n16MB each\nrandom_page_cost = 2.0 # same scale as above\neffective_cache_size = 28672MB\ndefault_statistics_target = 150 # range 1-1000\nlog_destination = 'syslog' # Valid values are combinations of\nlog_min_error_statement = error # values in order of decreasing detail:\nlog_min_duration_statement = 5000 # -1 is disabled, 0 logs all\nstatements\nlog_checkpoints = on # default off\nautovacuum_naptime = 5min # time between autovacuum runs\nescape_string_warning = off # default:on (See bepress\nMAIN-4857)\nstandard_conforming_strings = off # deafult:off (See bepress\nMAIN-4857)\n\n\n\nproduction=# EXPLAIN ANALYZE SELECT context_key FROM article_words\nJOIN words using (word_key) WHERE word = 'insider';\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..23393.15 rows=23 width=4) (actual\ntime=0.077..15.637 rows=4003 loops=1)\n -> Index Scan using words_word on words (cost=0.00..5.47 rows=1\nwidth=4) (actual time=0.049..0.051 rows=1 loops=1)\n Index Cond: ((word)::text = 'insider'::text)\n -> Index Scan using article_words_wc on article_words \n(cost=0.00..23234.38 rows=12264 width=8) (actual time=0.020..7.237\nrows=4003 loops=1)\n Index Cond: (article_words.word_key = words.word_key)\n Total runtime: 19.776 ms\n\nproduction=# EXPLAIN ANALYZE SELECT context_key FROM article_words\nWHERE word_key = 3675;\n-------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using article_words_wc on article_words \n(cost=0.00..21433.53 rows=11309 width=4) (actual time=0.025..7.579\nrows=4003 loops=1)\n Index Cond: (word_key = 3675)\n Total runtime: 11.704 ms\n\n\n\nproduction=# explain analyze select count(*) from article_words;\nAggregate (cost=263831.63..263831.64 rows=1 width=0) (actual\ntime=35851.654..35851.655 rows=1 loops=1)\n -> Seq Scan on words (cost=0.00..229311.30 rows=13808130 width=0)\n(actual time=0.043..21281.124 rows=13808184 loops=1)\n Total runtime: 35851.723 ms\n\nproduction=# select count(*) from words;\n13,808,184\n\n\nproduction=# explain analyze select count(*) from article_words;\nAggregate 
(cost=5453242.40..5453242.41 rows=1 width=0) (actual\ntime=776504.017..776504.018 rows=1 loops=1)\n -> Seq Scan on article_words (cost=0.00..4653453.52 rows=319915552\nwidth=0) (actual time=0.034..438969.347 rows=319956663 loops=1)\n Total runtime: 776504.177 ms\n\nproduction=# select count(*) from article_words;\n319,956,720",
"msg_date": "Wed, 10 Feb 2010 17:52:28 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "Bryce Nesbitt <[email protected]> writes:\n> The query plans are now attached (sorry I did not start there: many\n> lists reject attachments). Or you can click on \"text\" at the query\n> planner analysis site http://explain.depesz.com/s/qYq\n\nAt least some of the problem is the terrible quality of the rowcount\nestimates in the IN subquery, as you extracted here:\n\n> Nested Loop (cost=0.00..23393.15 rows=23 width=4) (actual time=0.077..15.637 rows=4003 loops=1)\n> -> Index Scan using words_word on words (cost=0.00..5.47 rows=1 width=4) (actual time=0.049..0.051 rows=1 loops=1)\n> Index Cond: ((word)::text = 'insider'::text)\n> -> Index Scan using article_words_wc on article_words (cost=0.00..23234.38 rows=12264 width=8) (actual time=0.020..7.237 rows=4003 loops=1)\n> Index Cond: (article_words.word_key = words.word_key)\n> Total runtime: 19.776 ms\n\nGiven that it estimated 1 row out of \"words\" (quite correctly) and 12264\nrows out of each scan on article_words, you'd think that the join size\nestimate would be 12264, which would be off by \"only\" a factor of 3 from\nthe true result. Instead it's 23, off by a factor of 200 :-(.\n\nRunning a roughly similar test case here, I see that 8.4 gives\nsignificantly saner estimates, which I think is because of this patch:\nhttp://archives.postgresql.org/pgsql-committers/2008-10/msg00191.php\n\nAt the time I didn't want to risk back-patching it, because there\nwere a lot of other changes in the same general area in 8.4. But\nit would be interesting to see what happens with your example if\nyou patch 8.3 similarly. (Note: I think only the first diff hunk\nis relevant to 8.3.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Feb 2010 22:52:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: 512,\n\t600ms query becomes 7500ms... but why? Postgres 8.3 query planner\n\tquirk?"
},
{
"msg_contents": "On Wed, Feb 10, 2010 at 8:52 PM, Bryce Nesbitt <[email protected]> wrote:\n> If you guys succeed in making this class of query perform, you'll have beat\n> out the professional consulting firm we hired, which was all but useless!\n> The query is usually slow, but particular combinations of words seem to make\n> it obscenely slow.\n\nHeh heh heh professional consulting firm.\n\n> production=# EXPLAIN ANALYZE SELECT context_key FROM article_words\n> WHERE word_key = 3675;\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using article_words_wc on article_words (cost=0.00..21433.53\n> rows=11309 width=4) (actual time=0.025..7.579 rows=4003 loops=1)\n> Index Cond: (word_key = 3675)\n> Total runtime: 11.704 ms\n\nThat's surprisingly inaccurate. Since this table is large:\n\n> production=# explain analyze select count(*) from article_words;\n> Aggregate (cost=263831.63..263831.64 rows=1 width=0) (actual\n> time=35851.654..35851.655 rows=1 loops=1)\n> -> Seq Scan on words (cost=0.00..229311.30 rows=13808130 width=0)\n> (actual time=0.043..21281.124 rows=13808184 loops=1)\n> Total runtime: 35851.723 ms\n\n...you may need to crank up the statistics target. I would probably\ntry cranking it all the way up to the max, though there is a risk that\nmight backfire, in which case you'll need to decrease it again.\n\nALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;\n\nThat's probably not going to fix your whole problem, but it should be\ninteresting to see whether it makes things better or worse and by how\nmuch.\n\n...Robert\n",
"msg_date": "Thu, 11 Feb 2010 08:29:52 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "Bryce Nesbitt <[email protected]> wrote:\n \n> I've got a very slow query, which I can make faster by doing\n> something seemingly trivial. \n \nOut of curiosity, what kind of performance do you get with?:\n \nEXPLAIN ANALYZE\nSELECT contexts.context_key\n FROM contexts\n JOIN articles ON (articles.context_key = contexts.context_key)\n JOIN matview_82034 ON (matview_82034.context_key =\n contexts.context_key)\n WHERE EXISTS\n (\n SELECT *\n FROM article_words\n JOIN words using (word_key)\n WHERE context_key = contexts.context_key\n AND word = 'insider'\n )\n AND EXISTS\n (\n SELECT *\n FROM article_words\n JOIN words using (word_key)\n WHERE context_key = contexts.context_key\n AND word = 'trading'\n )\n AND EXISTS\n (\n SELECT *\n FROM virtual_ancestors a\n JOIN bp_categories ON (bp_categories.context_key =\n a.ancestor_key)\n WHERE a.context_key = contexts.context_key\n AND lower(bp_categories.category) = 'law'\n )\n AND articles.indexed\n;\n \n(You may have to add some table aliases in the subqueries.)\n \nIf you are able to make a copy on 8.4 and test the various forms,\nthat would also be interesting. I suspect that the above might do\npretty well in 8.4.\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 09:09:41 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why?\n\tPostgres 8.3 query planner quirk?"
},
{
"msg_contents": "\"Exists\" can be quite slow. So can \"not exists\"\n\nSee if you can re-write it using a sub-select - just replace the \"exists\n....\" with \"(select ...) is not null\"\n\nSurprisingly this often results in a MUCH better query plan under\nPostgresql. Why the planner evaluates it \"better\" eludes me (it\nshouldn't) but the differences are often STRIKING - I've seen\nfactor-of-10 differences in execution performance.\n\n\nKevin Grittner wrote:\n> Bryce Nesbitt <[email protected]> wrote:\n> \n> \n>> I've got a very slow query, which I can make faster by doing\n>> something seemingly trivial. \n>> \n> \n> Out of curiosity, what kind of performance do you get with?:\n> \n> EXPLAIN ANALYZE\n> SELECT contexts.context_key\n> FROM contexts\n> JOIN articles ON (articles.context_key = contexts.context_key)\n> JOIN matview_82034 ON (matview_82034.context_key =\n> contexts.context_key)\n> WHERE EXISTS\n> (\n> SELECT *\n> FROM article_words\n> JOIN words using (word_key)\n> WHERE context_key = contexts.context_key\n> AND word = 'insider'\n> )\n> AND EXISTS\n> (\n> SELECT *\n> FROM article_words\n> JOIN words using (word_key)\n> WHERE context_key = contexts.context_key\n> AND word = 'trading'\n> )\n> AND EXISTS\n> (\n> SELECT *\n> FROM virtual_ancestors a\n> JOIN bp_categories ON (bp_categories.context_key =\n> a.ancestor_key)\n> WHERE a.context_key = contexts.context_key\n> AND lower(bp_categories.category) = 'law'\n> )\n> AND articles.indexed\n> ;\n> \n> (You may have to add some table aliases in the subqueries.)\n> \n> If you are able to make a copy on 8.4 and test the various forms,\n> that would also be interesting. I suspect that the above might do\n> pretty well in 8.4.\n> \n> -Kevin\n>\n>",
"msg_date": "Fri, 12 Feb 2010 10:05:45 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? \t Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "Karl Denninger <[email protected]> wrote:\nKevin Grittner wrote:\n \n>> I suspect that the above might do pretty well in 8.4.\n \n> \"Exists\" can be quite slow. So can \"not exists\"\n> \n> See if you can re-write it using a sub-select - just replace the\n> \"exists ....\" with \"(select ...) is not null\"\n> \n> Surprisingly this often results in a MUCH better query plan under\n> Postgresql. Why the planner evaluates it \"better\" eludes me (it\n> shouldn't) but the differences are often STRIKING - I've seen\n> factor-of-10 differences in execution performance.\n \nHave you seen such a difference under 8.4? Can you provide a\nself-contained example?\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 10:11:00 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? \t\n\tPostgres 8.3 query planner quirk?"
},
{
"msg_contents": "Yes:\n\nselect forum, * from post where\n marked is not true\n and toppost = 1\n and (select login from ignore_thread where login='xxx' and\nnumber=post.number) is null\n and (replied > now() - '30 days'::interval)\n and (replied > (select lastview from forumlog where login='xxx' and\nforum=post.forum and number is null)) is not false\n and (replied > (select lastview from forumlog where login='xxx' and\nforum=post.forum and number=post.number)) is not false\n and ((post.forum = (select who from excludenew where who='xxx' and\nforum_name = post.forum)) or (select who from excludenew where who='xxx'\nand forum_name = post.forum) is null)\n order by pinned desc, replied desc offset 0 limit 100\n\nReturns the following query plan:\n QUERY\nPLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5301575.63..5301575.63 rows=1 width=433) (actual\ntime=771.000..771.455 rows=100 loops=1)\n -> Sort (cost=5301575.63..5301575.63 rows=1 width=433) (actual\ntime=770.996..771.141 rows=100 loops=1)\n Sort Key: post.pinned, post.replied\n Sort Method: top-N heapsort Memory: 120kB\n -> Index Scan using post_top on post (cost=0.00..5301575.62\nrows=1 width=433) (actual time=0.664..752.994 rows=3905 loops=1)\n Index Cond: (toppost = 1)\n Filter: ((marked IS NOT TRUE) AND (replied > (now() - '30\ndays'::interval)) AND ((SubPlan 1) IS NULL) AND ((replied > (SubPlan 2))\nIS NOT FALSE) AND ((replied > (SubPlan 3)) IS NOT FALSE) AND ((forum =\n(SubPlan 4)) OR ((SubPlan 5) IS NULL)))\n SubPlan 1\n -> Seq Scan on ignore_thread (cost=0.00..5.45 rows=1\nwidth=7) (actual time=0.037..0.037 rows=0 loops=3905)\n Filter: ((login = 'xxx'::text) AND (number = $0))\n SubPlan 2\n -> Index Scan using forumlog_composite on forumlog \n(cost=0.00..9.50 rows=1 width=8) (actual time=0.008..0.008 rows=0\nloops=3905)\n Index Cond: ((login = 'xxx'::text) AND (forum =\n$1) AND (number IS NULL))\n SubPlan 3\n -> Index Scan using forumlog_composite on forumlog \n(cost=0.00..9.50 rows=1 width=8) (actual time=0.006..0.006 rows=0\nloops=3905)\n Index Cond: ((login = 'xxx'::text) AND (forum =\n$1) AND (number = $0))\n SubPlan 4\n -> Index Scan using excludenew_pkey on excludenew \n(cost=0.00..8.27 rows=1 width=9) (actual time=0.004..0.004 rows=0\nloops=3905)\n Index Cond: ((who = 'xxx'::text) AND (forum_name\n= $1))\n SubPlan 5\n -> Index Scan using excludenew_pkey on excludenew \n(cost=0.00..8.27 rows=1 width=9) (actual time=0.004..0.004 rows=0\nloops=3905)\n Index Cond: ((who = 'xxx'::text) AND (forum_name\n= $1))\n Total runtime: 771.907 ms\n(23 rows)\n\nThe alternative:\n\nselect forum, * from post where\n marked is not true\n and toppost = 1\n and not exists (select login from ignore_thread where login='xxx'\nand number=post.number)\n and (replied > now() - '30 days'::interval)\n and (replied > (select lastview from forumlog where login='xxx' and\nforum=post.forum and number is null)) is not false\n and (replied > (select lastview from forumlog where login='xxx' and\nforum=post.forum and number=post.number)) is not false\n and ((post.forum = (select who from excludenew where who='xxx' and\nforum_name = post.forum)) or (select who from excludenew where who='xxx'\nand forum_name = post.forum) is null)\n order by pinned desc, replied desc offset 0 limit 100\n\ngoes nuts.\n\n(Yes, I know, most of those 
others which are \"not false\" could be\n\"Exists\" too)\n\nExplain Analyze on the alternative CLAIMS the same query planner time\n(within a few milliseconds) with explain analyze. But if I replace the\nexecuting code with one that has the alternative (\"not exists\") syntax\nin it, the system load goes to crap instantly and the execution times\n\"in the wild\" go bananas.\n\nI don't know why it does - I just know THAT it does. When I first added\nthat top clause in there (to allow people an individual \"ignore thread\"\nlist) the system load went bananas immediately and forced me to back it\nout. When I re-wrote the query as above the performance was (and\nremains) fine.\n\nI'm running 8.4.2.\n\nI agree (in advance) it shouldn't trash performance - all I know is that\nit does and forced me to re-write the query.\n\n\nKevin Grittner wrote:\n> Karl Denninger <[email protected]> wrote:\n> Kevin Grittner wrote:\n> \n> \n>>> I suspect that the above might do pretty well in 8.4.\n>>> \n> \n> \n>> \"Exists\" can be quite slow. So can \"not exists\"\n>>\n>> See if you can re-write it using a sub-select - just replace the\n>> \"exists ....\" with \"(select ...) is not null\"\n>>\n>> Surprisingly this often results in a MUCH better query plan under\n>> Postgresql. Why the planner evaluates it \"better\" eludes me (it\n>> shouldn't) but the differences are often STRIKING - I've seen\n>> factor-of-10 differences in execution performance.\n>> \n> \n> Have you seen such a difference under 8.4? Can you provide a\n> self-contained example?\n> \n> -Kevin\n>\n>",
"msg_date": "Fri, 12 Feb 2010 10:43:10 -0600",
"msg_from": "Karl Denninger <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? \t\t Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "Karl Denninger <[email protected]> wrote:\n> Kevin Grittner wrote:\n \n>> Have you seen such a difference under 8.4? Can you provide a\n>> self-contained example?\n \n> Yes:\n> \n> [query and EXPLAIN ANALYZE of fast query]\n \n> The alternative:\n> \n> [query with no other information]\n> \n> goes nuts.\n \nWhich means what? Could you post an EXPLAIN ANALYZE, or at least an\nEXPLAIN, of the slow version? Can you post the table structure,\nincluding indexes?\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 10:54:27 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? \t\t\n\tPostgres 8.3 query planner quirk?"
},
{
"msg_contents": "Karl Denninger <[email protected]> writes:\n> Explain Analyze on the alternative CLAIMS the same query planner time\n> (within a few milliseconds) with explain analyze. But if I replace the\n> executing code with one that has the alternative (\"not exists\") syntax\n> in it, the system load goes to crap instantly and the execution times\n> \"in the wild\" go bananas.\n\nCould we see the actual explain analyze output, and not some handwaving?\n\nWhat I would expect 8.4 to do with the NOT EXISTS version is to convert\nit to an antijoin --- probably a hash antijoin given that the subtable\nis apparently small. That should be a significant win compared to\nrepeated seqscans as you have now. The only way I could see for it to\nbe a loss is that that join would probably be performed after the other\nsubplan tests instead of before. However, the rowcounts for your\noriginal query suggest that all the subplans get executed the same\nnumber of times; so at least on the test values you used here, all\nthose conditions succeed. Maybe your test values were not\nrepresentative of \"in the wild\" cases, and in the real usage it's\nimportant to make this test before the others.\n\nIf that's what it is, you might see what happens when all of the\nsub-selects are converted to exists/not exists style, instead of\nhaving a mishmash...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Feb 2010 15:07:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,\n\t600ms query becomes 7500ms... but why? Postgres 8.3 query planner\n\tquirk?"
},
{
"msg_contents": "Kevin Grittner wrote:\n\nBryce Nesbitt <[email protected]> wrote:\n \n \n\nI've got a very slow query, which I can make faster by doing\nsomething seemingly trivial. \n \n\n \nOut of curiosity, what kind of performance do you get with?:\n \nEXPLAIN ANALYZE\nSELECT contexts.context_key\n FROM contexts\n JOIN articles ON (articles.context_key = contexts.context_key)\n JOIN matview_82034 ON (matview_82034.context_key =\n contexts.context_key)\n WHERE EXISTS\n (\n SELECT *\n FROM article_words\n JOIN words using (word_key)\n WHERE context_key = contexts.context_key\n AND word = 'insider'\n )\n AND EXISTS\n (\n SELECT *\n FROM article_words\n JOIN words using (word_key)\n WHERE context_key = contexts.context_key\n AND word = 'trading'\n )\n AND EXISTS\n (\n SELECT *\n FROM virtual_ancestors a\n JOIN bp_categories ON (bp_categories.context_key =\n a.ancestor_key)\n WHERE a.context_key = contexts.context_key\n AND lower(bp_categories.category) = 'law'\n )\n AND articles.indexed\n;\n \n\n512,600ms query becomes 225,976ms. Twice as fast on pos\nDefinitely not beating the 7500ms version.\nPostgreSQL 8.3.4",
"msg_date": "Fri, 12 Feb 2010 23:45:23 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? \t Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "Tom Lane wrote:\n> Given that it estimated 1 row out of \"words\" (quite correctly) and 12264\n> rows out of each scan on article_words, you'd think that the join size\n> estimate would be 12264, which would be off by \"only\" a factor of 3 from\n> the true result. Instead it's 23, off by a factor of 200 :-(.\n> \nHas anyone every proposed a \"learning\" query planner? One that\neventually clues in to estimate mismatches like this?\n",
"msg_date": "Fri, 12 Feb 2010 23:55:07 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "So as the op, back to the original posting....\n\nIn the real world, what should I do? Does it make sense to pull the\n\"AND articles.indexed\" clause into an outer query? Will that query\nsimply perform poorly on other arbitrary combinations of words?\n\n\nI'm happy to test any given query against the\nsame set of servers. If it involves a persistent change\nit has to run on a test server). For example, the Robert Haas method:\n# ...\nTotal runtime: 254207.857 ms\n\n# ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 1000;\n# ANALYZE VERBOSE article_words\nINFO: analyzing \"public.article_words\"\nINFO: \"article_words\": scanned 300000 of 1342374 pages, containing 64534899 live rows and 3264839 dead rows; 300000 rows in sample, 288766568 estimated total rows\nANALYZE\n# ...\nTotal runtime: 200591.751 ms\n\n# ALTER TABLE article_words ALTER COLUMN word_key SET STATISTICS 50;\n# ANALYZE VERBOSE article_words\n# ...\nTotal runtime: 201204.972 ms\n\n\nSadly, it made essentially zero difference. Attached.",
"msg_date": "Fri, 12 Feb 2010 23:58:12 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "So how about the removal of the \"AND\" clause? On a test server, this\ndrops the query from 201204 to 438 ms.\nIs this just random, or is it a real solution that might apply to any\narbitrary combination of words?\n\nAttached are three test runs:\nTotal runtime: 201204.972 ms\nTotal runtime: 437.766 ms\nTotal runtime: 341.727 ms",
"msg_date": "Sat, 13 Feb 2010 00:09:26 -0800",
"msg_from": "Bryce Nesbitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: 512,600ms query becomes 7500ms... but why? Postgres\n\t8.3 query planner quirk?"
},
{
"msg_contents": "\nYour Query :\n\nSELECT contexts.context_key FROM contexts\nJOIN articles ON (articles.context_key=contexts.context_key)\nJOIN matview_82034 ON (contexts.context_key=matview_82034.context_key)\nWHERE contexts.context_key IN\n (SELECT context_key FROM article_words JOIN words using (word_key) \nWHERE word = 'insider'\n INTERSECT\n SELECT context_key FROM article_words JOIN words using (word_key) \nWHERE word = 'trading')\nAND contexts.context_key IN\n (SELECT a.context_key FROM virtual_ancestors a JOIN bp_categories \nON (a.ancestor_key = bp_categories.context_key)\n WHERE lower(bp_categories.category) = 'law') AND articles.indexed;\n\n\nI guess this is some form of keyword search, like :\n- search for article\n- with keywords \"insider\" and \"trading\"\n- and belongs to a subcategory of \"law\"\n\nThe way you do it is exactly the same as the way phpBB forum implements \nit, in the case you use a database that doesn't support full text search. \nIt is a fallback mechanism only meant for small forums on old versions of \nMySQL, because it is extremely slow.\n\nEven your faster timing (7500 ms) is extremely slow.\n\nOption 1 :\n\na) Instead of building your own keywords table, use Postgres' fulltext \nsearch, which is a lot smarter about combining keywords than using \nINTERSECT.\nYou can either index the entire article, or use a separate keyword field, \nor both.\n\nb) If an article belongs to only one category, use an integer field. If, \nas is most often the case, an article can belong to several categories, \nuse gist. When an article belongs to categories 1,2,3, set a column \narticle_categories to the integer array {1,2,3}::INTEGER[]. Then, use a \ngist index on it.\n\nYou can then do a SELECT from articles (only one table) using an AND on \nthe intersection of article_categories with an array of the required \ncategories, and using Postgres' full text search on keywords.\n\nThis will most likely result in a Bitmap Scan, which will do the ANDing \nmuch faster than any other solution.\n\nAlternately, you can also use keywords like category_1234, stuff \neverything in your keywords column, and use only Fulltext search.\n\nYou should this solution first, it works really well. When the data set \nbecomes quite a bit larger than your RAM, it can get slow, though.\n\nOption 2 :\n\nPostgres' full text search is perfectly integrated and has benefits : \nfast, high write concurrency, etc. However full text search can be made \nmuch faster with some compromises.\n\nFor instance, I have tried Xapian : it is a lot faster than Postgres for \nfull text search (and more powerful too), but the price you pay is\n- a bit of work to integrate it\n\t- I suggest using triggers and a Python indexer script running in the \nbackground to update the index\n\t- You can't SQL query it, so you need some interfacing\n- updates are not concurrent (single-writer).\n\nSo, if you don't make lots of updates, Xapian may work for you. Its \nperformance is unbelievable, even on huge datasets.\n",
"msg_date": "Sat, 13 Feb 2010 16:40:09 +0100",
"msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? Postgres 8.3\n\tquery planner quirk?"
},
{
"msg_contents": "On Sat, Feb 13, 2010 at 2:58 AM, Bryce Nesbitt <[email protected]> wrote:\n> So as the op, back to the original posting....\n>\n> In the real world, what should I do? Does it make sense to pull the \"AND\n> articles.indexed\" clause into an outer query? Will that query simply\n> perform poorly on other arbitrary combinations of words?\n\nIt's really hard to say whether a query that performs better is going\nto always perform better on every combination of words. My suggestion\nis - try it and see. It's my experience that rewriting queries is a\npretty effective way of speeding them up, but I can't vouch for what\nwill happen in your particular case. It's depressing to see that\nincreasing the statistics target didn't help much; but that makes me\nthink that the problem is that your join selectivity estimates are\noff, as Tom and Jorge said upthread. Rewriting the query or trying an\nupgrade are probably your only options.\n\n...Robert\n",
"msg_date": "Mon, 15 Feb 2010 11:41:43 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 512,600ms query becomes 7500ms... but why? Postgres 8.3\n\tquery planner quirk?"
}
] |
[
{
"msg_contents": "Hello all,\nApologies for the long mail.\nI work for a company that is provides solutions mostly on a Java/Oracle \nplatform. Recently we moved on of our products to PostgreSQL. The main \nreason was PostgreSQL's GIS capabilities and the inability of government \ndepartments (especially road/traffic) to spend a lot of money for such \nprojects. This product is used to record details about accidents and \nrelated analysis (type of road, when/why etc) with maps. Fortunately, even \nin India, an accident reporting application does not have to handle many \ntps :). So, I can't say PostgreSQL's performance was really tested in \nthis case.\nLater, I tested one screen of one of our products - load testing with \nJmeter. We tried it with Oracle, DB2, PostgreSQL and Ingres, and \nPostgreSQL easily out-performed the rest. We tried a transaction mix with \n20+ SELECTS, update, delete and a few inserts.\nAfter a really good experience with the database, I subscribed to all \nPostgreSQL groups (my previous experience is all-Oracle) and reading these \nmails, I realized that many organizations are using plan, 'not customized' \n PostgreSQL for databases that handle critical applications. Since there \nis no company trying to 'sell' PostgreSQL, many of us are not aware of \nsuch cases.\nCould some of you please share some info on such scenarios- where you are \nsupporting/designing/developing databases that run into at least a few \nhundred GBs of data (I know, that is small by todays' standards)?\nI went through\nhttp://www.postgresql.org/about/casestudies/\nand felt those are a bit old. I am sure PostgreSQL has matured a lot more \nfrom the days when these case studies where posted. I went through the \ncase studies at EnterpiseDB and similar vendors too. But those are \ncustomized PostgreSQL servers.\nI am looking more for a 'first-hand' feedback\nAny feedback - a few sentences with the db size, tps, h/w necessary to \nsupport that, and acceptable down-time, type of application etc will be \ngreatly appreciated.\nOur products are not of the blog/social networking type, but more of \non-line reservation type where half an hour down-time can lead to \nsignificant revenue losses for customers.\nThank you,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello all,\nApologies for the long mail.\nI work for a company that is provides\nsolutions mostly on a Java/Oracle platform. Recently we moved on of our\nproducts to PostgreSQL. The main reason was PostgreSQL's GIS capabilities\nand the inability of government departments (especially road/traffic) to\nspend a lot of money for such projects. This product is used to record\ndetails about accidents and related analysis (type of road, when/why etc)\nwith maps. Fortunately, even in India, an accident reporting application\ndoes not have to handle many tps :). 
So, I can't say PostgreSQL's\nperformance was really tested in this case.\nLater, I tested one screen of one of\nour products - load testing with Jmeter. We tried it with Oracle, DB2,\nPostgreSQL and Ingres, and PostgreSQL easily out-performed the rest. We\ntried a transaction mix with 20+ SELECTS, update, delete and a few inserts.\nAfter a really good experience with\nthe database, I subscribed to all PostgreSQL groups (my previous experience\nis all-Oracle) and reading these mails, I realized that many organizations\nare using plan, 'not customized' PostgreSQL for databases that handle\ncritical applications. Since there is no company trying to 'sell'\nPostgreSQL, many of us are not aware of such cases.\nCould some of you please share some\ninfo on such scenarios- where you are supporting/designing/developing databases\nthat run into at least a few hundred GBs of data (I know, that is small\nby todays' standards)?\nI went through\nhttp://www.postgresql.org/about/casestudies/\nand felt those are a bit old. I am sure\nPostgreSQL has matured a lot more from the days when these case studies\nwhere posted. I went through the case studies at EnterpiseDB and similar\nvendors too. But those are customized PostgreSQL servers.\nI am looking more for a 'first-hand'\nfeedback\nAny feedback - a few sentences with\nthe db size, tps, h/w necessary to support that, and acceptable down-time,\ntype of application etc will be greatly appreciated.\nOur products are not of the blog/social\nnetworking type, but more of on-line reservation type where half an hour\ndown-time can lead to significant revenue losses for customers.\nThank you,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"",
"msg_date": "Wed, 10 Feb 2010 09:39:02 +0530",
"msg_from": "Jayadevan M <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL - case studies"
},
{
"msg_contents": "On Wed, Feb 10, 2010 at 9:39 AM, Jayadevan M\n<[email protected]>wrote:\n\n\n> Any feedback - a few sentences with the db size, tps, h/w necessary to\n> support that, and acceptable down-time, type of application etc will be\n> greatly appreciated.\n> Our products are not of the blog/social networking type, but more of\n> on-line reservation type where half an hour down-time can lead to\n> significant revenue losses for customers.\n> Thank you,\n> Jayadevan\n>\n>\nI don't have experience with DB size greater than 25-26 GB at the moment,\nbut Postgres surely has no problems handling my requirements. Mind you,\nthis Was on a stock Postgresql server running on FreeBSD 7.x without any\noptimizations. Didn't seen any downtime apart from the ones that I goofed\nup. H/w config was dual processor Xeon/8GB RAM and a single SAS 15K disk\n(146 GB). The rig has been upgraded last week, but it ran fine for more than\n18 months.\n\nThat said, we run an application which generates around 170-200 transactions\nper second (mix of select insert, update and delete). AFAIK, most of us\nusing Postgres are running some or the other critical application where\ndowntime has a significant cost attached to it.\n\nWith regards\n\nAmitabh\n\nOn Wed, Feb 10, 2010 at 9:39 AM, Jayadevan M <[email protected]> wrote: \nAny feedback - a few sentences with\nthe db size, tps, h/w necessary to support that, and acceptable down-time,\ntype of application etc will be greatly appreciated.\nOur products are not of the blog/social\nnetworking type, but more of on-line reservation type where half an hour\ndown-time can lead to significant revenue losses for customers.\nThank you,\nJayadevan\nI don't have experience with DB size greater than 25-26 GB at the moment, but Postgres surely has no problems handling my requirements. Mind you, this Was on a stock Postgresql server running on FreeBSD 7.x without any optimizations. Didn't seen any downtime apart from the ones that I goofed up. H/w config was dual processor Xeon/8GB RAM and a single SAS 15K disk (146 GB). The rig has been upgraded last week, but it ran fine for more than 18 months.\nThat said, we run an application which generates around 170-200 transactions per second (mix of select insert, update and delete). AFAIK, most of us using Postgres are running some or the other critical application where downtime has a significant cost attached to it.\nWith regardsAmitabh",
"msg_date": "Wed, 10 Feb 2010 10:08:37 +0530",
"msg_from": "Amitabh Kant <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL - case studies"
},
{
"msg_contents": "Quick note, please stick to text formatted email for the mailing list,\nit's the preferred format.\n\nOn Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M\n<[email protected]> wrote:\n>\n> Hello all,\n> Apologies for the long mail.\n> I work for a company that is provides solutions mostly on a Java/Oracle platform. Recently we moved on of our products to PostgreSQL. The main reason was PostgreSQL's GIS capabilities and the inability of government departments (especially road/traffic) to spend a lot of money for such projects. This product is used to record details about accidents and related analysis (type of road, when/why etc) with maps. Fortunately, even in India, an accident reporting application does not have to handle many tps :). So, I can't say PostgreSQL's performance was really tested in this case.\n> Later, I tested one screen of one of our products - load testing with Jmeter. We tried it with Oracle, DB2, PostgreSQL and Ingres, and PostgreSQL easily out-performed the rest. We tried a transaction mix with 20+ SELECTS, update, delete and a few inserts.\n\nPlease note that benchmarking oracle (and a few other commercial dbs)\nand then publishing those results without permission of oracle is\nconsidered to be in breech of their contract. Yeah, another wonderful\naspect of using Oracle.\n\nThat said, and as someone who is not an oracle licensee in any way,\nthis mimics my experience that postgresql is a match for oracle, db2,\nand most other databases in the simple, single db on commodity\nhardware scenario.\n\n> After a really good experience with the database, I subscribed to all PostgreSQL groups (my previous experience is all-Oracle) and reading these mails, I realized that many organizations are using plan, 'not customized' PostgreSQL for databases that handle critical applications. Since there is no company trying to 'sell' PostgreSQL, many of us are not aware of such cases.\n\nActually there are several companies that sell pgsql service, and some\nthat sell customized versions. RedHat, Command Prompt, EnterpriseDB,\nand so on.\n\n> Could some of you please share some info on such scenarios- where you are supporting/designing/developing databases that run into at least a few hundred GBs of data (I know, that is small by todays' standards)?\n\nThere are other instances of folks on the list sharing this kind of\ninfo you can find by searching the archives. I've used pgsql for\nabout 10 years for anywhere from a few megabytes to hundreds of\ngigabytes, and all kinds of applications.\n\nWhere I currently work we have a main data store for a web app that is\nabout 180Gigabytes and growing, running on three servers with slony\nreplication. We handle somewhere in the range of 10k to 20k queries\nper minute (a mix of 90% or so reads to 10% writes). Peak load can be\ninto the 30k or higher reqs / minute.\n\nThe two big servers that handle this load are dual quad core opteron\n2.1GHz machines with 32Gig RAM and 16 15krpm SAS drives configured as\n2 in RAID-1 for OS and pg_xlog, 2 hot spares, and 12 in a RAID-10 for\nthe main data. HW Raid controller is the Areca 1680 which is mostly\nstable, except for the occasional (once a year or so) hang problem\nwhich has been described, and which Areca has assured me they are\nworking on.\n\nOur total downtime due to database outages in the last year or so has\nbeen 10 to 20 minutes, and that was due to a RAID card driver bug that\nhits us about once every 300 to 400 days. 
the majority of the down\ntime has been waiting for our hosting provider to hit the big red\nswitch and restart the main server.\n\nOur other pgsql servers provide search facility, with a db size of\naround 300Gig, and statistics at around ~1TB.\n\n> I am sure PostgreSQL has matured a lot more from the days when these case studies where posted. I went through the case studies at EnterpiseDB and similar vendors too. But those are customized PostgreSQL servers.\n\nNot necessarily. They sell support more than anything, and the\nmajority of customization is not for stability but for additional\nfeatures, such as mpp queries or replication etc.\n\nThe real issue you run into is that many people don't want to tip\ntheir hand that they are using pgsql because it is a competitive\nadvantage. It's inexpensive, capable, and relatively easy to use. If\nyour competitor is convinced that Oracle or MSSQL server with $240k in\nlicensing each year is the best choice, and you're whipping them with\npgsql, the last thing you want is for them to figure that out and\nswitch.\n",
"msg_date": "Tue, 9 Feb 2010 22:49:24 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL - case studies"
},
{
"msg_contents": "El 10/02/2010 6:49, Scott Marlowe escribi�:\n> Quick note, please stick to text formatted email for the mailing list,\n> it's the preferred format.\n>\n> On Tue, Feb 9, 2010 at 9:09 PM, Jayadevan M\n> <[email protected]> wrote:\n> \n>> Hello all,\n>> Apologies for the long mail.\n>> I work for a company that is provides solutions mostly on a Java/Oracle platform. Recently we moved on of our products to PostgreSQL. The main reason was PostgreSQL's GIS capabilities and the inability of government departments (especially road/traffic) to spend a lot of money for such projects. This product is used to record details about accidents and related analysis (type of road, when/why etc) with maps. Fortunately, even in India, an accident reporting application does not have to handle many tps :). So, I can't say PostgreSQL's performance was really tested in this case.\n>> Later, I tested one screen of one of our products - load testing with Jmeter. We tried it with Oracle, DB2, PostgreSQL and Ingres, and PostgreSQL easily out-performed the rest. We tried a transaction mix with 20+ SELECTS, update, delete and a few inserts.\n>> \n> Please note that benchmarking oracle (and a few other commercial dbs)\n> and then publishing those results without permission of oracle is\n> considered to be in breech of their contract. Yeah, another wonderful\n> aspect of using Oracle.\n>\n> That said, and as someone who is not an oracle licensee in any way,\n> this mimics my experience that postgresql is a match for oracle, db2,\n> and most other databases in the simple, single db on commodity\n> hardware scenario.\n>\n> \n>> After a really good experience with the database, I subscribed to all PostgreSQL groups (my previous experience is all-Oracle) and reading these mails, I realized that many organizations are using plan, 'not customized' PostgreSQL for databases that handle critical applications. Since there is no company trying to 'sell' PostgreSQL, many of us are not aware of such cases.\n>> \n> Actually there are several companies that sell pgsql service, and some\n> that sell customized versions. RedHat, Command Prompt, EnterpriseDB,\n> and so on.\n>\n> \n>> Could some of you please share some info on such scenarios- where you are supporting/designing/developing databases that run into at least a few hundred GBs of data (I know, that is small by todays' standards)?\n>> \n> There are other instances of folks on the list sharing this kind of\n> info you can find by searching the archives. I've used pgsql for\n> about 10 years for anywhere from a few megabytes to hundreds of\n> gigabytes, and all kinds of applications.\n>\n> Where I currently work we have a main data store for a web app that is\n> about 180Gigabytes and growing, running on three servers with slony\n> replication. We handle somewhere in the range of 10k to 20k queries\n> per minute (a mix of 90% or so reads to 10% writes). Peak load can be\n> into the 30k or higher reqs / minute.\n>\n> The two big servers that handle this load are dual quad core opteron\n> 2.1GHz machines with 32Gig RAM and 16 15krpm SAS drives configured as\n> 2 in RAID-1 for OS and pg_xlog, 2 hot spares, and 12 in a RAID-10 for\n> the main data. 
HW Raid controller is the Areca 1680 which is mostly\n> stable, except for the occasional (once a year or so) hang problem\n> which has been described, and which Areca has assured me they are\n> working on.\n>\n> Our total downtime due to database outages in the last year or so has\n> been 10 to 20 minutes, and that was due to a RAID card driver bug that\n> hits us about once every 300 to 400 days. the majority of the down\n> time has been waiting for our hosting provider to hit the big red\n> switch and restart the main server.\n>\n> Our other pgsql servers provide search facility, with a db size of\n> around 300Gig, and statistics at around ~1TB.\n>\n> \n>> I am sure PostgreSQL has matured a lot more from the days when these case studies where posted. I went through the case studies at EnterpiseDB and similar vendors too. But those are customized PostgreSQL servers.\n>> \n> Not necessarily. They sell support more than anything, and the\n> majority of customization is not for stability but for additional\n> features, such as mpp queries or replication etc.\n>\n> The real issue you run into is that many people don't want to tip\n> their hand that they are using pgsql because it is a competitive\n> advantage. It's inexpensive, capable, and relatively easy to use. If\n> your competitor is convinced that Oracle or MSSQL server with $240k in\n> licensing each year is the best choice, and you're whipping them with\n> pgsql, the last thing you want is for them to figure that out and\n> switch.\n>\n> \nFollowing with that subject, there are many apps on the world that are \nusing PostgreSQL for its business.\nWe are planning the design and deployment of the a large PostgreSQL \nCluster for a DWH-ODS-BI apps.\nWe are documenting everthing for give the information later to be \npublished on the PostgreSQL CaseStudies section.\n\nWe are using Slony-I for replication, PgBouncer for pooling \nconnections,Heartbeat for monitoring and fault detections and CentOS nd \nFreeBSD like OS base.\nThe pg_xlog directory are in a RAID-1 and the main data in a RAID-10.\n\nDo you have any recommendation?\n\nNote: Any has a MPP querys implementation for PostgreSQL that can be shared?\n\nRegards\n\n\n-- \n--------------------------------------------------------------------------------\n\"Para ser realmente grande, hay que estar con la gente, no por encima de ella.\"\n Montesquieu\nIng. Marcos Lu�s Ort�z Valmaseda\nPostgreSQL System DBA&& DWH -- BI Apprentice\n\nCentro de Tecnolog�as de Almacenamiento y An�lisis de Datos (CENTALAD)\nUniversidad de las Ciencias Inform�ticas\n\nLinux User # 418229\n\n-- PostgreSQL --\n\"TIP 4: No hagas 'kill -9' a postmaster\"\nhttp://www.postgresql-es.org\nhttp://www.postgresql.org\nhttp://www.planetpostgresql.org\n\n-- DWH + BI --\nThe Data WareHousing Institute\nhttp://www.tdwi.org\nhttp://www.tdwi.org/cbip\n---------------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 10 Feb 2010 10:47:51 +0100",
"msg_from": "\"Ing. Marcos L. Ortiz Valmaseda\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL - case studies"
},
{
"msg_contents": "Jayadevan M <[email protected]> wrote:\n \n> Could some of you please share some info on such scenarios- where\n> you are supporting/designing/developing databases that run into at\n> least a few hundred GBs of data (I know, that is small by todays'\n> standards)?\n \nI'm a database administrator for the Wisconsin Courts. We've got\nabout 200 PostgreSQL database clusters on about 100 servers spread\nacross the state. Databases range from tiny (few MB) to 1.3 TB. \n \nCheck out this for more info:\n \nhttp://www.pgcon.org/2009/schedule/events/129.en.html\n \nI hope that helps. If you have any particular questions not\nanswered by the above, just ask.\n \n-Kevin\n",
"msg_date": "Wed, 10 Feb 2010 09:47:51 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL - case studies"
},
{
"msg_contents": "* Kevin Grittner ([email protected]) wrote:\n> > Could some of you please share some info on such scenarios- where\n> > you are supporting/designing/developing databases that run into at\n> > least a few hundred GBs of data (I know, that is small by todays'\n> > standards)?\n\nJust saw this, so figured I'd comment:\n\ntsf=> \\l+\n List of databases\n Name | Owner | Encoding | Collation | Ctype | Access privileges | Size | Tablespace | Description \n-----------+----------+----------+-------------+-------------+----------------------------+---------+-------------+---------------------------\n beac | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres | 1724 GB | pg_default | \n\nDoesn't look very pretty, but the point is that its 1.7TB. There's a\nfew other smaller databases on that system too. PG handles it quite\nwell, though this is primairly for data-mining.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Wed, 10 Feb 2010 10:55:03 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL - case studies"
},
{
"msg_contents": "Kevin Grittner ([email protected]) wrote:\n>>> Could some of you please share some info on such scenarios- where\n>>> you are supporting/designing/developing databases that run into at\n>>> least a few hundred GBs of data (I know, that is small by todays'\n>>> standards)?\n>>> \nAt NuevaSync we use PG in a one-database-per-server design, with our own\nreplication system between cluster nodes. The largest node has more than \n200G online.\nThis is an OLTP type workload.\n\n\n\n\n\n\n",
"msg_date": "Wed, 10 Feb 2010 09:10:31 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] PostgreSQL - case studies"
}
] |
[
{
"msg_contents": "Can anybody briefly explain me how each postgres process allocate\nmemory for it needs?\nI mean, what is the biggest size of malloc() it may want? How many\nsuch chunks? What is the average size of allocations?\n\nI think that at first it allocates contiguous piece of shared memory\nfor \"shared buffers\" (rather big, hundreds of megabytes usually, by\none chunk).\nWhat next? temp_buffers, work_mem, maintenance_work_mem - are they\nallocated as contiguous too?\nWhat about other needs? By what size they are typically allocated?\n-- \nantonvm\n",
"msg_date": "Wed, 10 Feb 2010 10:10:36 +0500",
"msg_from": "Anton Maksimenkov <[email protected]>",
"msg_from_op": true,
"msg_subject": "How exactly PostgreSQL allocates memory for its needs?"
},
{
"msg_contents": "On 2/10/2010 12:10 AM, Anton Maksimenkov wrote:\n> Can anybody briefly explain me how each postgres process allocate\n> memory for it needs?\n> I mean, what is the biggest size of malloc() it may want? How many\n> such chunks? What is the average size of allocations?\n>\n> I think that at first it allocates contiguous piece of shared memory\n> for \"shared buffers\" (rather big, hundreds of megabytes usually, by\n> one chunk).\n> What next? temp_buffers, work_mem, maintenance_work_mem - are they\n> allocated as contiguous too?\n> What about other needs? By what size they are typically allocated?\n> \n\nThere is no short answer to this, you should read section 18 of the manual\nhttp://www.postgresql.org/docs/8.4/interactive/runtime-config.html\nspecifically section 18.4\nhttp://www.postgresql.org/docs/8.4/interactive/runtime-config-resource.html\n\nand performance section of the wiki\nhttp://wiki.postgresql.org/wiki/Performance_Optimization\n\nHere is a link annotated postgresql.conf\nhttp://www.pgcon.org/2008/schedule/attachments/44_annotated_gucs_draft1.pdf\n\nKeep in mind each connection/client that connecting to the server \ncreates a new process on the server. Each one the settings you list \nabove is the max amount of memory each one of those sessions is allowed \nto consume.\n\n\n\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored.\n\nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately.\nThank you.\n\n",
"msg_date": "Wed, 10 Feb 2010 11:43:43 -0500",
"msg_from": "Justin Graf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How exactly PostgreSQL allocates memory for its needs?"
},
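As a quick companion to the configuration links above: the memory-related settings a running server is actually using can be read straight from the pg_settings view. This is only a minimal sketch for inspection; the setting names are standard, but the right values depend entirely on the workload.

    -- Per-server and per-backend memory knobs, with their units
    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'temp_buffers', 'work_mem',
                    'maintenance_work_mem', 'max_connections');

    -- Or one at a time
    SHOW shared_buffers;
    SHOW work_mem;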
{
"msg_contents": "On Wed, Feb 10, 2010 at 9:43 AM, Justin Graf <[email protected]> wrote:\n> Keep in mind each connection/client that connecting to the server\n> creates a new process on the server. Each one the settings you list\n> above is the max amount of memory each one of those sessions is allowed\n> to consume.\n\nIt's even worse for work_mem (formerly sort_mem) in that each\nindividual hash agg or sort can grab that much memory. A complex\nquery with 4 sorts and 2 hash aggregates could chew through 6 x\nwork_mem if it needed it. Which is why work_mem can be such a\nhorrific foot gun.\n",
"msg_date": "Wed, 10 Feb 2010 20:46:52 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How exactly PostgreSQL allocates memory for its needs?"
}
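To see the per-node behaviour Scott describes, EXPLAIN ANALYZE is the usual tool: each Sort node reports whether it fit within work_mem or spilled to disk. The sketch below is illustrative only; the table and column names are invented and the setting value is not a recommendation.

    SET work_mem = '16MB';   -- applies per sort (and hash) operation, per backend

    EXPLAIN ANALYZE
    SELECT customer_id, sum(amount) AS total
      FROM orders
     GROUP BY customer_id
     ORDER BY total DESC;
    -- A plan like this can contain both a HashAggregate and a Sort node;
    -- the Sort line shows e.g. "Sort Method: quicksort Memory: ..." when it
    -- fits in work_mem, or "external merge Disk: ..." when it does not.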
] |
[
{
"msg_contents": "Hi,\n\nI am trying to improve delete performance on a database with several\nforeign keys between relations that have 100M or so rows.\n\nUntil now, I have manually disabled the triggers, done the delete, and\nre-enabled the triggers.\n\nThis works, but I have to do that when I am sure no other user will\naccess the database...\n\nI am wondering if deferring foreign key constraints (instead of\ndisableing them) would increase performance, compared to non deferred\nconstraints (and compared to disableing the constraints, but I guess no\nin this case).\n\nThanks,\n\nFranck\n\n\n\n",
"msg_date": "Wed, 10 Feb 2010 11:55:31 +0100",
"msg_from": "Franck Routier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Deferred constraint and delete performance"
},
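For reference, the "disable the triggers" approach Franck mentions typically looks like the sketch below; the table names are placeholders. Note that DISABLE TRIGGER ALL also turns off the system triggers that implement the foreign keys (which normally requires superuser rights), so the checks are simply skipped rather than deferred, and nothing re-validates the data afterwards.

    BEGIN;
    -- FK enforcement runs as row-level triggers on the tables involved
    ALTER TABLE parent_table DISABLE TRIGGER ALL;
    ALTER TABLE child_table  DISABLE TRIGGER ALL;

    DELETE FROM parent_table WHERE created < DATE '2009-01-01';

    ALTER TABLE parent_table ENABLE TRIGGER ALL;
    ALTER TABLE child_table  ENABLE TRIGGER ALL;
    COMMIT;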
{
"msg_contents": "Franck Routier <[email protected]> writes:\n> I am wondering if deferring foreign key constraints (instead of\n> disableing them) would increase performance, compared to non deferred\n> constraints\n\nNo, it wouldn't make any noticeable difference AFAICS. It would\npostpone the work from end-of-statement to end-of-transaction,\nbut not make the work happen any more (or less) efficiently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Feb 2010 09:56:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred constraint and delete performance "
},
{
"msg_contents": "On Wednesday 10 February 2010 15:56:40 Tom Lane wrote:\n> Franck Routier <[email protected]> writes:\n> > I am wondering if deferring foreign key constraints (instead of\n> > disableing them) would increase performance, compared to non deferred\n> > constraints\n> \n> No, it wouldn't make any noticeable difference AFAICS. It would\n> postpone the work from end-of-statement to end-of-transaction,\n> but not make the work happen any more (or less) efficiently.\nIt could make a difference if the transaction is rather long and updates the \nsame row repeatedly because of better cache usage. But I admit thats a bit of \na constructed scenario (where one likely would get into trigger-queue size \nproblems as well)\n\nAndres\n",
"msg_date": "Wed, 10 Feb 2010 18:36:23 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred constraint and delete performance"
},
{
"msg_contents": "2010/2/10 Tom Lane <[email protected]>\n\n> Franck Routier <[email protected]> writes:\n> > I am wondering if deferring foreign key constraints (instead of\n> > disableing them) would increase performance, compared to non deferred\n> > constraints\n>\n> No, it wouldn't make any noticeable difference AFAICS. It would\n> postpone the work from end-of-statement to end-of-transaction,\n> but not make the work happen any more (or less) efficiently.\n>\n> What about disc access? Won't \"working\" with one table, then another be\nfaster than working with both at the same time?\n\n2010/2/10 Tom Lane <[email protected]>\nFranck Routier <[email protected]> writes:\n> I am wondering if deferring foreign key constraints (instead of\n> disableing them) would increase performance, compared to non deferred\n> constraints\n\nNo, it wouldn't make any noticeable difference AFAICS. It would\npostpone the work from end-of-statement to end-of-transaction,\nbut not make the work happen any more (or less) efficiently.\nWhat about disc access? Won't \"working\" with one table, then another be faster than working with both at the same time?",
"msg_date": "Sun, 14 Feb 2010 11:45:14 +0200",
"msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Deferred constraint and delete performance"
}
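For completeness, "deferring" a foreign key is a property of the constraint definition plus a per-transaction switch, as in the hedged sketch below (the constraint, table and column names are made up). As Tom notes above, this only moves the checks to commit time; it does not make them cheaper.

    ALTER TABLE child_table
      DROP CONSTRAINT child_parent_fk,
      ADD  CONSTRAINT child_parent_fk
           FOREIGN KEY (parent_id) REFERENCES parent_table (id)
           DEFERRABLE INITIALLY IMMEDIATE;

    BEGIN;
    SET CONSTRAINTS ALL DEFERRED;   -- FK checks now run at COMMIT
    DELETE FROM parent_table WHERE created < DATE '2009-01-01';
    COMMIT;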
] |
[
{
"msg_contents": "\n\nHi all,\n\ni am trying to move my app from M$sql to PGsql, but i need a bit of help :)\n\n\non M$sql, i had certain tables that was made as follow (sorry pseudo code)\n\ncontab_y\n date\n amt\n uid\n\n\ncontab_yd\n date\n amt\n uid\n\ncontab_ymd\n date\n amt \n uid\n\n\nand so on..\n\nthis was used to \"solidify\" (aggregate..btw sorry for my terrible english) the data on it..\n\nso basically, i get\n\ncontab_y\ndate = 2010\namt = 100\nuid = 1\n\ncontab_ym\n date = 2010-01\n amt = 10\n uid = 1\n----\n date = 2010-02\n amt = 90\n uid = 1\n\n\ncontab_ymd\n date=2010-01-01\n amt = 1\n uid = 1\n----\nblabla\n\n\nin that way, when i need to do a query for a long ranges (ie: 1 year) i just take the rows that are contained to contab_y\nif i need to got a query for a couple of days, i can go on ymd, if i need to get some data for the other timeframe, i can do some cool intersection between \nthe different table using some huge (but fast) queries.\n\n\nNow, the matter is that this design is hard to mantain, and the tables are difficult to check\n\nwhat i have try is to go for a \"normal\" approach, using just a table that contains all the data, and some proper indexing.\nThe issue is that this table can contains easilly 100M rows :)\nthat's why the other guys do all this work to speed-up queryes splitting data on different table and precalculating the sums.\n\n\nI am here to ask for an advice to PGsql experts: \nwhat do you think i can do to better manage this situation? \nthere are some other cases where i can take a look at? maybe some documentation, or some technique that i don't know?\nany advice is really appreciated!\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Wed, 10 Feb 2010 23:13:19 +0100",
"msg_from": "rama <[email protected]>",
"msg_from_op": true,
"msg_subject": "perf problem with huge table"
},
{
"msg_contents": "On 2/10/2010 5:13 PM, rama wrote:\n> in that way, when i need to do a query for a long ranges (ie: 1 year) i just take the rows that are contained to contab_y\n> if i need to got a query for a couple of days, i can go on ymd, if i need to get some data for the other timeframe, i can do some cool intersection between\n> the different table using some huge (but fast) queries.\n>\n>\n> Now, the matter is that this design is hard to mantain, and the tables are difficult to check\n>\n> what i have try is to go for a \"normal\" approach, using just a table that contains all the data, and some proper indexing.\n> The issue is that this table can contains easilly 100M rows :)\n> that's why the other guys do all this work to speed-up queryes splitting data on different table and precalculating the sums.\n>\n>\n> I am here to ask for an advice to PGsql experts:\n> what do you think i can do to better manage this situation?\n> there are some other cases where i can take a look at? maybe some documentation, or some technique that i don't know?\n> any advice is really appreciated!\n> \nLook into table partitioning\nhttp://www.postgresql.org/docs/current/static/ddl-partitioning.html\n\nIts similar to what you are doing but it simplifies queries and logic to \naccess large data sets.\n\nAll legitimate Magwerks Corporation quotations are sent in a .PDF file attachment with a unique ID number generated by our proprietary quotation system. Quotations received via any other form of communication will not be honored.\n\nCONFIDENTIALITY NOTICE: This e-mail, including attachments, may contain legally privileged, confidential or other information proprietary to Magwerks Corporation and is intended solely for the use of the individual to whom it addresses. If the reader of this e-mail is not the intended recipient or authorized agent, the reader is hereby notified that any unauthorized viewing, dissemination, distribution or copying of this e-mail is strictly prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and destroy all occurrences of this e-mail immediately.\nThank you.\n\n",
"msg_date": "Wed, 10 Feb 2010 17:45:15 -0500",
"msg_from": "Justin Graf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
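A minimal sketch of the inheritance-style partitioning that documentation link describes, sized down to two monthly children and an insert trigger; the layout is modelled loosely on the contab example above and all names and ranges are only illustrative. With constraint_exclusion enabled, queries carrying a date predicate only touch the children whose CHECK constraints can match.

    CREATE TABLE contab (
        uid integer NOT NULL,
        ts  date    NOT NULL,
        amt numeric NOT NULL
    );

    CREATE TABLE contab_2010_01 (
        CHECK (ts >= DATE '2010-01-01' AND ts < DATE '2010-02-01')
    ) INHERITS (contab);

    CREATE TABLE contab_2010_02 (
        CHECK (ts >= DATE '2010-02-01' AND ts < DATE '2010-03-01')
    ) INHERITS (contab);

    CREATE OR REPLACE FUNCTION contab_insert_trigger() RETURNS trigger AS $$
    BEGIN
        IF NEW.ts >= DATE '2010-01-01' AND NEW.ts < DATE '2010-02-01' THEN
            INSERT INTO contab_2010_01 VALUES (NEW.*);
        ELSIF NEW.ts >= DATE '2010-02-01' AND NEW.ts < DATE '2010-03-01' THEN
            INSERT INTO contab_2010_02 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'contab: no partition for date %', NEW.ts;
        END IF;
        RETURN NULL;  -- the row has already been stored in a child table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER contab_partition_insert
        BEFORE INSERT ON contab
        FOR EACH ROW EXECUTE PROCEDURE contab_insert_trigger();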
{
"msg_contents": "Hi Rama\n\nI'm actually looking at going in the other direction ....\n\nI have an app using PG where we have a single table where we just added a\nlot of data, and I'm ending up with many millions of rows, and I'm finding\nthat the single table schema simply doesn't scale.\n\nIn PG, the table partitioning is only handled by the database for reads, for\ninsert/update you need to do quite a lot of DIY (setting up triggers, etc.)\nso I am planning to just use named tables and generate the necessary DDL /\nDML in vanilla SQL the same way that your older code does.\n\nMy experience is mostly with Oracle, which is not MVCC, so I've had to\nrelearn some stuff:\n\n- Oracle often answers simple queries (e.g. counts and max / min) using only\nthe index, which is of course pre-sorted. PG has to go out and fetch the\nrows to see if they are still in scope, and if they are stored all over the\nplace on disk it means an 8K random page fetch for each row. This means that\nadding an index to PG is not nearly the silver bullet that it can be with\nsome non-MVCC databases.\n\n- PG's indexes seem to be quite a bit larger than Oracle's, but that's gut\nfeel, I haven't been doing true comparisons ... however, for my app I have\nlimited myself to only two indexes on that table, and each index is larger\n(in disk space) than the table itself ... I have 60GB of data and 140GB of\nindexes :-)\n\n- There is a lot of row turnover in my big table (I age out data) .... a big\ndelete (millions of rows) in PG seems a bit more expensive to process than\nin Oracle, however PG is not nearly as sensitive to transaction sizes as\nOracle is, so you can cheerfully throw out one big \"DELETE from FOO where\n...\" and let the database chew on it\n\nI am interested to hear about your progress.\n\nCheers\nDave\n\nOn Wed, Feb 10, 2010 at 4:13 PM, rama <[email protected]> wrote:\n\n>\n>\n> Hi all,\n>\n> i am trying to move my app from M$sql to PGsql, but i need a bit of help :)\n>\n>\n> on M$sql, i had certain tables that was made as follow (sorry pseudo code)\n>\n> contab_y\n> date\n> amt\n> uid\n>\n>\n> contab_yd\n> date\n> amt\n> uid\n>\n> contab_ymd\n> date\n> amt\n> uid\n>\n>\n> and so on..\n>\n> this was used to \"solidify\" (aggregate..btw sorry for my terrible english)\n> the data on it..\n>\n> so basically, i get\n>\n> contab_y\n> date = 2010\n> amt = 100\n> uid = 1\n>\n> contab_ym\n> date = 2010-01\n> amt = 10\n> uid = 1\n> ----\n> date = 2010-02\n> amt = 90\n> uid = 1\n>\n>\n> contab_ymd\n> date=2010-01-01\n> amt = 1\n> uid = 1\n> ----\n> blabla\n>\n>\n> in that way, when i need to do a query for a long ranges (ie: 1 year) i\n> just take the rows that are contained to contab_y\n> if i need to got a query for a couple of days, i can go on ymd, if i need\n> to get some data for the other timeframe, i can do some cool intersection\n> between\n> the different table using some huge (but fast) queries.\n>\n>\n> Now, the matter is that this design is hard to mantain, and the tables are\n> difficult to check\n>\n> what i have try is to go for a \"normal\" approach, using just a table that\n> contains all the data, and some proper indexing.\n> The issue is that this table can contains easilly 100M rows :)\n> that's why the other guys do all this work to speed-up queryes splitting\n> data on different table and precalculating the sums.\n>\n>\n> I am here to ask for an advice to PGsql experts:\n> what do you think i can do to better manage this situation?\n> there are some other cases where i can take a look at? 
maybe some\n> documentation, or some technique that i don't know?\n> any advice is really appreciated!\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi RamaI'm actually looking at going in the other direction .... I have an app using PG where we have a single table where we just added a lot of data, and I'm ending up with many millions of rows, and I'm finding that the single table schema simply doesn't scale. \nIn PG, the table partitioning is only handled by the database for reads, for insert/update you need to do quite a lot of DIY (setting up triggers, etc.) so I am planning to just use named tables and generate the necessary DDL / DML in vanilla SQL the same way that your older code does. \nMy experience is mostly with Oracle, which is not MVCC, so I've had to relearn some stuff:- Oracle often answers simple queries (e.g. counts and max / min) using only the index, which is of course pre-sorted. PG has to go out and fetch the rows to see if they are still in scope, and if they are stored all over the place on disk it means an 8K random page fetch for each row. This means that adding an index to PG is not nearly the silver bullet that it can be with some non-MVCC databases.\n- PG's indexes seem to be quite a bit larger than Oracle's, but that's gut feel, I haven't been doing true comparisons ... however, for my app I have limited myself to only two indexes on that table, and each index is larger (in disk space) than the table itself ... I have 60GB of data and 140GB of indexes :-)\n- There is a lot of row turnover in my big table (I age out data) .... a big delete (millions of rows) in PG seems a bit more expensive to process than in Oracle, however PG is not nearly as sensitive to transaction sizes as Oracle is, so you can cheerfully throw out one big \"DELETE from FOO where ...\" and let the database chew on it\nI am interested to hear about your progress.CheersDaveOn Wed, Feb 10, 2010 at 4:13 PM, rama <[email protected]> wrote:\n\n\nHi all,\n\ni am trying to move my app from M$sql to PGsql, but i need a bit of help :)\n\n\non M$sql, i had certain tables that was made as follow (sorry pseudo code)\n\ncontab_y\n date\n amt\n uid\n\n\ncontab_yd\n date\n amt\n uid\n\ncontab_ymd\n date\n amt\n uid\n\n\nand so on..\n\nthis was used to \"solidify\" (aggregate..btw sorry for my terrible english) the data on it..\n\nso basically, i get\n\ncontab_y\ndate = 2010\namt = 100\nuid = 1\n\ncontab_ym\n date = 2010-01\n amt = 10\n uid = 1\n----\n date = 2010-02\n amt = 90\n uid = 1\n\n\ncontab_ymd\n date=2010-01-01\n amt = 1\n uid = 1\n----\nblabla\n\n\nin that way, when i need to do a query for a long ranges (ie: 1 year) i just take the rows that are contained to contab_y\nif i need to got a query for a couple of days, i can go on ymd, if i need to get some data for the other timeframe, i can do some cool intersection between\nthe different table using some huge (but fast) queries.\n\n\nNow, the matter is that this design is hard to mantain, and the tables are difficult to check\n\nwhat i have try is to go for a \"normal\" approach, using just a table that contains all the data, and some proper indexing.\nThe issue is that this table can contains easilly 100M rows :)\nthat's why the other guys do all this work to speed-up queryes splitting data on different table and precalculating the sums.\n\n\nI am here to ask for an advice to PGsql 
experts:\nwhat do you think i can do to better manage this situation?\nthere are some other cases where i can take a look at? maybe some documentation, or some technique that i don't know?\nany advice is really appreciated!\n\n\n\n\n\n\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 10 Feb 2010 17:16:14 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
{
"msg_contents": "On Wed, Feb 10, 2010 at 4:16 PM, Dave Crooke <[email protected]> wrote:\n\n> Hi Rama\n>\n> I'm actually looking at going in the other direction ....\n>\n> I have an app using PG where we have a single table where we just added a\n> lot of data, and I'm ending up with many millions of rows, and I'm finding\n> that the single table schema simply doesn't scale.\n>\n> In PG, the table partitioning is only handled by the database for reads,\n> for insert/update you need to do quite a lot of DIY (setting up triggers,\n> etc.) so I am planning to just use named tables and generate the necessary\n> DDL / DML in vanilla SQL the same way that your older code does.\n>\n> My experience is mostly with Oracle, which is not MVCC, so I've had to\n> relearn some stuff:\n>\n\nJust a nit, but Oracle implements MVCC. 90% of the databases out there do.\n\n\n> - Oracle often answers simple queries (e.g. counts and max / min) using\n> only the index, which is of course pre-sorted. PG has to go out and fetch\n> the rows to see if they are still in scope, and if they are stored all over\n> the place on disk it means an 8K random page fetch for each row. This means\n> that adding an index to PG is not nearly the silver bullet that it can be\n> with some non-MVCC databases.\n>\n> - PG's indexes seem to be quite a bit larger than Oracle's, but that's gut\n> feel, I haven't been doing true comparisons ... however, for my app I have\n> limited myself to only two indexes on that table, and each index is larger\n> (in disk space) than the table itself ... I have 60GB of data and 140GB of\n> indexes :-)\n>\n> - There is a lot of row turnover in my big table (I age out data) .... a\n> big delete (millions of rows) in PG seems a bit more expensive to process\n> than in Oracle, however PG is not nearly as sensitive to transaction sizes\n> as Oracle is, so you can cheerfully throw out one big \"DELETE from FOO where\n> ...\" and let the database chew on it .\n\n\nI find partitioning pretty useful in this scenario if the data allows is.\nAging out data just means dropping a partition rather than a delete\nstatement.\n\nOn Wed, Feb 10, 2010 at 4:16 PM, Dave Crooke <[email protected]> wrote:\nHi RamaI'm actually looking at going in the other direction .... I have an app using PG where we have a single table where we just added a lot of data, and I'm ending up with many millions of rows, and I'm finding that the single table schema simply doesn't scale. \nIn PG, the table partitioning is only handled by the database for reads, for insert/update you need to do quite a lot of DIY (setting up triggers, etc.) so I am planning to just use named tables and generate the necessary DDL / DML in vanilla SQL the same way that your older code does. \nMy experience is mostly with Oracle, which is not MVCC, so I've had to relearn some stuff:Just a nit, but Oracle implements MVCC. 90% of the databases out there do. \n- Oracle often answers simple queries (e.g. counts and max / min) using only the index, which is of course pre-sorted. PG has to go out and fetch the rows to see if they are still in scope, and if they are stored all over the place on disk it means an 8K random page fetch for each row. This means that adding an index to PG is not nearly the silver bullet that it can be with some non-MVCC databases.\n- PG's indexes seem to be quite a bit larger than Oracle's, but that's gut feel, I haven't been doing true comparisons ... 
however, for my app I have limited myself to only two indexes on that table, and each index is larger (in disk space) than the table itself ... I have 60GB of data and 140GB of indexes :-)\n- There is a lot of row turnover in my big table (I age out data) .... a big delete (millions of rows) in PG seems a bit more expensive to process than in Oracle, however PG is not nearly as sensitive to transaction sizes as Oracle is, so you can cheerfully throw out one big \"DELETE from FOO where ...\" and let the database chew on it .\nI find partitioning pretty useful in this scenario if the data allows is. Aging out data just means dropping a partition rather than a delete statement.",
"msg_date": "Wed, 10 Feb 2010 16:30:54 -0700",
"msg_from": "Jon Lewison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
{
"msg_contents": "On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <[email protected]> wrote:\n\n>\n>\n> Just a nit, but Oracle implements MVCC. 90% of the databases out there do.\n>\n\nSorry, I spoke imprecisely. What I meant was the difference in how the rows\nare stored internally .... in Oracle, the main tablespace contains only the\nnewest version of a row, which is (where possible) updated in place -\nqueries in a transaction that can still \"see\" an older version have to pull\nit from the UNDO tablespace (rollback segments in Oracle 8 and older).\n\nIn Postgres, all versions of all rows are in the main table, and have\nvalidity ranges associated with them (\"this version of this row existed\nbetween transaction ids x and y\"). Once a version goes out of scope, it has\nto be garbage collected by the vacuuming process so the space can be\nre-used.\n\nIn general, this means Oracle is faster *if* you're only doing lots of small\ntransactions (consider how these different models handle an update to a\nsingle field in a single row) but it is more sensitive to the scale of\ntransactions .... doing a really big transaction against a database with an\nOLTP workload can upset Oracle's digestion as it causes a lot of UNDO\nlookups, PG's performance is a lot more predictable in this regard.\n\nBoth models have benefits and drawbacks ... when designing a schema for\nperformance it's important to understand these differences.\n\n\n> I find partitioning pretty useful in this scenario if the data allows is.\n> Aging out data just means dropping a partition rather than a delete\n> statement.\n>\n>\nForgot to say this - yes, absolutely agree .... dropping a table is a lot\ncheaper than a transactional delete.\n\nIn general, I think partitioning is more important / beneficial with PG's\nstyle of MVCC than with Oracle or SQL-Server (which I think is closer to\nOracle than PG).\n\n\nCheers\nDave\n\nOn Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <[email protected]> wrote:\nJust a nit, but Oracle implements MVCC. 90% of the databases out there do.Sorry, I spoke imprecisely. What I meant was the difference in how the rows are stored internally .... in Oracle, the main tablespace contains only the newest version of a row, which is (where possible) updated in place - queries in a transaction that can still \"see\" an older version have to pull it from the UNDO tablespace (rollback segments in Oracle 8 and older).\n In Postgres, all versions of all rows are in the main table, and have validity ranges associated with them (\"this version of this row existed between transaction ids x and y\"). Once a version goes out of scope, it has to be garbage collected by the vacuuming process so the space can be re-used.\nIn general, this means Oracle is faster *if* you're only doing lots of small transactions (consider how these different models handle an update to a single field in a single row) but it is more sensitive to the scale of transactions .... doing a really big transaction against a database with an OLTP workload can upset Oracle's digestion as it causes a lot of UNDO lookups, PG's performance is a lot more predictable in this regard.\nBoth models have benefits and drawbacks ... when designing a schema for performance it's important to understand these differences.\nI find partitioning pretty useful in this scenario if the data allows is. Aging out data just means dropping a partition rather than a delete statement.\nForgot to say this - yes, absolutely agree .... 
dropping a table is a lot cheaper than a transactional delete.In general, I think partitioning is more important / beneficial with PG's style of MVCC than with Oracle or SQL-Server (which I think is closer to Oracle than PG).\nCheersDave",
"msg_date": "Wed, 10 Feb 2010 17:48:09 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
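Because the PostgreSQL side of that comparison hinges on dead row versions staying in the main table until VACUUM reclaims them, it can be worth watching how many dead tuples a big table carries. A small sketch using the standard statistics views; the table name is a placeholder, and the n_dead_tup column assumes PostgreSQL 8.3 or later with the stats collector enabled.

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'orders';

    -- Total on-disk footprint, including indexes and TOAST
    SELECT pg_size_pretty(pg_total_relation_size('orders'));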
{
"msg_contents": "On Wed, Feb 10, 2010 at 4:48 PM, Dave Crooke <[email protected]> wrote:\n\n> On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <[email protected]> wrote:\n>\n>>\n>>\n>> Just a nit, but Oracle implements MVCC. 90% of the databases out there\n>> do.\n>>\n>\n> Sorry, I spoke imprecisely. What I meant was the difference in how the rows\n> are stored internally .... in Oracle, the main tablespace contains only the\n> newest version of a row, which is (where possible) updated in place -\n> queries in a transaction that can still \"see\" an older version have to pull\n> it from the UNDO tablespace (rollback segments in Oracle 8 and older).\n>\n> In Postgres, all versions of all rows are in the main table, and have\n> validity ranges associated with them (\"this version of this row existed\n> between transaction ids x and y\"). Once a version goes out of scope, it has\n> to be garbage collected by the vacuuming process so the space can be\n> re-used.\n>\n> In general, this means Oracle is faster *if* you're only doing lots of\n> small transactions (consider how these different models handle an update to\n> a single field in a single row) but it is more sensitive to the scale of\n> transactions .... doing a really big transaction against a database with an\n> OLTP workload can upset Oracle's digestion as it causes a lot of UNDO\n> lookups, PG's performance is a lot more predictable in this regard.\n>\n> Both models have benefits and drawbacks ... when designing a schema for\n> performance it's important to understand these differences.\n>\n\nYes, absolutely. It's not unusual to see the UNDO tablespace increase in\nsize by several gigs for a large bulk load.\n\nSpeaking of rollback segments I'm assuming that since all storage for\nnon-visible row versions is in the main table that PostgreSQL has no\nequivalent for an ORA-01555.\n\n- Jon\n\nOn Wed, Feb 10, 2010 at 4:48 PM, Dave Crooke <[email protected]> wrote:\nOn Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <[email protected]> wrote:\nJust a nit, but Oracle implements MVCC. 90% of the databases out there do.Sorry, I spoke imprecisely. What I meant was the difference in how the rows are stored internally .... in Oracle, the main tablespace contains only the newest version of a row, which is (where possible) updated in place - queries in a transaction that can still \"see\" an older version have to pull it from the UNDO tablespace (rollback segments in Oracle 8 and older).\n\n In Postgres, all versions of all rows are in the main table, and have validity ranges associated with them (\"this version of this row existed between transaction ids x and y\"). Once a version goes out of scope, it has to be garbage collected by the vacuuming process so the space can be re-used.\nIn general, this means Oracle is faster *if* you're only doing lots of small transactions (consider how these different models handle an update to a single field in a single row) but it is more sensitive to the scale of transactions .... doing a really big transaction against a database with an OLTP workload can upset Oracle's digestion as it causes a lot of UNDO lookups, PG's performance is a lot more predictable in this regard.\nBoth models have benefits and drawbacks ... when designing a schema for performance it's important to understand these differences.Yes, absolutely. It's not unusual to see the UNDO tablespace increase in size by several gigs for a large bulk load. 
\nSpeaking of rollback segments I'm assuming that since all storage for non-visible row versions is in the main table that PostgreSQL has no equivalent for an ORA-01555.- Jon",
"msg_date": "Wed, 10 Feb 2010 17:09:17 -0700",
"msg_from": "Jon Lewison <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
{
"msg_contents": "Actually, in a way it does .... \"No space left on device\" or similar ;-)\n\nCheers\nDave\n\nP.S. for those not familiar with Oracle, ORA-01555 translates to \"your query\n/ transaction is kinda old and I've forgotten the data, so I'm just going to\nthrow an error at you now\". If you're reading, your SELECT randomly fails,\nif you're writing it forces a rollback of your transaction.\n\nOn Wed, Feb 10, 2010 at 6:09 PM, Jon Lewison <[email protected]> wrote:\n\n>\n> Speaking of rollback segments I'm assuming that since all storage for\n> non-visible row versions is in the main table that PostgreSQL has no\n> equivalent for an ORA-01555.\n>\n> - Jon\n>\n>\n\nActually, in a way it does .... \"No space left on device\" or similar ;-)CheersDaveP.S. for those not familiar with Oracle, ORA-01555 translates to \"your query / transaction is kinda old and I've forgotten the data, so I'm just going to throw an error at you now\". If you're reading, your SELECT randomly fails, if you're writing it forces a rollback of your transaction.\nOn Wed, Feb 10, 2010 at 6:09 PM, Jon Lewison <[email protected]> wrote:\n\nSpeaking of rollback segments I'm assuming that since all storage for non-visible row versions is in the main table that PostgreSQL has no equivalent for an ORA-01555.- Jon",
"msg_date": "Wed, 10 Feb 2010 18:51:30 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
{
"msg_contents": "rama wrote:\n> in that way, when i need to do a query for a long ranges (ie: 1 year) i just take the rows that are contained to contab_y\n> if i need to got a query for a couple of days, i can go on ymd, if i need to get some data for the other timeframe, i can do some cool intersection between \n> the different table using some huge (but fast) queries.\n>\n> Now, the matter is that this design is hard to mantain, and the tables are difficult to check\n> \n\nYou sound like you're trying to implement something like materialized \nviews one at a time; have you considered adopting the more general \ntechniques used to maintain those so that you're not doing custom \ndevelopment each time for the design?\n\nhttp://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views\nhttp://www.pgcon.org/2008/schedule/events/69.en.html\n\nI think that sort of approach is more practical than it would have been \nfor you in MySQL, so maybe this wasn't on your list of possibilities before.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Wed, 10 Feb 2010 20:42:13 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
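Since the PostgreSQL releases discussed here have no built-in materialized views, the technique behind those links comes down to maintaining your own summary table, either incrementally with triggers or by periodic rebuilds. A bare-bones "rebuild the recent rollup" sketch, reusing the illustrative contab layout from earlier in the thread (all names are invented):

    CREATE TABLE contab_monthly (
        uid   integer NOT NULL,
        month date    NOT NULL,
        amt   numeric NOT NULL,
        PRIMARY KEY (uid, month)
    );

    BEGIN;
    DELETE FROM contab_monthly
     WHERE month >= date_trunc('month', now() - interval '1 month');

    INSERT INTO contab_monthly (uid, month, amt)
    SELECT uid, date_trunc('month', ts)::date, sum(amt)
      FROM contab
     WHERE ts >= date_trunc('month', now() - interval '1 month')
     GROUP BY uid, date_trunc('month', ts)::date;
    COMMIT;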
{
"msg_contents": "> Hi all,\n>\n> i am trying to move my app from M$sql to PGsql, but i need a bit of help\n> :)\n\nExcept from all the other good advises about partitioning the dataset and\nsuch there is another aspect to \"keep in mind\". When you have a large\ndataset and your queries become \"IO-bound\" the \"tuple density\" is going to\nhit you in 2 ways. Your dataset seems to have a natural clustering around\nthe time, which is also what you would use for the partitioning. That also\nmeans that if you sort of have the clustering of data on disk you would\nhave the tuples you need to satisfy a query on the \"same page\" or pages\n\"close to\".\n\nThe cost of checking visibillity for a tuple is to some degree a function\nof the \"tuple size\", so if you can do anything to increase the tuple\ndensity that will most likely benefit speed in two ways:\n\n* You increace the likelyhood that the next tuple was in the same page and\n then dont result in a random I/O seek.\n* You increace the total amount of tuples you have sitting in your system\n cache in the same amount of pages (memory) so they dont result in a\n random I/O seek.\n\nSo .. if you are carrying around columns you \"dont really need\", then\nthrow them away. (that could be colums that trivially can be computed\nbases on other colums), but you need to do your own testing here. To\nstress the first point theres a sample run on a fairly old desktop with\none SATA drive.\n\ntesttable has the \"id integer\" and a \"data\" which is 486 bytes of text.\ntesttable2 has the \"id integer\" and a data integer.\n\nboth filled with 10M tuples and PG restarted and rand drop caches before\nto simulate \"totally disk bound system\".\n\ntestdb=# select count(id) from testtable where id > 8000000 and id < 8500000;\n count\n--------\n 499999\n(1 row)\n\nTime: 7909.464 ms\ntestdb=# select count(id) from testtable2 where id > 8000000 and id <\n8500000;\n count\n--------\n 499999\n(1 row)\n\nTime: 2149.509 ms\n\nIn this sample.. 4 times faster, the query does not touch the \"data\" column.\n(on better hardware you'll most likely see better results).\n\nIf the columns are needed, you can push less frequently used columns to a\n1:1 relation.. but that gives you some administrative overhead, but then\nyou can desice at query time if you want the extra random seeks to\naccess data.\n\nYou have the same picture the \"other way around\" if your queries are\naccession data sitting in TOAST, you'll be paying \"double random IO\"-cost\nfor getting the tuple. So it is definately a tradeoff, that should be done\nwith care.\n\nI've monkeypatched my own PG using this patch to toy around with criteria\nto send the \"less frequently used data\" to a TOAST table.\nhttp://article.gmane.org/gmane.comp.db.postgresql.devel.general/135158/match=\n\nGoogle \"vertical partition\" for more, this is basically what it is.\n\n(I belive this could benefit my own application, so I'm also\ntrying to push some interest into the area).\n\n-- \nJesper\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 11 Feb 2010 11:05:17 +0100 (CET)",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
},
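To put rough numbers on the tuple-density point, the catalogs can show how many rows fit per 8KB page and how large the heap is compared to its indexes. A small sketch; the relation names are the ones from the test above and would differ in a real schema, and the per-page figure comes from planner statistics, so run ANALYZE first.

    SELECT relname, relpages, reltuples,
           CASE WHEN relpages > 0 THEN round(reltuples / relpages) END AS tuples_per_page
      FROM pg_class
     WHERE relname IN ('testtable', 'testtable2');

    -- Heap size versus total size including indexes and TOAST
    SELECT pg_size_pretty(pg_relation_size('testtable'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('testtable')) AS with_indexes;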
{
"msg_contents": "Dave Crooke wrote:\n> On Wed, Feb 10, 2010 at 5:30 PM, Jon Lewison <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> \n> \n> Just a nit, but Oracle implements MVCC. 90% of the databases out\n> there do.\n> \n> \n> Sorry, I spoke imprecisely. What I meant was the difference in how the\n> rows are stored internally .... in Oracle, the main tablespace contains\n> only the newest version of a row, which is (where possible) updated in\n> place - queries in a transaction that can still \"see\" an older version\n> have to pull it from the UNDO tablespace (rollback segments in Oracle 8\n> and older).\n> \n> In Postgres, all versions of all rows are in the main table, and have\n> validity ranges associated with them (\"this version of this row existed\n> between transaction ids x and y\"). Once a version goes out of scope, it\n> has to be garbage collected by the vacuuming process so the space can be\n> re-used.\n> \n> In general, this means Oracle is faster *if* you're only doing lots of\n> small transactions (consider how these different models handle an update\n> to a single field in a single row) but it is more sensitive to the scale\n> of transactions .... doing a really big transaction against a database\n> with an OLTP workload can upset Oracle's digestion as it causes a lot of\n> UNDO lookups, PG's performance is a lot more predictable in this regard.\n> \n> Both models have benefits and drawbacks ... when designing a schema for\n> performance it's important to understand these differences.\n> \n> \n> I find partitioning pretty useful in this scenario if the data\n> allows is. Aging out data just means dropping a partition rather\n> than a delete statement.\n> \n> \n> Forgot to say this - yes, absolutely agree .... dropping a table is a\n> lot cheaper than a transactional delete.\n> \n> In general, I think partitioning is more important / beneficial with\n> PG's style of MVCC than with Oracle or SQL-Server (which I think is\n> closer to Oracle than PG).\n\nI would like to disagree here a little bit\n\nWhere Oracle's table partitioning is coming in very handy is for example\nwhen you have to replace the data of a big (read-only) table on a\nregularly basis (typically the replicated data from another system).\nIn this case, you just create a partitioned table of exact the same\ncolumns/indexes whatsoever as the data table.\n\nTo load, you then do load the data into the partitioned table, i.e.\n- truncate the partitioned table, disable constraints, drop indexes\n- load the data into the partitioned table\n- rebuild all indexes etc. on the partitioned table\n\nduring all this time (even if it takes hours) the application can still\naccess the data in the data table without interfering the bulk load.\n\nOnce you have prepared the data in the partitioned table, you\n- exchange the partition with the data table\nwich is a dictionary operation, that means, the application is (if ever)\nonly blocked during this operation which is in the sub-seconds range.\n\nIf you have to do this with convetional updates or deletes/inserts resp.\nthen this might not even be possible in the given timeframe.\n\njust as an example\nLeo\n\np.s. just to make it a little bit clearer about the famous ORA-01555:\nOracle is not \"forgetting\" the data as the Oracle RDBMS is of course\nalso ACID-compliant. 
The ORA-01555 can happen\n\n- when the rollback tablespace is really to small to hold all the data\nchanged in the transaction (which I consider a configuration error)\n\n- when a long running (read) transaction is trying to change a record\nwhich is already updated AND COMMITTED by another transaction. The key\nhere is, that a second transaction has changed a record which is also\nneeded by the first transaction and the second transaction commited the\nwork. Committing the change means, the data in the rollback segment is\nno longer needed, as it can be read directly from the data block (after\nall it is commited and this means valid and visible to other\ntransactions). If the first transaction now tries to read the data from\nthe rollback segment to see the unchanged state, it will still succeed\n(it is still there, nothing happend until now to the rollback segment).\nThe problem of the ORA-01555 shows up only, if now a third transaction\nneeds space in the rollback segment. As the entry from the first/second\ntransaction is marked committed (and therefore no longer needed), it is\nperfectly valid for transaction #3 to grab this rollback segment and to\nstore its old value there. If THEN (and only then) comes transaction #1\nagain, asking for the old, unchanged value when the transaction started,\nTHEN the famous ORA-01555 is raised as this value is now overwritten by\ntransaction #3.\nThats why in newer versions you have to set the retention time of the\nrollback blocks/segments to a value bigger than your expected longest\ntransaction. This will decrease the likelihood of the ORA-01555\ndrastically (but it is still not zero, as you could easily construct an\nexample where it still will fail with ORA-0155 as a transaction can\nstill run longer than you expected or the changed data is bigger the the\nwhole rollback tablespace)\n\n> \n> \n> Cheers\n> Dave\n> \n> \n> \n\n",
"msg_date": "Thu, 11 Feb 2010 15:39:01 +0100",
"msg_from": "Leo Mannhart <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: perf problem with huge table"
}
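For what it's worth, inheritance-based partitioning gives PostgreSQL a rough analogue of the Oracle exchange-partition trick described above: prepare and index a standalone table on the side, then attach it with catalog-only ALTER TABLE commands. A hedged sketch with illustrative names; both statements take brief exclusive locks on the parent.

    -- Build the replacement data completely off to the side
    CREATE TABLE contab_2010_03_new (LIKE contab INCLUDING DEFAULTS);
    -- ... bulk load, create indexes, then constrain its range:
    ALTER TABLE contab_2010_03_new
      ADD CONSTRAINT contab_2010_03_new_ck
      CHECK (ts >= DATE '2010-03-01' AND ts < DATE '2010-04-01');

    -- Swap the old child out and the new one in
    BEGIN;
    ALTER TABLE contab_2010_03     NO INHERIT contab;
    ALTER TABLE contab_2010_03_new INHERIT    contab;
    COMMIT;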
] |
[
{
"msg_contents": "\nJust a heads up - apparently the more recent Dell RAID controllers will no \nlonger recognise hard discs that weren't sold through Dell.\n\nhttp://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n\nAs one of the comments points out, that kind of makes them no longer SATA \nor SAS compatible, and they shouldn't be allowed to use those acronyms any \nmore.\n\nMatthew\n\n-- \n An optimist sees the glass as half full, a pessimist as half empty,\n and an engineer as having redundant storage capacity.\n",
"msg_date": "Thu, 11 Feb 2010 12:39:18 +0000 (GMT)",
"msg_from": "Matthew Wakeling <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dell PERC H700/H800"
},
{
"msg_contents": "On Thu, 2010-02-11 at 12:39 +0000, Matthew Wakeling wrote:\n> Just a heads up - apparently the more recent Dell RAID controllers will no \n> longer recognise hard discs that weren't sold through Dell.\n> \n> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n> \n> As one of the comments points out, that kind of makes them no longer SATA \n> or SAS compatible, and they shouldn't be allowed to use those acronyms any \n> more.\n\nThat's interesting. I know that IBM at least on some of their models\nhave done the same. Glad I use HP :)\n\nJoshua D. Drake\n\n> \n> Matthew\n> \n> -- \n> An optimist sees the glass as half full, a pessimist as half empty,\n> and an engineer as having redundant storage capacity.\n> \n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nRespect is earned, not gained through arbitrary and repetitive use or Mr. or Sir.\n\n",
"msg_date": "Thu, 11 Feb 2010 09:26:15 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "On Thu, Feb 11, 2010 at 5:39 AM, Matthew Wakeling <[email protected]> wrote:\n>\n> Just a heads up - apparently the more recent Dell RAID controllers will no\n> longer recognise hard discs that weren't sold through Dell.\n>\n> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n>\n> As one of the comments points out, that kind of makes them no longer SATA or\n> SAS compatible, and they shouldn't be allowed to use those acronyms any\n> more.\n\nYet one more reason I'm glad I no longer source servers from Dell.\n\nI just ask my guy at Aberdeen if he thinks drive X is a good choice,\nwe discuss it like adults and I make my decision. And I generally\nlisten to him because he's usually right. But I'd spit nails if my my\nRAID controller refused to work with whatever drives I decided to plug\ninto it.\n",
"msg_date": "Thu, 11 Feb 2010 10:42:04 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "Matthew Wakeling wrote:\n>\n> Just a heads up - apparently the more recent Dell RAID controllers \n> will no longer recognise hard discs that weren't sold through Dell.\n>\n> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/ \n>\n>\n> As one of the comments points out, that kind of makes them no longer \n> SATA or SAS compatible, and they shouldn't be allowed to use those \n> acronyms any more.\n>\n> Matthew\n>\nI think that's potentially FUD. Its all about 'Dell qualified drives'. \nI can't see anything that suggests that Dell will OEM drives and somehow \ntag them so that the drive must have come from them. Of course they are \nbig enough that they could have special BIOS I guess, but I read it that \nthe drive types (and presumably revisions thereof) had to be recognised \nby the controller from a list, which presumably can be reflashed, which \nis not quite saying that if some WD enterprise drive model is \n'qualified' then you have to buy it from Dell..\n\nDo you have any further detail?\n\n",
"msg_date": "Thu, 11 Feb 2010 20:11:38 +0000",
"msg_from": "James Mansion <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "2010/2/11 James Mansion <[email protected]>:\n> Matthew Wakeling wrote:\n>>\n>> Just a heads up - apparently the more recent Dell RAID controllers will no\n>> longer recognise hard discs that weren't sold through Dell.\n>>\n>>\n>> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n>>\n>> As one of the comments points out, that kind of makes them no longer SATA\n>> or SAS compatible, and they shouldn't be allowed to use those acronyms any\n>> more.\n>>\n> I think that's potentially FUD. Its all about 'Dell qualified drives'. I\n> can't see anything that suggests that Dell will OEM drives and somehow tag\n> them so that the drive must have come from them. Of course they are big\n> enough that they could have special BIOS I guess, but I read it that the\n> drive types (and presumably revisions thereof) had to be recognised by the\n> controller from a list, which presumably can be reflashed, which is not\n> quite saying that if some WD enterprise drive model is 'qualified' then you\n> have to buy it from Dell..\n>\n> Do you have any further detail?\n\nFor example: SAMSUNG MCCOE50G, 50GB SSD which you can buy only from\nDell. It's unknown at Samsung page. I think they can easy order own\nmodel.\n\n-- \nŁukasz Jagiełło\nSystem Administrator\nG-Forces Web Management Polska sp. z o.o. (www.gforces.pl)\n\nUl. Kruczkowskiego 12, 80-288 Gdańsk\nSpółka wpisana do KRS pod nr 246596 decyzją Sądu Rejonowego Gdańsk-Północ\n",
"msg_date": "Thu, 11 Feb 2010 21:58:12 +0100",
"msg_from": "=?UTF-8?B?xYF1a2FzeiBKYWdpZcWCxYJv?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "On Thu, Feb 11, 2010 at 1:11 PM, James Mansion\n<[email protected]> wrote:\n> Matthew Wakeling wrote:\n>>\n>> Just a heads up - apparently the more recent Dell RAID controllers will no\n>> longer recognise hard discs that weren't sold through Dell.\n>>\n>>\n>> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n>>\n>> As one of the comments points out, that kind of makes them no longer SATA\n>> or SAS compatible, and they shouldn't be allowed to use those acronyms any\n>> more.\n>>\n>> Matthew\n>>\n> I think that's potentially FUD. Its all about 'Dell qualified drives'. I\n> can't see anything that suggests that Dell will OEM drives and somehow tag\n> them so that the drive must have come from them. Of course they are big\n> enough that they could have special BIOS I guess, but I read it that the\n> drive types (and presumably revisions thereof) had to be recognised by the\n> controller from a list, which presumably can be reflashed, which is not\n> quite saying that if some WD enterprise drive model is 'qualified' then you\n> have to buy it from Dell..\n>\n> Do you have any further detail?\n\nIn the post to the dell mailing list (\nhttp://lists.us.dell.com/pipermail/linux-poweredge/2010-February/041335.html\n) It was pointed out that the user had installed Seagate ES.2 drives,\nwhich are enterprise class drives that have been around a while and\nare kind of the standard SATA enterprise clas drives and are listed so\nby Seagate:\n\nhttp://www.seagate.com/www/en-us/products/servers/barracuda_es/barracuda_es.2\n\nThese drives were marked as BLOCKED and unusable by the system.\n\nThe pdf linked to in the dell forum specifically states that the hard\ndrives are loaded with a dell specific firmware. The PDF seems\notherwise free of useful information, and is mostly a marketing tool\nas near as I can tell.\n",
"msg_date": "Thu, 11 Feb 2010 14:55:33 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "I do think it's valid to prevent idiot customers from installing drives that\nuse too much power or run too hot, or desktop drives that don't support\nfast-fail reads, thus driving up Dell's support load, but it sounds like\nthis is more of a lock-in attempt.\n\nThis is kind of a dumb move on their part .... most enterprise buyers will\nbuy drives through them anyway for support reasons, and the low end guys who\nare price sensitive will just take their business elsewhere. I'm not sure\nwho thought this would increase revenue materially.\n\nCheers\nDave\n\nOn Thu, Feb 11, 2010 at 3:55 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Thu, Feb 11, 2010 at 1:11 PM, James Mansion\n> <[email protected]> wrote:\n> > Matthew Wakeling wrote:\n> >>\n> >> Just a heads up - apparently the more recent Dell RAID controllers will\n> no\n> >> longer recognise hard discs that weren't sold through Dell.\n> >>\n> >>\n> >>\n> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n> >>\n> >> As one of the comments points out, that kind of makes them no longer\n> SATA\n> >> or SAS compatible, and they shouldn't be allowed to use those acronyms\n> any\n> >> more.\n> >>\n> >> Matthew\n> >>\n> > I think that's potentially FUD. Its all about 'Dell qualified drives'.\n> I\n> > can't see anything that suggests that Dell will OEM drives and somehow\n> tag\n> > them so that the drive must have come from them. Of course they are big\n> > enough that they could have special BIOS I guess, but I read it that the\n> > drive types (and presumably revisions thereof) had to be recognised by\n> the\n> > controller from a list, which presumably can be reflashed, which is not\n> > quite saying that if some WD enterprise drive model is 'qualified' then\n> you\n> > have to buy it from Dell..\n> >\n> > Do you have any further detail?\n>\n> In the post to the dell mailing list (\n>\n> http://lists.us.dell.com/pipermail/linux-poweredge/2010-February/041335.html\n> ) It was pointed out that the user had installed Seagate ES.2 drives,\n> which are enterprise class drives that have been around a while and\n> are kind of the standard SATA enterprise clas drives and are listed so\n> by Seagate:\n>\n>\n> http://www.seagate.com/www/en-us/products/servers/barracuda_es/barracuda_es.2\n>\n> These drives were marked as BLOCKED and unusable by the system.\n>\n> The pdf linked to in the dell forum specifically states that the hard\n> drives are loaded with a dell specific firmware. The PDF seems\n> otherwise free of useful information, and is mostly a marketing tool\n> as near as I can tell.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI do think it's valid to prevent idiot customers from installing\ndrives that use too much power or run too hot, or desktop drives that don't support fast-fail reads, thus driving up Dell's support load, but it\nsounds like this is more of a lock-in attempt.This is kind of a dumb move on their part .... most enterprise buyers will buy drives through them anyway for support reasons, and the low end guys who are price sensitive will just take their business elsewhere. 
I'm not sure who thought this would increase revenue materially.\nCheersDaveOn Thu, Feb 11, 2010 at 3:55 PM, Scott Marlowe <[email protected]> wrote:\nOn Thu, Feb 11, 2010 at 1:11 PM, James Mansion\n<[email protected]> wrote:\n> Matthew Wakeling wrote:\n>>\n>> Just a heads up - apparently the more recent Dell RAID controllers will no\n>> longer recognise hard discs that weren't sold through Dell.\n>>\n>>\n>> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n>>\n>> As one of the comments points out, that kind of makes them no longer SATA\n>> or SAS compatible, and they shouldn't be allowed to use those acronyms any\n>> more.\n>>\n>> Matthew\n>>\n> I think that's potentially FUD. Its all about 'Dell qualified drives'. I\n> can't see anything that suggests that Dell will OEM drives and somehow tag\n> them so that the drive must have come from them. Of course they are big\n> enough that they could have special BIOS I guess, but I read it that the\n> drive types (and presumably revisions thereof) had to be recognised by the\n> controller from a list, which presumably can be reflashed, which is not\n> quite saying that if some WD enterprise drive model is 'qualified' then you\n> have to buy it from Dell..\n>\n> Do you have any further detail?\n\nIn the post to the dell mailing list (\nhttp://lists.us.dell.com/pipermail/linux-poweredge/2010-February/041335.html\n) It was pointed out that the user had installed Seagate ES.2 drives,\nwhich are enterprise class drives that have been around a while and\nare kind of the standard SATA enterprise clas drives and are listed so\nby Seagate:\n\nhttp://www.seagate.com/www/en-us/products/servers/barracuda_es/barracuda_es.2\n\nThese drives were marked as BLOCKED and unusable by the system.\n\nThe pdf linked to in the dell forum specifically states that the hard\ndrives are loaded with a dell specific firmware. The PDF seems\notherwise free of useful information, and is mostly a marketing tool\nas near as I can tell.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 12 Feb 2010 13:03:54 -0600",
"msg_from": "Dave Crooke <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "I've been full-on vocally anti-Dell ever since they started releasing \nPCs with the non-standard ATX power supply pinout; that was my final \nstraw with their terrible quality decisions. But after doing two tuning \nexercises with PERC6 controllers and getting quite good results this \nyear, just a few weeks ago I begrudgingly added them to my \"known good \nhardware\" list as a viable candidate to suggest to people. They finally \ntook a good LSI card and didn't screw anything up in their version.\n\nI am somehow relieved that sanity has returned to my view of the world \nnow, with Dell right back onto the shit list again. If they want a HCL \nand to warn people they're in an unsupported configuration when they \nviolate it, which happens on some of their equipment, fine. This move \nis just going to kill sales of their servers into the low-end of the \nmarket, which relied heavily on buying the base system from them and \nthen dropping their own drives in rather than pay the full \"enterprise \ndrive\" markup for non-critical systems.\n\nI do not as a rule ever do business with a vendor who tries to lock me \ninto being their sole supplier, particularly for consumable replacement \nparts--certainly a category hard drives fall into. Probably the best \nplace to complain and suggest others do the same at is \nhttp://www.ideastorm.com/ideaView?id=087700000000dwTAAQ\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sat, 13 Feb 2010 02:05:41 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "Recently most of our Dell servers came up for warranty extensions and\nwe decided against it. They're all 3+ years old now and replacement\nparts are cheaper than the Dell warranty extension. In fact the price\nDell quoted us on warranty extension was about twice what these\nmachines are going for on Ebay used right now (i.e. about $800 or for\neach server). I can't imagine that warranty becoming a better value\nover time.\n\nRecently, a 73GB 2.5\" drive in one of our 1950s died. The Dell price\nwas something insane like $350 or something, and they were just out of\nwarranty. Put the part number into Ebay and found two guaranteed\npulls for about $70 each. Ordered both and with shipping it was right\nat $150. So now I've got a replacement and a spare for about half the\ncost of the single replacement drive from Dell. And they both work\njust fine.\n\nI now buy hardware from a whitebox vendor who covers my whole system\nfor 5 years. In the two years I've used them I've had four drive\nfailures and they either let me ship it back and then get the\nreplacement for non-urgent parts, or cross-ship with a CC charge /\nrefund on priority drives, like for a db server. Note that we run a\nlot of drives and we run them ragged. This many failures is not\nuncommon.\n\nThese machines are burnt in, and if something critical fails they just\nship it out and I have the replacement the next day. And there's no\nbumbling idiot (other than myself) turning a screwdriver in my server\nwithout a wrist strap or a clue.\n",
"msg_date": "Sat, 13 Feb 2010 01:31:03 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "Greg Smith wrote:\n> I've been full-on vocally anti-Dell ever since they started releasing \n> PCs with the non-standard ATX power supply pinout; that was my final \n> straw with their terrible quality decisions.\n\nYep, makes me feel validated for all of the anti-Dell advice I have\ngiven over the years at conference and training classes.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Tue, 16 Feb 2010 09:12:34 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> On Thu, 2010-02-11 at 12:39 +0000, Matthew Wakeling wrote:\n>> Just a heads up - apparently the more recent Dell RAID controllers will no \n>> longer recognise hard discs that weren't sold through Dell.\n>>\n>> http://www.channelregister.co.uk/2010/02/10/dell_perc_11th_gen_qualified_hdds_only/\n>>\n>> As one of the comments points out, that kind of makes them no longer SATA \n>> or SAS compatible, and they shouldn't be allowed to use those acronyms any \n>> more.\n> \n> That's interesting. I know that IBM at least on some of their models\n> have done the same. Glad I use HP :)\n\nall of the main vendors do that - IBM does and so does HP (unless you \ncount the toy boxes without a real raid controller). The later actually \ngoes so far and blacklists some of their own hdd firmware levels in more \nrecent controller versions which can cause quite some \"surprising\" \nresults during maintenance operations.\nI find it quite strange that people seem to be surprised by Dell now \nstarting with that as well (I atually find it really surprising they \nhave not done that before).\n\n\nStefan\n",
"msg_date": "Wed, 17 Feb 2010 14:07:49 +0100",
"msg_from": "Stefan Kaltenbrunner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PERC H700/H800"
}
] |
[
{
"msg_contents": "Hello guys,\n\n\n I don't know if this is the correct list. Correct me if I'm wrong.\n\nI have a directed graph, or better, a tree in postgresql 8.3. One table are\nthe nodes and another one are the connections. Given any node, I need to get\nall nodes down to it(to the leafs) that have relation with anotherTable.\nAlso, this connection change on time, so I have a connectionDate and a\ndisconnectionDate for each connection (which might be null to represent open\ninterval). This way, I wrote a pgsql function (I rename the tables and\ncolumns to generic names). These are the tables and the function:\n\n\n CREATE TABLE node (\n\nid_node integer NOT NULL,\n\nCONSTRAINT node_pkey PRIMARY KEY (id_node)\n\n);\n\nCREATE TABLE anotherTable\n\n(\n\nid_anotherTable integer NOT NULL,\n\nid_node integer NOT NULL,\n\nCONSTRAINT anothertable_pkey PRIMARY KEY (id_anotherTable),\n\nCONSTRAINT anothertable_node_fkey FOREIGN KEY (id_node) REFERENCES node.\nid_node\n\nMATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n\n);\n\nCREATE TABLE connection\n\n(\n\nid_connection integer NOT NULL,\n\ndown integer NOT NULL,\n\nup integer NOT NULL,\n\nconnectionDate timestamp with time zone,\n\ndisconnectionDate timestamp with time zone,\n\nCONSTRAINT connection_pkey PRIMARY KEY (id_connection),\n\nCONSTRAINT down_fkey FOREIGN KEY (down)\n\nREFERENCES (id_node) REFERENCES node. id_node MATCH SIMPLE\n\nON UPDATE NO ACTION ON DELETE NO ACTION,\n\nCONSTRAINT up_fkey FOREIGN KEY (up)\n\nREFERENCES (id_node) REFERENCES node. id_node MATCH SIMPLE\n\nON UPDATE NO ACTION ON DELETE NO ACTION);\n\nCREATE TABLE observation\n\n(\n\nid_observation integer NOT NULL,\n\nid_node integer NOT NULL,\n\ndate timestamp with time zone,\n\nCONSTRAINT observation_pkey PRIMARY KEY (id_observation),\n\nCONSTRAINT observation_node_fkey FOREIGN KEY (id_node) REFERENCES node.\nid_node\n\nMATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n\n);\n\n\n CREATE OR REPLACE FUNCTION\nget_nodes_related_to_anothertable(integer,timestamp with time zone) RETURNS\nSETOF integer AS 'DECLARE\n _id ALIAS FOR $1;\n _date ALIAS FOR $2;\n _conn RECORD;\nBEGIN\n return query SELECT 1 FROM anothertable WHERE id_node = _id;\n FOR _ conn IN SELECT * FROM connection c where c.up = _id LOOP\n if _conn. connectionDate > _date then\n continue;\n end if;\n if _conn. disconnectionDate < _data then\n continue;\n end if;\n return query SELECT * from\nget_nodes_related_to_anothertable(_conn.down, _date);\n END LOOP;\nEND' LANGUAGE 'plpgsql' IMMUTABLE;\n\n\n And I use it on my SELECT:\n\n\n SELECT\n\n*\n\nFROM\n\n(SELECT\n\nid_node,\n\ndate\n\nFROM\n\nobservation\n\n) root_node_obs,\n\nnode,\n\nanotherTable\n\nWHERE\n\nanotherTable.id_node = node.id_node\n\nAND\n\nnode.id_node IN (\n\nselect * from get_nodes_related_to_anothertable(root_node_obs\n.id_node,root_node_obs .date));\n\n\n Even with IMMUTABLE on the function, postgresql executes the function many\ntimes with the same parameters. In a single run:\n\n\n select * from get_nodes_related_to_anothertable(236,now());\n\n\n it returns 5 rows and runs in 27ms. But in the bigger SELECT, it take 5s to\neach observation row (and I may have some :-) ).\n\n\n I know that IN generally is not optimization-friendly but I don't know how\nto use my function without it.\n\nAny clues guys?\n\n\n Thanks,\n\n\n\n\n\nHello\nguys,\n\n\nI\ndon't know if this is the correct list. Correct me if I'm wrong.\nI\nhave a directed graph, or better, a tree in postgresql 8.3. 
One table\nare the nodes and another one are the connections. Given any node, I\nneed to get all nodes down to it(to the leafs) that have relation\nwith anotherTable. Also, this connection change on time, so I have a\nconnectionDate and a disconnectionDate for each connection (which\nmight be null to represent open interval). This way, I wrote a pgsql\nfunction (I rename the tables and columns to generic names). These\nare the tables and the function:\n\n\nCREATE\nTABLE node (id_node\ninteger NOT NULL,\n\tCONSTRAINT\nnode_pkey PRIMARY KEY (id_node)\n);\nCREATE\nTABLE anotherTable\n(\n\nid_anotherTable integer NOT NULL,\n id_node\ninteger NOT NULL,\n\nCONSTRAINT anothertable_pkey PRIMARY KEY\n(id_anotherTable),\n\nCONSTRAINT anothertable_node_fkey FOREIGN KEY (id_node)\nREFERENCES node. id_node\n\nMATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n);\nCREATE\nTABLE connection\n(\n\nid_connection integer NOT NULL,\n down\ninteger NOT NULL,\n up\ninteger NOT NULL,\n\nconnectionDate timestamp with time zone,\n\ndisconnectionDate timestamp with time zone,\n\nCONSTRAINT connection_pkey PRIMARY KEY (id_connection),\n\nCONSTRAINT down_fkey FOREIGN KEY (down)\n\nREFERENCES (id_node) REFERENCES node. id_node MATCH\nSIMPLE\n ON\nUPDATE NO ACTION ON DELETE NO ACTION,\n\nCONSTRAINT up_fkey FOREIGN KEY (up)\n\nREFERENCES (id_node) REFERENCES node. id_node MATCH\nSIMPLE\n ON\nUPDATE NO ACTION ON DELETE NO ACTION);\nCREATE\nTABLE observation\n(\n\nid_observation integer NOT NULL,\n id_node\ninteger NOT NULL,\n date\ntimestamp with time zone, \n\n\nCONSTRAINT observation_pkey PRIMARY KEY\n(id_observation),\n\nCONSTRAINT observation_node_fkey FOREIGN KEY (id_node)\nREFERENCES node. id_node\n\nMATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION\n);\n\n\nCREATE\nOR REPLACE FUNCTION\nget_nodes_related_to_anothertable(integer,timestamp with time zone)\nRETURNS SETOF integer AS 'DECLARE _id ALIAS FOR\n$1; _date ALIAS FOR $2; \n_conn RECORD;BEGIN return query SELECT 1\nFROM anothertable WHERE id_node = _id; FOR _\nconn IN SELECT * FROM connection c where c.up = _id LOOP \n if _conn. connectionDate > _date then \n continue; \n end if; \nif _conn. disconnectionDate < _data then \n continue; \nend if; return query SELECT\n* from get_nodes_related_to_anothertable(_conn.down, _date); \nEND LOOP;END' LANGUAGE 'plpgsql' IMMUTABLE;\n\n\nAnd\nI use it on my SELECT:\n\n\nSELECT\n\t*\nFROM\n\t(SELECT\n\n\n\t\tid_node,\n\t\tdate\n\tFROM\n\n\n\t\tobservation\n\t)\nroot_node_obs,\n\tnode,\n\tanotherTable\nWHERE\n\tanotherTable.id_node\n= node.id_node\n\tAND\n\tnode.id_node\nIN (\n\t\tselect\n* from get_nodes_related_to_anothertable(root_node_obs\n.id_node,root_node_obs .date));\n\n\nEven\nwith IMMUTABLE on the function, postgresql executes the function many\ntimes with the same parameters. In a single run:\n\n\nselect\n* from get_nodes_related_to_anothertable(236,now());\n\n\nit\nreturns 5 rows and runs in 27ms. But in the bigger SELECT, it take 5s\nto each observation row (and I may have some :-) ).\n\n\nI\nknow that IN generally is not optimization-friendly but I don't know\nhow to use my function without it.\nAny\nclues guys?\n\n\nThanks,",
"msg_date": "Fri, 12 Feb 2010 10:54:58 -0200",
"msg_from": "Luiz Angelo Daros de Luca <[email protected]>",
"msg_from_op": true,
"msg_subject": "Immutable table functions"
},
{
"msg_contents": "Luiz Angelo Daros de Luca wrote:\n>\n> I have a directed graph, or better, a tree in postgresql 8.3. One \n> table are the nodes and another one are the connections. Given any \n> node, I need to get all nodes down to it(to the leafs) that have \n> relation with anotherTable. Also, this connection change on time, so I \n> have a connectionDate and a disconnectionDate for each connection \n> (which might be null to represent open interval). This way, I wrote a \n> pgsql function (I rename the tables and columns to generic names). \n> These are the tables and the function:\n>\nHello Luiz,\n\nIf you could upgrade to 8.4, you could use WITH RECURSIVE - my \nexperience is that it is several orders of magnitude faster than \nrecursive functions.\n\nhttp://developer.postgresql.org/pgdocs/postgres/queries-with.html\n\nregards,\nYeb Havinga\n\n\n",
"msg_date": "Fri, 12 Feb 2010 14:31:22 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Immutable table functions"
}
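A minimal sketch of the WITH RECURSIVE rewrite suggested above, assuming the connection/anotherTable layout from the original post; the literal 236 and now() only stand in for the root node and the reference date:

    WITH RECURSIVE reachable(id_node) AS (
        SELECT 236                                   -- root node to start from
      UNION ALL
        SELECT c.down                                -- follow each link downward
        FROM connection c
        JOIN reachable r ON c.up = r.id_node
        WHERE (c.connectionDate IS NULL OR c.connectionDate <= now())
          AND (c.disconnectionDate IS NULL OR c.disconnectionDate >= now())
    )
    SELECT r.id_node
    FROM reachable r
    JOIN anotherTable a ON a.id_node = r.id_node;    -- keep only nodes related to anotherTable

Because the whole traversal is expressed as one query, the planner can join it directly against observation instead of calling a plpgsql function once per row.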
] |
[
{
"msg_contents": "Hannu Krosing wrote:\n \n> Can it be, that each request does at least 1 write op (update\n> session or log something) ?\n \nWell, the web application connects through a login which only has\nSELECT rights; but, in discussing a previous issue we've pretty well\nestablished that it's not unusual for a read to force a dirty buffer\nto write to the OS. Perhaps this is the issue here again. Nothing\nis logged on the database server for every request.\n \n> If you can, set\n>\n> synchronous_commit = off;\n>\n> and see if it further increases performance.\n \nI wonder if it might also pay to make the background writer even more\naggressive than we have, so that SELECT-only queries don't spend so\nmuch time writing pages. Anyway, given that these are replication\ntargets, and aren't the \"database of origin\" for any data of their\nown, I guess there's no reason not to try asynchronous commit. \nThanks for the suggestion.\n \n-Kevin\n\n\n",
"msg_date": "Fri, 12 Feb 2010 07:21:55 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
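A sketch of scoping asynchronous commit to just the replication sessions rather than the whole cluster; the role and database names below are placeholders for whatever the replication process actually connects as:

    SET synchronous_commit = off;                                -- current session only
    ALTER ROLE replication_writer SET synchronous_commit = off;  -- every session of one login role
    ALTER DATABASE reporting SET synchronous_commit = off;       -- every session in one database

The web login, which only has SELECT rights, is unaffected either way.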
{
"msg_contents": "Kevin Grittner wrote:\n> Hannu Krosing wrote:\n> \n> > Can it be, that each request does at least 1 write op (update\n> > session or log something) ?\n> \n> Well, the web application connects through a login which only has\n> SELECT rights; but, in discussing a previous issue we've pretty well\n> established that it's not unusual for a read to force a dirty buffer\n> to write to the OS. Perhaps this is the issue here again. Nothing\n> is logged on the database server for every request.\n\nI don't think it explains it, because dirty buffers are obviously\nwritten to the data area, not pg_xlog.\n\n> I wonder if it might also pay to make the background writer even more\n> aggressive than we have, so that SELECT-only queries don't spend so\n> much time writing pages.\n\nThat's worth trying.\n\n> Anyway, given that these are replication\n> targets, and aren't the \"database of origin\" for any data of their\n> own, I guess there's no reason not to try asynchronous commit. \n\nYeah; since the transactions only ever write commit records to WAL, it\nwouldn't matter a bit that they are lost on crash. And you should see\nan improvement, because they wouldn't have to flush at all.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 12 Feb 2010 11:03:48 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Kevin Grittner wrote:\n\n> > Anyway, given that these are replication\n> > targets, and aren't the \"database of origin\" for any data of their\n> > own, I guess there's no reason not to try asynchronous commit. \n> \n> Yeah; since the transactions only ever write commit records to WAL, it\n> wouldn't matter a bit that they are lost on crash. And you should see\n> an improvement, because they wouldn't have to flush at all.\n\nActually, a transaction that performed no writes doesn't get a commit\nWAL record written, so it shouldn't make any difference at all.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Fri, 12 Feb 2010 11:10:36 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> wrote:\n> Alvaro Herrera wrote:\n>> Kevin Grittner wrote:\n> \n>> > Anyway, given that these are replication targets, and aren't\n>> > the \"database of origin\" for any data of their own, I guess\n>> > there's no reason not to try asynchronous commit. \n>> \n>> Yeah; since the transactions only ever write commit records to\n>> WAL, it wouldn't matter a bit that they are lost on crash. And\n>> you should see an improvement, because they wouldn't have to\n>> flush at all.\n> \n> Actually, a transaction that performed no writes doesn't get a\n> commit WAL record written, so it shouldn't make any difference at\n> all.\n \nWell, concurrent to the web application is the replication. Would\nasynchronous commit of that potentially alter the pattern of writes\nsuch that it had less impact on the reads? I'm thinking, again, of\nwhy the placement of the pg_xlog on a separate file system made such\na dramatic difference to the read-only response time -- might it\nmake less difference if the replication was using asynchronous\ncommit?\n\nBy the way, the way our replication system works is that each target\nkeeps track of how far it has replicated in the transaction stream\nof each source, so as long as a *later* transaction from a source is\nnever persisted before an *earlier* one, there's no risk of data\nloss; it's strictly a performance issue. It will be able to catch\nup from wherever it is in the transaction stream when it comes back\nup after any down time (planned or otherwise).\n \nBy the way, I have no complaints about the performance with the\npg_xlog directory on its own file system (although if it could be\n*even faster* with a configuration change I will certainly take\nadvantage of that). I do like to understand the dynamics of these\nthings when I can, though.\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 08:49:50 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Kevin Grittner wrote:\n> Alvaro Herrera <[email protected]> wrote:\n\n> > Actually, a transaction that performed no writes doesn't get a\n> > commit WAL record written, so it shouldn't make any difference at\n> > all.\n> \n> Well, concurrent to the web application is the replication. Would\n> asynchronous commit of that potentially alter the pattern of writes\n> such that it had less impact on the reads?\n\nWell, certainly async commit would completely change the pattern of\nwrites: it would give the controller an opportunity to reorder them\naccording to some scheduler. Otherwise they are strictly serialized.\n\n> I'm thinking, again, of\n> why the placement of the pg_xlog on a separate file system made such\n> a dramatic difference to the read-only response time -- might it\n> make less difference if the replication was using asynchronous\n> commit?\n\nYeah, I think it would have been less notorious, but this is all\ntheoretical.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 12 Feb 2010 17:11:06 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "Kevin Grittner wrote:\n> I wonder if it might also pay to make the background writer even more\n> aggressive than we have, so that SELECT-only queries don't spend so\n> much time writing pages.\nYou can easily quantify if the BGW is aggressive enough. Buffers leave \nthe cache three ways, and they each show up as separate counts in \npg_stat_bgwriter: buffers_checkpoint, buffers_clean (the BGW), and \nbuffers_backend (the queries). Cranking it up further tends to shift \nwrites out of buffers_backend, which are the ones you want to avoid, \ntoward buffers_clean instead. If buffers_backend is already low on a \npercentage basis compared to the other two, there's little benefit in \ntrying to make the BGW do more.\n\n-- \nGreg Smith 2ndQuadrant Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.com\n\n",
"msg_date": "Sat, 13 Feb 2010 01:29:38 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
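That check can be run as a single query against pg_stat_bgwriter; a sketch (the lower the backend share, the less there is to gain from a more aggressive background writer):

    SELECT buffers_checkpoint,
           buffers_clean,
           buffers_backend,
           round(100.0 * buffers_backend /
                 (buffers_checkpoint + buffers_clean + buffers_backend), 1)
               AS pct_written_by_backends
    FROM pg_stat_bgwriter;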
{
"msg_contents": "Greg Smith <[email protected]> wrote:\n \n> You can easily quantify if the BGW is aggressive enough. Buffers\n> leave the cache three ways, and they each show up as separate\n> counts in pg_stat_bgwriter: buffers_checkpoint, buffers_clean\n> (the BGW), and buffers_backend (the queries). Cranking it up\n> further tends to shift writes out of buffers_backend, which are\n> the ones you want to avoid, toward buffers_clean instead. If\n> buffers_backend is already low on a percentage basis compared to\n> the other two, there's little benefit in trying to make the BGW do\n> more.\n \nHere are the values from our two largest and busiest systems (where\nwe found the pg_xlog placement to matter so much). It looks to me\nlike a more aggressive bgwriter would help, yes?\n \ncir=> select * from pg_stat_bgwriter ;\n-[ RECORD 1 ]------+------------\ncheckpoints_timed | 125996\ncheckpoints_req | 16932\nbuffers_checkpoint | 342972024\nbuffers_clean | 343634920\nmaxwritten_clean | 9928\nbuffers_backend | 575589056\nbuffers_alloc | 52397855471\n \ncir=> select * from pg_stat_bgwriter ;\n-[ RECORD 1 ]------+------------\ncheckpoints_timed | 125992\ncheckpoints_req | 16840\nbuffers_checkpoint | 260358442\nbuffers_clean | 474768152\nmaxwritten_clean | 9778\nbuffers_backend | 565837397\nbuffers_alloc | 71463873477\n \nCurrent settings:\n \nbgwriter_delay = '200ms'\nbgwriter_lru_maxpages = 1000\nbgwriter_lru_multiplier = 4\n \nAny suggestions on how far to push it?\n \n-Kevin\n",
"msg_date": "Tue, 23 Feb 2010 15:23:54 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
},
{
"msg_contents": "On Feb 23, 2010, at 2:23 PM, Kevin Grittner wrote:\n\n> \n> Here are the values from our two largest and busiest systems (where\n> we found the pg_xlog placement to matter so much). It looks to me\n> like a more aggressive bgwriter would help, yes?\n> \n> cir=> select * from pg_stat_bgwriter ;\n> -[ RECORD 1 ]------+------------\n> checkpoints_timed | 125996\n> checkpoints_req | 16932\n> buffers_checkpoint | 342972024\n> buffers_clean | 343634920\n> maxwritten_clean | 9928\n> buffers_backend | 575589056\n> buffers_alloc | 52397855471\n> \n> cir=> select * from pg_stat_bgwriter ;\n> -[ RECORD 1 ]------+------------\n> checkpoints_timed | 125992\n> checkpoints_req | 16840\n> buffers_checkpoint | 260358442\n> buffers_clean | 474768152\n> maxwritten_clean | 9778\n> buffers_backend | 565837397\n> buffers_alloc | 71463873477\n> \n> Current settings:\n> \n> bgwriter_delay = '200ms'\n> bgwriter_lru_maxpages = 1000\n> bgwriter_lru_multiplier = 4\n> \n> Any suggestions on how far to push it?\n\nI don't know how far to push it, but you could start by reducing the delay time and observe how that affects performance.\nOn Feb 23, 2010, at 2:23 PM, Kevin Grittner wrote:Here are the values from our two largest and busiest systems (wherewe found the pg_xlog placement to matter so much). It looks to melike a more aggressive bgwriter would help, yes?cir=> select * from pg_stat_bgwriter ;-[ RECORD 1 ]------+------------checkpoints_timed | 125996checkpoints_req | 16932buffers_checkpoint | 342972024buffers_clean | 343634920maxwritten_clean | 9928buffers_backend | 575589056buffers_alloc | 52397855471cir=> select * from pg_stat_bgwriter ;-[ RECORD 1 ]------+------------checkpoints_timed | 125992checkpoints_req | 16840buffers_checkpoint | 260358442buffers_clean | 474768152maxwritten_clean | 9778buffers_backend | 565837397buffers_alloc | 71463873477Current settings:bgwriter_delay = '200ms'bgwriter_lru_maxpages = 1000bgwriter_lru_multiplier = 4Any suggestions on how far to push it?I don't know how far to push it, but you could start by reducing the delay time and observe how that affects performance.",
"msg_date": "Tue, 23 Feb 2010 15:14:02 -0700",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: moving pg_xlog -- yeah, it's worth it!"
}
] |
[
{
"msg_contents": "Some informations:\nThe following problem has been detected on\n Postgresql 8.3 and 8.4. on System linux or windows\n Default AutoVacuum daemon working\n One pg_dump every day\nThis happens sometimes and i don't see what can be the cause.\nA manual Vacuum Analyse repair that problem.\n\nDear you all,\n\nHope someone would help me understand why only changing a where clause \nvalue attribute will have a big impact on query plan and lead to almost \nunending query.\nregards\n\nlionel\n\n\n\nThis is my query:\n\nselect element2_.element_seqnum as col_0_0_,\nelement1_.element_seqnum as col_1_0_,\n link0_.link_rank_in_bunch as col_2_0_,\n element2_.element_state as col_3_0_\n from public.link link0_\n inner join public.element element1_ on \nlink0_.element_target=element1_.element_seqnum\n inner join public.user_element users3_ on \nelement1_.element_seqnum=users3_.element_seqnum\n inner join public.user user4_ on users3_.user_seqnum=user4_.user_seqnum\n inner join public.element_block blocks7_ on \nelement1_.element_seqnum=blocks7_.element_seqnum\n inner join public.block block8_ on \nblocks7_.block_seqnum=block8_.block_seqnum\n\n inner join public.element element2_ on \nlink0_.element_source=element2_.element_seqnum\n inner join public.user_element users5_ on \nelement2_.element_seqnum=users5_.element_seqnum\n inner join public.user user6_ on \nusers5_.user_seqnum=user6_.user_seqnum\n inner join public.element_block blocks9_ on \nelement2_.element_seqnum=blocks9_.element_seqnum\n inner join public.block block10_ on \nblocks9_.block_seqnum=block10_.block_seqnum\n where block10_.block_seqnum=5\n and block8_.block_seqnum=5\n and user6_.user_seqnum=XX\n and (link0_.link_sup_date is null)\n and user4_.user_seqnum=XX\n\n\n\n\n-------------------------------------------------------\n\nThis one works well: Query Plan for that user \"2\" \n(\"user4_.user_seqnum=2\" and \"user6_.user_seqnum=2 \") will be:\n\nNested Loop (cost=36.33..5932.28 rows=1 width=16)\n -> Nested Loop (cost=36.33..5926.38 rows=1 width=20)\n -> Nested Loop (cost=36.33..5925.23 rows=1 width=24)\n Join Filter: (link0_.element_source = blocks9_.element_seqnum)\n -> Index Scan using fki_element_block_block on \nelement_block blocks9_ (cost=0.00..8.29 rows=1 width=8)\n Index Cond: (block_seqnum = 5)\n -> Nested Loop (cost=36.33..5916.64 rows=24 width=28)\n -> Nested Loop (cost=36.33..5883.29 rows=4 width=40)\n -> Seq Scan on \"user\" user4_ (cost=0.00..5.89 \nrows=1 width=4)\n Filter: (user_seqnum = 2)\n -> Nested Loop (cost=36.33..5877.36 rows=4 \nwidth=36)\n -> Nested Loop (cost=36.33..5860.81 \nrows=4 width=28)\n -> Nested Loop \n(cost=36.33..5835.59 rows=6 width=20)\n -> Nested Loop \n(cost=0.00..17.76 rows=1 width=8)\n -> Nested Loop \n(cost=0.00..16.61 rows=1 width=12)\n -> Index Scan \nusing fki_element_block_block on element_block blocks7_ \n(cost=0.00..8.29 rows=1 width=8)\n Index Cond: \n(block_seqnum = 5)\n -> Index Scan \nusing pk_element on element element1_ (cost=0.00..8.31 rows=1 width=4)\n Index Cond: \n(element1_.element_seqnum = blocks7_.element_seqnum)\n -> Seq Scan on block \nblock8_ (cost=0.00..1.14 rows=1 width=4)\n Filter: \n(block8_.block_seqnum = 5)\n -> Bitmap Heap Scan on link \nlink0_ (cost=36.33..5792.21 rows=2050 width=12)\n Recheck Cond: \n(link0_.element_target = element1_.element_seqnum)\n Filter: \n(link0_.link_sup_date IS NULL)\n -> Bitmap Index Scan \non element_target_fk (cost=0.00..35.82 rows=2050 width=0)\n Index Cond: \n(link0_.element_target = element1_.element_seqnum)\n -> 
Index Scan using \npk_user_element on user_element users5_ (cost=0.00..4.19 rows=1 width=8)\n Index Cond: \n((users5_.user_seqnum = 2) AND (users5_.element_seqnum = \nlink0_.element_source))\n -> Index Scan using pk_element on \nelement element2_ (cost=0.00..4.12 rows=1 width=8)\n Index Cond: \n(element2_.element_seqnum = link0_.element_source)\n -> Index Scan using pk_user_element on user_element \nusers3_ (cost=0.00..8.33 rows=1 width=8)\n Index Cond: ((users3_.user_seqnum = 2) AND \n(users3_.element_seqnum = link0_.element_target))\n -> Seq Scan on block block10_ (cost=0.00..1.14 rows=1 width=4)\n Filter: (block10_.block_seqnum = 5)\n -> Seq Scan on \"user\" user6_ (cost=0.00..5.89 rows=1 width=4)\n Filter: (user6_.user_seqnum = 2)\n*\nThis one is very very very long (was still in process 10 mins later with \n100%cpu*): Query Plan for user \"10\" (\"user4_.user_seqnum=10\" and \n\"user6_.user_seqnum=10 \") will be:\n\n\nQUERY PLAN\nNested Loop (cost=54.34..1490.62 rows=1 width=16)\n -> Nested Loop (cost=54.34..1484.72 rows=1 width=20)\n Join Filter: (link0_.element_source = blocks9_.element_seqnum)\n -> Nested Loop (cost=54.34..1476.41 rows=1 width=32)\n -> Nested Loop (cost=54.34..1475.26 rows=1 width=28)\n -> Nested Loop (cost=54.34..1466.95 rows=1 width=36)\n -> Nested Loop (cost=54.34..1461.05 rows=1 \nwidth=40)\n -> Nested Loop (cost=54.34..1459.90 \nrows=1 width=44)\n -> Nested Loop \n(cost=54.34..1455.52 rows=1 width=36)\n -> Nested Loop \n(cost=13.15..1410.30 rows=1 width=24)\n -> Nested Loop \n(cost=0.00..16.63 rows=1 width=16)\n -> Index Scan \nusing fki_element_block_block on element_block blocks7_ \n(cost=0.00..8.29 rows=1 width=8)\n Index Cond: \n(block_seqnum = 5)\n -> Index Scan \nusing pk_user_element on user_element users3_ (cost=0.00..8.33 rows=1 \nwidth=8)\n Index Cond: \n((users3_.user_seqnum = 10) AND (users3_.element_seqnum = \nblocks7_.element_seqnum))\n -> Bitmap Heap Scan on \nuser_element users5_ (cost=13.15..1387.40 rows=627 width=8)\n Recheck Cond: \n(users5_.user_seqnum = 10)\n -> Bitmap Index \nScan on fki_user_element_user (cost=0.00..12.99 rows=627 width=0)\n Index Cond: \n(users5_.user_seqnum = 10)\n -> Bitmap Heap Scan on link \nlink0_ (cost=41.19..45.20 rows=1 width=12)\n Recheck Cond: \n((link0_.element_source = users5_.element_seqnum) AND \n(link0_.element_target = users3_.element_seqnum))\n Filter: \n(link0_.link_sup_date IS NULL)\n -> BitmapAnd \n(cost=41.19..41.19 rows=1 width=0)\n -> Bitmap Index \nScan on element_source_fk (cost=0.00..4.60 rows=21 width=0)\n Index Cond: \n(link0_.element_source = users5_.element_seqnum)\n -> Bitmap Index \nScan on element_target_fk (cost=0.00..35.82 rows=2050 width=0)\n Index Cond: \n(link0_.element_target = users3_.element_seqnum)\n -> Index Scan using pk_element on \nelement element2_ (cost=0.00..4.37 rows=1 width=8)\n Index Cond: \n(element2_.element_seqnum = link0_.element_source)\n -> Seq Scan on block block8_ \n(cost=0.00..1.14 rows=1 width=4)\n Filter: (block8_.block_seqnum = 5)\n -> Seq Scan on \"user\" user4_ (cost=0.00..5.89 \nrows=1 width=4)\n Filter: (user4_.user_seqnum = 10)\n -> Index Scan using pk_element on element element1_ \n(cost=0.00..8.31 rows=1 width=4)\n Index Cond: (element1_.element_seqnum = \nlink0_.element_target)\n -> Seq Scan on block block10_ (cost=0.00..1.14 rows=1 \nwidth=4)\n Filter: (block10_.block_seqnum = 5)\n -> Index Scan using fki_element_block_block on element_block \nblocks9_ (cost=0.00..8.29 rows=1 width=8)\n Index Cond: (blocks9_.block_seqnum = 5)\n -> Seq Scan on 
\"user\" user6_ (cost=0.00..5.89 rows=1 width=4)\n Filter: (user6_.user_seqnum = 10)\n\n\n\n\n\n_______________________________________________\nBoozter-dev mailing list\[email protected]\nhttp://ns355324.ovh.net/mailman/listinfo/boozter-dev\n\n\n",
"msg_date": "Fri, 12 Feb 2010 14:35:15 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Almost infinite query -> Different Query Plan when changing where\n\tclause value"
},
{
"msg_contents": "lionel duboeuf <[email protected]> wrote:\n \n> Some informations:\n> The following problem has been detected on\n> Postgresql 8.3 and 8.4. on System linux or windows\n> Default AutoVacuum daemon working\n> One pg_dump every day\n> This happens sometimes and i don't see what can be the cause.\n> A manual Vacuum Analyse repair that problem.\n \nIt's good to know that the issue has been observed in more than one\nrelease or on more than one platform, but it's also useful to get a\nbit more information about one particular occurrence. Please read\nthis:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIn particular, knowing such things as your postgresql.conf settings,\nthe disk configuration, how much RAM the machine has, etc. can allow\nus to provide more useful advice.\n \nPlease run these as EXPLAIN ANALYZE (or at least whatever you can\nget to finish that way) instead of just EXPLAIN. If you can let the\nslow one run through EXPLAIN ANALYZE overnight or on a test machine\nso that it can complete, it will give us a lot more with which to\nwork. Please attach wide output (like that from EXPLAIN) as a text\nattachment, to prevent wrapping which makes it hard to read.\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 09:05:09 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\n\twhen changing where clause value"
},
{
"msg_contents": "Thanks kevin for your answer. Here is some additionnal informations \nattached as files.\n\n\nregards.\nLionel\n\nKevin Grittner a �crit :\n> lionel duboeuf <[email protected]> wrote:\n> \n> \n>> Some informations:\n>> The following problem has been detected on\n>> Postgresql 8.3 and 8.4. on System linux or windows\n>> Default AutoVacuum daemon working\n>> One pg_dump every day\n>> This happens sometimes and i don't see what can be the cause.\n>> A manual Vacuum Analyse repair that problem.\n>> \n> \n> It's good to know that the issue has been observed in more than one\n> release or on more than one platform, but it's also useful to get a\n> bit more information about one particular occurrence. Please read\n> this:\n> \n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n> \n> In particular, knowing such things as your postgresql.conf settings,\n> the disk configuration, how much RAM the machine has, etc. can allow\n> us to provide more useful advice.\n> \n> Please run these as EXPLAIN ANALYZE (or at least whatever you can\n> get to finish that way) instead of just EXPLAIN. If you can let the\n> slow one run through EXPLAIN ANALYZE overnight or on a test machine\n> so that it can complete, it will give us a lot more with which to\n> work. Please attach wide output (like that from EXPLAIN) as a text\n> attachment, to prevent wrapping which makes it hard to read.\n> \n> -Kevin\n>",
"msg_date": "Fri, 12 Feb 2010 21:32:30 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t when\n\tchanging where clause value"
},
{
"msg_contents": "lionel duboeuf <[email protected]> wrote:\n> Thanks kevin for your answer. Here is some additionnal\n> informations attached as files.\n \nCould you supply an EXPLAIN ANALYZE of the fast plan as an\nattachment, for comparison?\n \nAnyway, it looks like at least one big problem is the bad estimate\non how many rows will be generated by joining to the users5_ table:\n\n> (cost=13.20..1427.83 rows=1 width=24)\n> (actual time=1.374..517.662 rows=122850 loops=1)\n\nIf it had expected 122850 rows to qualify for that join, it probably\nwould have picked a different plan.\n \nI just reread your original email, and I'm not sure I understand\nwhat you meant regarding VACUUM ANALYZE. If you run that right\nbeforehand, do you still get the slow plan for user 10?\n \n-Kevin\n",
"msg_date": "Fri, 12 Feb 2010 14:59:49 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t\n\twhen changing where clause value"
},
{
"msg_contents": "See as attachment the \"correct\" query plan for an other 'user'.\nI confirm by executing manual \"VACUUM ANALYZE\" that the problem is solved.\nBut what i don't understand is that i would expect autovacuum to do the job.\n\nLionel\n\n\n\nKevin Grittner a �crit :\n> lionel duboeuf <[email protected]> wrote:\n> \n>> Thanks kevin for your answer. Here is some additionnal\n>> informations attached as files.\n>> \n> \n> Could you supply an EXPLAIN ANALYZE of the fast plan as an\n> attachment, for comparison?\n> \n> Anyway, it looks like at least one big problem is the bad estimate\n> on how many rows will be generated by joining to the users5_ table:\n>\n> \n>> (cost=13.20..1427.83 rows=1 width=24)\n>> (actual time=1.374..517.662 rows=122850 loops=1)\n>> \n>\n> If it had expected 122850 rows to qualify for that join, it probably\n> would have picked a different plan.\n> \n> I just reread your original email, and I'm not sure I understand\n> what you meant regarding VACUUM ANALYZE. If you run that right\n> beforehand, do you still get the slow plan for user 10?\n> \n> -Kevin",
"msg_date": "Mon, 15 Feb 2010 10:52:49 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t\t when\n\tchanging where clause value"
},
{
"msg_contents": "On Mon, Feb 15, 2010 at 2:52 AM, lionel duboeuf\n<[email protected]> wrote:\n> See as attachment the \"correct\" query plan for an other 'user'.\n> I confirm by executing manual \"VACUUM ANALYZE\" that the problem is solved.\n> But what i don't understand is that i would expect autovacuum to do the job.\n\nThere are two operations here. Vacuum, which reclaims lost space, and\nanalyze which analyzes your data and stores histograms to be used when\nbuilding queries, to determine how many rows are likely to be returned\nby each part of the plan.\n\nThe autovac daemon runs both vacuums and analyzes, often independently\nof each other, when needed, based on threshold settings. By running\nautovac your db would get analyzed when necessary and would then have\nup to date statistics when queries were run after that.\n",
"msg_date": "Mon, 15 Feb 2010 10:24:27 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan when\n\tchanging where clause value"
}
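A sketch of verifying that on the affected database: pg_stat_user_tables shows when each table was last vacuumed and analyzed, manually or by the daemon, and how many live and dead rows the statistics collector currently sees (columns as of 8.3/8.4):

    SELECT relname,
           last_autovacuum,
           last_autoanalyze,
           last_vacuum,
           last_analyze,
           n_live_tup,
           n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY last_autoanalyze NULLS FIRST;   -- never-analyzed tables come first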
] |
[
{
"msg_contents": "I have been trying to track down a performance issue we've been having with a INSERT INTO ... SELECT query run against a partitioned table on postgres. The problem appears to be in the plan building of the query and after some further research I think I have nailed down a simplified example of the problem. Attached is a simple script that will build an example of our table structure load 2 records and run the explain that produces the plan in question. The query plan looks like the following:\n\n QUERY PLAN \n \n------------------------------------------------------------------\n-------------------------- \n Result (cost=0.00..0.01 rows=1 width=0) \n One-Time Filter: false \n \n Nested Loop (cost=23.50..47.08 rows=4 width=1036) \n -> Append (cost=0.00..23.50 rows=2 width=520) \n -> Seq Scan on base (cost=0.00..11.75 rows=1 width=520)\n Filter: (id = 1) \n -> Seq Scan on base_1 base (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n -> Materialize (cost=23.50..23.52 rows=2 width=520) \n -> Append (cost=0.00..23.50 rows=2 width=520) \n -> Seq Scan on another (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n -> Seq Scan on another_1 another (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n \n Result (cost=23.50..47.08 rows=1 width=1036) \n One-Time Filter: false \n -> Nested Loop (cost=23.50..47.08 rows=1 width=1036) \n -> Append (cost=0.00..23.50 rows=2 width=520) \n -> Seq Scan on base (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n -> Seq Scan on base_1 base (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n -> Materialize (cost=23.50..23.52 rows=2 width=520) \n -> Append (cost=0.00..23.50 rows=2 width=520) \n -> Seq Scan on another (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n -> Seq Scan on another_1 another (cost=0.00..11.75 rows=1 width=520) \n Filter: (id = 1) \n \n Result (cost=23.50..47.08 rows=1 width=1036) \n One-Time Filter: false \n -> Nested Loop (cost=23.50..47.08 rows=1 width=1036)\n -> Append (cost=0.00..23.50 rows=2 width=520)\n -> Seq Scan on base (cost=0.00..11.75 rows=1 width=520)\n Filter: (id = 1)\n -> Seq Scan on base_1 base (cost=0.00..11.75 rows=1 width=520)\n Filter: (id = 1)\n -> Materialize (cost=23.50..23.52 rows=2 width=520)\n -> Append (cost=0.00..23.50 rows=2 width=520)\n -> Seq Scan on another (cost=0.00..11.75 rows=1 width=520)\n Filter: (id = 1)\n -> Seq Scan on another_1 another (cost=0.00..11.75 rows=1 width=520)\n Filter: (id = 1)\n(45 rows)\n\n\nThe problem appears to be the multiple Result sections. I don't understand why this is happening but I do know that a new results section occurs for each new partition you add. The result is that in my actual system where we have a couple hundred partitions this query takes minutes to plan. I've tried this on a Dell (24 core 2.66 GHz) with 192 GB of RAM running postgres 8.3.7 and an IBM 570 (16 core 1.6 Ghz Power 5) with 16 GB of RAM running postgres 8.4.2 both running RedHat Enterprise 5.0 and both take what I would consider way to long to generate the plan.\n\nThe 8.3.7 version has constraint exclusion on and the 8.4.2 version has constraint exclusion partial.",
"msg_date": "Fri, 12 Feb 2010 11:03:05 -0500",
"msg_from": "\"Connors, Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questions on plan with INSERT/SELECT on partitioned table"
},
{
"msg_contents": "\"Connors, Bill\" <[email protected]> writes:\n> ... in my actual system where we have a couple hundred partitions this\n> query takes minutes to plan.\n\nPlease note what the documentation says under \"Partitioning Caveats\".\nThe current partitioning support is not meant to scale past a few dozen\npartitions. So the solution to your problem is to not have so many\npartitions.\n\nThere are plans to make some fundamental changes in partitioning\nsupport, and one of the main reasons for that is to allow it to scale to\nlarger numbers of partitions. This is just in the arm-waving stage\nthough ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 12 Feb 2010 13:20:29 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questions on plan with INSERT/SELECT on partitioned table "
}
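A sketch of measuring how wide the partitioning actually is, since planning time grows with the number of child tables; pg_inherits lists every child of every parent:

    SELECT inhparent::regclass AS parent_table,
           count(*)            AS partitions
    FROM pg_inherits
    GROUP BY inhparent
    ORDER BY partitions DESC;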
] |
[
{
"msg_contents": "Hi,\n\nI have seen some work done about scaling PostgreSQL on SMP machines \nand see that PostgreSQL has been executed on several numbers of cores. \nI have some questions about scaling PostgreSQL on SMP architectures.\n\nSystem Details that I am running PostgreSQL-8.4.0:\nServer: HP Integrity Superdome SD32B\nProcessor: Intel Itanium2 1.6 GHz (dual-core)\nNumber of processors: 32\nNumber of compute cores: 64\nMemory architecture: Shared\nMemory amount: 128 GB\nDisk amount: 4.6 TB\nHigh performance network: InfiniBand 20 Gbps\nOperating system: RHEL 5.1 IA64\n\nQuestion 1) How can I run PostgreSQL on 2,4,8,16,32 and 64 cores and \nget performance results?\nI have seen some work about scaling POstgreSQL on SMP architectures \nand in these slides there were performance results for PostgreSQL \nrunning on different number of cpu cores. I want to execute PostgreSQL \nfor different number of cpu cores like 2,4,8,16,32,64 on above server. \nBut I do not know how to compile or run PostgreSQL on multiple cpu \ncores. What is the exact command to tell Postgres to run on specified \nnumber of cores? I need to run Postgres on multiple CPU cores and \ndocument this work.\n\nQuestion 2)Are there any official benchmarks for PostgreSQL \nperformance measurements?\n\nI mean when someone wants to run an official performance test on \nPostgreSQL which queries and tables must be used. I have executed my \nqueries on a 5 million record table in order to measure performance, \nbut I also want to use the official methods.\n\nQuestion 3) How can I change the allocated memory and measure the \nperformance of PostgreSQL?\n\nThanks for all answers,\n\nReydan\n",
"msg_date": "Sun, 14 Feb 2010 18:36:24 +0200",
"msg_from": "Reydan Cankur <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL on SMP Architectures"
},
{
"msg_contents": "Hi Reydan,\n\nPostgreSQL is unaware of the amount of cpu's and cores and doesn't \nreally care about that. Its up to your OS to spread the work across the \nsystem.\n\nBe careful with what you test, normal PostgreSQL cannot spread a single \nquery over multiple cores. But it will of course spread its full \nworkload of separate queries over the cores (or more precisely, allows \nthe OS to do that).\n\nYou could tell Linux to disable some cores from the scheduler (or \nperhaps itanium-systems support to disable processors completely), it \nworks quite easy. You can just do something like:\necho 0 >> /sys/devices/system/cpu/cpu1/online\n\nThis is probably not exactly the same as removing a cpu from the system, \nbut may be close enough for your needs. You should be careful which \ncores/cpus you disable, i.e. a supported dual-cpu configuration might \ncorrespond to cpu's 0 and 7 in a fully populated system, rather than \njust 0 and 1. And even more important, if you'd remove a cpu, you'd \nremove 2 cores and you should disable them together as well (not that \nLinux cares, but it might yield somewhat skewed results).\n\nIt is quite hard to benchmark PostgreSQL (or any software) on such a \nsystem, in the sense that all kinds of scalability issues may occur that \nare not normally ran into with PostgreSQL (simply because most people \nhave 16 cores or less in their systems). I'm not sure what to advice to \nyou on that matter, we've used a read-mostly internal benchmark that \nscaled quite well to 32 (Sun T2) cores/threads, but that is probably not \na very realistic benchmark for your system.\n\nThe only advice I can give you here is: make sure you understand all \nparts of the system and software well. Benchmarking is hard enough as it \nis, if you don't understand the software and/or system its running on, \nyou may mess things up.\n\nBest regards,\n\nArjen\n\nOn 14-2-2010 17:36 Reydan Cankur wrote:\n> Hi,\n>\n> I have seen some work done about scaling PostgreSQL on SMP machines and\n> see that PostgreSQL has been executed on several numbers of cores. I\n> have some questions about scaling PostgreSQL on SMP architectures.\n>\n> System Details that I am running PostgreSQL-8.4.0:\n> Server: HP Integrity Superdome SD32B\n> Processor: Intel Itanium2 1.6 GHz (dual-core)\n> Number of processors: 32\n> Number of compute cores: 64\n> Memory architecture: Shared\n> Memory amount: 128 GB\n> Disk amount: 4.6 TB\n> High performance network: InfiniBand 20 Gbps\n> Operating system: RHEL 5.1 IA64\n>\n> Question 1) How can I run PostgreSQL on 2,4,8,16,32 and 64 cores and get\n> performance results?\n> I have seen some work about scaling POstgreSQL on SMP architectures and\n> in these slides there were performance results for PostgreSQL running on\n> different number of cpu cores. I want to execute PostgreSQL for\n> different number of cpu cores like 2,4,8,16,32,64 on above server. But I\n> do not know how to compile or run PostgreSQL on multiple cpu cores. What\n> is the exact command to tell Postgres to run on specified number of\n> cores? I need to run Postgres on multiple CPU cores and document this work.\n>\n> Question 2)Are there any official benchmarks for PostgreSQL performance\n> measurements?\n>\n> I mean when someone wants to run an official performance test on\n> PostgreSQL which queries and tables must be used. 
I have executed my\n> queries on a 5 million record table in order to measure performance, but\n> I also want to use the official methods.\n>\n> Question 3) How can I change the allocated memory and measure the\n> performance of PostgreSQL?\n>\n> Thanks for all answers,\n>\n> Reydan\n>\n",
"msg_date": "Sun, 14 Feb 2010 18:12:00 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on SMP Architectures"
},
{
"msg_contents": "With all that info, your question is still kind of vague. Do you want\nto be able to run a single query across all 64 cores? Or do you have\n1,000 users and you want them to be spread out on all cores.\nPostgreSQL can do the second but has no native features to do the\nfirst.\n",
"msg_date": "Sun, 14 Feb 2010 11:25:11 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on SMP Architectures"
},
{
"msg_contents": "I want to do the second. I want to spread out the workload on all \ncores. But also I want to set the core number; for example first I \nwant to spread out the workload to 32 cores then 64 cores and see the \nscalability.\nOn Feb 14, 2010, at 8:25 PM, Scott Marlowe wrote:\n\n> With all that info, your question is still kind of vague. Do you want\n> to be able to run a single query across all 64 cores? Or do you have\n> 1,000 users and you want them to be spread out on all cores.\n> PostgreSQL can do the second but has no native features to do the\n> first.\n\n",
"msg_date": "Sun, 14 Feb 2010 20:46:03 +0200",
"msg_from": "Reydan Cankur <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL on SMP Architectures"
},
{
"msg_contents": "Reydan Cankur �rta:\n> I want to do the second. I want to spread out the workload on all\n> cores. But also I want to set the core number; for example first I\n> want to spread out the workload to 32 cores then 64 cores and see the\n> scalability.\n\nYou can use taskset(1) or schedtool(8) to set the CPU affinity\nof the postmaster. New backends inherit the setting.\n\n> On Feb 14, 2010, at 8:25 PM, Scott Marlowe wrote:\n>\n>> With all that info, your question is still kind of vague. Do you want\n>> to be able to run a single query across all 64 cores? Or do you have\n>> 1,000 users and you want them to be spread out on all cores.\n>> PostgreSQL can do the second but has no native features to do the\n>> first.\n>\n>\n\n\n-- \nBible has answers for everything. Proof:\n\"But let your communication be, Yea, yea; Nay, nay: for whatsoever is more\nthan these cometh of evil.\" (Matthew 5:37) - basics of digital technology.\n\"May your kingdom come\" - superficial description of plate tectonics\n\n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Sch�nig & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n",
"msg_date": "Sun, 14 Feb 2010 20:05:30 +0100",
"msg_from": "Boszormenyi Zoltan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on SMP Architectures"
}
] |
[
{
"msg_contents": "(Apologies if this ends up coming through multiple times - my first attempts seem to have gotten stuck.)\n\nWe recently took the much needed step in moving from 8.1.19 to 8.4.2. We took the downtime opportunity to also massively upgrade our hardware. Overall, this has been the major improvement you would expect, but there is at least one query which has degraded in performance quite a bit. Here is the plan on 8.4.2:\nhttp://wood.silentmedia.com/bench/842\n\nHere is the very much less compact plan for the same query on 8.1.19:\nhttp://wood.silentmedia.com/bench/8119\n\nI think the problem might be that 8.1.19 likes to use a few indexes which 8.4.2 doesn't seem to think would be worthwhile. Perhaps that's because on the new hardware almost everything fits into ram, but even so, it would be better if those indexes were used. The other differences I can think of are random_page_cost (2 on the new hardware vs. 2.5 on the old), a ten-fold increase in effective_cache_size, doubling work_mem from 8MB to 16MB, and that we analyze up to 100 samples per attribute on 8.4.2, while our 8.1.19 install does 10 at most. Still, the estimates for both plans seem fairly accurate, at least where there are differences in which indexes are getting used.\n\nEverything has been analyzed recently, and given that 8.4.2 already has 10x more analysis samples than 8.1.19, I'm not sure what to do to coax it towards using those indexes.",
"msg_date": "Sun, 14 Feb 2010 10:38:56 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.1 -> 8.4 regression"
},
{
"msg_contents": "Can you force 8.4 to generate the same plan as 8.1? For example by running\n\n SET enable_hashjoin = off;\n\nbefore you run EXPLAIN on the query? If so, then we can compare the\nnumbers from the forced plan with the old plan and maybe figure out why it\ndidn't use the same old plan in 8.4 as it did in 8.1.\n\nNote that the solution is not to force the plan, but it can give us more\ninformation.\n\n/Dennis\n\n> is at least one query which has degraded in performance quite a bit. Here\n> is the plan on 8.4.2:\n> http://wood.silentmedia.com/bench/842\n>\n> Here is the very much less compact plan for the same query on 8.1.19:\n> http://wood.silentmedia.com/bench/8119\n\n\n",
"msg_date": "Mon, 15 Feb 2010 10:40:49 +0100",
"msg_from": "=?utf-8?B?RGVubmlzIEJqw7Zya2x1bmQ=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 -> 8.4 regression"
}
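A sketch of running that experiment so the setting cannot leak into other sessions; SELECT 1 is only a stand-in for the regressed query from the plans above:

    BEGIN;
    SET LOCAL enable_hashjoin = off;   -- or enable_mergejoin / enable_nestloop, as needed
    EXPLAIN ANALYZE
    SELECT 1;                          -- substitute the slow query here
    ROLLBACK;                          -- the override disappears when the transaction ends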
] |
[
{
"msg_contents": "Please have a look at the following explain plan:\n\n\nexplain analyze\nselect *\nfrom vtiger_crmentity\ninner JOIN vtiger_users\n ON vtiger_users.id = vtiger_crmentity.smownerid\nwhere vtiger_crmentity.deleted = 0 ;\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3665.17..40019.25 rows=640439 width=1603) (actual\ntime=115.613..3288.436 rows=638081 loops=1)\n Hash Cond: (\"outer\".smownerid = \"inner\".id)\n -> Bitmap Heap Scan on vtiger_crmentity (cost=3646.54..30394.02\nrows=640439 width=258) (actual time=114.763..986.504 rows=638318 loops=1)\n Recheck Cond: (deleted = 0)\n -> Bitmap Index Scan on vtiger_crmentity_deleted_idx\n(cost=0.00..3646.54 rows=640439 width=0) (actual time=107.851..107.851\nrows=638318 loops=1)\n Index Cond: (deleted = 0)\n -> Hash (cost=18.11..18.11 rows=211 width=1345) (actual\ntime=0.823..0.823 rows=211 loops=1)\n -> Seq Scan on vtiger_users (cost=0.00..18.11 rows=211\nwidth=1345) (actual time=0.005..0.496 rows=211 loops=1)\n Total runtime: 3869.022 ms\n\n\nSequential index is occuring on vtiger_users table while it has primary key\nindex on id.\nCould anyone please tell me why?\n\n\n\n \\d vtiger_users\n Table\n\"public.vtiger_users\"\n Column | Type\n|\nModifiers\n---------------------+-----------------------------+----------------------------------------------------------------------------------------------\n id | integer | not null default\nnextval('vtiger_users_seq'::regclass)\n user_name | character varying(255) |\n user_password | character varying(30) |\n user_hash | character varying(32) |\n ...\n\nIndexes:\n \"vtiger_users_pkey\" PRIMARY KEY, btree (id)\n \"user_user_name_idx\" btree (user_name)\n \"user_user_password_idx\" btree (user_password)\n \"vtiger_users_user_name_lo_idx\" btree (lower(user_name::text)\nvarchar_pattern_ops)\n\n\n \\d vtiger_crmentity\n Table \"public.vtiger_crmentity\"\n Column | Type | Modifiers\n--------------+-----------------------------+--------------------\n crmid | integer | not null\n smcreatorid | integer | not null default 0\n smownerid | integer | not null default 0\n modifiedby | integer | not null default 0\n setype | character varying(30) | not null\n description | text |\n createdtime | timestamp without time zone | not null\n modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone |\n status | character varying(50) |\n version | integer | not null default 0\n presence | integer | default 1\n deleted | integer | not null default 0\nIndexes:\n \"vtiger_crmentity_pkey\" PRIMARY KEY, btree (crmid)\n \"crmentity_deleted_smownerid_idx\" btree (deleted, smownerid)\n \"crmentity_modifiedby_idx\" btree (modifiedby)\n \"crmentity_smcreatorid_idx\" btree (smcreatorid)\n \"crmentity_smownerid_deleted_idx\" btree (smownerid, deleted)\n \"crmentity_smownerid_idx\" btree (smownerid)\n \"vtiger_crmentity_deleted_idx\" btree (deleted)\n\nPlease have a look at the following explain plan:\nexplain analyzeselect *from vtiger_crmentityinner JOIN vtiger_users ON vtiger_users.id = vtiger_crmentity.smownerid where vtiger_crmentity.deleted = 0 ;\n QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3665.17..40019.25 rows=640439 width=1603) (actual time=115.613..3288.436 rows=638081 loops=1) Hash Cond: 
(\"outer\".smownerid = \"inner\".id) -> Bitmap Heap Scan on vtiger_crmentity (cost=3646.54..30394.02 rows=640439 width=258) (actual time=114.763..986.504 rows=638318 loops=1)\n Recheck Cond: (deleted = 0) -> Bitmap Index Scan on vtiger_crmentity_deleted_idx (cost=0.00..3646.54 rows=640439 width=0) (actual time=107.851..107.851 rows=638318 loops=1) Index Cond: (deleted = 0)\n -> Hash (cost=18.11..18.11 rows=211 width=1345) (actual time=0.823..0.823 rows=211 loops=1) -> Seq Scan on vtiger_users (cost=0.00..18.11 rows=211 width=1345) (actual time=0.005..0.496 rows=211 loops=1)\n Total runtime: 3869.022 ms\nSequential index is occuring on vtiger_users table while it has primary key index on id.Could anyone please tell me why?\n \n \\d vtiger_users Table \"public.vtiger_users\" Column | Type | Modifiers \n---------------------+-----------------------------+---------------------------------------------------------------------------------------------- id | integer | not null default nextval('vtiger_users_seq'::regclass)\n user_name | character varying(255) | user_password | character varying(30) | user_hash | character varying(32) | ...\nIndexes: \"vtiger_users_pkey\" PRIMARY KEY, btree (id) \"user_user_name_idx\" btree (user_name) \"user_user_password_idx\" btree (user_password) \"vtiger_users_user_name_lo_idx\" btree (lower(user_name::text) varchar_pattern_ops)\n \\d vtiger_crmentity Table \"public.vtiger_crmentity\" Column | Type | Modifiers --------------+-----------------------------+--------------------\n crmid | integer | not null smcreatorid | integer | not null default 0 smownerid | integer | not null default 0 modifiedby | integer | not null default 0\n setype | character varying(30) | not null description | text | createdtime | timestamp without time zone | not null modifiedtime | timestamp without time zone | not null\n viewedtime | timestamp without time zone | status | character varying(50) | version | integer | not null default 0 presence | integer | default 1\n deleted | integer | not null default 0Indexes: \"vtiger_crmentity_pkey\" PRIMARY KEY, btree (crmid) \"crmentity_deleted_smownerid_idx\" btree (deleted, smownerid)\n \"crmentity_modifiedby_idx\" btree (modifiedby) \"crmentity_smcreatorid_idx\" btree (smcreatorid) \"crmentity_smownerid_deleted_idx\" btree (smownerid, deleted) \"crmentity_smownerid_idx\" btree (smownerid)\n \"vtiger_crmentity_deleted_idx\" btree (deleted)",
"msg_date": "Mon, 15 Feb 2010 15:35:01 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why primary key index are not using in joining?"
},
{
"msg_contents": "On Mon, Feb 15, 2010 at 2:35 AM, AI Rumman <[email protected]> wrote:\n>\n> Please have a look at the following explain plan:\n>\n> explain analyze\n> select *\n> from vtiger_crmentity\n> inner JOIN vtiger_users\n> ON vtiger_users.id = vtiger_crmentity.smownerid\n> where vtiger_crmentity.deleted = 0 ;\n>\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=3665.17..40019.25 rows=640439 width=1603) (actual\n> time=115.613..3288.436 rows=638081 loops=1)\n> Hash Cond: (\"outer\".smownerid = \"inner\".id)\n> -> Bitmap Heap Scan on vtiger_crmentity (cost=3646.54..30394.02\n> rows=640439 width=258) (actual time=114.763..986.504 rows=638318 loops=1)\n> Recheck Cond: (deleted = 0)\n> -> Bitmap Index Scan on vtiger_crmentity_deleted_idx\n> (cost=0.00..3646.54 rows=640439 width=0) (actual time=107.851..107.851\n> rows=638318 loops=1)\n> Index Cond: (deleted = 0)\n> -> Hash (cost=18.11..18.11 rows=211 width=1345) (actual\n> time=0.823..0.823 rows=211 loops=1)\n> -> Seq Scan on vtiger_users (cost=0.00..18.11 rows=211\n> width=1345) (actual time=0.005..0.496 rows=211 loops=1)\n> Total runtime: 3869.022 ms\n>\n> Sequential index is occuring on vtiger_users table while it has primary key\n> index on id.\n> Could anyone please tell me why?\n\nCause it's only 211 rows and only takes 0.5 milliseconds to scan?\n",
"msg_date": "Mon, 15 Feb 2010 02:50:13 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why primary key index are not using in joining?"
},
{
"msg_contents": "AI Rumman wrote:\n>\n> explain analyze\n> select *\n> from vtiger_crmentity\n> inner JOIN vtiger_users\n> ON vtiger_users.id <http://vtiger_users.id> = \n> vtiger_crmentity.smownerid\n> where vtiger_crmentity.deleted = 0 ;\n> \n> QUERY \n> PLAN \n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=3665.17..40019.25 rows=640439 width=1603) (actual \n> time=115.613..3288.436 rows=638081 loops=1)\n> Hash Cond: (\"outer\".smownerid = \"inner\".id)\n> -> Bitmap Heap Scan on vtiger_crmentity (cost=3646.54..30394.02 \n> rows=640439 width=258) (actual time=114.763..986.504 rows=638318 loops=1)\n> Recheck Cond: (deleted = 0)\n> -> Bitmap Index Scan on vtiger_crmentity_deleted_idx \n> (cost=0.00..3646.54 rows=640439 width=0) (actual time=107.851..107.851 \n> rows=638318 loops=1)\n> Index Cond: (deleted = 0)\n> -> Hash (cost=18.11..18.11 rows=211 width=1345) (actual \n> time=0.823..0.823 rows=211 loops=1)\n> -> Seq Scan on vtiger_users (cost=0.00..18.11 rows=211 \n> width=1345) (actual time=0.005..0.496 rows=211 loops=1)\n> Total runtime: 3869.022 ms\n>\n> Sequential index is occuring on vtiger_users table while it has \n> primary key index on id.\n> Could anyone please tell me why?\n>\n From the list of indexes you also supplied it seems to me you very much \nwant index scanning, the reason being that 4secs is too slow? The \nseqscan is not the reason for that - the main reason is that you process \nalmost all rows of the crmentity table. I bet that if you add a LIMIT, \nor adding a clause that selects only for a specific vtiger_user, the \nplan looks different on the access to the crmentity table as well as the \nkind of join, however if your application really needs to process the \n600k rows, I'm not sure if it can get any faster than that. Perhaps it \nwould help a bit to shorten the SELECT * to only the attributes you \nreally need.\n\nRegards,\nYeb Havinga\n\n",
"msg_date": "Mon, 15 Feb 2010 12:11:23 +0100",
"msg_from": "Yeb Havinga <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why primary key index are not using in joining?"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Mon, Feb 15, 2010 at 2:35 AM, AI Rumman <[email protected]> wrote:\n>> Please have a look at the following explain plan:\n\n>> Hash Join (cost=3665.17..40019.25 rows=640439 width=1603) (actual\n>> time=115.613..3288.436 rows=638081 loops=1)\n>> Hash Cond: (\"outer\".smownerid = \"inner\".id)\n>> -> Bitmap Heap Scan on vtiger_crmentity (cost=3646.54..30394.02\n>> rows=640439 width=258) (actual time=114.763..986.504 rows=638318 loops=1)\n>> Recheck Cond: (deleted = 0)\n>> -> Bitmap Index Scan on vtiger_crmentity_deleted_idx\n>> (cost=0.00..3646.54 rows=640439 width=0) (actual time=107.851..107.851\n>> rows=638318 loops=1)\n>> Index Cond: (deleted = 0)\n>> -> Hash (cost=18.11..18.11 rows=211 width=1345) (actual\n>> time=0.823..0.823 rows=211 loops=1)\n>> -> Seq Scan on vtiger_users (cost=0.00..18.11 rows=211\n>> width=1345) (actual time=0.005..0.496 rows=211 loops=1)\n>> Total runtime: 3869.022 ms\n>> \n>> Sequential index is occuring on vtiger_users table while it has primary key\n>> index on id.\n\n> Cause it's only 211 rows and only takes 0.5 milliseconds to scan?\n\nOr, even more to the point, because a nestloop-with-inner-index-scan\nplan would require 638318 repetitions of the inner index scan. There's\nno way that is going to be a better plan than this one. Given the\nrowcounts --- in particular, the fact that each vtiger_users row seems\nto have a lot of join partners --- I don't think there *is* any better\nplan than this one. A nestloop with vtiger_crmentity on the inside\nis the only alternative worth considering, and it doesn't look like\nthat could be any cheaper.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Feb 2010 10:32:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why primary key index are not using in joining? "
}
] |
[
{
"msg_contents": "lionel duboeuf wrote:\n> Kevin Grittner a �crit :\n \n>> I just reread your original email, and I'm not sure I understand\n>> what you meant regarding VACUUM ANALYZE. If you run that right\n>> beforehand, do you still get the slow plan for user 10?\n \n> I confirm by executing manual \"VACUUM ANALYZE\" that the problem is\n> solved. But what i don't understand is that i would expect\n> autovacuum to do the job.\n \nI think this is the crux of the issue. Boosting the\ndefault_statistics_target or the statistics target for specific\ncolumns might help, reducing autovacuum_analyze_scale_factor might\nhelp, but I can't help wondering whether you inserted a large number\nof rows for user 10 and then ran the query to select user 10 before\nautovacuum had time to complete. Does that seem possible?\n\n-Kevin\n",
"msg_date": "Mon, 15 Feb 2010 08:19:19 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t\t\n\twhen changing where clause value"
},
{
"msg_contents": "Here is my log analysis:\nDue to a database recovery task it appears that:\n\n I stopped postgresql\n I started postgresql (and as default autovacuum daemon)\n I restored the databases (need to restore 4 databases)\n It seems that after database 1 have been restored, autovacumm \nstarted on it and has been stopped while restoring database 2.\n \n @see log as attached\n\n\n\nHere is backup/restore commands i use:\n\n/usr/bin/pg_dump -i -h localhost -U postgres -F c -b -f ... x 4 times\n\nsudo -u postgres pg_restore -d 'database1' x 4 times\n\n\nDoes this kind of error can lead to the query problem ?\nIf so, what would you suggest me to do to avoid the problem again ?.\n\n\nThanks\nLionel\n\n \n\nKevin Grittner a �crit :\n> lionel duboeuf wrote:\n> \n>> Kevin Grittner a �crit :\n>> \n> \n> \n>>> I just reread your original email, and I'm not sure I understand\n>>> what you meant regarding VACUUM ANALYZE. If you run that right\n>>> beforehand, do you still get the slow plan for user 10?\n>>> \n> \n> \n>> I confirm by executing manual \"VACUUM ANALYZE\" that the problem is\n>> solved. But what i don't understand is that i would expect\n>> autovacuum to do the job.\n>> \n> \n> I think this is the crux of the issue. Boosting the\n> default_statistics_target or the statistics target for specific\n> columns might help, reducing autovacuum_analyze_scale_factor might\n> help, but I can't help wondering whether you inserted a large number\n> of rows for user 10 and then ran the query to select user 10 before\n> autovacuum had time to complete. Does that seem possible?\n>\n> -Kevin\n>\n> \n\n\n\n",
"msg_date": "Tue, 16 Feb 2010 11:26:18 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan when\n\tchanging where clause value"
}
] |
[
{
"msg_contents": "Reydan Cankur wrote:\n \n> I want to spread out the workload on all cores. But also I want to\n> set the core number; for example first I want to spread out the\n> workload to 32 cores then 64 cores and see the scalability.\n \nPostgreSQL itself won't use more cores than you have active\nconnections, so one way to deal with this might be to use one of the\navailable connection poolers, like pgpool or pgbouncer. That should\nallow you to control the number of cores used pretty well, although\nit won't support targeting particular connections to particular\ncores (although this technique could be combined with other\nsuggestions). The OS might use another core or two to help with\nnetwork or disk I/O, but that should be minimal.\n \n-Kevin\n",
"msg_date": "Mon, 15 Feb 2010 08:33:11 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL on SMP Architectures"
}
] |
[
{
"msg_contents": "Ben Chobot wrote:\n \n> Here is the plan on 8.4.2:\n \n> Here is the very much less compact plan for the same query on\n> 8.1.19:\n \nCould you show the query, along with table definitions (including\nindexes)?\n \n-Kevin\n",
"msg_date": "Mon, 15 Feb 2010 09:59:27 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.1 -> 8.4 regression"
},
{
"msg_contents": "On Feb 15, 2010, at 7:59 AM, Kevin Grittner wrote:\n\n> Could you show the query, along with table definitions (including\n> indexes)?\n\nOh, yeah, I suppose that would help. :)\n\nhttp://wood.silentmedia.com/bench/query_and_definitions\n\n(I'd paste them here for posterity but I speculate the reason my first few attempts to ask this question never went through were because of the size of the email.)",
"msg_date": "Mon, 15 Feb 2010 08:16:04 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 -> 8.4 regression"
},
{
"msg_contents": "Ben Chobot <[email protected]> writes:\n> On Feb 15, 2010, at 7:59 AM, Kevin Grittner wrote:\n>> Could you show the query, along with table definitions (including\n>> indexes)?\n\n> Oh, yeah, I suppose that would help. :)\n\n> http://wood.silentmedia.com/bench/query_and_definitions\n\nIt looks like the problem is that the EXISTS sub-query is getting\nconverted into a join; which is usually a good thing but in this case it\ninterferes with letting the users table not be scanned completely.\nThe long-term fix for that is to support nestloop inner indexscans where\nthe index key comes from more than one join level up, but making that\nhappen isn't too easy.\n\nIn the meantime, I think you could defeat the \"optimization\" by\ninserting LIMIT 1 in the EXISTS sub-query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 15 Feb 2010 12:26:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 -> 8.4 regression "
},
{
"msg_contents": "Awesome, that did the trick. Thanks Tom! So I understand better, why is my case not the normal, better case?\n\n(I assume the long-term fix is post-9.0, right?)\n\nOn Feb 15, 2010, at 9:26 AM, Tom Lane wrote:\n\n> Ben Chobot <[email protected]> writes:\n>> On Feb 15, 2010, at 7:59 AM, Kevin Grittner wrote:\n>>> Could you show the query, along with table definitions (including\n>>> indexes)?\n> \n>> Oh, yeah, I suppose that would help. :)\n> \n>> http://wood.silentmedia.com/bench/query_and_definitions\n> \n> It looks like the problem is that the EXISTS sub-query is getting\n> converted into a join; which is usually a good thing but in this case it\n> interferes with letting the users table not be scanned completely.\n> The long-term fix for that is to support nestloop inner indexscans where\n> the index key comes from more than one join level up, but making that\n> happen isn't too easy.\n> \n> In the meantime, I think you could defeat the \"optimization\" by\n> inserting LIMIT 1 in the EXISTS sub-query.\n> \n> \t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 15 Feb 2010 09:35:13 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 -> 8.4 regression "
},
{
"msg_contents": "Ben Chobot <[email protected]> writes:\n> Awesome, that did the trick. Thanks Tom! So I understand better, why is my case not the normal, better case?\n\nWell, the short answer is that the 8.4 changes here are in the nature of\ntwo steps forward and one step back. The long-term goal is to increase\nthe planner's ability to choose among different join orders; but we're\ngetting rid of one restriction at a time, and sometimes the interactions\nof those restrictions produce unwanted results like the older code being\nable to find a better plan than the new code can.\n\n> (I assume the long-term fix is post-9.0, right?)\n\nYeah, fraid so. I've been mostly buried in non-planner work in the 9.0\ncycle, but hope to get back to this and other problems in the next\ncycle.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 17 Feb 2010 00:34:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 -> 8.4 regression "
}
] |
[
{
"msg_contents": "Good day,\n\nI have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy\nwriting and updating on a partitioned table. Sometimes within one minute,\nthere are tens of file with size=1,048,576kb (such as\nfilenode.1,filenode.2,...filenode.43) created in the database subdirectory\nwithin PGDATA/base. \n\nThis caused the disk space quickly used up. Is this expected?\n\nThanks for any information\n\n\n\nBest Regards\n\nRose Zhou \n\n",
"msg_date": "Mon, 15 Feb 2010 14:59:51 -0500",
"msg_from": "\"Rose Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "disk space usage unexpected"
},
{
"msg_contents": "On Feb 15, 2010, at 11:59 AM, Rose Zhou wrote:\n\n> Good day,\n> \n> I have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy\n> writing and updating on a partitioned table. Sometimes within one minute,\n> there are tens of file with size=1,048,576kb (such as\n> filenode.1,filenode.2,...filenode.43) created in the database subdirectory\n> within PGDATA/base. \n> \n> This caused the disk space quickly used up. Is this expected?\n\nIt's expected if you're doing lots of inserts, and/or lots of updates or deletes without an appropriate amount of vacuuming.\n\n\n",
"msg_date": "Mon, 15 Feb 2010 13:15:48 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk space usage unexpected"
},
{
"msg_contents": "Thanks Ben:\n \nI will adjust the auto vacuum parameters. It is on now, maybe not frequently\nenough.\nHow to get the disk space back to OS? Will a Vacuum Full Verbose get the\ndisk space back to OS?\n \n \n\n\n\nBest Regards\n\nRose Zhou \n\n \n\n\n _____ \n\nFrom: Ben Chobot [mailto:[email protected]] \nSent: 15 February 2010 16:16\nTo: Rose Zhou\nCc: [email protected]\nSubject: Re: [PERFORM] disk space usage unexpected\n\n\n\nOn Feb 15, 2010, at 11:59 AM, Rose Zhou wrote:\n\n> Good day,\n>\n> I have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy\n> writing and updating on a partitioned table. Sometimes within one minute,\n> there are tens of file with size=1,048,576kb (such as\n> filenode.1,filenode.2,...filenode.43) created in the database subdirectory\n> within PGDATA/base.\n>\n> This caused the disk space quickly used up. Is this expected?\n\nIt's expected if you're doing lots of inserts, and/or lots of updates or\ndeletes without an appropriate amount of vacuuming.\n\n\n\n\n\nRe: [PERFORM] disk space usage unexpected\n\n\n\nThanks Ben:\n \nI will adjust the auto vacuum parameters. It is on now, \nmaybe not frequently enough.\nHow to get the disk space back to OS? Will a Vacuum \nFull Verbose get the disk space back to OS?\n \n \nBest RegardsRose Zhou \n \n\n\n\nFrom: Ben Chobot \n [mailto:[email protected]] Sent: 15 February 2010 \n 16:16To: Rose ZhouCc: \n [email protected]: Re: [PERFORM] disk space \n usage unexpected\n\nOn Feb 15, 2010, at 11:59 AM, Rose Zhou wrote:> \n Good day,>> I have a PostgreSQL 8.4 database installed on WinXP \n x64 with very heavy> writing and updating on a partitioned table. \n Sometimes within one minute,> there are tens of file with \n size=1,048,576kb (such as> filenode.1,filenode.2,...filenode.43) \n created in the database subdirectory> within \n PGDATA/base.>> This caused the disk space quickly used up. Is \n this expected?It's expected if you're doing lots of inserts, and/or \n lots of updates or deletes without an appropriate amount of \n vacuuming.",
"msg_date": "Mon, 15 Feb 2010 16:25:00 -0500",
"msg_from": "\"Rose Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: disk space usage unexpected"
},
{
"msg_contents": "On Mon, 2010-02-15 at 14:59 -0500, Rose Zhou wrote:\n> Good day,\n> \n> I have a PostgreSQL 8.4 database installed on WinXP x64 with very heavy\n> writing and updating on a partitioned table. Sometimes within one minute,\n> there are tens of file with size=1,048,576kb (such as\n> filenode.1,filenode.2,...filenode.43) created in the database subdirectory\n> within PGDATA/base. \n> \n> This caused the disk space quickly used up. Is this expected?\n\nYes. Especially if autovacuum is not keeping up with the number of\nupdates.\n\nJoshua D. Drake\n\n\n> \n> Thanks for any information\n> \n> \n> \n> Best Regards\n> \n> Rose Zhou \n> \n> \n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 503.667.4564\nConsulting, Training, Support, Custom Development, Engineering\nRespect is earned, not gained through arbitrary and repetitive use or Mr. or Sir.\n\n",
"msg_date": "Mon, 15 Feb 2010 17:24:50 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk space usage unexpected"
},
{
"msg_contents": "On Feb 15, 2010, at 1:25 PM, Rose Zhou wrote:\n\n> Thanks Ben:\n> \n> I will adjust the auto vacuum parameters. It is on now, maybe not frequently enough.\n> How to get the disk space back to OS? Will a Vacuum Full Verbose get the disk space back to OS?\n> \n> \n\nYes, but it might bloat your indexes. Do you actually need to get your disk space back? If you did, would the database just eat it up again after more activity?\nOn Feb 15, 2010, at 1:25 PM, Rose Zhou wrote:\n\nThanks Ben:\n \nI will adjust the auto vacuum parameters. It is on now, \nmaybe not frequently enough.\nHow to get the disk space back to OS? Will a Vacuum \nFull Verbose get the disk space back to OS?\n \n Yes, but it might bloat your indexes. Do you actually need to get your disk space back? If you did, would the database just eat it up again after more activity?",
"msg_date": "Wed, 17 Feb 2010 10:34:17 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: disk space usage unexpected"
},
{
"msg_contents": "Vacuum Full Verbose did not get the disk space back to OS, it did bloat my\nindexes, also got out of memory error. I dumped the suspicious table,\ndeleted it, re-created it, then restored the data and re-indexed the data,\nso I got the disk space back.\n \nTo avoid the database eating up the disk space again, I have adjusted the\nautovacuum parameters, to make it keep up with the updates.\n \nNot sure if this is the right way to solve this kind of problem.\n \n\n\n\nBest Regards\n\nRose Zhou \n\n \n\n\n _____ \n\nFrom: Ben Chobot [mailto:[email protected]] \nSent: 17 February 2010 13:34\nTo: Rose Zhou\nCc: [email protected]\nSubject: Re: [PERFORM] disk space usage unexpected\n\n\nOn Feb 15, 2010, at 1:25 PM, Rose Zhou wrote:\n\n\nThanks Ben:\n \nI will adjust the auto vacuum parameters. It is on now, maybe not frequently\nenough.\nHow to get the disk space back to OS? Will a Vacuum Full Verbose get the\ndisk space back to OS?\n \n \n\n\nYes, but it might bloat your indexes. Do you actually need to get your disk\nspace back? If you did, would the database just eat it up again after more\nactivity?\n\n\n\n\n\n\nVacuum Full Verbose did not get the disk space back to OS, it \ndid bloat my indexes, also got out of memory error. I dumped the suspicious \ntable, deleted it, re-created it, then restored the data and re-indexed the \ndata, so I got the disk space back.\n \nTo avoid the database eating up the disk space again, I have \nadjusted the autovacuum parameters, to make it keep up with the \nupdates.\n \nNot sure if this is the right way to solve this kind of \nproblem.\n \nBest RegardsRose Zhou \n \n\n\n\nFrom: Ben Chobot \n [mailto:[email protected]] Sent: 17 February 2010 \n 13:34To: Rose ZhouCc: \n [email protected]: Re: [PERFORM] disk space \n usage unexpected\n\n\nOn Feb 15, 2010, at 1:25 PM, Rose Zhou wrote:\n\n\nThanks Ben:\n \nI will adjust the auto vacuum parameters. It is on \n now, maybe not frequently enough.\nHow to get the disk space back to OS? Will a Vacuum \n Full Verbose get the disk space back to OS?\n \n \nYes, but it might bloat your indexes. Do you actually need to get your \n disk space back? If you did, would the database just eat it up again after \n more activity?",
"msg_date": "Wed, 17 Feb 2010 13:43:46 -0500",
"msg_from": "\"Rose Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: disk space usage unexpected"
}
] |
[
{
"msg_contents": "\nGood day!\n\nWe bought a new WinXP x64 Professional, it has 12GB memory. \n\nI installed postgresql-8.4.1-1-windows version on this PC, also installed\nanother .Net application which reads in data from a TCP port and\ninsert/update the database, the data volume is large, with heavy writing and\nupdating on a partitioned table.\n\nI configured the PostgreSQL as below:\n\nShared_buffers=1024MB\neffective_cache_size=5120MB\nwork_mem=32MB\nmaintenance_work_men=200MB\n\nBut I got the Auto Vacuum out-of-memory error. The detailed configuration is\nas follows, can anyone suggest what is the best configuration from the\nperformance perspective?\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The \"=\" is optional.) Whitespace may be used. Comments are introduced\nwith # \"#\" anywhere on a line. The complete list of parameter names and\nallowed # values can be found in the PostgreSQL documentation.\n#\n# The commented-out settings shown in this file represent the default\nvalues.\n# Re-commenting a setting is NOT sufficient to revert it to the default\nvalue; # you need to reload the server.\n#\n# This file is read on server startup and when the server receives a SIGHUP\n# signal. If you edit the file on a running system, you have to SIGHUP the\n# server for the changes to take effect, or use \"pg_ctl reload\". Some #\nparameters, which are marked below, require a server shutdown and restart to\n# take effect.\n#\n# Any parameter can also be given as a command-line option to the server,\ne.g., # \"postgres -c log_connections=on\". Some parameters can be changed at\nrun time # with the \"SET\" SQL command.\n#\n# Memory units: kB = kilobytes Time units: ms = milliseconds\n# MB = megabytes s = seconds\n# GB = gigabytes min = minutes\n# h = hours\n# d = days\n\n\n#---------------------------------------------------------------------------\n---\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n---\n\n# The default values of these variables are driven from the -D command-line\n# option or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n\t\t\t\t\t# (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n\t\t\t\t\t# (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf'\t# ident configuration file\n\t\t\t\t\t# (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = '(none)'\t\t# write an extra PID file\n\t\t\t\t\t# (change requires restart)\n\n\n#---------------------------------------------------------------------------\n---\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n---\n\n# - Connection Settings -\n\nlisten_addresses = '*'\t\t# what IP address(es) to listen on;\n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\n\t\t\t\t\t# (change requires restart)\nport = 5432\t\t\t\t# (change requires restart)\nmax_connections = 100\t\t\t# (change requires restart)\n# Note: Increasing max_connections costs ~400 bytes of shared memory per #\nconnection slot, plus lock space (see max_locks_per_transaction).\n#superuser_reserved_connections = 3\t# (change requires restart)\n#unix_socket_directory = ''\t\t# (change 
requires restart)\n#unix_socket_group = ''\t\t\t# (change requires restart)\n#unix_socket_permissions = 0777\t\t# begin with 0 to use octal notation\n\t\t\t\t\t# (change requires restart)\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\t\t\t\t\t# (change requires restart)\n\n# - Security and Authentication -\n\n#authentication_timeout = 1min\t\t# 1s-600s\n#ssl = off\t\t\t\t# (change requires restart)\n#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH'\t# allowed SSL\nciphers\n\t\t\t\t\t# (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos and GSSAPI\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\t\t# (Kerberos only)\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see \"man 7 tcp\" for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n---\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n---\n\n# - Memory -\n\n\nshared_buffers = 1024MB\t\t\t# min 128kB\n\t\t\t\t\t# (change requires restart)\n#temp_buffers = 8MB\t\t\t# min 800kB\n#max_prepared_transactions = 0\t\t# zero disables the feature\n\t\t\t\t\t# (change requires restart)\n# Note: Increasing max_prepared_transactions costs ~600 bytes of shared\nmemory # per transaction slot, plus lock space (see\nmax_locks_per_transaction).\n# It is not advisable to set max_prepared_transactions nonzero unless you #\nactively intend to use prepared transactions.\n\nwork_mem = 32MB\t\t\t\t# min 64kB\nmaintenance_work_mem=200MB\n#max_stack_depth = 2MB\t\t\t# min 100kB\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n\t\t\t\t\t# (change requires restart)\n#shared_preload_libraries = ''\t\t# (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0ms\t\t# 0-100 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 1-10000 credits\n\n# - Background Writer -\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms between rounds\n#bgwriter_lru_maxpages = 100\t\t# 0-1000 max buffers written/round\n#bgwriter_lru_multiplier = 2.0\t\t# 0-10.0 multipler on buffers\nscanned/round\n\n# - Asynchronous Behavior -\n\n#effective_io_concurrency = 1\t\t# 1-1000. 
0 disables prefetching\n\n\n#---------------------------------------------------------------------------\n---\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n---\n\n# - Settings -\n\n#fsync = on\t\t\t\t# turns forced synchronization on or\noff\n#synchronous_commit = on\t\t# immediate fsync at commit\n#wal_sync_method = fsync\t\t# the default is the first option \n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from partial page writes\n\nwal_buffers = 256\t\t\t# min 32kB\n\t\t\t\t\t# (change requires restart)\n#wal_writer_delay = 200ms\t\t# 1-10000 milliseconds\n\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments =30\n#checkpoint_timeout = 5min\t\t# range 30s-1h\n#checkpoint_completion_target = 0.5\t# checkpoint target duration, 0.0 -\n1.0\n#checkpoint_warning = 30s\t\t# 0 disables\n\n# - Archiving -\n\n#archive_mode = off\t\t# allows archiving to be done\n\t\t\t\t# (change requires restart)\n#archive_command = ''\t\t# command to use to archive a logfile\nsegment\n#archive_timeout = 0\t\t# force a logfile segment switch after this\n\t\t\t\t# number of seconds; 0 disables\n\n\n#---------------------------------------------------------------------------\n---\n# QUERY TUNING\n#---------------------------------------------------------------------------\n---\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\nrandom_page_cost = 4.0\t\t\t# same scale as above\n#cpu_tuple_cost = 0.01\t\t\t# same scale as above\n#cpu_index_tuple_cost = 0.005\t\t# same scale as above\n#cpu_operator_cost = 0.0025\t\t# same scale as above\neffective_cache_size =5120MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 100\t# range 1-10000\n#constraint_exclusion = partition\t# on, off, or partition\n#cursor_tuple_fraction = 0.1\t\t# range 0.0-1.0\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables collapsing of explicit \n\t\t\t\t\t# JOIN clauses\n\n\n#---------------------------------------------------------------------------\n---\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n---\n\n# - Where to Log -\n\nlog_destination = 'stderr'\t\t# Valid values are combinations of\n\t\t\t\t\t# stderr, csvlog, syslog and\neventlog,\n\t\t\t\t\t# depending on platform. csvlog\n\t\t\t\t\t# requires logging_collector to be\non.\n\n# This is used when logging to stderr:\nlogging_collector = on\t\t# Enable capturing of stderr and csvlog\n\t\t\t\t\t# into log files. 
Required to be on\nfor\n\t\t\t\t\t# csvlogs.\n\t\t\t\t\t# (change requires restart)\n\n# These are only used if logging_collector is on:\n#log_directory = 'pg_log'\t\t# directory where log files are\nwritten,\n\t\t\t\t\t# can be absolute or relative to\nPGDATA\nlog_filename = 'postgresql_%H.log'\t# log file name pattern,\n\t\t\t\t\t# can include strftime() escapes\nlog_truncate_on_rotation = on\t\t# If on, an existing log file of the\n\t\t\t\t\t# same name as the new log file will\nbe\n\t\t\t\t\t# truncated rather than appended to.\n\t\t\t\t\t# But such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on\nrestarts\n\t\t\t\t\t# or size-driven rotation. Default\nis\n\t\t\t\t\t# off, meaning append to existing\nfiles\n\t\t\t\t\t# in all cases.\nlog_rotation_age = 60\t\t\t# Automatic rotation of logfiles\nwill\n\t\t\t\t\t# happen after that time. 0\ndisables.\nlog_rotation_size = 100000\t\t# Automatic rotation of logfiles\nwill \n\t\t\t\t\t# happen after that much log output.\n\t\t\t\t\t# 0 disables.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n#silent_mode = off\t\t\t# Run server silently.\n\t\t\t\t\t# DO NOT USE without syslog or\n\t\t\t\t\t# logging_collector\n\t\t\t\t\t# (change requires restart)\n\n\n# - When to Log -\n\nclient_min_messages = error\t\t# values in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\nlog_min_messages = error\t\t# values in order of decreasing\ndetail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or verbose\nmessages\n\n#log_min_error_statement = error\t# values in order of decreasing\ndetail:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic (effectively off)\n\n#log_min_duration_statement = -1\t# -1 is disabled, 0 logs all\nstatements\n\t\t\t\t\t# and their durations, > 0 logs only\n\t\t\t\t\t# statements running at least this\nnumber\n\t\t\t\t\t# of milliseconds\n\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = on\n#log_checkpoints = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_hostname = off\nlog_line_prefix = '%t'\t\t\t# special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = process ID\n\t\t\t\t\t# %t = timestamp without\nmilliseconds\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session ID\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %v = virtual transaction ID\n\t\t\t\t\t# %x = transaction ID (0 if none)\n\t\t\t\t\t# %q = stop here in non-session\n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_lock_waits = off\t\t\t# log lock waits >= deadlock_timeout\n#log_statement = 'none'\t\t\t# none, ddl, mod, all\n#log_temp_files = -1\t\t\t# log temporary files equal or\nlarger\n\t\t\t\t\t# than the specified size in\nkilobytes;\n\t\t\t\t\t# -1 disables, 0 logs all temp files\n#log_timezone = unknown\t\t\t# actually, defaults to TZ\nenvironment\n\t\t\t\t\t# setting\n\n\n#---------------------------------------------------------------------------\n---\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n---\n\n# - Query/Index Statistics Collector -\n\n#track_activities = on\ntrack_counts = on\n#track_functions = none\t\t\t# none, pl, all\n#track_activity_query_size = 1024\n#update_process_title = on\n#stats_temp_directory = 'pg_stat_tmp'\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#---------------------------------------------------------------------------\n---\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n---\n\n\nautovacuum = true\t\t\t# Enable autovacuum subprocess?\n'on' \n\t\t\t\t\t# requires track_counts to also be\non.\n#log_autovacuum_min_duration = -1\t# -1 disables, 0 logs all actions\nand\n\t\t\t\t\t# their durations, > 0 logs only\n\t\t\t\t\t# actions running at least this\nnumber\n\t\t\t\t\t# of milliseconds.\n#autovacuum_max_workers = 3\t\t# max number of autovacuum\nsubprocesses\n#autovacuum_naptime = 1min\t\t# time between autovacuum runs\n# NOTE: This parameter is been added by EnterpriseDB's Tuning Wiard on\n2010/01/27 16:09:50\n#autovacuum_naptime = 60\t\t# time between autovacuum runs\n\nautovacuum_vacuum_threshold = 1000\t# min number of row updates before\n\t\t\t\t\t# vacuum\nautovacuum_analyze_threshold = 250\t# min number of row updates before \n\t\t\t\t\t# analyze\n\nautovacuum_vacuum_scale_factor = 0.2\t# fraction of table size before\nvacuum\n\nautovacuum_analyze_scale_factor = 0.1\t# fraction of table size before\nanalyze\n#autovacuum_freeze_max_age = 200000000\t# maximum XID age before forced\nvacuum\n\t\t\t\t\t# (change requires restart)\n#autovacuum_vacuum_cost_delay = 20ms\t# default vacuum cost delay for\n\t\t\t\t\t# autovacuum, in milliseconds;\n\t\t\t\t\t# -1 means use vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n\t\t\t\t\t# autovacuum, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n---\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n---\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses the\ndefault\n#temp_tablespaces = ''\t\t\t# a list of tablespace names, ''\nuses\n\t\t\t\t\t# only default tablespace\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#session_replication_role = 'origin'\n#statement_timeout = 0\t\t\t# in milliseconds, 0 is disabled\n#vacuum_freeze_min_age = 50000000\n#vacuum_freeze_table_age = 150000000\n#xmlbinary = 'base64'\n#xmloption = 'content'\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, dmy'\n#intervalstyle = 'postgres'\n#timezone = unknown\t\t\t# actually, defaults to TZ\nenvironment\n\t\t\t\t\t# setting\n#timezone_abbreviations = 'Default' # Select the set of 
available time\nzone\n\t\t\t\t\t# abbreviations. Currently, there\nare\n\t\t\t\t\t# Default\n\t\t\t\t\t# Australia\n\t\t\t\t\t# India\n\t\t\t\t\t# You can create your own file in\n\t\t\t\t\t# share/timezonesets/.\n#extra_float_digits = 0\t\t\t# min -15, max 2\n#client_encoding = sql_ascii\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb, but they can be changed.\nlc_messages = 'English_United Kingdom.1252'\t\t\t# locale for\nsystem error message\n\t\t\t\t\t# strings\nlc_monetary = 'English_United Kingdom.1252'\t\t\t# locale for\nmonetary formatting\nlc_numeric = 'English_United Kingdom.1252'\t\t\t# locale for\nnumber formatting\nlc_time = 'English_United Kingdom.1252'\t\t\t\t# locale for\ntime formatting\n\n# default configuration for text search\ndefault_text_search_config = 'pg_catalog.english'\n\n# - Other Defaults -\n\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n---\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n---\n\n#deadlock_timeout = 1s\nmax_locks_per_transaction = 150\t\t# min 10\n\t\t\t\t\t# (change requires restart)\n# Note: Each lock table slot uses ~270 bytes of shared memory, and there\nare # max_locks_per_transaction * (max_connections +\nmax_prepared_transactions) # lock table slots.\n\n\n#---------------------------------------------------------------------------\n---\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n---\n\n# - Previous PostgreSQL Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n#standard_conforming_strings = off\n#synchronize_seqscans = on\n\n# - Other Platforms and Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n---\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n---\n\n#custom_variable_classes = ''\t\t# list of custom variable class\nnames\n\n\n\nBest Regards\n\nRose Zhou \n\n",
"msg_date": "Mon, 15 Feb 2010 15:20:44 -0500",
"msg_from": "\"Rose Zhou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto Vacuum out of memory"
},
{
"msg_contents": "On Mon, Feb 15, 2010 at 3:20 PM, Rose Zhou <[email protected]> wrote:\n> We bought a new WinXP x64 Professional, it has 12GB memory.\n>\n> I installed postgresql-8.4.1-1-windows version on this PC, also installed\n> another .Net application which reads in data from a TCP port and\n> insert/update the database, the data volume is large, with heavy writing and\n> updating on a partitioned table.\n>\n> I configured the PostgreSQL as below:\n>\n> Shared_buffers=1024MB\n> effective_cache_size=5120MB\n> work_mem=32MB\n> maintenance_work_men=200MB\n>\n> But I got the Auto Vacuum out-of-memory error. The detailed configuration is\n> as follows, can anyone suggest what is the best configuration from the\n> performance perspective?\n\nPlease see http://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\nYou haven't provided very much detail here - for example, the error\nmessage that you got is conspicuously absent.\n\n...Robert\n",
"msg_date": "Sun, 21 Feb 2010 13:44:26 -0500",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto Vacuum out of memory"
}
] |
[
{
"msg_contents": "I am getting seq_scan on vtiger_account. Index is not using.\nCould anyone please tell me what the reason is?\n\n\n explain analyze\nselect *\nfrom vtiger_account\nLEFT JOIN vtiger_account vtiger_account2\n ON vtiger_account.parentid = vtiger_account2.accountid\n\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=12506.65..38690.74 rows=231572 width=264) (actual\ntime=776.910..4407.233 rows=231572 loops=1)\n Hash Cond: (\"outer\".parentid = \"inner\".accountid)\n -> Seq Scan on vtiger_account (cost=0.00..7404.72 rows=231572\nwidth=132) (actual time=0.029..349.195 rows=231572 loops=1)\n -> Hash (cost=7404.72..7404.72 rows=231572 width=132) (actual\ntime=776.267..776.267 rows=231572 loops=1)\n -> Seq Scan on vtiger_account vtiger_account2 (cost=0.00..7404.72\nrows=231572 width=132) (actual time=0.002..344.879 rows=231572 loops=1)\n Total runtime: 4640.868 ms\n(6 rows)\n\nvtigercrm504=# set enable_Seqscan = on;\nSET\n\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..1383629.28 rows=231572 width=264) (actual\ntime=0.166..1924.417 rows=231572 loops=1)\n Merge Cond: (\"outer\".parentid = \"inner\".accountid)\n -> Index Scan using vtiger_account_parentid_idx on vtiger_account\n(cost=0.00..642475.34 rows=231572 width=132) (actual time=0.083..483.985\nrows=231572 loops=1)\n -> Index Scan using vtiger_account_pkey on vtiger_account\nvtiger_account2 (cost=0.00..737836.61 rows=231572 width=132) (actual\ntime=0.074..532.463 rows=300971 loops=1)\n Total runtime: 2140.326 ms\n(5 rows)\n\n\n\n\\d vtiger_account\n Table \"public.vtiger_account\"\n Column | Type | Modifiers\n---------------+------------------------+--------------------------------\n accountid | integer | not null default 0\n accountname | character varying(200) | not null\n parentid | integer | default 0\n account_type | character varying(200) |\n industry | character varying(200) |\n annualrevenue | integer | default 0\n rating | character varying(200) |\n ownership | character varying(50) |\n siccode | character varying(50) |\n tickersymbol | character varying(30) |\n phone | character varying(30) |\n otherphone | character varying(30) |\n email1 | character varying(100) |\n email2 | character varying(100) |\n website | character varying(100) |\n fax | character varying(30) |\n employees | integer | default 0\n emailoptout | character varying(3) | default '0'::character varying\n notify_owner | character varying(3) | default '0'::character varying\nIndexes:\n \"vtiger_account_pkey\" PRIMARY KEY, btree (accountid)\n \"account_account_type_idx\" btree (account_type)\n \"vtiger_account_parentid_idx\" btree (parentid)\n\nI am getting seq_scan on vtiger_account. 
Index is not using.\nCould anyone please tell me what the reason is?\n \n \n explain analyze select * from vtiger_account LEFT JOIN vtiger_account vtiger_account2 \n ON vtiger_account.parentid = vtiger_account2.accountid \n QUERY PLAN -----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=12506.65..38690.74 rows=231572 width=264) (actual time=776.910..4407.233 rows=231572 loops=1) Hash Cond: (\"outer\".parentid = \"inner\".accountid) -> Seq Scan on vtiger_account (cost=0.00..7404.72 rows=231572 width=132) (actual time=0.029..349.195 rows=231572 loops=1)\n -> Hash (cost=7404.72..7404.72 rows=231572 width=132) (actual time=776.267..776.267 rows=231572 loops=1) -> Seq Scan on vtiger_account vtiger_account2 (cost=0.00..7404.72 rows=231572 width=132) (actual time=0.002..344.879 rows=231572 loops=1)\n Total runtime: 4640.868 ms(6 rows)\nvtigercrm504=# set enable_Seqscan = on; SET\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Merge Left Join (cost=0.00..1383629.28 rows=231572 width=264) (actual time=0.166..1924.417 rows=231572 loops=1)\n Merge Cond: (\"outer\".parentid = \"inner\".accountid) -> Index Scan using vtiger_account_parentid_idx on vtiger_account (cost=0.00..642475.34 rows=231572 width=132) (actual time=0.083..483.985 rows=231572 loops=1)\n -> Index Scan using vtiger_account_pkey on vtiger_account vtiger_account2 (cost=0.00..737836.61 rows=231572 width=132) (actual time=0.074..532.463 rows=300971 loops=1) Total runtime: 2140.326 ms(5 rows)\n \n\\d vtiger_account Table \"public.vtiger_account\" Column | Type | Modifiers ---------------+------------------------+--------------------------------\n accountid | integer | not null default 0 accountname | character varying(200) | not null parentid | integer | default 0 account_type | character varying(200) | industry | character varying(200) | \n annualrevenue | integer | default 0 rating | character varying(200) | ownership | character varying(50) | siccode | character varying(50) | tickersymbol | character varying(30) | \n phone | character varying(30) | otherphone | character varying(30) | email1 | character varying(100) | email2 | character varying(100) | website | character varying(100) | \n fax | character varying(30) | employees | integer | default 0 emailoptout | character varying(3) | default '0'::character varying notify_owner | character varying(3) | default '0'::character varying\nIndexes: \"vtiger_account_pkey\" PRIMARY KEY, btree (accountid) \"account_account_type_idx\" btree (account_type) \"vtiger_account_parentid_idx\" btree (parentid)",
"msg_date": "Tue, 16 Feb 2010 17:43:03 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why index is not using here?"
},
{
"msg_contents": "A mistake on the previous mail.\n\n explain analyze\nselect *\nfrom vtiger_account\nLEFT JOIN vtiger_account vtiger_account2\n ON vtiger_account.parentid = vtiger_account2.accountid\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=12506.65..38690.74 rows=231572 width=264) (actual\ntime=776.910..4407.233 rows=231572 loops=1)\n Hash Cond: (\"outer\".parentid = \"inner\".accountid)\n -> Seq Scan on vtiger_account (cost=0.00..7404.72 rows=231572\nwidth=132) (actual time=0.029..349.195 rows=231572 loops=1)\n -> Hash (cost=7404.72..7404.72 rows=231572 width=132) (actual\ntime=776.267..776.267 rows=231572 loops=1)\n -> Seq Scan on vtiger_account vtiger_account2 (cost=0.00..7404.72\nrows=231572 width=132) (actual time=0.002..344.879 rows=231572 loops=1)\n Total runtime: 4640.868 ms\n(6 rows)\nvtigercrm504=# set enable_Seqscan = off;\nSET\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..1383629.28 rows=231572 width=264) (actual\ntime=0.166..1924.417 rows=231572 loops=1)\n Merge Cond: (\"outer\".parentid = \"inner\".accountid)\n -> Index Scan using vtiger_account_parentid_idx on vtiger_account\n(cost=0.00..642475.34 rows=231572 width=132) (actual time=0.083..483.985\nrows=231572 loops=1)\n -> Index Scan using vtiger_account_pkey on vtiger_account\nvtiger_account2 (cost=0.00..737836.61 rows=231572 width=132) (actual\ntime=0.074..532.463 rows=300971 loops=1)\n Total runtime: 2140.326 ms\n(5 rows)\n\n\\d vtiger_account\n Table \"public.vtiger_account\"\n Column | Type | Modifiers\n---------------+------------------------+--------------------------------\n accountid | integer | not null default 0\n accountname | character varying(200) | not null\n parentid | integer | default 0\n account_type | character varying(200) |\n industry | character varying(200) |\n annualrevenue | integer | default 0\n rating | character varying(200) |\n ownership | character varying(50) |\n siccode | character varying(50) |\n tickersymbol | character varying(30) |\n phone | character varying(30) |\n otherphone | character varying(30) |\n email1 | character varying(100) |\n email2 | character varying(100) |\n website | character varying(100) |\n fax | character varying(30) |\n employees | integer | default 0\n emailoptout | character varying(3) | default '0'::character varying\n notify_owner | character varying(3) | default '0'::character varying\nIndexes:\n \"vtiger_account_pkey\" PRIMARY KEY, btree (accountid)\n \"account_account_type_idx\" btree (account_type)\n \"vtiger_account_parentid_idx\" btree (parentid)\n\n\n\n\nOn Tue, Feb 16, 2010 at 5:43 PM, AI Rumman <[email protected]> wrote:\n\n> I am getting seq_scan on vtiger_account. 
Index is not using.\n> Could anyone please tell me what the reason is?\n>\n>\n> explain analyze\n> select *\n> from vtiger_account\n> LEFT JOIN vtiger_account vtiger_account2\n> ON vtiger_account.parentid = vtiger_account2.accountid\n>\n> QUERY\n> PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=12506.65..38690.74 rows=231572 width=264) (actual\n> time=776.910..4407.233 rows=231572 loops=1)\n> Hash Cond: (\"outer\".parentid = \"inner\".accountid)\n> -> Seq Scan on vtiger_account (cost=0.00..7404.72 rows=231572\n> width=132) (actual time=0.029..349.195 rows=231572 loops=1)\n> -> Hash (cost=7404.72..7404.72 rows=231572 width=132) (actual\n> time=776.267..776.267 rows=231572 loops=1)\n> -> Seq Scan on vtiger_account vtiger_account2\n> (cost=0.00..7404.72 rows=231572 width=132) (actual time=0.002..344.879\n> rows=231572 loops=1)\n> Total runtime: 4640.868 ms\n> (6 rows)\n>\n> vtigercrm504=# set enable_Seqscan = on;\n> SET\n>\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Left Join (cost=0.00..1383629.28 rows=231572 width=264) (actual\n> time=0.166..1924.417 rows=231572 loops=1)\n> Merge Cond: (\"outer\".parentid = \"inner\".accountid)\n> -> Index Scan using vtiger_account_parentid_idx on vtiger_account\n> (cost=0.00..642475.34 rows=231572 width=132) (actual time=0.083..483.985\n> rows=231572 loops=1)\n> -> Index Scan using vtiger_account_pkey on vtiger_account\n> vtiger_account2 (cost=0.00..737836.61 rows=231572 width=132) (actual\n> time=0.074..532.463 rows=300971 loops=1)\n> Total runtime: 2140.326 ms\n> (5 rows)\n>\n>\n>\n> \\d vtiger_account\n> Table \"public.vtiger_account\"\n> Column | Type | Modifiers\n> ---------------+------------------------+--------------------------------\n> accountid | integer | not null default 0\n> accountname | character varying(200) | not null\n> parentid | integer | default 0\n> account_type | character varying(200) |\n> industry | character varying(200) |\n> annualrevenue | integer | default 0\n> rating | character varying(200) |\n> ownership | character varying(50) |\n> siccode | character varying(50) |\n> tickersymbol | character varying(30) |\n> phone | character varying(30) |\n> otherphone | character varying(30) |\n> email1 | character varying(100) |\n> email2 | character varying(100) |\n> website | character varying(100) |\n> fax | character varying(30) |\n> employees | integer | default 0\n> emailoptout | character varying(3) | default '0'::character varying\n> notify_owner | character varying(3) | default '0'::character varying\n> Indexes:\n> \"vtiger_account_pkey\" PRIMARY KEY, btree (accountid)\n> \"account_account_type_idx\" btree (account_type)\n> \"vtiger_account_parentid_idx\" btree (parentid)\n>",
"msg_date": "Tue, 16 Feb 2010 17:44:22 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why index is not using here?"
},
{
"msg_contents": "AI Rumman <[email protected]> wrote:\n \n> Merge Left Join (cost=0.00..1383629.28 rows=231572 width=264)\n> (actual time=0.166..1924.417 rows=231572 loops=1)\n> Merge Cond: (\"outer\".parentid = \"inner\".accountid)\n> -> Index Scan using vtiger_account_parentid_idx on\n> vtiger_account (cost=0.00..642475.34 rows=231572 width=132) (actual\n> time=0.083..483.985 rows=231572 loops=1)\n> -> Index Scan using vtiger_account_pkey on vtiger_account\n> vtiger_account2 (cost=0.00..737836.61 rows=231572 width=132)\n> (actual time=0.074..532.463 rows=300971 loops=1)\n \nIt's doing over half a million random accesses in less than two\nseconds, which suggests rather strongly to me that your data is\ncached. Unless you have tuned the costing configuration values such\nthat the optimizer has reasonable information about this, you're not\ngoing to get the fastest plans for this environment. (The plan it\ngenerated would be a great plan if you were actually going to disk\nfor all of this.) Without knowing more about the machine on which\nyou're running this, it's hard to guess at optimal settings, but you\nalmost certainly need to adjust random_page_cost, seq_page_cost, and\neffective_cache_size; and possibly others.\n \nPlease post information about your OS, CPUs, RAM, and disk system.\n \nAs a complete SWAG, you could try setting these (instead of\ndisabling seqscan):\n \nset random_page_cost = 0.01;\nset seq_page_cost = 0.01;\nset effective_cache_size = '6GB';\n \n-Kevin\n",
"msg_date": "Tue, 16 Feb 2010 08:56:18 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why index is not using here?"
}
] |
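To make the advice above easier to try, here is a rough session-level sketch. The SET values are only the SWAG figures from the reply (assumptions to be tuned against the real machine once the OS/CPU/RAM/disk details are known), and the query is the self-join from the thread:

    -- Try the planner settings for the current session only and compare plans
    -- before putting anything into postgresql.conf.
    SET random_page_cost = 0.01;        -- assumed value from the thread's SWAG
    SET seq_page_cost = 0.01;           -- (this GUC exists from 8.2 onward)
    SET effective_cache_size = '6GB';   -- assumption: size this to the host's RAM
    EXPLAIN ANALYZE
    SELECT *
      FROM vtiger_account
      LEFT JOIN vtiger_account vtiger_account2
             ON vtiger_account.parentid = vtiger_account2.accountid;
    RESET random_page_cost;
    RESET seq_page_cost;
    RESET effective_cache_size;

If the merge join (or another index-based plan) now wins on estimated cost as well as runtime, the same values can be made permanent in postgresql.conf.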
[
{
"msg_contents": "lionel duboeuf wrote:\n \n> I stopped postgresql\n> I started postgresql (and as default autovacuum daemon)\n> I restored the databases (need to restore 4 databases)\n> It seems that after database 1 have been restored, autovacumm\n> started on it and has been stopped while restoring database 2.\n \n> Does this kind of error can lead to the query problem ?\n \nI believe it can. I assume you're talking about restoring something\nwhich was dumped by pg_dump or pg_dumpall? If so, you don't have\nstatistics right after the restore. Autovacuum will try to build\nthem for you, but that will take a while. There are also a couple\nother hidden performance issues after a restore:\n \n(1) None of the hint bits are set, so the first read to each tuple\nwill cause it to be written again with hint bits, so you will have\nmysterious bursts of write activity during read-only queries until\nall tuples have been read.\n \n(2) All tuples in the database will have the same, or nearly the\nsame, xmin (creation transaction number), so at some indeterminate\ntime in the future, vacuums will trigger for all of your tables at\nthe same time, probably in the middle of heavy activity.\n \nFor all of the above reasons, I use a special postgresql.conf,\noptimized for bulk loads, to restore from dumps (or to pipe from\npg_dump to psql). I turn off autovacuum during the restore, and then\ndo an explicit VACUUM FREEZE ANALYZE before I consider the restore\ncomplete. Then I restart with a \"normal\" postgresql.conf file. Some\nmight consider this extreme, but it works for me, an prevents the\nkind of problems which generated your post.\n \n-Kevin\n\n",
"msg_date": "Tue, 16 Feb 2010 06:45:58 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\n\twhen changing where clause value"
},
{
"msg_contents": "Thank you Kevin and Scott for all the explanations. ;-)\n\nregards\nLionel\n\n\n",
"msg_date": "Tue, 16 Feb 2010 17:39:19 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t when\n\tchanging where clause value"
},
{
"msg_contents": "lionel duboeuf a �crit :\n> Thank you Kevin and Scott for all the explanations. ;-)\n>\n> regards\n> Lionel\n>\n>\n>\nSorry for forgetting Jorge also.\n\nregards.\nAll this ,encourage me to know more about database management.\n\n\n\n",
"msg_date": "Tue, 16 Feb 2010 17:47:41 +0100",
"msg_from": "lionel duboeuf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Almost infinite query -> Different Query Plan\t when\n\tchanging where clause value"
}
] |
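A minimal sketch of the restore procedure described above, assuming the dump comes from pg_dump or pg_dumpall and that the bulk-load postgresql.conf mentioned in the reply (its exact contents are not given in the thread) at least turns autovacuum off:

    -- 1. Switch to the bulk-load postgresql.conf (autovacuum = off, etc.)
    --    and restart or reload the server.
    -- 2. Run the restore itself, e.g. psql -f dump.sql yourdb
    --    (file and database names here are placeholders).
    -- 3. Build statistics, set hint bits and freeze xmin in one pass, so the
    --    first production queries do not trigger surprise writes or a burst
    --    of simultaneous vacuums later on:
    VACUUM FREEZE ANALYZE;
    -- 4. Put the normal postgresql.conf back and restart, re-enabling autovacuum.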
[
{
"msg_contents": "I'm having problems with another one of my queries after moving from 8.1.19 to 8.4.2. On 8.1.19, the plan looked like this:\n\nhttp://wood.silentmedia.com/bench/8119\n\nThat runs pretty well. On 8.4.2, the same query looks like this:\n\nhttp://wood.silentmedia.com/bench/842_bad\n\nIf I turn off mergejoin and hashjoin, I can get 8.4.2 to spit out this:\n\nhttp://wood.silentmedia.com/bench/842_better\n\n...which it thinks is going to suck but which does not. \n\nThe query and relevant table definitions are here:\n\nhttp://wood.silentmedia.com/bench/query_and_definitions\n\n\nAny suggestions? I'm guessing the problem is with the absurd over-estimation on the nested loop under the sort node, but I'm not sure why it's so bad. \nI'm having problems with another one of my queries after moving from 8.1.19 to 8.4.2. On 8.1.19, the plan looked like this:http://wood.silentmedia.com/bench/8119That runs pretty well. On 8.4.2, the same query looks like this:http://wood.silentmedia.com/bench/842_badIf I turn off mergejoin and hashjoin, I can get 8.4.2 to spit out this:http://wood.silentmedia.com/bench/842_better...which it thinks is going to suck but which does not. The query and relevant table definitions are here:http://wood.silentmedia.com/bench/query_and_definitionsAny suggestions? I'm guessing the problem is with the absurd over-estimation on the nested loop under the sort node, but I'm not sure why it's so bad.",
"msg_date": "Tue, 16 Feb 2010 13:29:16 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "another 8.1->8.4 regression"
},
{
"msg_contents": "On Feb 16, 2010, at 1:29 PM, Ben Chobot wrote:\n\n> I'm having problems with another one of my queries after moving from 8.1.19 to 8.4.2. On 8.1.19, the plan looked like this:\n> \n> http://wood.silentmedia.com/bench/8119\n> \n> That runs pretty well. On 8.4.2, the same query looks like this:\n> \n> http://wood.silentmedia.com/bench/842_bad\n> \n> If I turn off mergejoin and hashjoin, I can get 8.4.2 to spit out this:\n> \n> http://wood.silentmedia.com/bench/842_better\n> \n> ...which it thinks is going to suck but which does not. \n> \n> The query and relevant table definitions are here:\n> \n> http://wood.silentmedia.com/bench/query_and_definitions\n> \n> \n> Any suggestions? I'm guessing the problem is with the absurd over-estimation on the nested loop under the sort node, but I'm not sure why it's so bad. \n\n\nAfter looking at this some more, I'm pretty confused at both of 8.4.2's plans. They both have a Nested Loop node in them where the expected row count is a bit over 2 million, and yet the inner nodes have expected row counts of 1 and 152. I was under the impression that a nested loop between R and S would return no more than R*S?\nOn Feb 16, 2010, at 1:29 PM, Ben Chobot wrote:I'm having problems with another one of my queries after moving from 8.1.19 to 8.4.2. On 8.1.19, the plan looked like this:http://wood.silentmedia.com/bench/8119That runs pretty well. On 8.4.2, the same query looks like this:http://wood.silentmedia.com/bench/842_badIf I turn off mergejoin and hashjoin, I can get 8.4.2 to spit out this:http://wood.silentmedia.com/bench/842_better...which it thinks is going to suck but which does not. The query and relevant table definitions are here:http://wood.silentmedia.com/bench/query_and_definitionsAny suggestions? I'm guessing the problem is with the absurd over-estimation on the nested loop under the sort node, but I'm not sure why it's so bad. After looking at this some more, I'm pretty confused at both of 8.4.2's plans. They both have a Nested Loop node in them where the expected row count is a bit over 2 million, and yet the inner nodes have expected row counts of 1 and 152. I was under the impression that a nested loop between R and S would return no more than R*S?",
"msg_date": "Tue, 16 Feb 2010 15:39:08 -0800",
"msg_from": "Ben Chobot <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: another 8.1->8.4 regression"
}
] |
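For reference, the "turn off mergejoin and hashjoin" experiment mentioned above is a per-session planner override; since the query itself is only linked in the thread, a placeholder stands in for it here:

    -- Compare the default plan with the forced nested-loop plan.
    EXPLAIN ANALYZE <query from query_and_definitions>;   -- default choice
    SET enable_mergejoin = off;
    SET enable_hashjoin = off;
    EXPLAIN ANALYZE <query from query_and_definitions>;   -- nested-loop variant
    RESET enable_mergejoin;
    RESET enable_hashjoin;

These enable_* settings are diagnostic tools rather than a fix; they only reveal which plan shape the optimizer is mispricing.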
[
{
"msg_contents": "Since getting on 8.4 I've been monitoring things fairly closely.\nI whipped up a quick script to monitor pg_stat_bgwriter and save \ndeltas every minute so I can ensure my bgwriter is beating out the \nbackends for writes (as it is supposed to do).\n\nNow, the odd thing I'm running into is this:\n\nbgwriter_delay is 100ms (ie 10 times a second, give or take)\nbgwriter_lru_maxpages is 500 (~5000 pages / second)\nbgwriter_lru_multiplier is 4\n\nNow, assuming I understand these values right the following is what \nshould typically happen:\n\nwhile(true)\n{\n if buffers_written > bgwriter_lru_maxpages\n or buffers_written > anticipated_pages_needed * \nbgwriter_lru_multiplier\n {\n sleep(bgwriter_delay ms)\n continue;\n }\n ...\n}\n\nso I should not be able to have more than ~5000 bgwriter_clean pages \nper minute. (this assumes writing takes 0ms, which of course is \ninaccurate)\n\nHowever, I see this in my stats (they are deltas), and I'm reasonably \nsure it is not a bug in the code:\n\n(timestamp, buffers clean, buffers_checkpoint, buffers backend)\n 2010-02-17 08:23:51.184018 | 1 | 1686 \n| 5\n 2010-02-17 08:22:51.170863 | 15289 | 12676 \n| 207\n 2010-02-17 08:21:51.155793 | 38467 | 8993 \n| 4277\n 2010-02-17 08:20:51.139199 | 35582 | 0 \n| 9437\n 2010-02-17 08:19:51.125025 | 8 | 0 \n| 3\n 2010-02-17 08:18:51.111184 | 1140 | 1464 \n| 6\n 2010-02-17 08:17:51.098422 | 0 | 1682 \n| 228\n 2010-02-17 08:16:51.082804 | 50 | 0 \n| 6\n 2010-02-17 08:15:51.067886 | 789 | 0 \n| 1\n\nperhaps some stats buffering occurring or something or some general \nmisunderstanding of some of these tunables?\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Wed, 17 Feb 2010 08:30:07 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "bgwriter tunables vs pg_stat_bgwriter"
},
{
"msg_contents": "On Wed, 2010-02-17 at 08:30 -0500, Jeff wrote:\n> Since getting on 8.4 I've been monitoring things fairly closely.\n> I whipped up a quick script to monitor pg_stat_bgwriter and save \n> deltas every minute so I can ensure my bgwriter is beating out the \n> backends for writes (as it is supposed to do).\n> \n> Now, the odd thing I'm running into is this:\n> \n> bgwriter_delay is 100ms (ie 10 times a second, give or take)\n> bgwriter_lru_maxpages is 500 (~5000 pages / second)\n> bgwriter_lru_multiplier is 4\n> \n> Now, assuming I understand these values right the following is what \n> should typically happen:\n> \n> while(true)\n> {\n> if buffers_written > bgwriter_lru_maxpages\n> or buffers_written > anticipated_pages_needed * \n> bgwriter_lru_multiplier\n> {\n> sleep(bgwriter_delay ms)\n> continue;\n> }\n> ...\n> }\n\nCorrect.\n\n> so I should not be able to have more than ~5000 bgwriter_clean pages \n> per minute. (this assumes writing takes 0ms, which of course is \n> inaccurate)\n\nThat works out to 5000/second - 300,000/minute.\n\n> However, I see this in my stats (they are deltas), and I'm reasonably \n> sure it is not a bug in the code:\n> \n> (timestamp, buffers clean, buffers_checkpoint, buffers backend)\n> 2010-02-17 08:23:51.184018 | 1 | 1686 \n> | 5\n> 2010-02-17 08:22:51.170863 | 15289 | 12676 \n> | 207\n> 2010-02-17 08:21:51.155793 | 38467 | 8993 \n> | 4277\n> 2010-02-17 08:20:51.139199 | 35582 | 0 \n> | 9437\n> 2010-02-17 08:19:51.125025 | 8 | 0 \n> | 3\n> 2010-02-17 08:18:51.111184 | 1140 | 1464 \n> | 6\n> 2010-02-17 08:17:51.098422 | 0 | 1682 \n> | 228\n> 2010-02-17 08:16:51.082804 | 50 | 0 \n> | 6\n> 2010-02-17 08:15:51.067886 | 789 | 0 \n> | 1\n> \n> perhaps some stats buffering occurring or something or some general \n> misunderstanding of some of these tunables?\n>\n> --\n> Jeff Trout <[email protected]>\n> http://www.stuarthamm.net/\n> http://www.dellsmartexitin.com/\n> \n> \n> \n> \n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n",
"msg_date": "Wed, 17 Feb 2010 14:01:16 -0500",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter tunables vs pg_stat_bgwriter"
},
{
"msg_contents": "Jeff wrote:\n> while(true)\n> {\n> if buffers_written > bgwriter_lru_maxpages\n> or buffers_written > anticipated_pages_needed * \n> bgwriter_lru_multiplier\n> {\n> sleep(bgwriter_delay ms)\n> continue;\n> }\n> ...\n> }\n>\n> so I should not be able to have more than ~5000 bgwriter_clean pages \n> per minute. (this assumes writing takes 0ms, which of course is \n> inaccurate)\n\nThat's not how the loop is structured. It's actually more like:\n\n-Compute anticipated_pages_needed * bgwriter_lru_multiplier\n-Enter a cleaning loop until that many are confirmed free -or- \nbgwriter_lru_maxpages is reached\n-sleep(bgwriter_delay ms)\n\n> perhaps some stats buffering occurring or something or some general \n> misunderstanding of some of these tunables?\nWith bgwriter_lru_maxpages=500 and bgwriter_delay=100ms, you can get up \nto 5000 pages/second which makes for 300,000 pages/minute. So none of \nyour numbers look funny just via their scale. This is why the defaults \nare so low--the maximum output of the background writer is quite big \neven before you adjust it upwards.\n\nThere are however two bits of stats buffering involved. Stats updates \ndon't become visible instantly, they're buffered and only get their \nupdates pushed out periodically to where clients can see them to reduce \noverhead. Also, the checkpoint write update happens in one update at \nthe end--not incrementally as the checkpoint progresses. The idea is \nthat you should be able to tell if a checkpoint happened or not during a \nperiod of monitoring time. You look to be having checkpoints as often \nas once per minute right now, so something isn't right--probably \ncheckpoint_segments is too low for your workload.\n\nBy the way, your monitoring code should be saving maxwritten_clean and \nbuffers_allocated, too. While you may not be doing something with them \nyet, the former will shed some light on what you're running into now, \nand the latter is useful later down the road you're walking.\n\n-- \nGreg Smith 2ndQuadrant US Baltimore, MD\nPostgreSQL Training, Services and Support\[email protected] www.2ndQuadrant.us\n\n",
"msg_date": "Wed, 17 Feb 2010 18:23:06 -0500",
"msg_from": "Greg Smith <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bgwriter tunables vs pg_stat_bgwriter"
},
{
"msg_contents": "\nOn Feb 17, 2010, at 6:23 PM, Greg Smith wrote:\n\n> JWith bgwriter_lru_maxpages=500 and bgwriter_delay=100ms, you can \n> get up to 5000 pages/second which makes for 300,000 pages/minute. \n> So none of your numbers look funny just via their scale. This is \n> why the defaults are so low--the maximum output of the background \n> writer is quite big even before you adjust it upwards.\n>\n\nd'oh! that would be the reason. Sorry folks, nothing to see here :)\n\n> There are however two bits of stats buffering involved. Stats \n> updates don't become visible instantly, they're buffered and only \n> get their updates pushed out periodically to where clients can see \n> them to reduce overhead. Also, the checkpoint write update happens \n> in one update at the end--not incrementally as the checkpoint \n> progresses. The idea is that you should be able to tell if a \n> checkpoint happened or not during a period of monitoring time. You \n> look to be having checkpoints as often as once per minute right now, \n> so something isn't right--probably checkpoint_segments is too low \n> for your workload.\n>\n\ncheckpoint_segments is currently 32. maybe I'll bump it up - this db \ndoes a LOT of writes\n\n> By the way, your monitoring code should be saving maxwritten_clean \n> and buffers_allocated, too. While you may not be doing something \n> with them yet, the former will shed some light on what you're \n> running into now, and the latter is useful later down the road \n> you're walking.\n\nIt is, I just didn't include them in the mail.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n",
"msg_date": "Thu, 18 Feb 2010 08:35:43 -0500",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: bgwriter tunables vs pg_stat_bgwriter"
}
] |
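To tie the arithmetic and the monitoring advice together: with bgwriter_lru_maxpages = 500 and bgwriter_delay = 100ms the cleaner's ceiling is 500 * (1000/100) = 5,000 pages per second, about 300,000 per minute, so per-minute deltas in the tens of thousands are well within range. A snapshot query along these lines (column names are the 8.4 ones) is all a per-minute monitoring script needs; the deltas come from subtracting consecutive rows:

    -- One snapshot of the background-writer counters; store a row like this
    -- every minute and diff consecutive rows to get the per-minute deltas.
    SELECT now() AS sample_time,
           checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint,
           buffers_clean,
           maxwritten_clean,
           buffers_backend,
           buffers_alloc
      FROM pg_stat_bgwriter;

    -- Current settings behind the 5,000 pages/second ceiling:
    SHOW bgwriter_lru_maxpages;
    SHOW bgwriter_delay;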
[
{
"msg_contents": "LinkedIn\n------------\n\n \nI'd like to add you to my professional network on LinkedIn.\n\n- Michael Clemmons\n\nConfirm that you know Michael Clemmons\nhttps://www.linkedin.com/e/isd/1082803004/E81w7LvD/\n\n\n\n \n------\n(c) 2010, LinkedIn Corporation\n\n\n\nLinkedIn\n\n\n I'd like to add you to my professional network on LinkedIn.\n\n- Michael Clemmons\n \n\nConfirm that you know Michael\n\n\n© 2010, LinkedIn Corporation",
"msg_date": "Wed, 17 Feb 2010 17:04:12 -0800 (PST)",
"msg_from": "Michael Clemmons <[email protected]>",
"msg_from_op": true,
"msg_subject": "Michael Clemmons wants to stay in touch on LinkedIn"
}
] |
[
{
"msg_contents": "\"Not like\" operation does not use index.\n\nselect * from vtiger_contactscf where lower(cf_1253) not like\nlower('Former%')\n\nI created index on lower(cf_1253).\n\nHow can I ensure index usage in not like operation?\nAnyone please help.\n\n\"Not like\" operation does not use index.\n \nselect * from vtiger_contactscf where lower(cf_1253) not like lower('Former%') \n \nI created index on lower(cf_1253).\n \nHow can I ensure index usage in not like operation?\nAnyone please help.",
"msg_date": "Thu, 18 Feb 2010 17:55:52 +0600",
"msg_from": "AI Rumman <[email protected]>",
"msg_from_op": true,
"msg_subject": "index usage in not like"
},
{
"msg_contents": "On 18 February 2010 11:55, AI Rumman <[email protected]> wrote:\n> \"Not like\" operation does not use index.\n>\n> select * from vtiger_contactscf where lower(cf_1253) not like\n> lower('Former%')\n>\n> I created index on lower(cf_1253).\n>\n> How can I ensure index usage in not like operation?\n> Anyone please help.\n>\n\nHow many rows do you have in your table? If there are relatively few,\nit probably guesses it to be cheaper to do a sequential scan and\ncalculate lower values on-the-fly rather than bother with the index.\n\nThom\n",
"msg_date": "Thu, 18 Feb 2010 12:00:29 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
},
{
"msg_contents": "> On Thu, Feb 18, 2010 at 6:00 PM, Thom Brown <[email protected]> wrote:\n>>\n>> On 18 February 2010 11:55, AI Rumman <[email protected]> wrote:\n>> > \"Not like\" operation does not use index.\n>> >\n>> > select * from vtiger_contactscf where lower(cf_1253) not like\n>> > lower('Former%')\n>> >\n>> > I created index on lower(cf_1253).\n>> >\n>> > How can I ensure index usage in not like operation?\n>> > Anyone please help.\n>> >\n>>\n>> How many rows do you have in your table? If there are relatively few,\n>> it probably guesses it to be cheaper to do a sequential scan and\n>> calculate lower values on-the-fly rather than bother with the index.\n>>\n>> Thom\n>\nOn 18 February 2010 12:06, AI Rumman <[email protected]> wrote:\n> vtigercrm504=# explain analyze select * from vtiger_contactscf where\n> lower(cf_1253) like 'customer';\n>\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using vtiger_contactscf_cf_1253_idx on vtiger_contactscf\n> (cost=0.00..146.54 rows=6093 width=179) (actual time=0.083..29.868 rows=5171\n> loops=1)\n> Index Cond: (lower((cf_1253)::text) ~=~ 'customer'::character varying)\n> Filter: (lower((cf_1253)::text) ~~ 'customer'::text)\n> Total runtime: 34.956 ms\n> (4 rows)\n> vtigercrm504=# explain analyze select * from vtiger_contactscf where\n> lower(cf_1253) like 'customer';\n>\n> QUERY\n> PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using vtiger_contactscf_cf_1253_idx on vtiger_contactscf\n> (cost=0.00..146.54 rows=6093 width=179) (actual time=0.083..29.868 rows=5171\n> loops=1)\n> Index Cond: (lower((cf_1253)::text) ~=~ 'customer'::character varying)\n> Filter: (lower((cf_1253)::text) ~~ 'customer'::text)\n> Total runtime: 34.956 ms\n> (4 rows)\n\nCould you do the same again for a \"not like\" query?\n\nThom\n",
"msg_date": "Thu, 18 Feb 2010 12:14:03 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
},
{
"msg_contents": "In response to Thom Brown :\n> On 18 February 2010 11:55, AI Rumman <[email protected]> wrote:\n> > \"Not like\" operation does not use index.\n> >\n> > select * from vtiger_contactscf where lower(cf_1253) not like\n> > lower('Former%')\n> >\n> > I created index on lower(cf_1253).\n> >\n> > How can I ensure index usage in not like operation?\n> > Anyone please help.\n> >\n> \n> How many rows do you have in your table? If there are relatively few,\n> it probably guesses it to be cheaper to do a sequential scan and\n> calculate lower values on-the-fly rather than bother with the index.\n\nThat's one reason, an other reason, i think, is, that a btree-index can't\nsearch with an 'not like' - operator.\n\n\n\ntest=*# insert into words select 'fucking example' from generate_series(1,10000);\nINSERT 0 10000\ntest=*# insert into words select 'abc' from generate_series(1,10);\nINSERT 0 10\ntest=*# explain select * from words where lower(w) like lower('a%') or lower(w) like lower('b%');\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Bitmap Heap Scan on words (cost=1538.75..6933.39 rows=55643 width=36)\n Recheck Cond: ((lower(w) ~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n Filter: ((lower(w) ~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n -> BitmapOr (cost=1538.75..1538.75 rows=57432 width=0)\n -> Bitmap Index Scan on idx_words (cost=0.00..1027.04 rows=39073 width=0)\n Index Cond: ((lower(w) ~>=~ 'a'::text) AND (lower(w) ~<~ 'b'::text))\n -> Bitmap Index Scan on idx_words (cost=0.00..483.90 rows=18359 width=0)\n Index Cond: ((lower(w) ~>=~ 'b'::text) AND (lower(w) ~<~ 'c'::text))\n(8 rows)\n\ntest=*# explain select * from words where lower(w) not like lower('a%') or lower(w) like lower('b%');\n QUERY PLAN\n-------------------------------------------------------------------\n Seq Scan on words (cost=0.00..10624.48 rows=282609 width=36)\n Filter: ((lower(w) !~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n(2 rows)\n\n\nIn other words: revert your where-condition from 'not like' to multiple 'like' conditions for all letters except 'f%'.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 18 Feb 2010 13:18:10 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
},
{
"msg_contents": "On Thu, Feb 18, 2010 at 01:18:10PM +0100, A. Kretschmer wrote:\n> In response to Thom Brown :\n> > On 18 February 2010 11:55, AI Rumman <[email protected]> wrote:\n> > > \"Not like\" operation does not use index.\n> > >\n> > > select * from vtiger_contactscf where lower(cf_1253) not like\n> > > lower('Former%')\n> > >\n> > > I created index on lower(cf_1253).\n> > >\n> > > How can I ensure index usage in not like operation?\n> > > Anyone please help.\n> > >\n> > \n> > How many rows do you have in your table? If there are relatively few,\n> > it probably guesses it to be cheaper to do a sequential scan and\n> > calculate lower values on-the-fly rather than bother with the index.\n> \n> That's one reason, an other reason, i think, is, that a btree-index can't\n> search with an 'not like' - operator.\n> \n> \n> \n> test=*# insert into words select 'fucking example' from generate_series(1,10000);\n> INSERT 0 10000\n> test=*# insert into words select 'abc' from generate_series(1,10);\n> INSERT 0 10\n> test=*# explain select * from words where lower(w) like lower('a%') or lower(w) like lower('b%');\n> QUERY PLAN\n> -------------------------------------------------------------------------------------\n> Bitmap Heap Scan on words (cost=1538.75..6933.39 rows=55643 width=36)\n> Recheck Cond: ((lower(w) ~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n> Filter: ((lower(w) ~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n> -> BitmapOr (cost=1538.75..1538.75 rows=57432 width=0)\n> -> Bitmap Index Scan on idx_words (cost=0.00..1027.04 rows=39073 width=0)\n> Index Cond: ((lower(w) ~>=~ 'a'::text) AND (lower(w) ~<~ 'b'::text))\n> -> Bitmap Index Scan on idx_words (cost=0.00..483.90 rows=18359 width=0)\n> Index Cond: ((lower(w) ~>=~ 'b'::text) AND (lower(w) ~<~ 'c'::text))\n> (8 rows)\n> \n> test=*# explain select * from words where lower(w) not like lower('a%') or lower(w) like lower('b%');\n> QUERY PLAN\n> -------------------------------------------------------------------\n> Seq Scan on words (cost=0.00..10624.48 rows=282609 width=36)\n> Filter: ((lower(w) !~~ 'a%'::text) OR (lower(w) ~~ 'b%'::text))\n> (2 rows)\n> \n> \n> In other words: revert your where-condition from 'not like' to multiple 'like' conditions for all letters except 'f%'.\n> \n> \n> Andreas\n\nThe 'not like' condition is likely to be extremely non-selective\nwhich would cause a sequential scan to be used in any event whether\nor not an index could be used.\n\nCheers,\nKen\n\n",
"msg_date": "Thu, 18 Feb 2010 06:26:22 -0600",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
},
{
"msg_contents": "On 18 February 2010 12:18, A. Kretschmer\n<[email protected]> wrote:\n> In response to Thom Brown :\n>> On 18 February 2010 11:55, AI Rumman <[email protected]> wrote:\n>> > \"Not like\" operation does not use index.\n>> >\n>> > select * from vtiger_contactscf where lower(cf_1253) not like\n>> > lower('Former%')\n>> >\n>> > I created index on lower(cf_1253).\n>> >\n>> > How can I ensure index usage in not like operation?\n>> > Anyone please help.\n>> >\n>>\n>> How many rows do you have in your table? If there are relatively few,\n>> it probably guesses it to be cheaper to do a sequential scan and\n>> calculate lower values on-the-fly rather than bother with the index.\n>\n> That's one reason, an other reason, i think, is, that a btree-index can't\n> search with an 'not like' - operator.\n>\n\nErm.. yes. Now that you say it, it's obvious. :S\n\nThom\n",
"msg_date": "Thu, 18 Feb 2010 12:40:23 +0000",
"msg_from": "Thom Brown <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
},
{
"msg_contents": "In response to Kenneth Marshall :\n> > > How many rows do you have in your table? If there are relatively few,\n> > > it probably guesses it to be cheaper to do a sequential scan and\n> > > calculate lower values on-the-fly rather than bother with the index.\n> > \n> > That's one reason, an other reason, i think, is, that a btree-index can't\n> > search with an 'not like' - operator.\n> \n> The 'not like' condition is likely to be extremely non-selective\n> which would cause a sequential scan to be used in any event whether\n> or not an index could be used.\n\nThat's true, but i have an example where the 'not like' condition is\nextremely selective:\n\n,----[ sql ]\n| test=*# select count(1) from words where lower(w) not like lower('f%');\n| count\n| -------\n| 10\n| (1 row)\n|\n| test=*# select count(1) from words where lower(w) like lower('f%');\n| count\n| -------\n| 10000\n| (1 row)\n`----\n\n\nBut the index can't use:\n\n,----[ code ]\n| test=*# explain select * from words where lower(w) not like lower('f%');\n| QUERY PLAN\n| ----------------------------------------------------------\n| Seq Scan on words (cost=0.00..4396.15 rows=10 width=47)\n| Filter: (lower(w) !~~ 'f%'::text)\n| (2 rows)\n`----\n\n\nAnd i think, the reason is:\n\n,----[ quote from docu ]\n| B-trees can handle equality and range queries on data that can be sorted\n| into some ordering. In particular, the PostgreSQL query planner will\n| consider using a B-tree index whenever an indexed column is involved in\n| a comparison using one of these operators:\n|\n| <\n| <=\n| =\n| >=\n| >\n`----\n\n\n\nRegards, Andreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431 2EB0 389D 1DC2 3172 0C99\n",
"msg_date": "Thu, 18 Feb 2010 13:43:44 +0100",
"msg_from": "\"A. Kretschmer\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage in not like"
}
] |
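Applied to the query that started this thread, the advice boils down to replacing the NOT LIKE with prefix ranges the btree can actually scan. A sketch, assuming the lower(cf_1253) expression index from the thread, a C-locale/pattern-ops style ordering in which 'formes' is the first string past the 'former' prefix range (with other collations the upper bound has to be computed differently), and bearing in mind that if most rows do not start with 'former' a sequential scan will still be the cheaper plan:

    -- Original form, which cannot use the btree:
    --   SELECT * FROM vtiger_contactscf
    --    WHERE lower(cf_1253) NOT LIKE lower('Former%');

    -- Range-based rewrite of the same condition:
    SELECT *
      FROM vtiger_contactscf
     WHERE lower(cf_1253) < 'former'
        OR lower(cf_1253) >= 'formes';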