[
{
"msg_contents": "Hi,\n\nI have a transaction that has multiple separate command in it (nothing \nunusual there).\n\nHowever sometimes one of the sql statements will fail and so the whole \ntransaction fails.\n\nIn some cases I could fix the failing statement if only I knew which one \nit was. Can anyone think of any way to get which statement actually \nfailed from the error message? If the error message gave me the line of \nthe failure it would be excellent, but it doesn't. Perhaps it would be \neasy for me to patch my version of Postgres to do that?\n\nI realize I could do this with 2 phase commit, but that isn't ready yet!\n\nAny thoughts or ideas are much appreciated\n\nThanks\nRalph\n",
"msg_date": "Tue, 08 Nov 2005 10:30:15 +1300",
"msg_from": "Ralph Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Figuring out which command failed"
},
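One way to narrow down which statement failed, without waiting for two-phase commit, is to wrap the risky statements in savepoints, so a failure only rolls back to the last savepoint instead of aborting the whole transaction. This is only an illustrative sketch (the table and values are invented) and assumes PostgreSQL 8.0 or later, where SAVEPOINT is available:

    BEGIN;
    INSERT INTO orders (id) VALUES (1);        -- hypothetical statement 1
    SAVEPOINT before_stmt2;
    INSERT INTO orders (id) VALUES (1);        -- suppose this one violates a unique constraint
    -- the error is reported here, so you know it was statement 2;
    -- only the work done since the savepoint is lost
    ROLLBACK TO SAVEPOINT before_stmt2;
    INSERT INTO orders (id) VALUES (2);        -- corrected statement
    COMMIT;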
{
"msg_contents": "\nOn Nov 7, 2005, at 3:30 PM, Ralph Mason wrote:\n\n> Hi,\n>\n> I have a transaction that has multiple separate command in it \n> (nothing unusual there).\n>\n> However sometimes one of the sql statements will fail and so the \n> whole transaction fails.\n>\n> In some cases I could fix the failing statement if only I knew \n> which one it was. Can anyone think of any way to get which \n> statement actually failed from the error message? If the error \n> message gave me the line of the failure it would be excellent, but \n> it doesn't. Perhaps it would be easy for me to patch my version of \n> Postgres to do that?\n>\n> I realize I could do this with 2 phase commit, but that isn't ready \n> yet!\n>\n> Any thoughts or ideas are much appreciated\n>\n> Thanks\n> Ralph\n\n2PC might not've been ready yesterday, but it's ready today!\n\nhttp://www.postgresql.org/docs/whatsnew\n\n--\nThomas F. O'Connell\nDatabase Architecture and Programming\nCo-Founder\nSitening, LLC\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-469-5150\n615-469-5151 (fax)\n",
"msg_date": "Tue, 8 Nov 2005 17:26:28 -0600",
"msg_from": "\"Thomas F. O'Connell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Figuring out which command failed"
}
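For reference, the two-phase-commit commands that 8.1 adds look roughly like this; a sketch only, with an invented transaction identifier, and it assumes max_prepared_transactions is set above zero:

    BEGIN;
    -- ... the statements that make up the batch ...
    PREPARE TRANSACTION 'batch_2005_11_08';

    -- later, once everything has prepared successfully, from this or any other session:
    COMMIT PREPARED 'batch_2005_11_08';
    -- or, to abandon it:
    -- ROLLBACK PREPARED 'batch_2005_11_08';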
]
[
{
"msg_contents": "I have a function, call it \"myfunc()\", that is REALLY expensive computationally. Think of it like, \"If you call this function, it's going to telephone the Microsoft Help line and wait in their support queue to get the answer.\" Ok, it's not that bad, but it's so bad that the optimizer should ALWAYS consider it last, no matter what. (Realistically, the function takes 1-2 msec average, so applying it to 40K rows takes 40-80 seconds. It's a graph-theory algorithm, known to be NP-complete.)\n\nIs there some way to explain this cost to the optimizer in a permanent way, like when the function is installed? Here's what I get with one critical query (somewhat paraphrased for simplicity):\n\n explain analyze\n select A.ID\n from A join B ON (A.ID = B.ID)\n where A.row_num >= 0 and A.row_num <= 43477\n and B.ID = 52\n and myfunc(A.FOO, 'FooBar') order by row_num;\n\n QUERY PLAN \n ----------------------------------------------------------------------------------\n Nested Loop (cost=0.00..72590.13 rows=122 width=8)\n -> Index Scan using i_a_row_num on a (cost=0.00..10691.35 rows=12222 width=8)\n Index Cond: ((row_num >= 0) AND (row_num <= 43477))\n Filter: myfunc((foo)::text, 'FooBar'::text)\n -> Index Scan using i_b_id on b (cost=0.00..5.05 rows=1 width=4)\n Index Cond: (\"outer\".id = b.id)\n Filter: (id = 52)\n Total runtime: 62592.631 ms\n (8 rows)\n\nNotice the \"Filter: myfunc(...)\" that comes in the first loop. This means it's applying myfunc() to 43477 rows in this example. The second index scan would cut this number down from 43477 rows to about 20 rows, making the query time drop from 62 seconds down to a fraction of a second.\n\nIs there any way to give Postgres this information?\n\nThe only way I've thought of is something like this:\n\n select X.id from\n (select A.id, A.foo, A.row_num\n from A join B ON (A.id = B.id)\n where A.row_num >= 0 and A.row_num <= 43477\n and B.id = 52) as X\n where myfunc(X.foo, 'FooBar') order by X.row_num;\n\nI can do this, but it means carefully hand-crafting each query rather than writing a more natural query.\n\nThanks,\nCraig\n",
"msg_date": "Mon, 07 Nov 2005 17:58:03 -0800",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Expensive function and the optimizer"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Is there some way to explain this cost to the optimizer in a permanent\n> way,\n\nNope, sorry. One thing you could do in the particular case at hand is\nto rejigger the WHERE clause involving the function so that it requires\nvalues from both tables and therefore can't be applied till after the\njoin is made. (If nothing else, give the function an extra dummy\nargument that can be passed as a variable from the other table.)\nThis is an ugly and non-general solution of course.\n\n> The only way I've thought of is something like this:\n\n> select X.id from\n> (select A.id, A.foo, A.row_num\n> from A join B ON (A.id = B.id)\n> where A.row_num >= 0 and A.row_num <= 43477\n> and B.id = 52) as X\n> where myfunc(X.foo, 'FooBar') order by X.row_num;\n\nAs written, that won't work because the planner will happily flatten the\nquery to the same thing you had before. You can put an OFFSET 0 into\nthe sub-select to prevent that from happening, but realize that this\ncreates a pretty impervious optimization fence ... the side-effects\nmight be undesirable when you come to look at real queries instead\nof toy cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Nov 2005 23:12:07 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Expensive function and the optimizer "
}
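Putting Tom's OFFSET 0 suggestion together with the sub-select from the original post, the rewritten query would look roughly like this (an untested sketch; the OFFSET 0 simply stops the planner from flattening the subquery, with the optimization-fence caveat he mentions):

    SELECT X.id
    FROM (SELECT A.id, A.foo, A.row_num
          FROM A JOIN B ON (A.id = B.id)
          WHERE A.row_num >= 0 AND A.row_num <= 43477
            AND B.id = 52
          OFFSET 0) AS X
    WHERE myfunc(X.foo, 'FooBar')
    ORDER BY X.row_num;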
]
[
{
"msg_contents": "Christian Paul B. Cosinas wrote:\n> I try to run this command in my linux server.\n> VACUUM FULL pg_class;\n> VACUUM FULL pg_attribute;\n> VACUUM FULL pg_depend;\n> \n> But it give me the following error:\n> \t-bash: VACUUM: command not found\n\nThat needs to be run from psql ...\n\n> \n> \n> \n> \n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Mon, 07 Nov 2005 18:10:38 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "Ummm...they're SQL commands. Run them in PostgreSQL, not on the unix \ncommand line...\n\nChristian Paul B. Cosinas wrote:\n> I try to run this command in my linux server.\n> VACUUM FULL pg_class;\n> VACUUM FULL pg_attribute;\n> VACUUM FULL pg_depend;\n> \n> But it give me the following error:\n> \t-bash: VACUUM: command not found\n> \n> \n> \n> \n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Tue, 08 Nov 2005 10:14:45 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
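In other words, start an interactive psql session against the database and issue the statements there; roughly (the database name is invented):

    $ psql mydatabase
    mydatabase=# VACUUM FULL pg_class;
    mydatabase=# VACUUM FULL pg_attribute;
    mydatabase=# VACUUM FULL pg_depend;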
{
"msg_contents": "> In what directory in my linux server will I find these 3 tables?\n\nDirectory? They're tables in your database...\n\n",
"msg_date": "Tue, 08 Nov 2005 10:15:10 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "You can use the vacuumdb external command. Here's an example:\n\nvacuumdb --full --analyze --table mytablename mydbname\n\n\n\nOn Tue, 8 Nov 2005, Christian Paul B. Cosinas wrote:\n\n> But How Can I put this in the Cron of my Linux Server?\n> I really don't have an idea :)\n> What I want to do is to loop around all the databases in my server and\n> execute the vacuum of these 3 tables in each tables.\n>\n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Tuesday, November 08, 2005 2:11 AM\n> To: Christian Paul B. Cosinas\n> Cc: 'Alvaro Nunes Melo'; [email protected]\n> Subject: Re: [PERFORM] Temporary Table\n>\n> Christian Paul B. Cosinas wrote:\n>> I try to run this command in my linux server.\n>> VACUUM FULL pg_class;\n>> VACUUM FULL pg_attribute;\n>> VACUUM FULL pg_depend;\n>>\n>> But it give me the following error:\n>> \t-bash: VACUUM: command not found\n>\n> That needs to be run from psql ...\n>\n>>\n>>\n>>\n>>\n>>\n>> I choose Polesoft Lockspam to fight spam, and you?\n>> http://www.polesoft.com/refer.html\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>\n>\n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Mon, 7 Nov 2005 18:22:54 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "Or you could just run the 'vacuumdb' utility...\n\nPut something like this in cron:\n\n# Vacuum full local pgsql database\n30 * * * * postgres vacuumdb -a -q -z\n\nYou really should read the manual.\n\nChris\n\nChristian Paul B. Cosinas wrote:\n> I see.\n> \n> But How Can I put this in the Cron of my Linux Server?\n> I really don't have an idea :)\n> What I want to do is to loop around all the databases in my server and\n> execute the vacuum of these 3 tables in each tables.\n> \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Tuesday, November 08, 2005 2:11 AM\n> To: Christian Paul B. Cosinas\n> Cc: 'Alvaro Nunes Melo'; [email protected]\n> Subject: Re: [PERFORM] Temporary Table\n> \n> Christian Paul B. Cosinas wrote:\n> \n>>I try to run this command in my linux server.\n>>VACUUM FULL pg_class;\n>>VACUUM FULL pg_attribute;\n>>VACUUM FULL pg_depend;\n>>\n>>But it give me the following error:\n>>\t-bash: VACUUM: command not found\n> \n> \n> That needs to be run from psql ...\n> \n> \n>>\n>>\n>>\n>>\n>>I choose Polesoft Lockspam to fight spam, and you?\n>>http://www.polesoft.com/refer.html \n>>\n>>\n>>---------------------------(end of \n>>broadcast)---------------------------\n>>TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n> \n> \n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Tue, 08 Nov 2005 10:24:20 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "On Tue, 2005-11-08 at 10:22 +0000, Christian Paul B. Cosinas wrote:\n> I see.\n> \n> But How Can I put this in the Cron of my Linux Server?\n> I really don't have an idea :)\n> What I want to do is to loop around all the databases in my server and\n> execute the vacuum of these 3 tables in each tables.\n\nI usually write a small shell script something like:\n\n==================================================\n#!/bin/sh\n\npsql somedatabase <<EOQ\n VACUUM this;\n VACUUM that;\n DELETE FROM someotherplace WHERE delete_this_record;\nEOQ\n==================================================\n\nand so forth...\n\nThis makes the SQL quite nicely readable.\n\n\nRegards,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n You work very hard. Don't try to think as well.\n-------------------------------------------------------------------------",
"msg_date": "Tue, 08 Nov 2005 15:35:20 +1300",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "\nIn what directory in my linux server will I find these 3 tables?\n\n-----Original Message-----\nFrom: Alvaro Nunes Melo [mailto:[email protected]]\nSent: Wednesday, October 26, 2005 10:49 AM\nTo: Christian Paul B. Cosinas\nSubject: Re: [PERFORM] Temporary Table\n\nChristian Paul B. Cosinas wrote:\n\n>I am creating a temporary table in every function that I execute.\n>Which I think is bout 100,000 temporary tables a day.\n> \n>\nI think that a lot. ;)\n\n>What is the command for vacuuming these 3 tables?\n> \n>\nVACUUM FULL pg_class;\nVACUUM FULL pg_attribute;\nVACUUM FULL pg_depend;\n\nI'm using this ones. Before using them, take a look in the size that this\ntables are using in your HD, and compare to what you get after running this\ncommands.\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Tue, 8 Nov 2005 10:04:01 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
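To compare how much space those catalog tables take before and after the VACUUM FULL, a quick check from psql is enough; relpages is counted in 8 kB blocks and is only refreshed by VACUUM or ANALYZE (just a sketch):

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend');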
{
"msg_contents": "I try to run this command in my linux server.\nVACUUM FULL pg_class;\nVACUUM FULL pg_attribute;\nVACUUM FULL pg_depend;\n\nBut it give me the following error:\n\t-bash: VACUUM: command not found\n\n\n\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Tue, 8 Nov 2005 10:09:00 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
},
{
"msg_contents": "I see.\n\nBut How Can I put this in the Cron of my Linux Server?\nI really don't have an idea :)\nWhat I want to do is to loop around all the databases in my server and\nexecute the vacuum of these 3 tables in each tables.\n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]]\nSent: Tuesday, November 08, 2005 2:11 AM\nTo: Christian Paul B. Cosinas\nCc: 'Alvaro Nunes Melo'; [email protected]\nSubject: Re: [PERFORM] Temporary Table\n\nChristian Paul B. Cosinas wrote:\n> I try to run this command in my linux server.\n> VACUUM FULL pg_class;\n> VACUUM FULL pg_attribute;\n> VACUUM FULL pg_depend;\n> \n> But it give me the following error:\n> \t-bash: VACUUM: command not found\n\nThat needs to be run from psql ...\n\n> \n> \n> \n> \n> \n> I choose Polesoft Lockspam to fight spam, and you?\n> http://www.polesoft.com/refer.html \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\nI choose Polesoft Lockspam to fight spam, and you?\nhttp://www.polesoft.com/refer.html \n\n",
"msg_date": "Tue, 8 Nov 2005 10:22:01 -0000",
"msg_from": "\"Christian Paul B. Cosinas\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary Table"
}
]
[
{
"msg_contents": "Hi everyone,\n\nI have a question about the performance of sort.\n\nSetup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with \nPostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate 400GB drive. The \nIDE drive has the OS and the WAL files, the SATA drive the database. \n From hdparm the max IO for the IDE drive is about 50Mb/s and the SATA \ndrive is about 65Mb/s. Thus a very low-end machine - but it used just \nfor development (i.e., it is not a production machine) and the only \nthing it does is run a PostgresSQL database.\n\nI have a staging table called completechain that holds US tiger data \n(i.e., streets and addresses for the US). The table is approximately \n18GB. Its big because there is a lot of data, but also because the \ntable is not normalized (it comes that way).\n\nI want to extract data out of the file, with the most important values \nbeing stored in a column called tlid. The tlid field is an integer, and \nthe values are 98% unique. There is a second column called ogc_fid \nwhich is unique (it is a serial field). I need to extract out unique \nTLID's (doesn't matter which duplicate I get rid of). To do this I am \nrunning this query:\n\nSELECT tlid, min(ogc_fid)\nFROM completechain\nGROUP BY tlid;\n\nThe results from explain analyze are:\n\n\"GroupAggregate (cost=10400373.80..11361807.88 rows=48071704 width=8) \n(actual time=7311682.715..8315746.835 rows=47599910 loops=1)\"\n\" -> Sort (cost=10400373.80..10520553.06 rows=48071704 width=8) \n(actual time=7311682.682..7972304.777 rows=48199165 loops=1)\"\n\" Sort Key: tlid\"\n\" -> Seq Scan on completechain (cost=0.00..2228584.04 \nrows=48071704 width=8) (actual time=27.514..773245.046 rows=48199165 \nloops=1)\"\n\"Total runtime: 8486057.185 ms\"\n\t\nDoing a similar query produces the same results:\n\nSELECT DISTINCT ON (tlid), tlid, ogc_fid\nFROM completechain;\n\nNote it takes over 10 times longer to do the sort than the full \nsequential scan.\n\nShould I expect results like this? I realize that the computer is quite \nlow-end and is very IO bound for this query, but I'm still surprised \nthat the sort operation takes so long.\n\nOut of curiosity, I setup an Oracle database on the same machine with \nthe same data and ran the same query. Oracle was over an order of \nmagnitude faster. Looking at its query plan, it avoided the sort by \nusing \"HASH GROUP BY.\" Does such a construct exist in PostgreSQL (I see \nonly hash joins)?\n\nAlso as an experiment I forced oracle to do a sort by running this query:\n\nSELECT tlid, min(ogc_fid)\nFROM completechain\nGROUP BY tlid\nORDER BY tlid;\n\nEven with this, it was more than a magnitude faster than Postgresql. 
\nWhich makes me think I have somehow misconfigured postgresql (see the \nrelevant parts of postgresql.conf below).\n\nAny idea/help appreciated.\n\nThanks,\n\nCharlie\n\n\n-------------------------------\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\nshared_buffers = 40000 # 40000 buffers * 8192 \nbytes/buffer = 327,680,000 bytes\n#shared_buffers = 1000\t\t\t# min 16 or max_connections*2, 8KB each\n\ntemp_buffers = 5000\n#temp_buffers = 1000\t\t\t# min 100, 8KB each\n#max_prepared_transactions = 5\t\t# can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared \nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 16384 # in Kb\n#work_mem = 1024\t\t\t# min 64, size in KB\n\nmaintenance_work_mem = 262144 # in kb\n#maintenance_work_mem = 16384\t\t# min 1024, size in KB\n#max_stack_depth = 2048\t\t\t# min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 60000\t\n#max_fsm_pages = 20000\t\t\t# min max_fsm_relations*16, 6 bytes each\n\n#max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200\t\t\t# 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on\t\t\t\t# turns forced synchronization on or off\n#wal_sync_method = fsync\t\t# the default is the first option\n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = on\t\t\t# recover from partial page writes\n\nwal_buffers = 128\n#wal_buffers = 8\t\t\t# min 4, 8KB each\n\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 256 # 256 * 16Mb = 4,294,967,296 bytes\ncheckpoint_timeout = 1200\t\t# 1200 seconds (20 minutes)\ncheckpoint_warning = 30\t\t\t# in seconds, 0 is off\n\n#checkpoint_segments = 3\t\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t\t# range 30-3600, in seconds\n#checkpoint_warning = 30\t\t# in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = ''\t\t\t# command to use to archive a logfile\n\t\t\t\t\t# segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = 
on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 80000\t\t# 80000 * 8192 = 655,360,000 bytes\n#effective_cache_size = 1000\t\t# typically 8KB each\n\nrandom_page_cost = 2.5\t\t\t# units are one sequential page fetch\n#random_page_cost = 4\t\t\t# units are one sequential page fetch\n\t\t\t\t\t# cost\n#cpu_tuple_cost = 0.01\t\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t\t# (same)\n#cpu_operator_cost = 0.0025\t\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100\t\t# range 1-1000\n#default_statistics_target = 10\t\t# range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables collapsing of explicit\n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = on\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\n\n#stats_start_collector = on\n#stats_command_string = off\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = true\nautovacuum_naptime = 600\n\n#autovacuum = false\t\t\t# enable autovacuum subprocess?\n#autovacuum_naptime = 60\t\t# time between autovacuum runs, in secs\n#autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 500\t# min # of tuple updates before\n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.2\t# fraction of rel size before\n\t\t\t\t\t# analyze\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for\n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n\t\t\t\t\t# autovac, -1 means use\n\t\t\t\t\t# vacuum_cost_\n\n\n----------------------\n\nCREATE TABLE tiger.completechain\n(\n ogc_fid int4 NOT NULL DEFAULT \nnextval('completechain_ogc_fid_seq'::regclass),\n module varchar(8) NOT NULL,\n tlid int4 NOT NULL,\n side1 int4,\n source varchar(1) NOT NULL,\n fedirp varchar(2),\n fename varchar(30),\n fetype varchar(4),\n fedirs varchar(2),\n cfcc varchar(3) NOT NULL,\n fraddl varchar(11),\n toaddl varchar(11),\n fraddr varchar(11),\n toaddr varchar(11),\n friaddl varchar(1),\n toiaddl varchar(1),\n friaddr varchar(1),\n toiaddr varchar(1),\n zipl int4,\n zipr int4,\n aianhhfpl int4,\n aianhhfpr int4,\n aihhtlil varchar(1),\n aihhtlir varchar(1),\n census1 varchar(1),\n census2 varchar(1),\n statel int4,\n stater int4,\n countyl int4,\n countyr int4,\n cousubl int4,\n cousubr int4,\n submcdl int4,\n submcdr int4,\n placel int4,\n 
placer int4,\n tractl int4,\n tractr int4,\n blockl int4,\n blockr int4,\n wkb_geometry public.geometry NOT NULL,\n CONSTRAINT enforce_dims_wkb_geometry CHECK (ndims(wkb_geometry) = 2),\n CONSTRAINT enforce_geotype_wkb_geometry CHECK \n(geometrytype(wkb_geometry) = 'LINESTRING'::text OR wkb_geometry IS NULL),\n CONSTRAINT enforce_srid_wkb_geometry CHECK (srid(wkb_geometry) = 4269)\n)\nWITHOUT OIDS;\nALTER TABLE tiger.completechain OWNER TO postgres;\n\n\n\n\n\n",
"msg_date": "Tue, 08 Nov 2005 00:05:01 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sort performance on large tables"
},
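As an aside, the DISTINCT ON query above has a stray comma after the parenthesis; the usual spelling, which also pins down which duplicate survives, would be something along these lines (it still sorts, so the timing is unlikely to change):

    SELECT DISTINCT ON (tlid) tlid, ogc_fid
    FROM completechain
    ORDER BY tlid, ogc_fid;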
{
"msg_contents": "Charlie Savage wrote:\n> Hi everyone,\n> \n> I have a question about the performance of sort.\n\n> Note it takes over 10 times longer to do the sort than the full \n> sequential scan.\n> \n> Should I expect results like this? I realize that the computer is quite \n> low-end and is very IO bound for this query, but I'm still surprised \n> that the sort operation takes so long.\n\nThe sort will be spilling to disk, which will grind your I/O to a halt.\n\n> work_mem = 16384 # in Kb\n\nTry upping this. You should be able to issue \"set work_mem = 100000\" \nbefore running your query IIRC. That should let PG do its sorting in \nlarger chunks.\n\nAlso, if your most common access pattern is ordered via tlid look into \nclustering the table on that.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 08 Nov 2005 11:14:41 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
},
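A sketch of both suggestions, using the 8.1 syntax in which CLUSTER takes the index name first; the index name here is invented, and rewriting an 18 GB table this way is itself a long, I/O-bound operation:

    -- raise per-sort memory for this session only (value is in kB)
    SET work_mem = 300000;

    -- physically reorder the table by tlid
    CREATE INDEX idx_completechain_tlid ON completechain (tlid);
    CLUSTER idx_completechain_tlid ON completechain;
    ANALYZE completechain;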
{
"msg_contents": "Thanks everyone for the feedback.\n\nI tried increasing work_mem:\n\nset work_mem to 300000;\n\nselect tlid, min(ogc_fid)\nfrom completechain\ngroup by tld;\n\nThe results are:\n\n\"GroupAggregate (cost=9041602.80..10003036.88 rows=48071704 width=8)\n(actual time=4371749.523..5106162.256 rows=47599910 loops=1)\"\n\" -> Sort (cost=9041602.80..9161782.06 rows=48071704 width=8) (actual\ntime=4371690.894..4758660.433 rows=48199165 loops=1)\"\n\" Sort Key: tlid\"\n\" -> Seq Scan on completechain (cost=0.00..2228584.04\nrows=48071704 width=8) (actual time=49.518..805234.970 rows=48199165\nloops=1)\"\n\"Total runtime: 5279988.127 ms\"\n\nThus the time decreased from 8486 seconds to 5279 seconds - which is a \nnice improvement. However, that still leaves postgresql about 9 times \nslower.\n\nI tried increasing work_mem up to 500000, but at that point the machine \nstarted using its swap partition and performance degraded back to the \noriginal values.\n\nCharlie\n\n\nRichard Huxton wrote:\n > Charlie Savage wrote:\n >> Hi everyone,\n >>\n >> I have a question about the performance of sort.\n >\n >> Note it takes over 10 times longer to do the sort than the full\n >> sequential scan.\n >>\n >> Should I expect results like this? I realize that the computer is\n >> quite low-end and is very IO bound for this query, but I'm still\n >> surprised that the sort operation takes so long.\n >\n > The sort will be spilling to disk, which will grind your I/O to a halt.\n >\n >> work_mem = 16384 # in Kb\n >\n > Try upping this. You should be able to issue \"set work_mem = 100000\"\n > before running your query IIRC. That should let PG do its sorting in\n > larger chunks.\n >\n > Also, if your most common access pattern is ordered via tlid look into\n > clustering the table on that.\n\n\n\nRichard Huxton wrote:\n> Charlie Savage wrote:\n>> Hi everyone,\n>>\n>> I have a question about the performance of sort.\n> \n>> Note it takes over 10 times longer to do the sort than the full \n>> sequential scan.\n>>\n>> Should I expect results like this? I realize that the computer is \n>> quite low-end and is very IO bound for this query, but I'm still \n>> surprised that the sort operation takes so long.\n> \n> The sort will be spilling to disk, which will grind your I/O to a halt.\n> \n>> work_mem = 16384 # in Kb\n> \n> Try upping this. You should be able to issue \"set work_mem = 100000\" \n> before running your query IIRC. That should let PG do its sorting in \n> larger chunks.\n> \n> Also, if your most common access pattern is ordered via tlid look into \n> clustering the table on that.\n",
"msg_date": "Tue, 08 Nov 2005 15:06:04 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "Charlie Savage <[email protected]> writes:\n> Thus the time decreased from 8486 seconds to 5279 seconds - which is a \n> nice improvement. However, that still leaves postgresql about 9 times \n> slower.\n\nBTW, what data type are you sorting, exactly? If it's a string type,\nwhat is your LC_COLLATE setting?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Nov 2005 17:26:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables "
},
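For completeness, both things Tom asks about can be checked directly from psql; a sketch:

    SHOW lc_collate;

    SELECT atttypid::regtype
    FROM pg_attribute
    WHERE attrelid = 'completechain'::regclass
      AND attname = 'tlid';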
{
"msg_contents": "Its an int4.\n\nCharlie\n\nTom Lane wrote:\n> Charlie Savage <[email protected]> writes:\n>> Thus the time decreased from 8486 seconds to 5279 seconds - which is a \n>> nice improvement. However, that still leaves postgresql about 9 times \n>> slower.\n> \n> BTW, what data type are you sorting, exactly? If it's a string type,\n> what is your LC_COLLATE setting?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n",
"msg_date": "Tue, 08 Nov 2005 16:47:10 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "I'd set up a trigger to maintain summary tables perhaps...\n\nChris\n\n\nCharlie Savage wrote:\n> Thanks everyone for the feedback.\n> \n> I tried increasing work_mem:\n> \n> set work_mem to 300000;\n> \n> select tlid, min(ogc_fid)\n> from completechain\n> group by tld;\n> \n> The results are:\n> \n> \"GroupAggregate (cost=9041602.80..10003036.88 rows=48071704 width=8)\n> (actual time=4371749.523..5106162.256 rows=47599910 loops=1)\"\n> \" -> Sort (cost=9041602.80..9161782.06 rows=48071704 width=8) (actual\n> time=4371690.894..4758660.433 rows=48199165 loops=1)\"\n> \" Sort Key: tlid\"\n> \" -> Seq Scan on completechain (cost=0.00..2228584.04\n> rows=48071704 width=8) (actual time=49.518..805234.970 rows=48199165\n> loops=1)\"\n> \"Total runtime: 5279988.127 ms\"\n> \n> Thus the time decreased from 8486 seconds to 5279 seconds - which is a \n> nice improvement. However, that still leaves postgresql about 9 times \n> slower.\n> \n> I tried increasing work_mem up to 500000, but at that point the machine \n> started using its swap partition and performance degraded back to the \n> original values.\n> \n> Charlie\n> \n> \n> Richard Huxton wrote:\n> > Charlie Savage wrote:\n> >> Hi everyone,\n> >>\n> >> I have a question about the performance of sort.\n> >\n> >> Note it takes over 10 times longer to do the sort than the full\n> >> sequential scan.\n> >>\n> >> Should I expect results like this? I realize that the computer is\n> >> quite low-end and is very IO bound for this query, but I'm still\n> >> surprised that the sort operation takes so long.\n> >\n> > The sort will be spilling to disk, which will grind your I/O to a halt.\n> >\n> >> work_mem = 16384 # in Kb\n> >\n> > Try upping this. You should be able to issue \"set work_mem = 100000\"\n> > before running your query IIRC. That should let PG do its sorting in\n> > larger chunks.\n> >\n> > Also, if your most common access pattern is ordered via tlid look into\n> > clustering the table on that.\n> \n> \n> \n> Richard Huxton wrote:\n> \n>> Charlie Savage wrote:\n>>\n>>> Hi everyone,\n>>>\n>>> I have a question about the performance of sort.\n>>\n>>\n>>> Note it takes over 10 times longer to do the sort than the full \n>>> sequential scan.\n>>>\n>>> Should I expect results like this? I realize that the computer is \n>>> quite low-end and is very IO bound for this query, but I'm still \n>>> surprised that the sort operation takes so long.\n>>\n>>\n>> The sort will be spilling to disk, which will grind your I/O to a halt.\n>>\n>>> work_mem = 16384 # in Kb\n>>\n>>\n>> Try upping this. You should be able to issue \"set work_mem = 100000\" \n>> before running your query IIRC. That should let PG do its sorting in \n>> larger chunks.\n>>\n>> Also, if your most common access pattern is ordered via tlid look into \n>> clustering the table on that.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Wed, 09 Nov 2005 09:49:39 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "On Tue, 2005-11-08 at 00:05 -0700, Charlie Savage wrote:\n\n> Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with \n\n> I want to extract data out of the file, with the most important values \n> being stored in a column called tlid. The tlid field is an integer, and \n> the values are 98% unique. There is a second column called ogc_fid \n> which is unique (it is a serial field). I need to extract out unique \n> TLID's (doesn't matter which duplicate I get rid of). To do this I am \n> running this query:\n> \n> SELECT tlid, min(ogc_fid)\n> FROM completechain\n> GROUP BY tlid;\n> \n> The results from explain analyze are:\n> \n> \"GroupAggregate (cost=10400373.80..11361807.88 rows=48071704 width=8) \n> (actual time=7311682.715..8315746.835 rows=47599910 loops=1)\"\n> \" -> Sort (cost=10400373.80..10520553.06 rows=48071704 width=8) \n> (actual time=7311682.682..7972304.777 rows=48199165 loops=1)\"\n> \" Sort Key: tlid\"\n> \" -> Seq Scan on completechain (cost=0.00..2228584.04 \n> rows=48071704 width=8) (actual time=27.514..773245.046 rows=48199165 \n> loops=1)\"\n> \"Total runtime: 8486057.185 ms\"\n\n> Should I expect results like this? I realize that the computer is quite \n> low-end and is very IO bound for this query, but I'm still surprised \n> that the sort operation takes so long.\n> \n> Out of curiosity, I setup an Oracle database on the same machine with \n> the same data and ran the same query. Oracle was over an order of \n> magnitude faster. Looking at its query plan, it avoided the sort by \n> using \"HASH GROUP BY.\" Does such a construct exist in PostgreSQL (I see \n> only hash joins)?\n\nPostgreSQL can do HashAggregates as well as GroupAggregates, just like\nOracle. HashAggs avoid the sort phase, so would improve performance\nconsiderably. The difference in performance you are getting is because\nof the different plan used. Did you specifically do anything to Oracle\nto help it get that plan, or was it a pure out-of-the-box install (or\nmaybe even a \"set this up for Data Warehousing\" install)?\n\nTo get a HashAgg plan, you need to be able to fit all of the unique\nvalues in memory. That would be 98% of 48071704 rows, each 8+ bytes\nwide, giving a HashAgg memory sizing of over 375MB. You must allocate\nmemory of the next power of two above the level you want, so we would\nneed to allocate 512MB to work_mem before it would consider using a\nHashAgg.\n\nCan you let us know how high you have to set work_mem before an EXPLAIN\n(not EXPLAIN ANALYZE) chooses the HashAgg plan?\n\nPlease be aware that publishing Oracle performance results is against\nthe terms of their licence and we seek to be both fair and legitimate,\nespecially within this public discussion forum.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 09 Nov 2005 09:35:55 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
},
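The experiment Simon asks for can be run per-session; a sketch, using the largest value the 8.1 server will accept for this setting (work_mem is in kB):

    SET work_mem = 2097151;

    EXPLAIN
    SELECT tlid, min(ogc_fid)
    FROM completechain
    GROUP BY tlid;
    -- look for "HashAggregate" rather than "GroupAggregate" over a Sort

    RESET work_mem;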
{
"msg_contents": "Hi Simon,\n\nThanks for the response Simon.\n\n> PostgreSQL can do HashAggregates as well as GroupAggregates, just like\n> Oracle. HashAggs avoid the sort phase, so would improve performance\n> considerably. The difference in performance you are getting is because\n> of the different plan used. Did you specifically do anything to Oracle\n> to help it get that plan, or was it a pure out-of-the-box install (or\n> maybe even a \"set this up for Data Warehousing\" install)?\n\nIt was an out-of-the-box plan with the standard database install option \n(wasn't a Data Warehousing install).\n\n> Can you let us know how high you have to set work_mem before an EXPLAIN\n> (not EXPLAIN ANALYZE) chooses the HashAgg plan?\n\nThe planner picked a HashAggregate only when I set work_mem to 2097151 - \nwhich I gather is the maximum allowed value according to a message \nreturned from the server.\n\n\n> Please be aware that publishing Oracle performance results is against\n> the terms of their licence and we seek to be both fair and legitimate,\n> especially within this public discussion forum.\n\nSorry, I didn't realize - I'll be more vague next time.\n\nCharlie\n",
"msg_date": "Wed, 09 Nov 2005 10:13:46 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
}
]
[
{
"msg_contents": "I have run into this type of query problem as well. I solved it in my\napplication by the following type of query.\n\nSELECT tlid\nFROM completechain AS o\nWHERE not exists ( \n\tSELECT 1\n\tFROM completechain\n\tWHERE tlid=o.tlid and ogc_fid!=o.ogc_fid\n);\n\nAssumes of course that you have an index on tlid.\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Charlie Savage\n> Sent: Tuesday, November 08, 2005 2:05 AM\n> To: [email protected]\n> Subject: [PERFORM] Sort performance on large tables\n> \n> Hi everyone,\n> \n> I have a question about the performance of sort.\n> \n> Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 \n> RC 1 with PostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate \n> 400GB drive. The IDE drive has the OS and the WAL files, the \n> SATA drive the database. \n> From hdparm the max IO for the IDE drive is about 50Mb/s and \n> the SATA drive is about 65Mb/s. Thus a very low-end machine \n> - but it used just for development (i.e., it is not a \n> production machine) and the only thing it does is run a \n> PostgresSQL database.\n> \n> I have a staging table called completechain that holds US \n> tiger data (i.e., streets and addresses for the US). The \n> table is approximately 18GB. Its big because there is a lot \n> of data, but also because the table is not normalized (it \n> comes that way).\n> \n> I want to extract data out of the file, with the most \n> important values being stored in a column called tlid. The \n> tlid field is an integer, and the values are 98% unique. \n> There is a second column called ogc_fid which is unique (it \n> is a serial field). I need to extract out unique TLID's \n> (doesn't matter which duplicate I get rid of). To do this I \n> am running this query:\n> \n> SELECT tlid, min(ogc_fid)\n> FROM completechain\n> GROUP BY tlid;\n> \n> The results from explain analyze are:\n> \n> \"GroupAggregate (cost=10400373.80..11361807.88 rows=48071704 \n> width=8) (actual time=7311682.715..8315746.835 rows=47599910 loops=1)\"\n> \" -> Sort (cost=10400373.80..10520553.06 rows=48071704 \n> width=8) (actual time=7311682.682..7972304.777 rows=48199165 loops=1)\"\n> \" Sort Key: tlid\"\n> \" -> Seq Scan on completechain (cost=0.00..2228584.04 \n> rows=48071704 width=8) (actual time=27.514..773245.046 \n> rows=48199165 loops=1)\"\n> \"Total runtime: 8486057.185 ms\"\n> \t\n> Doing a similar query produces the same results:\n> \n> SELECT DISTINCT ON (tlid), tlid, ogc_fid FROM completechain;\n> \n> Note it takes over 10 times longer to do the sort than the \n> full sequential scan.\n> \n> Should I expect results like this? I realize that the \n> computer is quite low-end and is very IO bound for this \n> query, but I'm still surprised that the sort operation takes so long.\n> \n> Out of curiosity, I setup an Oracle database on the same \n> machine with the same data and ran the same query. Oracle \n> was over an order of magnitude faster. Looking at its query \n> plan, it avoided the sort by using \"HASH GROUP BY.\" Does \n> such a construct exist in PostgreSQL (I see only hash joins)?\n> \n> Also as an experiment I forced oracle to do a sort by running \n> this query:\n> \n> SELECT tlid, min(ogc_fid)\n> FROM completechain\n> GROUP BY tlid\n> ORDER BY tlid;\n> \n> Even with this, it was more than a magnitude faster than Postgresql. 
\n> Which makes me think I have somehow misconfigured postgresql \n> (see the relevant parts of postgresql.conf below).\n> \n> Any idea/help appreciated.\n> \n> Thanks,\n> \n> Charlie\n> \n> \n> -------------------------------\n> \n> #-------------------------------------------------------------\n> --------------\n> # RESOURCE USAGE (except WAL)\n> #-------------------------------------------------------------\n> --------------\n> \n> shared_buffers = 40000 # 40000 buffers * 8192 \n> bytes/buffer = 327,680,000 bytes\n> #shared_buffers = 1000\t\t\t# min 16 or \n> max_connections*2, 8KB each\n> \n> temp_buffers = 5000\n> #temp_buffers = 1000\t\t\t# min 100, 8KB each\n> #max_prepared_transactions = 5\t\t# can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes \n> of shared memory # per transaction slot, plus lock space (see \n> max_locks_per_transaction).\n> \n> work_mem = 16384 # in Kb\n> #work_mem = 1024\t\t\t# min 64, size in KB\n> \n> maintenance_work_mem = 262144 # in kb\n> #maintenance_work_mem = 16384\t\t# min 1024, size in KB\n> #max_stack_depth = 2048\t\t\t# min 100, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 60000\t\n> #max_fsm_pages = 20000\t\t\t# min \n> max_fsm_relations*16, 6 bytes each\n> \n> #max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n> \n> # - Kernel Resource Usage -\n> \n> #max_files_per_process = 1000\t\t# min 25\n> #preload_libraries = ''\n> \n> # - Cost-Based Vacuum Delay -\n> \n> #vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n> #vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n> #vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n> #vacuum_cost_limit = 200\t\t# 0-10000 credits\n> \n> # - Background writer -\n> \n> #bgwriter_delay = 200\t\t\t# 10-10000 milliseconds \n> between rounds\n> #bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers \n> scanned/round\n> #bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max \n> written/round\n> #bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers \n> scanned/round\n> #bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max \n> written/round\n> \n> \n> #-------------------------------------------------------------\n> --------------\n> # WRITE AHEAD LOG\n> #-------------------------------------------------------------\n> --------------\n> \n> # - Settings -\n> \n> fsync = on\t\t\t\t# turns forced \n> synchronization on or off\n> #wal_sync_method = fsync\t\t# the default is the \n> first option\n> \t\t\t\t\t# supported by the \n> operating system:\n> \t\t\t\t\t# open_datasync\n> \t\t\t\t\t# fdatasync\n> \t\t\t\t\t# fsync\n> \t\t\t\t\t# fsync_writethrough\n> \t\t\t\t\t# open_sync\n> #full_page_writes = on\t\t\t# recover from \n> partial page writes\n> \n> wal_buffers = 128\n> #wal_buffers = 8\t\t\t# min 4, 8KB each\n> \n> #commit_delay = 0\t\t\t# range 0-100000, in \n> microseconds\n> #commit_siblings = 5\t\t\t# range 1-1000\n> \n> # - Checkpoints -\n> \n> checkpoint_segments = 256 # 256 * 16Mb = \n> 4,294,967,296 bytes\n> checkpoint_timeout = 1200\t\t# 1200 seconds (20 minutes)\n> checkpoint_warning = 30\t\t\t# in seconds, 0 is off\n> \n> #checkpoint_segments = 3\t\t# in logfile segments, \n> min 1, 16MB each\n> #checkpoint_timeout = 300\t\t# range 30-3600, in seconds\n> #checkpoint_warning = 30\t\t# in seconds, 0 is off\n> \n> # - Archiving -\n> \n> #archive_command = ''\t\t\t# command to use to \n> archive a logfile\n> \t\t\t\t\t# segment\n> \n> \n> #-------------------------------------------------------------\n> 
--------------\n> # QUERY TUNING\n> #-------------------------------------------------------------\n> --------------\n> \n> # - Planner Method Configuration -\n> \n> #enable_bitmapscan = on\n> #enable_hashagg = on\n> #enable_hashjoin = on\n> #enable_indexscan = on\n> #enable_mergejoin = on\n> #enable_nestloop = on\n> #enable_seqscan = on\n> #enable_sort = on\n> #enable_tidscan = on\n> \n> # - Planner Cost Constants -\n> \n> effective_cache_size = 80000\t\t# 80000 * 8192 = \n> 655,360,000 bytes\n> #effective_cache_size = 1000\t\t# typically 8KB each\n> \n> random_page_cost = 2.5\t\t\t# units are one \n> sequential page fetch\n> #random_page_cost = 4\t\t\t# units are one \n> sequential page fetch\n> \t\t\t\t\t# cost\n> #cpu_tuple_cost = 0.01\t\t\t# (same)\n> #cpu_index_tuple_cost = 0.001\t\t# (same)\n> #cpu_operator_cost = 0.0025\t\t# (same)\n> \n> # - Genetic Query Optimizer -\n> \n> #geqo = on\n> #geqo_threshold = 12\n> #geqo_effort = 5\t\t\t# range 1-10\n> #geqo_pool_size = 0\t\t\t# selects default based \n> on effort\n> #geqo_generations = 0\t\t\t# selects default based \n> on effort\n> #geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n> \n> # - Other Planner Options -\n> \n> default_statistics_target = 100\t\t# range 1-1000\n> #default_statistics_target = 10\t\t# range 1-1000\n> #constraint_exclusion = off\n> #from_collapse_limit = 8\n> #join_collapse_limit = 8\t\t# 1 disables collapsing \n> of explicit\n> \t\t\t\t\t# JOINs\n> \n> \n> #-------------------------------------------------------------\n> --------------\n> #-------------------------------------------------------------\n> --------------\n> # RUNTIME STATISTICS\n> #-------------------------------------------------------------\n> --------------\n> \n> # - Statistics Monitoring -\n> \n> #log_parser_stats = off\n> #log_planner_stats = off\n> #log_executor_stats = off\n> #log_statement_stats = off\n> \n> # - Query/Index Statistics Collector -\n> \n> stats_start_collector = on\n> stats_command_string = on\n> stats_block_level = on\n> stats_row_level = on\n> \n> #stats_start_collector = on\n> #stats_command_string = off\n> #stats_block_level = off\n> #stats_row_level = off\n> #stats_reset_on_server_start = off\n> \n> \n> #-------------------------------------------------------------\n> --------------\n> # AUTOVACUUM PARAMETERS\n> #-------------------------------------------------------------\n> --------------\n> \n> autovacuum = true\n> autovacuum_naptime = 600\n> \n> #autovacuum = false\t\t\t# enable autovacuum subprocess?\n> #autovacuum_naptime = 60\t\t# time between \n> autovacuum runs, in secs\n> #autovacuum_vacuum_threshold = 1000\t# min # of tuple updates before\n> \t\t\t\t\t# vacuum\n> #autovacuum_analyze_threshold = 500\t# min # of tuple updates before\n> \t\t\t\t\t# analyze\n> #autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before\n> \t\t\t\t\t# vacuum\n> #autovacuum_analyze_scale_factor = 0.2\t# fraction of \n> rel size before\n> \t\t\t\t\t# analyze\n> #autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for\n> \t\t\t\t\t# autovac, -1 means use\n> \t\t\t\t\t# vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n> \t\t\t\t\t# autovac, -1 means use\n> \t\t\t\t\t# vacuum_cost_\n> \n> \n> ----------------------\n> \n> CREATE TABLE tiger.completechain\n> (\n> ogc_fid int4 NOT NULL DEFAULT\n> nextval('completechain_ogc_fid_seq'::regclass),\n> module varchar(8) NOT NULL,\n> tlid int4 NOT NULL,\n> side1 int4,\n> source varchar(1) NOT NULL,\n> fedirp varchar(2),\n> fename 
varchar(30),\n> fetype varchar(4),\n> fedirs varchar(2),\n> cfcc varchar(3) NOT NULL,\n> fraddl varchar(11),\n> toaddl varchar(11),\n> fraddr varchar(11),\n> toaddr varchar(11),\n> friaddl varchar(1),\n> toiaddl varchar(1),\n> friaddr varchar(1),\n> toiaddr varchar(1),\n> zipl int4,\n> zipr int4,\n> aianhhfpl int4,\n> aianhhfpr int4,\n> aihhtlil varchar(1),\n> aihhtlir varchar(1),\n> census1 varchar(1),\n> census2 varchar(1),\n> statel int4,\n> stater int4,\n> countyl int4,\n> countyr int4,\n> cousubl int4,\n> cousubr int4,\n> submcdl int4,\n> submcdr int4,\n> placel int4,\n> placer int4,\n> tractl int4,\n> tractr int4,\n> blockl int4,\n> blockr int4,\n> wkb_geometry public.geometry NOT NULL,\n> CONSTRAINT enforce_dims_wkb_geometry CHECK \n> (ndims(wkb_geometry) = 2),\n> CONSTRAINT enforce_geotype_wkb_geometry CHECK\n> (geometrytype(wkb_geometry) = 'LINESTRING'::text OR \n> wkb_geometry IS NULL),\n> CONSTRAINT enforce_srid_wkb_geometry CHECK \n> (srid(wkb_geometry) = 4269)\n> )\n> WITHOUT OIDS;\n> ALTER TABLE tiger.completechain OWNER TO postgres;\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n",
"msg_date": "Tue, 8 Nov 2005 08:19:44 -0500",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
},
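A small variation on this technique keeps exactly one row per tlid (the one with the smallest ogc_fid) instead of only the tlids that have no duplicates at all; an untested sketch that leans on the same tlid index:

    CREATE INDEX idx_completechain_tlid ON completechain (tlid);  -- if not already present

    SELECT o.tlid, o.ogc_fid
    FROM completechain AS o
    WHERE NOT EXISTS (
        SELECT 1
        FROM completechain
        WHERE tlid = o.tlid
          AND ogc_fid < o.ogc_fid
    );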
{
"msg_contents": "Very interesting technique. It doesn't actually do quite what I want \nsince it returns all rows that do not have duplicates and not a complete \nlist of unique tlid values. But I could massage it to do what I want.\n\nAnyway, the timing:\n\n\n\"Seq Scan on completechain t1 (cost=0.00..218139733.60 rows=24099582 \nwidth=4) (actual time=25.890..3404650.452 rows=47000655 loops=1)\"\n\" Filter: (NOT (subplan))\"\n\" SubPlan\"\n\" -> Index Scan using idx_completechain_tlid on completechain t2 \n(cost=0.00..4.48 rows=1 width=0) (actual time=0.059..0.059 rows=0 \nloops=48199165)\"\n\" Index Cond: ($0 = tlid)\"\n\" Filter: ($1 <> ogc_fid)\"\n\"Total runtime: 3551423.162 ms\"\nMarc Morin wrote:\n\nSo a 60% reduction in time. Thanks again for the tip.\n\nCharlie\n\n\n> I have run into this type of query problem as well. I solved it in my\n> application by the following type of query.\n> \n> \t\n> Assumes of course that you have an index on tlid.\n> \n>> -----Original Message-----\n>> From: [email protected] \n>> [mailto:[email protected]] On Behalf Of \n>> Charlie Savage\n>> Sent: Tuesday, November 08, 2005 2:05 AM\n>> To: [email protected]\n>> Subject: [PERFORM] Sort performance on large tables\n>>\n>> Hi everyone,\n>>\n>> I have a question about the performance of sort.\n>>\n>> Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 \n>> RC 1 with PostGIS, 1 built-in 80 GB IDE drive, 1 SATA Seagate \n>> 400GB drive. The IDE drive has the OS and the WAL files, the \n>> SATA drive the database. \n>> From hdparm the max IO for the IDE drive is about 50Mb/s and \n>> the SATA drive is about 65Mb/s. Thus a very low-end machine \n>> - but it used just for development (i.e., it is not a \n>> production machine) and the only thing it does is run a \n>> PostgresSQL database.\n>>\n>> I have a staging table called completechain that holds US \n>> tiger data (i.e., streets and addresses for the US). The \n>> table is approximately 18GB. Its big because there is a lot \n>> of data, but also because the table is not normalized (it \n>> comes that way).\n>>\n>> I want to extract data out of the file, with the most \n>> important values being stored in a column called tlid. The \n>> tlid field is an integer, and the values are 98% unique. \n>> There is a second column called ogc_fid which is unique (it \n>> is a serial field). I need to extract out unique TLID's \n>> (doesn't matter which duplicate I get rid of). To do this I \n>> am running this query:\n>>\n>> SELECT tlid, min(ogc_fid)\n>> FROM completechain\n>> GROUP BY tlid;\n>>\n>> The results from explain analyze are:\n>>\n>> \"GroupAggregate (cost=10400373.80..11361807.88 rows=48071704 \n>> width=8) (actual time=7311682.715..8315746.835 rows=47599910 loops=1)\"\n>> \" -> Sort (cost=10400373.80..10520553.06 rows=48071704 \n>> width=8) (actual time=7311682.682..7972304.777 rows=48199165 loops=1)\"\n>> \" Sort Key: tlid\"\n>> \" -> Seq Scan on completechain (cost=0.00..2228584.04 \n>> rows=48071704 width=8) (actual time=27.514..773245.046 \n>> rows=48199165 loops=1)\"\n>> \"Total runtime: 8486057.185 ms\"\n>> \t\n>> Doing a similar query produces the same results:\n>>\n>> SELECT DISTINCT ON (tlid), tlid, ogc_fid FROM completechain;\n>>\n>> Note it takes over 10 times longer to do the sort than the \n>> full sequential scan.\n>>\n>> Should I expect results like this? 
I realize that the \n>> computer is quite low-end and is very IO bound for this \n>> query, but I'm still surprised that the sort operation takes so long.\n>>\n>> Out of curiosity, I setup an Oracle database on the same \n>> machine with the same data and ran the same query. Oracle \n>> was over an order of magnitude faster. Looking at its query \n>> plan, it avoided the sort by using \"HASH GROUP BY.\" Does \n>> such a construct exist in PostgreSQL (I see only hash joins)?\n>>\n>> Also as an experiment I forced oracle to do a sort by running \n>> this query:\n>>\n>> SELECT tlid, min(ogc_fid)\n>> FROM completechain\n>> GROUP BY tlid\n>> ORDER BY tlid;\n>>\n>> Even with this, it was more than a magnitude faster than Postgresql. \n>> Which makes me think I have somehow misconfigured postgresql \n>> (see the relevant parts of postgresql.conf below).\n>>\n>> Any idea/help appreciated.\n>>\n>> Thanks,\n>>\n>> Charlie\n>>\n>>\n>> -------------------------------\n>>\n>> #-------------------------------------------------------------\n>> --------------\n>> # RESOURCE USAGE (except WAL)\n>> #-------------------------------------------------------------\n>> --------------\n>>\n>> shared_buffers = 40000 # 40000 buffers * 8192 \n>> bytes/buffer = 327,680,000 bytes\n>> #shared_buffers = 1000\t\t\t# min 16 or \n>> max_connections*2, 8KB each\n>>\n>> temp_buffers = 5000\n>> #temp_buffers = 1000\t\t\t# min 100, 8KB each\n>> #max_prepared_transactions = 5\t\t# can be 0 or more\n>> # note: increasing max_prepared_transactions costs ~600 bytes \n>> of shared memory # per transaction slot, plus lock space (see \n>> max_locks_per_transaction).\n>>\n>> work_mem = 16384 # in Kb\n>> #work_mem = 1024\t\t\t# min 64, size in KB\n>>\n>> maintenance_work_mem = 262144 # in kb\n>> #maintenance_work_mem = 16384\t\t# min 1024, size in KB\n>> #max_stack_depth = 2048\t\t\t# min 100, size in KB\n>>\n>> # - Free Space Map -\n>>\n>> max_fsm_pages = 60000\t\n>> #max_fsm_pages = 20000\t\t\t# min \n>> max_fsm_relations*16, 6 bytes each\n>>\n>> #max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n>>\n>> # - Kernel Resource Usage -\n>>\n>> #max_files_per_process = 1000\t\t# min 25\n>> #preload_libraries = ''\n>>\n>> # - Cost-Based Vacuum Delay -\n>>\n>> #vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n>> #vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n>> #vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n>> #vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n>> #vacuum_cost_limit = 200\t\t# 0-10000 credits\n>>\n>> # - Background writer -\n>>\n>> #bgwriter_delay = 200\t\t\t# 10-10000 milliseconds \n>> between rounds\n>> #bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers \n>> scanned/round\n>> #bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max \n>> written/round\n>> #bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers \n>> scanned/round\n>> #bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max \n>> written/round\n>>\n>>\n>> #-------------------------------------------------------------\n>> --------------\n>> # WRITE AHEAD LOG\n>> #-------------------------------------------------------------\n>> --------------\n>>\n>> # - Settings -\n>>\n>> fsync = on\t\t\t\t# turns forced \n>> synchronization on or off\n>> #wal_sync_method = fsync\t\t# the default is the \n>> first option\n>> \t\t\t\t\t# supported by the \n>> operating system:\n>> \t\t\t\t\t# open_datasync\n>> \t\t\t\t\t# fdatasync\n>> \t\t\t\t\t# fsync\n>> \t\t\t\t\t# fsync_writethrough\n>> \t\t\t\t\t# open_sync\n>> #full_page_writes = on\t\t\t# recover from \n>> partial 
page writes\n>>\n>> wal_buffers = 128\n>> #wal_buffers = 8\t\t\t# min 4, 8KB each\n>>\n>> #commit_delay = 0\t\t\t# range 0-100000, in \n>> microseconds\n>> #commit_siblings = 5\t\t\t# range 1-1000\n>>\n>> # - Checkpoints -\n>>\n>> checkpoint_segments = 256 # 256 * 16Mb = \n>> 4,294,967,296 bytes\n>> checkpoint_timeout = 1200\t\t# 1200 seconds (20 minutes)\n>> checkpoint_warning = 30\t\t\t# in seconds, 0 is off\n>>\n>> #checkpoint_segments = 3\t\t# in logfile segments, \n>> min 1, 16MB each\n>> #checkpoint_timeout = 300\t\t# range 30-3600, in seconds\n>> #checkpoint_warning = 30\t\t# in seconds, 0 is off\n>>\n>> # - Archiving -\n>>\n>> #archive_command = ''\t\t\t# command to use to \n>> archive a logfile\n>> \t\t\t\t\t# segment\n>>\n>>\n>> #-------------------------------------------------------------\n>> --------------\n>> # QUERY TUNING\n>> #-------------------------------------------------------------\n>> --------------\n>>\n>> # - Planner Method Configuration -\n>>\n>> #enable_bitmapscan = on\n>> #enable_hashagg = on\n>> #enable_hashjoin = on\n>> #enable_indexscan = on\n>> #enable_mergejoin = on\n>> #enable_nestloop = on\n>> #enable_seqscan = on\n>> #enable_sort = on\n>> #enable_tidscan = on\n>>\n>> # - Planner Cost Constants -\n>>\n>> effective_cache_size = 80000\t\t# 80000 * 8192 = \n>> 655,360,000 bytes\n>> #effective_cache_size = 1000\t\t# typically 8KB each\n>>\n>> random_page_cost = 2.5\t\t\t# units are one \n>> sequential page fetch\n>> #random_page_cost = 4\t\t\t# units are one \n>> sequential page fetch\n>> \t\t\t\t\t# cost\n>> #cpu_tuple_cost = 0.01\t\t\t# (same)\n>> #cpu_index_tuple_cost = 0.001\t\t# (same)\n>> #cpu_operator_cost = 0.0025\t\t# (same)\n>>\n>> # - Genetic Query Optimizer -\n>>\n>> #geqo = on\n>> #geqo_threshold = 12\n>> #geqo_effort = 5\t\t\t# range 1-10\n>> #geqo_pool_size = 0\t\t\t# selects default based \n>> on effort\n>> #geqo_generations = 0\t\t\t# selects default based \n>> on effort\n>> #geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n>>\n>> # - Other Planner Options -\n>>\n>> default_statistics_target = 100\t\t# range 1-1000\n>> #default_statistics_target = 10\t\t# range 1-1000\n>> #constraint_exclusion = off\n>> #from_collapse_limit = 8\n>> #join_collapse_limit = 8\t\t# 1 disables collapsing \n>> of explicit\n>> \t\t\t\t\t# JOINs\n>>\n>>\n>> #-------------------------------------------------------------\n>> --------------\n>> #-------------------------------------------------------------\n>> --------------\n>> # RUNTIME STATISTICS\n>> #-------------------------------------------------------------\n>> --------------\n>>\n>> # - Statistics Monitoring -\n>>\n>> #log_parser_stats = off\n>> #log_planner_stats = off\n>> #log_executor_stats = off\n>> #log_statement_stats = off\n>>\n>> # - Query/Index Statistics Collector -\n>>\n>> stats_start_collector = on\n>> stats_command_string = on\n>> stats_block_level = on\n>> stats_row_level = on\n>>\n>> #stats_start_collector = on\n>> #stats_command_string = off\n>> #stats_block_level = off\n>> #stats_row_level = off\n>> #stats_reset_on_server_start = off\n>>\n>>\n>> #-------------------------------------------------------------\n>> --------------\n>> # AUTOVACUUM PARAMETERS\n>> #-------------------------------------------------------------\n>> --------------\n>>\n>> autovacuum = true\n>> autovacuum_naptime = 600\n>>\n>> #autovacuum = false\t\t\t# enable autovacuum subprocess?\n>> #autovacuum_naptime = 60\t\t# time between \n>> autovacuum runs, in secs\n>> #autovacuum_vacuum_threshold = 1000\t# min # of tuple updates 
before\n>> \t\t\t\t\t# vacuum\n>> #autovacuum_analyze_threshold = 500\t# min # of tuple updates before\n>> \t\t\t\t\t# analyze\n>> #autovacuum_vacuum_scale_factor = 0.4\t# fraction of rel size before\n>> \t\t\t\t\t# vacuum\n>> #autovacuum_analyze_scale_factor = 0.2\t# fraction of \n>> rel size before\n>> \t\t\t\t\t# analyze\n>> #autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for\n>> \t\t\t\t\t# autovac, -1 means use\n>> \t\t\t\t\t# vacuum_cost_delay\n>> #autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for\n>> \t\t\t\t\t# autovac, -1 means use\n>> \t\t\t\t\t# vacuum_cost_\n>>\n>>\n>> ----------------------\n>>\n>> CREATE TABLE tiger.completechain\n>> (\n>> ogc_fid int4 NOT NULL DEFAULT\n>> nextval('completechain_ogc_fid_seq'::regclass),\n>> module varchar(8) NOT NULL,\n>> tlid int4 NOT NULL,\n>> side1 int4,\n>> source varchar(1) NOT NULL,\n>> fedirp varchar(2),\n>> fename varchar(30),\n>> fetype varchar(4),\n>> fedirs varchar(2),\n>> cfcc varchar(3) NOT NULL,\n>> fraddl varchar(11),\n>> toaddl varchar(11),\n>> fraddr varchar(11),\n>> toaddr varchar(11),\n>> friaddl varchar(1),\n>> toiaddl varchar(1),\n>> friaddr varchar(1),\n>> toiaddr varchar(1),\n>> zipl int4,\n>> zipr int4,\n>> aianhhfpl int4,\n>> aianhhfpr int4,\n>> aihhtlil varchar(1),\n>> aihhtlir varchar(1),\n>> census1 varchar(1),\n>> census2 varchar(1),\n>> statel int4,\n>> stater int4,\n>> countyl int4,\n>> countyr int4,\n>> cousubl int4,\n>> cousubr int4,\n>> submcdl int4,\n>> submcdr int4,\n>> placel int4,\n>> placer int4,\n>> tractl int4,\n>> tractr int4,\n>> blockl int4,\n>> blockr int4,\n>> wkb_geometry public.geometry NOT NULL,\n>> CONSTRAINT enforce_dims_wkb_geometry CHECK \n>> (ndims(wkb_geometry) = 2),\n>> CONSTRAINT enforce_geotype_wkb_geometry CHECK\n>> (geometrytype(wkb_geometry) = 'LINESTRING'::text OR \n>> wkb_geometry IS NULL),\n>> CONSTRAINT enforce_srid_wkb_geometry CHECK \n>> (srid(wkb_geometry) = 4269)\n>> )\n>> WITHOUT OIDS;\n>> ALTER TABLE tiger.completechain OWNER TO postgres;\n>>\n>>\n>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Tue, 08 Nov 2005 22:49:43 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
}
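The query text Marc suggested was elided in the quoted message above, but the plan Charlie posted (a sequential scan with a NOT (subplan) filter that probes idx_completechain_tlid and compares ogc_fid) is consistent with a correlated NOT EXISTS of roughly the following shape. This is a reconstruction for illustration only, not necessarily the exact statement Marc used:

    SELECT t1.tlid, t1.ogc_fid
    FROM completechain t1
    WHERE NOT EXISTS (
        SELECT 1
        FROM completechain t2
        WHERE t2.tlid = t1.tlid          -- another row with the same tlid ...
          AND t2.ogc_fid <> t1.ogc_fid   -- ... that is not this row
    );

As Charlie notes, this returns only the rows whose tlid occurs exactly once, so duplicated tlids disappear entirely; getting one row per distinct tlid would still need the massaging he mentions, for example keeping the MIN(ogc_fid) row for each duplicated tlid.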
] |
[
{
"msg_contents": "Charlie, \n\n> Should I expect results like this? I realize that the \n> computer is quite low-end and is very IO bound for this \n> query, but I'm still surprised that the sort operation takes so long.\n\nIt's the sort performance of Postgres that's your problem.\n \n> Out of curiosity, I setup an Oracle database on the same \n> machine with the same data and ran the same query. Oracle \n> was over an order of magnitude faster. Looking at its query \n> plan, it avoided the sort by using \"HASH GROUP BY.\" Does \n> such a construct exist in PostgreSQL (I see only hash joins)?\n\nYes, hashaggregate does a similar thing. You can force the planner to\ndo it, don't remember off the top of my head but someone else on-list\nwill.\n \n> Also as an experiment I forced oracle to do a sort by running \n> this query:\n> \n> SELECT tlid, min(ogc_fid)\n> FROM completechain\n> GROUP BY tlid\n> ORDER BY tlid;\n> \n> Even with this, it was more than a magnitude faster than Postgresql. \n> Which makes me think I have somehow misconfigured postgresql \n> (see the relevant parts of postgresql.conf below).\n\nJust as we find with a similar comparison (with a \"popular commercial,\nproprietary database\" :-) Though some might suggest you increase\nwork_mem or other tuning suggestions to speed sorting, none work. In\nfact, we find that increasing work_mem actually slows sorting slightly.\n\nWe are commissioning an improved sorting routine for bizgres\n(www.bizgres.org) which will be contributed to the postgres main, but\nwon't come out at least until 8.2 comes out, possibly 12 mos. In the\nmeantime, you will be able to use the new routine in the bizgres version\nof postgres, possibly in the next couple of months.\n\nAlso - we (Greenplum) are about to announce the public beta of the\nbizgres MPP database, which will use all of your CPUs, and those of\nother nodes in a cluster, for sorting. We see a linear scaling of sort\nperformance, so you could add CPUs and/or hosts and scale out of the\nproblem.\n\nCheers,\n\n- Luke\n\n",
"msg_date": "Tue, 8 Nov 2005 11:21:39 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
},
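The settings involved are standard PostgreSQL knobs rather than anything specific to this thread, and the snippet below is only a session-local experiment to try, not a recommendation from the list: the planner considers a HashAggregate only when it estimates the hash table will fit in work_mem, and enable_sort can be switched off to penalize the sort-based GroupAggregate path while testing.

    SET enable_sort = off;     -- discourage the Sort + GroupAggregate plan
    SET work_mem = 262144;     -- KB on 8.1, i.e. roughly 256MB; illustrative value
    EXPLAIN
    SELECT tlid, min(ogc_fid)
    FROM completechain
    GROUP BY tlid;

If the plan still shows a GroupAggregate, the estimated hash table simply does not fit in the configured work_mem and the planner will not switch no matter what the enable_* settings say.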
{
"msg_contents": "On Tue, 8 Nov 2005, Luke Lonergan wrote:\n\n> > SELECT tlid, min(ogc_fid)\n> > FROM completechain\n> > GROUP BY tlid\n> > ORDER BY tlid;\n> >\n> > Even with this, it was more than a magnitude faster than Postgresql.\n> > Which makes me think I have somehow misconfigured postgresql\n> > (see the relevant parts of postgresql.conf below).\n>\n> Just as we find with a similar comparison (with a \"popular commercial,\n> proprietary database\" :-) Though some might suggest you increase\n> work_mem or other tuning suggestions to speed sorting, none work. In\n> fact, we find that increasing work_mem actually slows sorting slightly.\n\nI wish you'd qualify your statements, because I can demonstrably show that\nI can make sorts go faster on my machine at least by increasing work_mem\nunder some conditions.\n",
"msg_date": "Tue, 8 Nov 2005 09:38:21 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "Stephan,\n\nOn 11/8/05 9:38 AM, \"Stephan Szabo\" <[email protected]> wrote:\n\n>> >\n>> > Just as we find with a similar comparison (with a \"popular commercial,\n>> > proprietary database\" :-) Though some might suggest you increase\n>> > work_mem or other tuning suggestions to speed sorting, none work. In\n>> > fact, we find that increasing work_mem actually slows sorting slightly.\n> \n> I wish you'd qualify your statements, because I can demonstrably show that\n> I can make sorts go faster on my machine at least by increasing work_mem\n> under some conditions.\n> \nCool can you provide your test case please? I¹ll ask our folks to do the\nsame, but as I recall we did some pretty thorough testing and found that it\ndoesn¹t help. Moreover, the conclusion was that the current algorithm isn¹t\ndesigned to use memory effectively.\n\nRecognize also that we¹re looking for a factor of 10 or more improvement\nhere this is not a small increase that¹s needed.\n\n- Luke\n\n\n\n\nRe: [PERFORM] Sort performance on large tables\n\n\nStephan,\n\nOn 11/8/05 9:38 AM, \"Stephan Szabo\" <[email protected]> wrote:\n\n>\n> Just as we find with a similar comparison (with a \"popular commercial,\n> proprietary database\" :-) Though some might suggest you increase\n> work_mem or other tuning suggestions to speed sorting, none work. In\n> fact, we find that increasing work_mem actually slows sorting slightly.\n\nI wish you'd qualify your statements, because I can demonstrably show that\nI can make sorts go faster on my machine at least by increasing work_mem\nunder some conditions.\n\nCool – can you provide your test case please? I’ll ask our folks to do the same, but as I recall we did some pretty thorough testing and found that it doesn’t help. Moreover, the conclusion was that the current algorithm isn’t designed to use memory effectively.\n\nRecognize also that we’re looking for a factor of 10 or more improvement here – this is not a small increase that’s needed.\n\n- Luke",
"msg_date": "Tue, 08 Nov 2005 11:09:07 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "\nOn Tue, 8 Nov 2005, Luke Lonergan wrote:\n\n> Stephan,\n>\n> On 11/8/05 9:38 AM, \"Stephan Szabo\" <[email protected]> wrote:\n>\n> >> >\n> >> > Just as we find with a similar comparison (with a \"popular commercial,\n> >> > proprietary database\" :-) Though some might suggest you increase\n> >> > work_mem or other tuning suggestions to speed sorting, none work. In\n> >> > fact, we find that increasing work_mem actually slows sorting slightly.\n> >\n> > I wish you'd qualify your statements, because I can demonstrably show that\n> > I can make sorts go faster on my machine at least by increasing work_mem\n> > under some conditions.\n> >\n> Cool � can you provide your test case please?\n\nI probably should have added the wink smiley to make it obvious I was\ntalking about the simplest case, things that don't fit in work_mem at the\ncurrent level but for which it's easy to raise work_mem to cover. It's not\na big a gain as one might hope, but it does certainly drop again.\n\n> Recognize also that we�re looking for a factor of 10 or more improvement\n> here � this is not a small increase that�s needed.\n\nI agree that we definately need help on that regard. I do see the effect\nwhere raising work_mem lowers the performance up until that point. I just\nthink that it requires more care in the discussion than disregarding the\nsuggestions entirely especially since people are going to see this in the\narchives.\n",
"msg_date": "Tue, 8 Nov 2005 12:48:45 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
}
] |
[
{
"msg_contents": "Hi, all!\n\nI've been using postgresql for a long time now, but today I had some\nproblem I couldn't solve properly - hope here some more experienced\nusers have some hint's for me.\n\nFirst, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM\nand I have a table with about 220 columns and 20000 rows - and the first\nfive columns build a primary key (and a unique index).\n\nNow my problem: I need really many queries of rows using it's primary\nkey and fetching about five different columns but these are quite slow\n(about 10 queries per second and as I have some other databases which\ncan have about 300 queries per second I think this is slow):\n\ntransfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001\nAND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;\n\n Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)\n(actual time=0.161..0.167 rows=1 loops=1)\n Index Cond: (test_a = 9091150001::bigint)\n Filter: ((test_b = 1) AND (test_c = 2) AND (test_d = 0) AND (test_e 0))\n\nSo, what to do to speed things up? If I understand correctly this\noutput, the planner uses my index (test_idx is the same as test_pkey\ncreated along with the table), but only for the first column.\n\nAccidently I can't refactor these tables as they were given to me.\n\nThanks for any hint!\nJan",
"msg_date": "Wed, 09 Nov 2005 13:08:07 +0100",
"msg_from": "Jan Kesten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving performance on multicolumn query"
},
{
"msg_contents": "On Wed, Nov 09, 2005 at 01:08:07PM +0100, Jan Kesten wrote:\n> Now my problem: I need really many queries of rows using it's primary\n> key and fetching about five different columns but these are quite slow\n> (about 10 queries per second and as I have some other databases which\n> can have about 300 queries per second I think this is slow):\n> \n> transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001\n> AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;\n> \n> Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)\n> (actual time=0.161..0.167 rows=1 loops=1)\n> Index Cond: (test_a = 9091150001::bigint)\n> Filter: ((test_b = 1) AND (test_c = 2) AND (test_d = 0) AND (test_e 0))\n\nYou don't post your table definitions (please do), but it looks like test_b,\ntest_c, test_d and test_e might be bigints? If so, you may want to do\nexplicit \"AND test_b=1::bigint AND test_c=2::bigint\" etc. -- 7.4 doesn't\nfigure this out for you. (8.0 and higher does.)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 Nov 2005 13:44:29 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving performance on multicolumn query"
},
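Spelled out against the query from the original post, the suggestion amounts to something like this (same table and columns as above; only the literals gain explicit types):

    SELECT *
    FROM test
    WHERE test_a = 9091150001::bigint
      AND test_b = 1::bigint
      AND test_c = 2::bigint
      AND test_d = 0::bigint
      AND test_e = 0::bigint;

With every literal typed as bigint, a 7.4 planner can match all five columns of the composite index instead of using only test_a and filtering the rest.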
{
"msg_contents": "Jan Kesten wrote:\n> \n> First, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM\n> and I have a table with about 220 columns and 20000 rows - and the first\n> five columns build a primary key (and a unique index).\n\n> transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001\n> AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;\n> \n> Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)\n> (actual time=0.161..0.167 rows=1 loops=1)\n> Index Cond: (test_a = 9091150001::bigint)\n> Filter: ((test_b = 1) AND (test_c = 2) AND (test_d = 0) AND (test_e 0))\n\nThis says it's taking less than a millisecond - which is almost \ncertainly too fast to measure accurately anyway. Are you sure this query \nis the problem?\n\n> So, what to do to speed things up? If I understand correctly this\n> output, the planner uses my index (test_idx is the same as test_pkey\n> created along with the table), but only for the first column.\n\n1. Are all of test_a/b/c/d/e bigint rather than int?\n2. Have you tried explicitly casting your query parameters?\n...WHERE test_a=123::bigint AND test_b=456::bigint...\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 09 Nov 2005 12:54:45 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving performance on multicolumn query"
},
{
"msg_contents": "On Wed, Nov 09, 2005 at 01:08:07PM +0100, Jan Kesten wrote:\n> First, I'm using postgresql 7.4.7 on a 2GHz machine having 1.5GByte RAM\n> and I have a table with about 220 columns and 20000 rows - and the first\n> five columns build a primary key (and a unique index).\n\nI forgot this, but it should be mentioned: A primary key works as an\nunique index, so you don't need both.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 9 Nov 2005 14:00:17 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving performance on multicolumn query"
},
{
"msg_contents": "> transfer=> explain analyse SELECT * FROM test WHERE test_a=9091150001\n> AND test_b=1 AND test_c=2 AND test_d=0 AND test_e=0;\n> \n> Index Scan using test_idx on test (cost=0.00..50.27 rows=1 width=1891)\n> (actual time=0.161..0.167 rows=1 loops=1)\n> Index Cond: (test_a = 9091150001::bigint)\n> Filter: ((test_b = 1) AND (test_c = 2) AND (test_d = 0) AND (test_e 0))\n> \n> So, what to do to speed things up? If I understand correctly this\n> output, the planner uses my index (test_idx is the same as test_pkey\n> created along with the table), but only for the first column.\n\nHi Jan,\n\nIf you're using 7.4.x then the planner can't use the index for unquoted \nbigints. Try this:\n\nSELECT * FROM test WHERE test_a='9091150001' AND test_b='1' AND \ntest_c=''2 AND test_d='0' AND test_e='0';\n\nChris\n",
"msg_date": "Wed, 09 Nov 2005 21:19:33 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving performance on multicolumn query"
},
{
"msg_contents": "Hi all!\n\nFirst thanks to any answer by now :-)\n\n> You don't post your table definitions (please do), but it looks like\n> test_b, test_c, test_d and test_e might be bigints? If so, you may\n> want to do explicit \"AND test_b=1::bigint AND test_c=2::bigint\" etc.\n> -- 7.4 doesn't figure this out for you. (8.0 and higher does.)\n\nI didn't post table defintion, but you all are right, test_a to test_e\nare all bigint. I use JDBC to connect to this database and use a\nprepared statment for the queries and set all parameters with\npst.setLong() method. Perhaps this could be the problem? I'll try\n'normal' statements with typecasting, because as far as I can see, the\nquery is the problem (postgresql takes more than 98% cpu while running\nthese statements) or the overhead produced (but not the network, as it\nhas only 1-2% load). Quering other tables (not as big - both rows and\ncolumns are much less) run quite fast with the same code.\n\nSo, thanks again - I'll try and report :-) Can't be so slow, I have some\nself-build database with millions of rows and they run very fast - but\nthey don't use bigint ;-)\n\nCheers,\nJan\n",
"msg_date": "Wed, 09 Nov 2005 18:31:03 +0100",
"msg_from": "Jan Kesten <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving performance on multicolumn query"
}
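If the statements are issued through a JDBC PreparedStatement, one workaround sometimes used against 7.4-era servers is to put the casts into the SQL text itself so that the bound parameters are treated as int8 on the server side. The snippet below is a sketch under that assumption and was not tested in this thread; the values would still be supplied with setLong():

    SELECT *
    FROM test
    WHERE test_a = ?::int8
      AND test_b = ?::int8
      AND test_c = ?::int8
      AND test_d = ?::int8
      AND test_e = ?::int8;

Failing that, building the statement text with quoted or explicitly cast literals, as suggested earlier in the thread, gives the planner the same chance to use the full index.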
] |
[
{
"msg_contents": "...and on those notes, let me repeat my often stated advice that a DB server should be configured with as much RAM as is feasible. 4GB or more strongly recommended.\n\nI'll add that the HW you are using for a DB server should be able to hold _at least_ 4GB of RAM (note that modern _laptops_ can hold 2GB. Next year's are likely to be able to hold 4GB.). I can't casually find specs on the D3000, but if it can't be upgraded to at least 4GB, you should be looking for new DB server HW.\n\nAt this writing, 4 1GB DIMMs (4GB) should set you back ~$300 or less. 4 2GB DIMMs (8GB) should cost ~$600.\nAs of now, very few mainboards support 4GB DIMMs and I doubt the D3000 has such a mainboard. If you can use them, 4 4GB DIMMs (16GB) will currently set you back ~$1600-$2400.\n\nWhatever the way you do it, it's well worth the money to have at least 4GB of RAM in a DB server. It makes all kinds of problems just not exist.\n\nRon\n\n\n-----Original Message-----\nFrom: Simon Riggs <[email protected]>\nSent: Nov 9, 2005 4:35 AM\nTo: Charlie Savage <[email protected]>, Luke Lonergan <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Sort performance on large tables\n\nOn Tue, 2005-11-08 at 00:05 -0700, Charlie Savage wrote:\n\n> Setup: Dell Dimension 3000, Suse 10, 1GB ram, PostgreSQL 8.1 RC 1 with \n\n> I want to extract data out of the file, with the most important values \n> being stored in a column called tlid. The tlid field is an integer, and \n> the values are 98% unique. There is a second column called ogc_fid \n> which is unique (it is a serial field). I need to extract out unique \n> TLID's (doesn't matter which duplicate I get rid of). To do this I am \n> running this query:\n> \n> SELECT tlid, min(ogc_fid)\n> FROM completechain\n> GROUP BY tlid;\n> \n> The results from explain analyze are:\n> \n> \"GroupAggregate (cost=10400373.80..11361807.88 rows=48071704 width=8) \n> (actual time=7311682.715..8315746.835 rows=47599910 loops=1)\"\n> \" -> Sort (cost=10400373.80..10520553.06 rows=48071704 width=8) \n> (actual time=7311682.682..7972304.777 rows=48199165 loops=1)\"\n> \" Sort Key: tlid\"\n> \" -> Seq Scan on completechain (cost=0.00..2228584.04 \n> rows=48071704 width=8) (actual time=27.514..773245.046 rows=48199165 \n> loops=1)\"\n> \"Total runtime: 8486057.185 ms\"\n\n> Should I expect results like this? I realize that the computer is quite \n> low-end and is very IO bound for this query, but I'm still surprised \n> that the sort operation takes so long.\n> \n> Out of curiosity, I setup an Oracle database on the same machine with \n> the same data and ran the same query. Oracle was over an order of \n> magnitude faster. Looking at its query plan, it avoided the sort by \n> using \"HASH GROUP BY.\" Does such a construct exist in PostgreSQL (I see \n> only hash joins)?\n\nPostgreSQL can do HashAggregates as well as GroupAggregates, just like\nOracle. HashAggs avoid the sort phase, so would improve performance\nconsiderably. The difference in performance you are getting is because\nof the different plan used. Did you specifically do anything to Oracle\nto help it get that plan, or was it a pure out-of-the-box install (or\nmaybe even a \"set this up for Data Warehousing\" install)?\n\nTo get a HashAgg plan, you need to be able to fit all of the unique\nvalues in memory. That would be 98% of 48071704 rows, each 8+ bytes\nwide, giving a HashAgg memory sizing of over 375MB. 
You must allocate\nmemory of the next power of two above the level you want, so we would\nneed to allocate 512MB to work_mem before it would consider using a\nHashAgg.\n\nCan you let us know how high you have to set work_mem before an EXPLAIN\n(not EXPLAIN ANALYZE) chooses the HashAgg plan?\n\nPlease be aware that publishing Oracle performance results is against\nthe terms of their licence and we seek to be both fair and legitimate,\nespecially within this public discussion forum.\n\nBest Regards, Simon Riggs\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 9 Nov 2005 13:26:13 -0500 (EST)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
}
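Simon's arithmetic above puts the hash table at roughly 375MB, rounded up to a 512MB allocation, so the experiment he asks for would look something like this on 8.1, where work_mem is an integer number of kilobytes (the figure is his estimate, not a measured threshold):

    SET work_mem = 524288;   -- 512MB, per the estimate above
    EXPLAIN
    SELECT tlid, min(ogc_fid)
    FROM completechain
    GROUP BY tlid;
    -- a switch from Sort + GroupAggregate to HashAggregate marks the threshold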
] |
[
{
"msg_contents": "I noticed outer join is very very slow in postgresql as compared\nto Oracle.\n\nSELECT a.dln_code, a.company_name,\nto_char(a.certificate_date,'DD-MON-YYYY'),\nto_char(a.certificate_type_id, '99'),\nCOALESCE(b.certificate_type_description,'None') ,\na.description, a.blanket_single, a.certificate_status,\nCOALESCE(a.sun_legal_entity, 'None'),\nCOALESCE(a.other_entity_name, 'None'),\nCOALESCE(a.notes, 'None'),COALESCE(c.name, NULL),\nCOALESCE(to_char(a.created_date,'DD-MON-YYYY'), 'N/A'),\nCOALESCE(c.name, NULL),\nCOALESCE(to_char(a.updated_date,'DD-MON-YYYY'), 'N/A'),\nCOALESCE(e.name, NULL),\nCOALESCE(to_char(a.approved_date,'DD-MON-YYYY'), 'N/A')\n FROM ((((ecms_cert_headers a\n LEFT OUTER JOIN taxpack_user c ON (a.created_by = c.emp_no))\n LEFT OUTER JOIN taxpack_user d ON (a.updated_by = d.emp_no))\n LEFT OUTER JOIN taxpack_user e ON (a.approved_by = e.emp_no))\n INNER JOIN ecms_certificate_types b ON\n (a.certificate_type_id= b.certificate_type_id ))\n WHERE a.dln_code = '17319'\n\n\nThis query return only 1 record but take 25 second to execute in postgreSQL\nas compared to 1.3 second in Oracle. Any suggestion ? Below is explain output.\n\n\n Hash Join (cost=1666049.74..18486619.37 rows=157735046 width=874)\n Hash Cond: (\"outer\".certificate_type_id = \"inner\".certificate_type_id)\n -> Merge Right Join (cost=1666048.13..11324159.05 rows=643816513 width=826)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column16?\")\n -> Sort (cost=30776.19..31207.80 rows=172645 width=64)\n Sort Key: (e.emp_no)::text\n -> Seq Scan on taxpack_user e (cost=0.00..4898.45 rows=172645\nwidth=64)\n -> Sort (cost=1635271.94..1637136.51 rows=745827 width=811)\n Sort Key: (a.approved_by)::text\n -> Merge Left Join (cost=25230.45..36422.18 rows=745827 width=811)\n Merge Cond: (\"outer\".\"?column17?\" = \"inner\".\"?column2?\")\n -> Sort (cost=3117.35..3119.51 rows=864 width=844)\n Sort Key: (a.updated_by)::text\n -> Nested Loop Left Join (cost=0.00..3075.21\nrows=864 width=844)\n -> Index Scan using pk_ecms_cert_headers on\necms_cert_headers a (cost=0.00..6.01 rows=1 width=829)\n Index Cond: ((dln_code)::text =\n'17319'::text)\n -> Index Scan using ash_n1 on taxpack_user c\n(cost=0.00..3058.40 rows=864 width=64)\n Index Cond: ((\"outer\".created_by)::text =\n(c.emp_no)::text)\n -> Sort (cost=22113.10..22544.71 rows=172645 width=16)\n Sort Key: (d.emp_no)::text\n -> Seq Scan on taxpack_user d (cost=0.00..4898.45\nrows=172645 width=16)\n -> Hash (cost=1.49..1.49 rows=49 width=50)\n -> Seq Scan on ecms_certificate_types b (cost=0.00..1.49 rows=49\nwidth=50)\n(23 rows)\n\nThanks\nAshok\n\n",
"msg_date": "Wed, 09 Nov 2005 11:59:13 -0800",
"msg_from": "Ashok Agrawal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Outer Join performance in PostgreSQL"
},
{
"msg_contents": "Ashok Agrawal <[email protected]> writes:\n> I noticed outer join is very very slow in postgresql as compared\n> to Oracle.\n\nI think the three things the people best able to help you are going to\nask for are 1) what version of PostgreSQL, 2) what are the tables, and\nhow many rows in each, and 3) output from 'explain analyze' rather\nthan just 'explain'.\n\nThat said, I'm willing to take an amateurish stab at it even without\nthat.\n\nIn fact, I don't think the outer joins are the issue at all. I see\nthat you're forcing a right join from ecms_certificate_types to\necms_cert_headers. This seems to be causing postgresql to think it\nmust (unnecessarily) consider three quarters of a billion rows, which,\nif I'm reading right, seems to be producing the majority of the\nestimated cost:\n\n> Hash Join (cost=1666049.74..18486619.37 rows=157735046 width=874)\n> Hash Cond: (\"outer\".certificate_type_id = \"inner\".certificate_type_id)\n> -> Merge Right Join (cost=1666048.13..11324159.05 rows=643816513 width=826)\n\nIn fact, looking at the fact that you're doing a COALESCE on a column\nfrom b, it seems to me that doing a right join from ecms_cert_headers\nto ecms_certificate_types is just wrong. It seems to me that that\nshould be a left join as well.\n\nWith that in mind, I would rewrite the whole FROM clause as:\n\n FROM ecms_cert_headers a\nLEFT OUTER JOIN ecms_certificate_types b\n ON (a.certificate_type_id = b.certificate_type_id)\nLEFT OUTER JOIN taxpack_user c\n ON (a.created_by = c.emp_no)\nLEFT OUTER JOIN taxpack_user d\n ON (a.updated_by = d.emp_no)\nLEFT OUTER JOIN taxpack_user e\n ON (a.approved_by = e.emp_no)\n WHERE a.dln_code = '17319'\n\nIt seems to me that this more reflects the intent of the data that is\nbeing retrieved. I would also expect it to be a boatload faster.\n\nAssuming I've understood the intent correctly, I would guess that the\ndifference is the result of the Oracle planner being able to eliminate\nthe right join or something.\n\nMike\n",
"msg_date": "Wed, 09 Nov 2005 15:45:53 -0500",
"msg_from": "Michael Alan Dorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer Join performance in PostgreSQL"
},
{
"msg_contents": "On Wed, 9 Nov 2005, Ashok Agrawal wrote:\n\n> I noticed outer join is very very slow in postgresql as compared\n> to Oracle.\n>\n> SELECT a.dln_code, a.company_name,\n> to_char(a.certificate_date,'DD-MON-YYYY'),\n> to_char(a.certificate_type_id, '99'),\n> COALESCE(b.certificate_type_description,'None') ,\n> a.description, a.blanket_single, a.certificate_status,\n> COALESCE(a.sun_legal_entity, 'None'),\n> COALESCE(a.other_entity_name, 'None'),\n> COALESCE(a.notes, 'None'),COALESCE(c.name, NULL),\n> COALESCE(to_char(a.created_date,'DD-MON-YYYY'), 'N/A'),\n> COALESCE(c.name, NULL),\n> COALESCE(to_char(a.updated_date,'DD-MON-YYYY'), 'N/A'),\n> COALESCE(e.name, NULL),\n> COALESCE(to_char(a.approved_date,'DD-MON-YYYY'), 'N/A')\n> FROM ((((ecms_cert_headers a\n> LEFT OUTER JOIN taxpack_user c ON (a.created_by = c.emp_no))\n> LEFT OUTER JOIN taxpack_user d ON (a.updated_by = d.emp_no))\n> LEFT OUTER JOIN taxpack_user e ON (a.approved_by = e.emp_no))\n> INNER JOIN ecms_certificate_types b ON\n> (a.certificate_type_id= b.certificate_type_id ))\n> WHERE a.dln_code = '17319'\n\nI think in the above it's safe to do the inner join first, although\nPostgreSQL won't determine that currently and that could have something to\ndo with the difference in performance if Oracle did reorder the joins.\nIf you were to run the query doing the INNER JOIN first, does that give\nthe correct results and run more quickly in PostgreSQL? In either case,\nexplain analyze output would be handy to find the actual times taken by\nthe steps.\n",
"msg_date": "Wed, 9 Nov 2005 14:47:22 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Outer Join performance in PostgreSQL"
},
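Concretely, the reordering Stephan describes would look roughly like the following, keeping the same tables and join conditions and changing only the join order (and dropping the forced right-join direction, in line with Michael's earlier rewrite); the shortened select list is just for illustration:

    SELECT a.dln_code, a.company_name, b.certificate_type_description,
           c.name AS created_by_name, d.name AS updated_by_name,
           e.name AS approved_by_name
      FROM ecms_cert_headers a
     INNER JOIN ecms_certificate_types b
        ON a.certificate_type_id = b.certificate_type_id
      LEFT OUTER JOIN taxpack_user c ON a.created_by = c.emp_no
      LEFT OUTER JOIN taxpack_user d ON a.updated_by = d.emp_no
      LEFT OUTER JOIN taxpack_user e ON a.approved_by = e.emp_no
     WHERE a.dln_code = '17319';

Because the inner join condition only involves a and b, and the outer joins merely attach columns from taxpack_user, evaluating the inner join first should not change the result for this query, while it lets the planner reduce a to the single dln_code row before the three probes into taxpack_user.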
{
"msg_contents": "Hello Michael,\n\nHere is the information : I had executed explain analyze with modified\nFROM clause.\n\nOops forgot to mention the version earlier.\n\nUsing postgres 8.0.0 on Solaris 9.\n\nRows Count :\n\ncic=# select count(*) from taxpack_user;\n count\n--------\n 172645\n(1 row)\n\ncic=# select count(*) from ecms_certificate_types;\n count\n-------\n 10\n(1 row)\n\ncic=# select count(*) from ecms_cert_headers;\n count\n-------\n 17913\n(1 row)\n\nTable Information :\n\n Table \"ecms.ecms_certificate_types\"\n Column | Type | Modifiers\n------------------------------+-----------------------------+-----------\n certificate_type_id | smallint | not null\n certificate_type_description | character varying(60) |\n created_by | character varying(30) |\n created_date | timestamp without time zone |\n updated_by | character varying(30) |\n updated_date | timestamp without time zone |\nIndexes:\n \"sys_c003733\" PRIMARY KEY, btree (certificate_type_id)\n \"pk_ecms_certificate_types\" UNIQUE, btree (certificate_type_id)\n\n Table \"ecms.ecms_cert_headers\"\n Column | Type | Modifiers\n---------------------+-----------------------------+-----------\n dln_code | character varying(10) | not null\n sun_legal_entity | character varying(12) | not null\n other_entity_name | character varying(20) |\n company_name | character varying(80) | not null\n certificate_date | timestamp without time zone | not null\n certificate_type_id | smallint | not null\n description | character varying(80) | not null\n blanket_single | character(1) | not null\n notes | character varying(4000) |\n certificate_status | character(1) | not null\n approved_by | character varying(30) |\n approved_date | timestamp without time zone |\n created_by | character varying(30) |\n created_date | timestamp without time zone |\n updated_by | character varying(30) |\n updated_date | timestamp without time zone |\nIndexes:\n \"pk_ecms_cert_headers\" UNIQUE, btree (dln_code)\n \"ecms_cert_headers_idx1\" btree (certificate_type_id)\n \"ecms_cert_headers_idx2\" btree (company_name)\n \"ecms_cert_headers_idx3\" btree (description)\nForeign-key constraints:\n \"sys_c003754\" FOREIGN KEY (certificate_type_id) REFERENCES\necms_certificate_types(certificate_type_id)\n\n Table \"ecms.taxpack_user\"\n Column | Type | Modifiers\n------------+-----------------------+-----------\n emp_no | character varying(12) | not null\n name | character varying(60) | not null\n manager_id | character varying(12) |\n dept_no | character varying(12) |\n mailstop | character varying(12) |\n phone | character varying(60) |\n email | character varying(60) |\n active | character varying(3) | not null\n admin | smallint | not null\n super_user | smallint | not null\n\n\nMerge Right Join (cost=1757437.54..21072796.15 rows=643816513 width=874)\n(actual time=27800.250..27800.256 rows=1 loops=1)\n Merge Cond: (\"outer\".\"?column3?\" = \"inner\".\"?column17?\")\n -> Sort (cost=30776.19..31207.80 rows=172645 width=64) (actual\ntime=12229.482..12791.468 rows=172645 loops=1)\n Sort Key: (e.emp_no)::text\n -> Seq Scan on taxpack_user e (cost=0.00..4898.45 rows=172645\nwidth=64) (actual time=0.050..1901.218 rows=172645 loops=1)\n -> Sort (cost=1726661.35..1728525.92 rows=745827 width=859) (actual\ntime=12675.899..12675.901 rows=1 loops=1)\n Sort Key: (a.approved_by)::text\n -> Merge Left Join (cost=29219.87..40411.59 rows=745827 width=859)\n(actual time=12675.815..12675.830 rows=1 loops=1)\n Merge Cond: (\"outer\".\"?column18?\" = \"inner\".\"?column2?\")\n -> Sort 
(cost=7106.77..7108.93 rows=864 width=892) (actual\ntime=1441.644..1441.646 rows=1 loops=1)\n Sort Key: (a.updated_by)::text\n -> Nested Loop Left Join (cost=0.00..7064.62 rows=864\nwidth=892) (actual time=435.864..1441.465 rows=1 loops=1)\n Join Filter: ((\"outer\".created_by)::text =\n(\"inner\".emp_no)::text)\n -> Nested Loop Left Join (cost=0.00..8.11 rows=1\nwidth=877) (actual time=0.251..0.361 rows=1 loops=1)\n Join Filter: (\"outer\".certificate_type_id =\n\"inner\".certificate_type_id)\n -> Index Scan using pk_ecms_cert_headers on\necms_cert_headers a (cost=0.00..6.01 rows=1 width=829) (actual\ntime=0.113..0.136 rows=1 loops=1)\n Index Cond: ((dln_code)::text =\n'17319'::text)\n -> Seq Scan on ecms_certificate_types b\n(cost=0.00..1.49 rows=49 width=50) (actual time=0.018..0.059 rows=10 loops=1)\n -> Seq Scan on taxpack_user c (cost=0.00..4898.45\nrows=172645 width=64) (actual time=0.014..674.881 rows=172645 loops=1)\n -> Sort (cost=22113.10..22544.71 rows=172645 width=16) (actual\ntime=10689.742..10885.155 rows=71665 loops=1)\n Sort Key: (d.emp_no)::text\n -> Seq Scan on taxpack_user d (cost=0.00..4898.45\nrows=172645 width=16) (actual time=0.031..1791.036 rows=172645 loops=1)\n Total runtime: 27802.014 ms\n(23 rows)\n\n\n\nMichael Alan Dorman wrote On 11/09/05 12:45,:\n> Ashok Agrawal <[email protected]> writes:\n> \n>>I noticed outer join is very very slow in postgresql as compared\n>>to Oracle.\n> \n> \n> I think the three things the people best able to help you are going to\n> ask for are 1) what version of PostgreSQL, 2) what are the tables, and\n> how many rows in each, and 3) output from 'explain analyze' rather\n> than just 'explain'.\n> \n> That said, I'm willing to take an amateurish stab at it even without\n> that.\n> \n> In fact, I don't think the outer joins are the issue at all. I see\n> that you're forcing a right join from ecms_certificate_types to\n> ecms_cert_headers. This seems to be causing postgresql to think it\n> must (unnecessarily) consider three quarters of a billion rows, which,\n> if I'm reading right, seems to be producing the majority of the\n> estimated cost:\n> \n> \n>> Hash Join (cost=1666049.74..18486619.37 rows=157735046 width=874)\n>> Hash Cond: (\"outer\".certificate_type_id = \"inner\".certificate_type_id)\n>> -> Merge Right Join (cost=1666048.13..11324159.05 rows=643816513 width=826)\n> \n> \n> In fact, looking at the fact that you're doing a COALESCE on a column\n> from b, it seems to me that doing a right join from ecms_cert_headers\n> to ecms_certificate_types is just wrong. It seems to me that that\n> should be a left join as well.\n> \n> With that in mind, I would rewrite the whole FROM clause as:\n> \n> FROM ecms_cert_headers a\n> LEFT OUTER JOIN ecms_certificate_types b\n> ON (a.certificate_type_id = b.certificate_type_id)\n> LEFT OUTER JOIN taxpack_user c\n> ON (a.created_by = c.emp_no)\n> LEFT OUTER JOIN taxpack_user d\n> ON (a.updated_by = d.emp_no)\n> LEFT OUTER JOIN taxpack_user e\n> ON (a.approved_by = e.emp_no)\n> WHERE a.dln_code = '17319'\n> \n> It seems to me that this more reflects the intent of the data that is\n> being retrieved. 
I would also expect it to be a boatload faster.\n> \n> Assuming I've understood the intent correctly, I would guess that the\n> difference is the result of the Oracle planner being able to eliminate\n> the right join or something.\n> \n> Mike\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n",
"msg_date": "Thu, 10 Nov 2005 10:17:33 -0800",
"msg_from": "Ashok Agrawal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Outer Join performance in PostgreSQL"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've got PG 8.0 on Debian sarge set up ...\nI want to speed up performance on the system.\n\nThe system will run PG, Apache front-end on port 80 and Tomcat / Cocoon \nfor the webapp.\nThe webapp is not so heavily used, so we can give the max performance \nto the database.\nThe database has a lot of work to do, we upload files every day.\nThe current server has 8 databases of around 1 million records. This \nwill be more in the future.\nThere's only one main table, with some smaller tables. 95% of the \nrecords are in that one table.\nA lot of updates are done on that table, affecting 10-20% of the \nrecords.\n\nThe system has 1 gig of ram. I could give 512Mb to PG.\nFilesystem is ext2, with the -noatime parameter in fstab\n\nCould I get some suggestions in how to configure my buffers, wals, .... \n?\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Wed, 9 Nov 2005 21:11:18 +0100",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Some help on buffers and other performance tricks"
}
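No concrete numbers were posted in this thread, so the values below are purely an illustrative starting point for an 8.0 server with 1GB of RAM where roughly half can go to PostgreSQL; every figure is an assumption to be benchmarked against the real workload, not advice taken from the list:

    # postgresql.conf sketch (8.0 units: shared_buffers and effective_cache_size
    # are counted in 8KB pages, work_mem and maintenance_work_mem in KB)
    shared_buffers = 16384           # ~128MB of shared buffer cache
    work_mem = 8192                  # 8MB per sort/hash operation, per backend
    maintenance_work_mem = 65536     # 64MB for VACUUM and index builds
    effective_cache_size = 49152     # assume ~384MB of OS file cache
    checkpoint_segments = 16         # fewer, larger checkpoints for daily loads
    wal_buffers = 64

Given daily uploads and updates touching 10-20% of the main table, regular vacuuming and a large enough free space map matter at least as much as the buffer settings above.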
] |
[
{
"msg_contents": "\nHello all , i post this question here because i wasen't able to find\nanswer to my question elsewhere , i hope someone can answer.\n\n\nAbstract:\n\nThe function that can be found at the end of the e-mail emulate two thing.\n\nFirst it will fill a record set of result with needed column from a table and two \"empty result column\" a min and a max.\n\nThose two column are then filled by a second query on the same table that will do a min and a max\n\non an index idx_utctime.\n\nThe function loop for the first recordset and return a setof record that is casted by caller to the function.\n\n\nThe goald of this is to enabled the application that will receive the result set to minimise its\n\nwork by having to group internaly two matching rowset. We use to handle two resultset but i am looking\n\ntoward improving performances and at first glance it seem to speed up the process.\n\n\nQuestions:\n\n1. How could this be done in a single combinasion of SQL and view? \n\n2. In a case like that is plpgsql really givig significant overhead?\n\n3. Performance difference [I would need a working pure-SQL version to compare PLANNER and Explain results ]\n\nSTUFF:\n\n--TABLE && INDEX\n\n\nCREATE TABLE archive_event\n(\n inst int4 NOT NULL,\n cid int8 NOT NULL,\n src int8 NOT NULL,\n dst int8 NOT NULL,\n bid int8 NOT NULL,\n tid int4 NOT NULL,\n utctime int4 NOT NULL,\n CONSTRAINT ids_archives_event_pkey PRIMARY KEY (inst, cid),\n CONSTRAINT ids_archives_event_cid_index UNIQUE (cid)\n) \n\n--index\n\nCREATE INDEX idx_archive_utctime\n ON archive_event\n USING btree\n (utctime);\n\nCREATE INDEX idx_archive_src\n ON archive_event\n USING btree\n (src);\n\nCREATE INDEX idx_archive_bid_tid\n ON archive_event\n USING btree\n (tid, bid);\n\n\n\n\n--FUNCTION\nCREATE OR REPLACE FUNCTION console_get_source_rule_level_1()\n RETURNS SETOF RECORD AS\n'\nDECLARE\n\none_record record;\nr_record record;\n\nBEGIN\n\n\tFOR r_record IN SELECT count(cid) AS hits,src, bid, tid,NULL::int8 as min_time,NULL::int8 as max_time FROM archive_event WHERE inst=\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\') GROUP BY src, bid, tid LOOP\n\n\tSELECT INTO one_record MIN(utctime) as timestart,MAX(utctime) as timestop from archive_event where src =r_record.src AND bid =r_record.bid AND tid = r_record.tid AND inst =\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\');\n\n\tr_record.min_time := one_record.timestart;\n\tr_record.max_time := one_record.timestop;\n \n RETURN NEXT r_record;\n\nEND LOOP;\n\n RETURN;\n\nEND;\n'\n LANGUAGE 'plpgsql' VOLATILE;\nGRANT EXECUTE ON FUNCTION console_get_source_rule_level_1() TO console WITH GRANT OPTION;\n\n\n--FUNCTION CALLER\nSELECT * from get_source_rule_level_1() AS (hits int8,src int8,bid int8,tid int4,min_time int8,max_time int8)\n\n\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858 \n",
"msg_date": "Wed, 9 Nov 2005 15:43:18 -0500",
"msg_from": "\"Eric Lauzon\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "(View and SQL) VS plpgsql"
}
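For question 1, the two statements inside the function collapse naturally into a single aggregate query, because the MIN and MAX are computed over exactly the same filter and grouping keys as the counts. A sketch of the pure-SQL form, which could also sit behind a view with the WHERE clause applied by the caller:

    SELECT count(cid)   AS hits,
           src, bid, tid,
           MIN(utctime) AS min_time,
           MAX(utctime) AS max_time
    FROM archive_event
    WHERE inst = 3
      AND utctime BETWEEN 1114920000 AND 1131512399
    GROUP BY src, bid, tid;

This returns the same rows as the looped plpgsql version in one pass over the table, so the per-group second query and its repeated index probes disappear entirely.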
] |
[
{
"msg_contents": "0= Optimize your schema to be a tight as possible. Your goal is to give yourself the maximum chance that everything you want to work on is in RAM when you need it.\n1= Upgrade your RAM to as much as you can possibly strain to afford. 4GB at least. It's that important.\n2= If the _entire_ DB does not fit in RAM after upgrading your RAM, the next step is making sure your HD IO subsystem is adequate to your needs.\n3= Read the various pg tuning docs that are available and Do The Right Thing.\n4= If performance is still not acceptable, then it's time to drill down into what specific actions/queries are problems.\nIf you get to here and the entire DBMS is still not close to acceptable, your fundamental assumptions have to be re-examined.\n\nRon\n\n-----Original Message-----\nFrom: Yves Vindevogel <[email protected]>\nSent: Nov 9, 2005 3:11 PM\nTo: [email protected]\nSubject: [PERFORM] Some help on buffers and other performance tricks\n\nHi all,\n\nI've got PG 8.0 on Debian sarge set up ...\nI want to speed up performance on the system.\n\nThe system will run PG, Apache front-end on port 80 and Tomcat / Cocoon \nfor the webapp.\nThe webapp is not so heavily used, so we can give the max performance \nto the database.\nThe database has a lot of work to do, we upload files every day.\nThe current server has 8 databases of around 1 million records. This \nwill be more in the future.\nThere's only one main table, with some smaller tables. 95% of the \nrecords are in that one table.\nA lot of updates are done on that table, affecting 10-20% of the \nrecords.\n\nThe system has 1 gig of ram. I could give 512Mb to PG.\nFilesystem is ext2, with the -noatime parameter in fstab\n\nCould I get some suggestions in how to configure my buffers, wals, .... \n?\n\nMet vriendelijke groeten,\nBien � vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\n\n",
"msg_date": "Wed, 9 Nov 2005 16:24:49 -0500 (EST)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "Ron Peacetree wrote:\n> 0= Optimize your schema to be a tight as possible. Your goal is to give yourself the maximum chance that everything you want to work on is in RAM when you need it.\n> 1= Upgrade your RAM to as much as you can possibly strain to afford. 4GB at least. It's that important.\n> 2= If the _entire_ DB does not fit in RAM after upgrading your RAM, the next step is making sure your HD IO subsystem is adequate to your needs.\n> 3= Read the various pg tuning docs that are available and Do The Right Thing.\n> 4= If performance is still not acceptable, then it's time to drill down into what specific actions/queries are problems.\n> If you get to here and the entire DBMS is still not close to acceptable, your fundamental assumptions have to be re-examined.\n\nIMHO you should really be examining your queries _before_ you do any\ninvestment in hardware, because later those may prove unnecessary.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Wed, 9 Nov 2005 20:07:52 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "On Wed, 9 Nov 2005 20:07:52 -0300\nAlvaro Herrera <[email protected]> wrote:\n\n> IMHO you should really be examining your queries _before_ you do any\n> investment in hardware, because later those may prove unnecessary.\n\n It all really depends on what you're doing. For some of the systems\n I run, 4 GBs of RAM is *WAY* overkill, RAID 1+0 is overkill, etc. \n\n In general I would slightly change the \"order of operations\" from: \n\n 1) Buy tons of RAM \n 2) Buy lots of disk I/O \n 3) Tune your conf\n 4) Examine your queries \n\n to \n\n 1) Tune your conf\n 2) Spend a few minutes examining your queries \n 3) Buy as much RAM as you can afford\n 4) Buy as much disk I/O as you can \n 5) Do in depth tuning of your queries/conf \n\n Personally I avoid planning my schema around my performance at\n the start. I just try to represent the data in a sensible,\n normalized way. While I'm sure I sub-consciously make decisions \n based on performance considerations early on, I don't do any major \n schema overhauls until I find I can't get the performance I need\n via tuning. \n\n Obviously there are systems/datasets/quantities where this won't\n always work out best, but for the majority of systems out there \n complicating your schema, maxing your hardware, and THEN tuning\n is IMHO the wrong approach. \n \n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Wed, 9 Nov 2005 17:53:32 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "Frank Wiles wrote:\n\n> Obviously there are systems/datasets/quantities where this won't\n> always work out best, but for the majority of systems out there \n> complicating your schema, maxing your hardware, and THEN tuning\n> is IMHO the wrong approach. \n\nI wasn't suggesting to complicate the schema -- I was actually thinking\nin systems where some queries are not using indexes, some queries are\nplain wrong, etc. Buying a very expensive RAID and then noticing that\nyou just needed to create an index, is going to make somebody feel at\nleast somewhat stupid.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 17.7\", W 73� 14' 26.8\"\nY una voz del caos me habl� y me dijo\n\"Sonr�e y s� feliz, podr�a ser peor\".\nY sonre�. Y fui feliz.\nY fue peor.\n",
"msg_date": "Wed, 9 Nov 2005 21:43:33 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "On Wed, 9 Nov 2005 21:43:33 -0300\nAlvaro Herrera <[email protected]> wrote:\n\n> Frank Wiles wrote:\n> \n> > Obviously there are systems/datasets/quantities where this won't\n> > always work out best, but for the majority of systems out there \n> > complicating your schema, maxing your hardware, and THEN tuning\n> > is IMHO the wrong approach. \n> \n> I wasn't suggesting to complicate the schema -- I was actually\n> thinking in systems where some queries are not using indexes, some\n> queries are plain wrong, etc. Buying a very expensive RAID and then\n> noticing that you just needed to create an index, is going to make\n> somebody feel at least somewhat stupid.\n\n Sorry I was referring to Ron statement that the first step should\n be to \"Optimize your schema to be as tight as possible.\" \n\n But I agree, finding out you need an index after spending $$$ on \n extra hardware would be bad. Especially if you have to explain it\n to the person forking over the $$$! :) \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Wed, 9 Nov 2005 18:54:50 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
}
] |
[
{
"msg_contents": "The point Gentlemen, was that Good Architecture is King. That's what I was trying to emphasize by calling proper DB architecture step 0. All other things being equal (and they usually aren't, this sort of stuff is _very_ context dependent), the more of your critical schema that you can fit into RAM during normal operation the better.\n\n...and it all starts with proper DB design. Otherwise, you are quite right in stating that you risk wasting time, effort, and HW.\n\nRon\n\n\n-----Original Message-----\nFrom: Frank Wiles <[email protected]>\nSent: Nov 9, 2005 6:53 PM\nTo: Alvaro Herrera <[email protected]>\nCc: [email protected], [email protected], [email protected]\nSubject: Re: [PERFORM] Some help on buffers and other performance tricks\n\nOn Wed, 9 Nov 2005 20:07:52 -0300\nAlvaro Herrera <[email protected]> wrote:\n\n> IMHO you should really be examining your queries _before_ you do any\n> investment in hardware, because later those may prove unnecessary.\n\n It all really depends on what you're doing. For some of the systems\n I run, 4 GBs of RAM is *WAY* overkill, RAID 1+0 is overkill, etc. \n\n In general I would slightly change the \"order of operations\" from: \n\n 1) Buy tons of RAM \n 2) Buy lots of disk I/O \n 3) Tune your conf\n 4) Examine your queries \n\n to \n\n 1) Tune your conf\n 2) Spend a few minutes examining your queries \n 3) Buy as much RAM as you can afford\n 4) Buy as much disk I/O as you can \n 5) Do in depth tuning of your queries/conf \n\n Personally I avoid planning my schema around my performance at\n the start. I just try to represent the data in a sensible,\n normalized way. While I'm sure I sub-consciously make decisions \n based on performance considerations early on, I don't do any major \n schema overhauls until I find I can't get the performance I need\n via tuning. \n\n Obviously there are systems/datasets/quantities where this won't\n always work out best, but for the majority of systems out there \n complicating your schema, maxing your hardware, and THEN tuning\n is IMHO the wrong approach. \n \n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Wed, 9 Nov 2005 23:20:10 -0500 (EST)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "On Wed, 2005-11-09 at 22:20, Ron Peacetree wrote:\n> The point Gentlemen, was that Good Architecture is King. That's what I was trying to emphasize by calling proper DB architecture step 0. All other things being equal (and they usually aren't, this sort of stuff is _very_ context dependent), the more of your critical schema that you can fit into RAM during normal operation the better.\n> \n> ...and it all starts with proper DB design. Otherwise, you are quite right in stating that you risk wasting time, effort, and HW.\n\n\nVery valid point. It's the reason, in my last job, we had a mainline\nserver with dual 2800MHz CPUs and a big RAID array.\n\nAnd our development, build and test system was a Dual Pentium Pro 200\nwith 256 Meg of ram. You notice slow queries real fast on such a box.\n",
"msg_date": "Thu, 10 Nov 2005 09:16:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "On Thu, 10 Nov 2005 09:16:10 -0600\nScott Marlowe <[email protected]> wrote:\n\n> Very valid point. It's the reason, in my last job, we had a mainline\n> server with dual 2800MHz CPUs and a big RAID array.\n> \n> And our development, build and test system was a Dual Pentium Pro 200\n> with 256 Meg of ram. You notice slow queries real fast on such a box.\n\n I know several people who use this development method. It can \n sometimes lead to premature optimizations, but overall I think it is\n a great way to work. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Thu, 10 Nov 2005 09:25:01 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
},
{
"msg_contents": "On Thu, 2005-11-10 at 09:25, Frank Wiles wrote:\n> On Thu, 10 Nov 2005 09:16:10 -0600\n> Scott Marlowe <[email protected]> wrote:\n> \n> > Very valid point. It's the reason, in my last job, we had a mainline\n> > server with dual 2800MHz CPUs and a big RAID array.\n> > \n> > And our development, build and test system was a Dual Pentium Pro 200\n> > with 256 Meg of ram. You notice slow queries real fast on such a box.\n> \n> I know several people who use this development method. It can \n> sometimes lead to premature optimizations, but overall I think it is\n> a great way to work. \n\nHehe. Yeah, you get used to things running a bit slower pretty\nquickly. Keep in mind though, that the test box is likely only\nsupporting one single application at a time, whereas the production\nserver may be running dozens or even hundreds of apps, so it's important\nto catch performance issues before they get to production.\n\nPlus, the Dual PPRo 200 WAS running a decent RAID array, even if it was\na linux kernel software RAID and not hardware. But it was on 8 9\ngigabyte SCSI drives, so it was quite fast for reads. \n\nIn actuality, a lot of the folks developed their code on their own\nworkstations (generally 1+GHz machines with 1G or more of ram) then had\nto move them over to the ppro 200 for testing and acceptance. So that\nkind of helps stop the premature optimizations. We were mainly looking\nto catch stupidity before it got to production.\n",
"msg_date": "Thu, 10 Nov 2005 09:51:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Some help on buffers and other performance tricks"
}
] |
[
{
"msg_contents": "\n\n\n<snip>\n\tFOR r_record IN SELECT count(cid) AS hits,src, bid, tid,NULL::int8 as min_time,NULL::int8 as max_time FROM archive_event WHERE inst=\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\') GROUP BY src, bid, tid LOOP\n\n\tSELECT INTO one_record MIN(utctime) as timestart,MAX(utctime) as timestop from archive_event where src =r_record.src AND bid =r_record.bid AND tid = r_record.tid AND inst =\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\');\n</snip>\n\n\n(it seems to me, that you might combine both queries)\n\n1. have you ever tried to select the min/max within the first stmt ? as i see you are reducing data in second stmt using same key as in stmt 1.\n2. you are querying data using two keys (int, utctime). you may create a combined index speeding up your query\n3. same for grouping. you are grouping over three fields. composite indexing may helps (8.1 supports index based grouping)\n\nregards,\n\nmarcus\n\n\n\n\n\n-----Ursprüngliche Nachricht-----\nVon: [email protected]\n[mailto:[email protected]]Im Auftrag von Eric\nLauzon\nGesendet: Mittwoch, 9. November 2005 21:43\nAn: [email protected]\nBetreff: [PERFORM] (View and SQL) VS plpgsql\n\n\n\nHello all , i post this question here because i wasen't able to find\nanswer to my question elsewhere , i hope someone can answer.\n\n\nAbstract:\n\nThe function that can be found at the end of the e-mail emulate two thing.\n\nFirst it will fill a record set of result with needed column from a table and two \"empty result column\" a min and a max.\n\nThose two column are then filled by a second query on the same table that will do a min and a max\n\non an index idx_utctime.\n\nThe function loop for the first recordset and return a setof record that is casted by caller to the function.\n\n\nThe goald of this is to enabled the application that will receive the result set to minimise its\n\nwork by having to group internaly two matching rowset. We use to handle two resultset but i am looking\n\ntoward improving performances and at first glance it seem to speed up the process.\n\n\nQuestions:\n\n1. How could this be done in a single combinasion of SQL and view? \n\n2. In a case like that is plpgsql really givig significant overhead?\n\n3. 
Performance difference [I would need a working pure-SQL version to compare PLANNER and Explain results ]\n\nSTUFF:\n\n--TABLE && INDEX\n\n\nCREATE TABLE archive_event\n(\n inst int4 NOT NULL,\n cid int8 NOT NULL,\n src int8 NOT NULL,\n dst int8 NOT NULL,\n bid int8 NOT NULL,\n tid int4 NOT NULL,\n utctime int4 NOT NULL,\n CONSTRAINT ids_archives_event_pkey PRIMARY KEY (inst, cid),\n CONSTRAINT ids_archives_event_cid_index UNIQUE (cid)\n) \n\n--index\n\nCREATE INDEX idx_archive_utctime\n ON archive_event\n USING btree\n (utctime);\n\nCREATE INDEX idx_archive_src\n ON archive_event\n USING btree\n (src);\n\nCREATE INDEX idx_archive_bid_tid\n ON archive_event\n USING btree\n (tid, bid);\n\n\n\n\n--FUNCTION\nCREATE OR REPLACE FUNCTION console_get_source_rule_level_1()\n RETURNS SETOF RECORD AS\n'\nDECLARE\n\none_record record;\nr_record record;\n\nBEGIN\n\n\tFOR r_record IN SELECT count(cid) AS hits,src, bid, tid,NULL::int8 as min_time,NULL::int8 as max_time FROM archive_event WHERE inst=\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\') GROUP BY src, bid, tid LOOP\n\n\tSELECT INTO one_record MIN(utctime) as timestart,MAX(utctime) as timestop from archive_event where src =r_record.src AND bid =r_record.bid AND tid = r_record.tid AND inst =\\'3\\' AND (utctime BETWEEN \\'1114920000\\' AND \\'1131512399\\');\n\n\tr_record.min_time := one_record.timestart;\n\tr_record.max_time := one_record.timestop;\n \n RETURN NEXT r_record;\n\nEND LOOP;\n\n RETURN;\n\nEND;\n'\n LANGUAGE 'plpgsql' VOLATILE;\nGRANT EXECUTE ON FUNCTION console_get_source_rule_level_1() TO console WITH GRANT OPTION;\n\n\n--FUNCTION CALLER\nSELECT * from get_source_rule_level_1() AS (hits int8,src int8,bid int8,tid int4,min_time int8,max_time int8)\n\n\n\nEric Lauzon\n[Recherche & Développement]\nAbove Sécurité / Above Security\nTél : (450) 430-8166\nFax : (450) 430-1858 \n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n",
"msg_date": "Thu, 10 Nov 2005 09:45:34 +0100",
"msg_from": "=?iso-8859-1?Q?N=F6rder-Tuitje=2C_Marcus?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: (View and SQL) VS plpgsql"
}
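For reference, a minimal sketch of the combined rewrite suggested above: the MIN/MAX move into the same grouped query, so the per-group SELECT INTO loop disappears. Table and column names come from the function in the post; the index name and column order are assumptions and untested.

    -- Combined query: one grouped scan instead of one extra query per group
    -- (assumes the archive_event definition given in the post).
    SELECT count(cid)   AS hits,
           src, bid, tid,
           MIN(utctime) AS min_time,
           MAX(utctime) AS max_time
    FROM   archive_event
    WHERE  inst = 3
      AND  utctime BETWEEN 1114920000 AND 1131512399
    GROUP  BY src, bid, tid;

    -- Composite index along the lines of suggestion 2 (hypothetical name;
    -- the column order would need to be checked with EXPLAIN on real data):
    CREATE INDEX idx_archive_inst_utctime ON archive_event (inst, utctime);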
] |
[
{
"msg_contents": "Hi,\n\nWe're having problems with our PostgreSQL server using forever for simple\nqueries, even when there's little load -- or rather, the transactions seem\nto take forever to commit. We're using 8.1 (yay!) on a single Opteron, with\nWAL on the system two-disk (software) RAID-1, separate from the database\nfour-disk RAID-10. All drives are 10000rpm SCSI disks, with write cache\nturned off; we value our data :-) We're running Linux 2.6.13.4, with 64-bit\nkernel but 32-bit userspace.\n\nThe main oddity is that simple transactions take forever to execute, even on\nsmall tables with no triggers. A COMMIT on an otherwise idle system with one\nrow to commit can take anything from 60-200ms to execute, which seems quite\nexcessive -- sometimes (and I've verified that there's not a checkpoint or\nvacuum going on at that time), transactions seem to pile up and you get\nbehaviour like:\n\nLOG: duration: 836.004 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.100', hostname = 'mivu-03.samfundet.no' WHERE posid = 'mivu-03'\nLOG: duration: 753.545 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.110', hostname = 'mivu-13.samfundet.no' WHERE posid = 'mivu-13'\nLOG: duration: 567.914 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.109', hostname = 'mivu-12.samfundet.no' WHERE posid = 'mivu-12'\nLOG: duration: 515.013 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.105', hostname = 'mivu-08.samfundet.no' WHERE posid = 'mivu-08'\nLOG: duration: 427.541 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.104', hostname = 'mivu-07.samfundet.no' WHERE posid = 'mivu-07'\nLOG: duration: 383.314 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.107', hostname = 'mivu-10.samfundet.no' WHERE posid = 'mivu-10'\nLOG: duration: 348.965 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.103', hostname = 'mivu-06.samfundet.no' WHERE posid = 'mivu-06'\nLOG: duration: 314.465 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.101', hostname = 'mivu-04.samfundet.no' WHERE posid = 'mivu-04'\nLOG: duration: 824.893 ms statement: UPDATE mivu3ping SET pingtime = NOW(), ip = '129.241.93.106', hostname = 'mivu-09.samfundet.no' WHERE posid = 'mivu-09'\n\nSometimes, six or seven of these transactions even seem to wait for the same\nthing, reporting finishing times of something like 6, 5, 4, 3 and 2 seconds\nright after each other in the log! This is not a highly loaded system, so I\ndon't really see why this should happen. (We had the same problems with 7.4,\nbut if my imagination isn't playing games on me, they seem to have become\nslightly worse with 8.1.)\n\nstrace shows that fdatasync() takes almost all that time, but when I run my own\nfdatasync() test program on the same file system, I can consistently sync a\nfile (after an 8kB write) in about 30ms every time, so I don't really know why\nthis would be so much slower with PostgreSQL. We're using the cfq scheduler,\nbut deadline and noop give about the same results.\n\nSetting wal_sync_method = open_sync seems to improve the situation\ndramatically on simple commits; we get down into the 10-30ms range on an idle\nsystem. OTOH, behaviour seems to get slightly worse when there's more stuff\ngoing on, and we still get the 300ms transactions in batches every now and\nthen.\n\nAny good ideas?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 10 Nov 2005 14:32:41 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "WAL sync behaviour"
},
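A rough way to reproduce the measurement described above from psql, for anyone following along; the table and column names are taken from the log excerpt, and \timing is a psql client feature.

    \timing
    BEGIN;
    UPDATE mivu3ping
       SET pingtime = NOW(), ip = '129.241.93.100', hostname = 'mivu-03.samfundet.no'
     WHERE posid = 'mivu-03';
    COMMIT;  -- inside an explicit transaction the WAL flush happens here,
             -- which is where the 60-200 ms shows up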
{
"msg_contents": "Steinar H. Gunderson wrote:\n> Hi,\n> \n> We're having problems with our PostgreSQL server using forever for simple\n> queries, even when there's little load -- or rather, the transactions seem\n> to take forever to commit. We're using 8.1 (yay!) on a single Opteron, with\n> WAL on the system two-disk (software) RAID-1, separate from the database\n> four-disk RAID-10. All drives are 10000rpm SCSI disks, with write cache\n> turned off; we value our data :-) We're running Linux 2.6.13.4, with 64-bit\n> kernel but 32-bit userspace.\n\nYou're beyond my area of expertise, but I do know that someone's going \nto ask what filesystem this is (ext2/xfs/etc). And probably to see the \nstrace too.\n\nHmm - the only things I can think to check:\nDo vmstat/iostat show any unusual activity?\nAre your system logs on these disks too?\nCould it be the journalling on the fs clashing with the WAL?\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 10 Nov 2005 14:14:30 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "On Thu, Nov 10, 2005 at 02:14:30PM +0000, Richard Huxton wrote:\n> You're beyond my area of expertise, but I do know that someone's going \n> to ask what filesystem this is (ext2/xfs/etc).\n\nAh, yes, I forgot -- it's ext3. We're considering simply moving the WAL onto\na separate partition (with data=writeback and noatime) if that can help us\nany.\n\n> And probably to see the strace too.\n\nThe strace with wal_sync_method = fdatasync goes like this (I attach just\nbefore I do the commit):\n\ncirkus:~> sudo strace -T -p 15718 \nProcess 15718 attached - interrupt to quit\nread(8, \"\\27\\3\\1\\0 \", 5) = 5 <2.635856>\nread(8, \"\\336\\333\\24KB\\325Ga\\324\\264[\\307v\\254h\\254\\350\\20\\220a\"..., 32) = 32 <0.000031>\nread(8, \"\\27\\3\\1\\0000\", 5) = 5 <0.000027>\nread(8, \"$E\\217<z\\350bI\\2217\\317\\3662\\301\\273\\233\\17\\177\\256\\225\"..., 48) = 48 <0.000026>\nsend(7, \"\\3\\0\\0\\0\\30\\0\\0\\0\\20\\0\\0\\0f=\\0\\0commit;\\0\", 24, 0) = 24 <0.000071>\ngettimeofday({1131632603, 187599}, NULL) = 0 <0.000026>\ntime(NULL) = 1131632603 <0.000027>\nopen(\"pg_xlog/0000000100000000000000A2\", O_RDWR|O_LARGEFILE) = 14 <0.000039>\n_llseek(14, 12500992, [12500992], SEEK_SET) = 0 <0.000026>\nwrite(14, \"]\\320\\1\\0\\1\\0\\0\\0\\0\\0\\0\\0\\0\\300\\276\\242\\362\\0\\0\\0\\31\\0\"..., 8192) = 8192 <0.000057>\nfdatasync(14) = 0 <0.260194>\ngettimeofday({1131632603, 448459}, NULL) = 0 <0.000034>\ntime(NULL) = 1131632603 <0.000027>\ntime([1131632603]) = 1131632603 <0.000025>\ngetpid() = 15718 <0.000025>\nrt_sigaction(SIGPIPE, {0x558a27e0, [], 0}, {SIG_IGN}, 8) = 0 <0.000029>\nsend(3, \"<134>Nov 10 15:23:23 postgres[15\"..., 121, 0) = 121 <0.000032>\nrt_sigaction(SIGPIPE, {SIG_IGN}, NULL, 8) = 0 <0.000029>\nsend(7, \"\\4\\0\\0\\0\\330\\3\\0\\0\\20\\0\\0\\0f=\\0\\0\\247@\\0\\0\\16\\0\\0\\0\\1\\0\"..., 984, 0) = 984 <0.000076>\nsend(7, \"\\4\\0\\0\\0\\330\\3\\0\\0\\20\\0\\0\\0f=\\0\\0\\247@\\0\\0\\16\\0\\0\\0\\0\\0\"..., 984, 0) = 984 <0.000051>\nsend(7, \"\\4\\0\\0\\0\\330\\3\\0\\0\\20\\0\\0\\0f=\\0\\0\\247@\\0\\0\\16\\0\\0\\0\\0\\0\"..., 984, 0) = 984 <0.000050>\nsend(7, \"\\4\\0\\0\\0\\250\\0\\0\\0\\20\\0\\0\\0f=\\0\\0\\247@\\0\\0\\2\\0\\0\\0\\0\\0\"..., 168, 0) = 168 <0.000050>\nsend(7, \"\\4\\0\\0\\0\\250\\0\\0\\0\\20\\0\\0\\0f=\\0\\0\\0\\0\\0\\0\\2\\0\\0\\0\\0\\0\\0\"..., 168, 0) = 168 <0.000049>\nsend(7, \"\\3\\0\\0\\0\\27\\0\\0\\0\\20\\0\\0\\0f=\\0\\0<IDLE>\\0\", 23, 0) = 23 <0.000047>\nwrite(8, \"\\27\\3\\1\\0 B\\260\\253rq)\\232\\265o\\225\\272\\235\\v\\375\\31\\323\"..., 90) = 90 <0.000229>\nread(8, <unfinished ...>\nProcess 15718 detached\n\n> Do vmstat/iostat show any unusual activity?\n\nNo, there's not much activity. In fact, it's close to idle.\n\n> Are your system logs on these disks too?\n\nYes, they are, but nothing much is logged, really -- and sync is off for most\nof the logs in syslogd.\n \n> Could it be the journalling on the fs clashing with the WAL?\n\nUnsure -- that's what I was hoping to get some information on :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 10 Nov 2005 15:25:35 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "On Thu, Nov 10, 2005 at 03:25:35PM +0100, Steinar H. Gunderson wrote:\n>Ah, yes, I forgot -- it's ext3. We're considering simply moving the WAL onto\n>a separate partition (with data=writeback and noatime) if that can help us\n>any.\n\nThere's no reason to use a journaled filesystem for the wal. Use ext2 in\npreference to ext3.\n\nMike Stone\n",
"msg_date": "Thu, 10 Nov 2005 09:43:09 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "On Thu, 2005-11-10 at 08:43, Michael Stone wrote:\n> On Thu, Nov 10, 2005 at 03:25:35PM +0100, Steinar H. Gunderson wrote:\n> >Ah, yes, I forgot -- it's ext3. We're considering simply moving the WAL onto\n> >a separate partition (with data=writeback and noatime) if that can help us\n> >any.\n> \n> There's no reason to use a journaled filesystem for the wal. Use ext2 in\n> preference to ext3.\n\nNot from what I understood. Ext2 can't guarantee that your data will\neven be there in any form after a crash. I believe only metadata\njournaling is needed though.\n",
"msg_date": "Thu, 10 Nov 2005 09:52:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "On Thu, Nov 10, 2005 at 09:52:38AM -0600, Scott Marlowe wrote:\n>Not from what I understood. Ext2 can't guarantee that your data will\n>even be there in any form after a crash. \n\nIt can if you sync the data. (Which is the whole point of the WAL.)\n\n>I believe only metadata journaling is needed though.\n\nIf you don't sync, metadata journaling doesn't do anything to guarantee\nthat your data will be there, so you're adding no data security in the\nnon-synchronous-write case. (Which is irrelevant for the WAL.) \n\nWhat metadata journalling gets you is fast recovery from crashes by\navoiding a fsck. The fsck time is related to the number of files on a\nfilesystem--so it's generally pretty quick on a WAL partition anyway.\n\nMike Stone\n",
"msg_date": "Thu, 10 Nov 2005 11:00:55 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Thu, 2005-11-10 at 08:43, Michael Stone wrote:\n>> There's no reason to use a journaled filesystem for the wal. Use ext2 in\n>> preference to ext3.\n\n> Not from what I understood. Ext2 can't guarantee that your data will\n> even be there in any form after a crash. I believe only metadata\n> journaling is needed though.\n\nNo, Mike is right: for WAL you shouldn't need any journaling. This is\nbecause we zero out *and fsync* an entire WAL file before we ever\nconsider putting live WAL data in it. During live use of a WAL file,\nits metadata is not changing. As long as the filesystem follows\nthe minimal rule of syncing metadata about a file when it fsyncs the\nfile, all the live WAL files should survive crashes OK.\n\nWe can afford to do this mainly because WAL files can normally be\nrecycled instead of created afresh, so the zero-out overhead doesn't\nget paid during normal operation.\n\nYou do need metadata journaling for all non-WAL PG files, since we don't\nfsync them every time we extend them; which means the filesystem could\nlose track of which disk blocks belong to such a file, if it's not\njournaled.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2005 11:39:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour "
},
{
"msg_contents": "On Thu, Nov 10, 2005 at 11:39:34AM -0500, Tom Lane wrote:\n> No, Mike is right: for WAL you shouldn't need any journaling. This is\n> because we zero out *and fsync* an entire WAL file before we ever\n> consider putting live WAL data in it. During live use of a WAL file,\n> its metadata is not changing. As long as the filesystem follows\n> the minimal rule of syncing metadata about a file when it fsyncs the\n> file, all the live WAL files should survive crashes OK.\n\nYes, with emphasis on the zero out... :-)\n\n> You do need metadata journaling for all non-WAL PG files, since we don't\n> fsync them every time we extend them; which means the filesystem could\n> lose track of which disk blocks belong to such a file, if it's not\n> journaled.\n\nI think there may be theoretical problems with regard to the ordering\nof the fsync operation, for files that are not pre-allocated. For\nexample, if a new block is allocated - there are two blocks that need\nto be updated. The indirect reference block (or inode block, if block\nreferences fit into the inode entry), and the block itself. If the\nindirect reference block is written first, before the data block, the\nstate of the disk is inconsistent. This would be a crash during the\nfsync() operation. The metadata journalling can ensure that the data\nblock is allocated first, and then all the necessary references\nupdated, allowing for the operation to be incomplete and rolled back,\nor committed in full.\n\nOr, that is my understanding, anyways, and this is why I would not use\next2 for the database, even if it was claimed that fsync() was used.\n\nFor WAL, with pre-allocated zero blocks? Sure. Ext2... :-)\n\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 10 Nov 2005 11:53:13 -0500",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "On Thu, 2005-11-10 at 10:39, Tom Lane wrote:\n> Scott Marlowe <[email protected]> writes:\n> > On Thu, 2005-11-10 at 08:43, Michael Stone wrote:\n> >> There's no reason to use a journaled filesystem for the wal. Use ext2 in\n> >> preference to ext3.\n> \n> > Not from what I understood. Ext2 can't guarantee that your data will\n> > even be there in any form after a crash. I believe only metadata\n> > journaling is needed though.\n> \n> No, Mike is right: for WAL you shouldn't need any journaling. This is\n> because we zero out *and fsync* an entire WAL file before we ever\n> consider putting live WAL data in it. During live use of a WAL file,\n> its metadata is not changing. As long as the filesystem follows\n> the minimal rule of syncing metadata about a file when it fsyncs the\n> file, all the live WAL files should survive crashes OK.\n> \n> We can afford to do this mainly because WAL files can normally be\n> recycled instead of created afresh, so the zero-out overhead doesn't\n> get paid during normal operation.\n> \n> You do need metadata journaling for all non-WAL PG files, since we don't\n> fsync them every time we extend them; which means the filesystem could\n> lose track of which disk blocks belong to such a file, if it's not\n> journaled.\n\nThanks for the clarification! Nice to know I can setup an ext2\npartition for my WAL files then. Is this in the docs anywhere?\n",
"msg_date": "Thu, 10 Nov 2005 11:00:10 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> Thanks for the clarification! Nice to know I can setup an ext2\n> partition for my WAL files then. Is this in the docs anywhere?\n\nDon't think so ... want to write something up? Hard part is to\nfigure out where to put it ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2005 12:44:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: WAL sync behaviour "
},
{
"msg_contents": "On Thu, Nov 10, 2005 at 12:44:15PM -0500, Tom Lane wrote:\n> Don't think so ... want to write something up? Hard part is to\n> figure out where to put it ...\n\nTo be honest, I think we could use a \"newbie's guide to PostgreSQL\nperformance tuning\". I've seen rather good guides for query tuning, and\nguides for general performance tuning, but none that really cover both in a\ncoherent way. (Also, many of the ones I've seen start getting rather dated;\nafter bitmap index scans arrived, for instance, many of the rules with regard\nto index planning probably changed.)\n\nI'd guess http://www.powerpostgresql.com/PerfList is a rather good start for\nthe second part (and it's AFAICS under a free license); having something like\nthat in the docs (or some other document) would probably be a good start.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 10 Nov 2005 18:58:43 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: WAL sync behaviour"
}
] |
[
{
"msg_contents": "This is with Postgres 8.0.3. Any advice is appreciated. I'm not sure\nexactly what I expect, but I was hoping that if it used the\nexternal_id_map_source_target_id index it would be faster. Mainly I was\nsurprised that the same plan could perform so much differently with just\nan extra condition.\n\nI've increased the statistics target on util.external_id_map.source, but\nI'm fuzzy on exactly where (what columns) the planner could use more\ninformation.\n\nstatgen=> explain analyze select * from subject_source;\n\nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..316.79 rows=1186 width=46) (actual\ntime=0.136..9.808 rows=1186 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".target_id)\n -> Index Scan using subject_pkey on subject norm (cost=0.00..63.36\nrows=1186 width=28) (actual time=0.050..1.834 rows=1186 loops=1)\n -> Index Scan using external_id_map_primary_key on external_id_map\neim (cost=0.00..2345747.01 rows=15560708 width=26) (actual\ntime=0.061..2.944 rows=2175 loops=1)\n Total runtime: 10.745 ms\n(5 rows)\n\nstatgen=> explain analyze select * from subject_source where\nsource='SCH';\n\nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..640.95 rows=1 width=46) (actual\ntime=0.043..21074.403 rows=1186 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".target_id)\n -> Index Scan using subject_pkey on subject norm (cost=0.00..63.36\nrows=1186 width=28) (actual time=0.014..1.478 rows=1186 loops=1)\n -> Index Scan using external_id_map_primary_key on external_id_map\neim (cost=0.00..2384648.78 rows=4150 width=26) (actual\ntime=0.020..21068.508 rows=1186 loops=1)\n Filter: (source = 'SCH'::bpchar)\n Total runtime: 21075.142 ms\n(6 rows)\n\nstatgen=> \\d subject\n Table \"public.subject\"\n Column | Type | Modifiers\n---------+---------+-----------\n id | bigint | not null\n sex | integer |\n parent1 | bigint |\n parent2 | bigint |\nIndexes:\n \"subject_pkey\" PRIMARY KEY, btree (id)\nForeign-key constraints:\n \"subject_parent1\" FOREIGN KEY (parent1) REFERENCES subject(id)\nDEFERRABLE INITIALLY DEFERRED\n \"subject_parent2\" FOREIGN KEY (parent2) REFERENCES subject(id)\nDEFERRABLE INITIALLY DEFERRED\n \"subject_id_map\" FOREIGN KEY (id) REFERENCES\nutil.external_id_map(target_id) DEFERRABLE INITIALLY DEFERRED\n\nstatgen=> \\d subject_source\n View \"public.subject_source\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n id | bigint |\n sex | integer |\n parent1 | bigint |\n parent2 | bigint |\n source | character(3) |\n source_id | character varying(32) |\nView definition:\n SELECT norm.id, norm.sex, norm.parent1, norm.parent2, eim.source,\neim.source_id\n FROM subject norm\n JOIN util.external_id_map eim ON norm.id = eim.target_id;\n\nstatgen=> \\d util.external_id_map\n Table \"util.external_id_map\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n source_id | character varying(32) | not null\n source | character(3) | not null\n target_id | bigint | not null\nIndexes:\n \"external_id_map_primary_key\" PRIMARY KEY, btree (target_id)\n \"external_id_map_source_source_id_unique\" UNIQUE, btree (source,\nsource_id)\n \"external_id_map_source\" btree (source)\n \"external_id_map_source_target_id\" btree 
(source, target_id)\nForeign-key constraints:\n \"external_id_map_source\" FOREIGN KEY (source) REFERENCES\nutil.source(id)\n\nThanks in advance,\nMitch\n",
"msg_date": "Thu, 10 Nov 2005 05:42:56 -0800",
"msg_from": "Mitch Skinner <[email protected]>",
"msg_from_op": true,
"msg_subject": "same plan, add 1 condition, 1900x slower"
},
{
"msg_contents": "Mitch Skinner <[email protected]> writes:\n> This is with Postgres 8.0.3. Any advice is appreciated.\n\nThese are exactly the same plan, except for the addition of the extra\nfilter condition ...\n\n> -> Index Scan using external_id_map_primary_key on external_id_map\n> eim (cost=0.00..2345747.01 rows=15560708 width=26) (actual\n> time=0.061..2.944 rows=2175 loops=1)\n\n> -> Index Scan using external_id_map_primary_key on external_id_map\n> eim (cost=0.00..2384648.78 rows=4150 width=26) (actual\n> time=0.020..21068.508 rows=1186 loops=1)\n> Filter: (source = 'SCH'::bpchar)\n\nApparently, you are using a platform and/or locale in which strcoll() is\nspectacularly, god-awfully slow --- on the order of 10 msec per comparison.\nThis is a bit hard to believe but I can't make sense of those numbers\nany other way. What is the platform exactly, and what database locale\nand encoding are you using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2005 12:23:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
},
{
"msg_contents": "On Thu, 2005-11-10 at 12:23 -0500, Tom Lane wrote:\n> Apparently, you are using a platform and/or locale in which strcoll() is\n> spectacularly, god-awfully slow --- on the order of 10 msec per comparison.\n\nThe version with the condition is definitely doing more I/O. The\nversion without the condition doesn't read at all. I strace'd an\nexplain analyze for each separately, and this is what I ended up with\n(the first is with the condition, the second is without):\n\nbash-2.05b$ cut '-d(' -f1 subsourcestrace | sort | uniq -c\n 7127 gettimeofday\n 75213 _llseek\n 1 Process 30227 attached - interrupt to quit\n 1 Process 30227 detached\n 148671 read\n 2 recv\n 4 semop\n 4 send\nbash-2.05b$ cut '-d(' -f1 subsourcestrace-nocond | sort | uniq -c\n 9103 gettimeofday\n 7 _llseek\n 1 Process 30227 attached - interrupt to quit\n 1 Process 30227 detached\n 2 recv\n 4 send\n\nFor the moment, all of the rows in the view I'm selecting from satisfy\nthe condition, so the output of both queries is the same. The relevant\nrows of the underlying tables are probably pretty contiguous (all of the\nrows satisfying the condition and the join were inserted at the same\ntime). Could it just be the result of a weird physical distribution of\ndata in the table/index files? For the fast query, the actual number of\nrows is a lot less than the planner expects.\n\n> This is a bit hard to believe but I can't make sense of those numbers\n> any other way. What is the platform exactly, and what database locale\n> and encoding are you using?\n\nIt's RHEL 3 on x86:\n[root@rehoboam root]# uname -a\nLinux rehoboam 2.4.21-32.0.1.ELsmp #1 SMP Tue May 17 17:52:23 EDT 2005\ni686 i686 i386 GNU/Linux\n\nThe glibc version is 2.3.2.\n\nstatgen=# select current_setting('lc_collate');\n current_setting\n-----------------\n en_US.UTF-8\n\nNot sure what's relevant, but here's some more info:\nThe machine has 4.5GiB of RAM and a 5-disk Raid 5. 
It's a dual xeon\n3.2ghz.\n\n relname | relpages | reltuples\n-----------------------------+----------+-------------\n external_id_map | 126883 | 1.55625e+07\n external_id_map_primary_key | 64607 | 1.55625e+07\n subject | 31 | 1186\n subject_pkey | 19 | 1186\n\nI've attached the output of \"select name, setting from pg_settings\".\n\nAnd, in case my original message isn't handy, the explain analyze output\nand table/view info is below.\n\nThanks for taking a look,\nMitch\n\nstatgen=> explain analyze select * from subject_source;\n\nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..330.72 rows=1186 width=46) (actual\ntime=0.051..8.890 rows=1186 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".target_id)\n -> Index Scan using subject_pkey on subject norm (cost=0.00..63.36\nrows=1186 width=28) (actual time=0.022..1.441 rows=1186 loops=1)\n -> Index Scan using external_id_map_primary_key on external_id_map\neim (cost=0.00..2485226.70 rows=15562513 width=26) (actual\ntime=0.016..2.532 rows=2175 loops=1)\n Total runtime: 9.592 ms\n(5 rows)\n\nstatgen=> explain analyze select * from subject_source where\nsource='SCH';\n\nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..1147.33 rows=1 width=46) (actual\ntime=0.054..20258.161 rows=1186 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".target_id)\n -> Index Scan using subject_pkey on subject norm (cost=0.00..63.36\nrows=1186 width=28) (actual time=0.022..1.478 rows=1186 loops=1)\n -> Index Scan using external_id_map_primary_key on external_id_map\neim (cost=0.00..2524132.99 rows=2335 width=26) (actual\ntime=0.022..20252.326 rows=1186 loops=1)\n Filter: (source = 'SCH'::bpchar)\n Total runtime: 20258.922 ms\n(6 rows)\n\nstatgen=> \\d subject_source\n View \"public.subject_source\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n id | bigint |\n sex | integer |\n parent1 | bigint |\n parent2 | bigint |\n source | character(3) |\n source_id | character varying(32) |\nView definition:\n SELECT norm.id, norm.sex, norm.parent1, norm.parent2, eim.source,\neim.source_id\n FROM subject norm\n JOIN util.external_id_map eim ON norm.id = eim.target_id;\n\nstatgen=> \\d subject\n Table \"public.subject\"\n Column | Type | Modifiers\n---------+---------+-----------\n id | bigint | not null\n sex | integer |\n parent1 | bigint |\n parent2 | bigint |\nIndexes:\n \"subject_pkey\" PRIMARY KEY, btree (id)\nForeign-key constraints:\n \"subject_parent1\" FOREIGN KEY (parent1) REFERENCES subject(id)\nDEFERRABLE INITIALLY DEFERRED\n \"subject_parent2\" FOREIGN KEY (parent2) REFERENCES subject(id)\nDEFERRABLE INITIALLY DEFERRED\n \"subject_id_map\" FOREIGN KEY (id) REFERENCES\nutil.external_id_map(target_id) DEFERRABLE INITIALLY DEFERRED\n\nstatgen=> \\d util.external_id_map\n Table \"util.external_id_map\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n source_id | character varying(32) | not null\n source | character(3) | not null\n target_id | bigint | not null\nIndexes:\n \"external_id_map_primary_key\" PRIMARY KEY, btree (target_id)\n \"external_id_map_source_source_id_unique\" UNIQUE, btree (source,\nsource_id)\n \"external_id_map_source\" btree (source)\n \"external_id_map_source_target_id\" btree 
(source, target_id)\nForeign-key constraints:\n \"external_id_map_source\" FOREIGN KEY (source) REFERENCES\nutil.source(id)",
"msg_date": "Fri, 11 Nov 2005 01:17:15 -0800",
"msg_from": "Mitch Skinner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower"
},
{
"msg_contents": "Mitch Skinner wrote:\n> \n> The version with the condition is definitely doing more I/O. The\n> version without the condition doesn't read at all. \n[snip]\n> relname | relpages | reltuples\n> -----------------------------+----------+-------------\n> external_id_map | 126883 | 1.55625e+07\n> external_id_map_primary_key | 64607 | 1.55625e+07\n> subject | 31 | 1186\n> subject_pkey | 19 | 1186\n\nDoes external_id_map really have 15 million rows? If not, try a VACUUM \nFULL on it. Be prepared to give it some time to complete.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 11 Nov 2005 11:51:55 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower"
},
{
"msg_contents": "Mitch Skinner <[email protected]> writes:\n> On Thu, 2005-11-10 at 12:23 -0500, Tom Lane wrote:\n>> Apparently, you are using a platform and/or locale in which strcoll() is\n>> spectacularly, god-awfully slow --- on the order of 10 msec per comparison.\n\n> The version with the condition is definitely doing more I/O. The\n> version without the condition doesn't read at all.\n\nThat's pretty interesting, but what file(s) is it reading exactly?\n\nIt could still be strcoll's fault. The only plausible explanation\nI can think of for strcoll being so slow is if for some reason it were\nre-reading the locale definition file every time, instead of setting up\njust once.\n\nIf it is hitting Postgres files, it'd be interesting to look at exactly\nwhich files and what the distribution of seek offsets is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 09:09:36 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Mitch Skinner wrote:\n>> The version with the condition is definitely doing more I/O. The\n>> version without the condition doesn't read at all. \n\n> Does external_id_map really have 15 million rows? If not, try a VACUUM \n> FULL on it. Be prepared to give it some time to complete.\n\nPlease don't, actually, until we understand what's going on.\n\nThe thing is that the given plan will fetch every row indicated by the\nindex in both cases, in order to check the row's visibility. I don't\nsee how an additional test on a non-indexed column would cause any\nadditional I/O. If the value were large enough to be toasted\nout-of-line then it could cause toast table accesses ... but we're\nspeaking of a char(3).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 09:17:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
},
{
"msg_contents": "On Fri, 2005-11-11 at 11:51 +0000, Richard Huxton wrote:\n> Does external_id_map really have 15 million rows? If not, try a VACUUM \n> FULL on it. Be prepared to give it some time to complete.\n\nThanks for the reply. It does indeed have that many rows:\nstatgen=> select count(*) from util.external_id_map ;\n count\n----------\n 15562513\n(1 row)\n\nThat table never gets deletions or updates, only insertions and reads.\nFor fun and base-covering, I'm running a full vacuum now. Usually\nthere's just a nightly lazy vacuum.\n\nIf it helps, here's some background on what we're doing and why (plus\nsome stuff at the end about how it relates to Postgres):\n\nWe get very similar data from multiple sources, and I want to be able to\ncombine it all into one schema. The data from different sources is\nsimilar enough (it's generally constrained by the underlying biology,\ne.g., each person has a father and a mother, two versions of each\nregular chromosome, etc.) that I think putting it all into one set of\ntables makes sense.\n\nDifferent people in our group use different tools (Python, R, Java), so\ninstead of integrating at the code level (like a shared class hierarchy)\nwe use the schema as our shared idea of the data. This helps make my\nanalyses comparable to the analyses from my co-workers. We don't all\nwant to have to write basic sanity checks in each of our languages, so\nwe want to be able to have foreign keys in the schema. Having foreign\nkeys and multiple data sources means that we have to generate our own\ninternal identifiers (otherwise we'd expect to have ID collisions from\ndifferent sources). I'd like to be able to have a stable\ninternal-external ID mapping (this is actually something we spent a lot\nof time arguing about), so we have a table that does exactly that.\n\nWhen we import data, we do a bunch of joins against the external_id_map\ntable to translate external IDs into internal IDs. It means that the\nexternal_id_map table gets pretty big and the joins can take a long time\n(it takes four hours to import one 11-million row source table into our\ncanonical schema, because we have to do 5 ID translations per row on\nthat one), but we don't need to import data too often so it works. The\nmain speed concern is that exploratory data analyses are pretty\ninteractive, and also sometimes you want to run a bunch of analyses in\nparallel, and if the queries are slow that can be a bottleneck.\n\nI'm looking forward to partitioning the external_id_map table with 8.1,\nand when Greenplum comes out with their stuff we'll probably take a\nlook. If the main Postgres engine had parallel query execution, I'd be\npretty happy. I also followed the external sort thread with interest,\nbut I didn't get the impression that there was a very clear consensus\nthere.\n\nSince some of our sources change over time, and I can't generally expect\nthem to have timestamps on their data, what we do when we re-import from\na source is delete everything out of the canonical tables from that\nsource and then re-insert. It sounds like mass deletions are not such a\ncommon thing to do; I think there was a thread about this recently and\nTom questioned the real-world need to worry about that workload. I was\nthinking that maybe the foreign key integrity checks might be better\ndone by a join rather than a per-deleted-row trigger queue, but since\nall my foreign keys are indexed on both ends it doesn't look like a\nbottleneck. 
\n\nAnyway, all that probably has an effect on the data distribution in our\ntables and indexes. I'll report back on the effect of the full vacuum.\n\nThanks for reading,\nMitch\n",
"msg_date": "Fri, 11 Nov 2005 06:20:43 -0800",
"msg_from": "Mitch Skinner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower"
},
{
"msg_contents": "On Fri, 2005-11-11 at 09:17 -0500, Tom Lane wrote:\n> Richard Huxton <[email protected]> writes:\n> > Does external_id_map really have 15 million rows? If not, try a VACUUM \n> > FULL on it. Be prepared to give it some time to complete.\n> \n> Please don't, actually, until we understand what's going on.\n\nAck, I was the middle of the vacuum full already when I got this. I\nstill have the strace and lsof output from before the vacuum full. It's\ndefinitely reading Postgres files:\n\nbash-2.05b$ grep '^read' subsourcestrace | cut -d, -f1 | sort | uniq -c\n 100453 read(44\n 48218 read(47\nbash-2.05b$ grep 'seek' subsourcestrace | cut -d, -f1 | sort | uniq -c\n 1 _llseek(40\n 1 _llseek(43\n 35421 _llseek(44\n 1 _llseek(45\n 1 _llseek(46\n 39787 _llseek(47\n 1 _llseek(48\n\nFile handles:\n44 - external_id_map\n47 - external_id_map_primary_key\n40 - subject\n43 - subject_pkey\n45 - external_id_map_source\n46 - external_id_map_source_target_id\n48 - external_id_map_source_source_id_unique\n\nAs far as the seek offsets go, R doesn't want to do a histogram for me\nwithout using up more RAM than I have. I put up some files at:\nhttp://arctur.us/pgsql/\nThey are:\nsubsourcestrace - the strace output from \"select * from subject_source\nwhere source='SCH'\"\nsubsourcestrace-nocond - the strace output from \"select * from\nsubject_source\"\nsubsourcelsof - the lsof output (for mapping from file handles to file\nnames)\nrelfilenode.html - for mapping from file names to table/index names (I\nthink I've gotten all the relevant file handle-table name mappings\nabove, though)\nseekoff-44 - just the beginning seek offsets for the 44 file handle\n(external_id_map)\nseekoff-47 - just the beginning seek offsets for the 47 file handle\n(external_id_map_primary_key)\n\nThe vacuum full is still going; I'll let you know if it changes things.\n\n> The thing is that the given plan will fetch every row indicated by the\n> index in both cases, in order to check the row's visibility. I don't\n> see how an additional test on a non-indexed column would cause any\n> additional I/O. If the value were large enough to be toasted\n> out-of-line then it could cause toast table accesses ... but we're\n> speaking of a char(3).\n\nPardon my ignorance, but do the visibility check and the check of the\ncondition happen at different stages of execution? Would it end up\nchecking the condition for all 15M rows, but only checking visibility\nfor the 1200 rows that come back from the join? I guess I'm confused\nabout what \"every row indicated by the index\" means in the context of\nthe join.\n\nThanks for taking an interest,\nMitch\n\n",
"msg_date": "Fri, 11 Nov 2005 07:24:41 -0800",
"msg_from": "Mitch Skinner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower"
},
{
"msg_contents": "Mitch Skinner <[email protected]> writes:\n> On Fri, 2005-11-11 at 09:17 -0500, Tom Lane wrote:\n>> Please don't, actually, until we understand what's going on.\n\n> Ack, I was the middle of the vacuum full already when I got this.\n\nGiven what you said about no deletions or updates, the vacuum should\nhave no effect anyway, so don't panic.\n\n> I put up some files at: http://arctur.us/pgsql/\n\nGreat, I'll take a look ...\n\n> Pardon my ignorance, but do the visibility check and the check of the\n> condition happen at different stages of execution? Would it end up\n> checking the condition for all 15M rows, but only checking visibility\n> for the 1200 rows that come back from the join?\n\nNo, the visibility check happens first. The timing does seem consistent\nwith the idea that the comparison is being done at all 15M rows, but\nyour other EXPLAIN shows that only 2K rows are actually retrieved, which\npresumably is because the merge doesn't need the rest. (Merge will stop\nscanning either input when it runs out of rows on the other side; so\nthis sort of plan is very fast if the range of keys on one side is\nsmaller than the range on the other. The numbers from the no-comparison\nEXPLAIN ANALYZE indicate that that is happening for your case.) So the\ncomparison should happen for at most 2K rows too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 10:33:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
},
{
"msg_contents": "I wrote:\n> No, the visibility check happens first. The timing does seem consistent\n> with the idea that the comparison is being done at all 15M rows, but\n> your other EXPLAIN shows that only 2K rows are actually retrieved, which\n> presumably is because the merge doesn't need the rest. (Merge will stop\n> scanning either input when it runs out of rows on the other side; so\n> this sort of plan is very fast if the range of keys on one side is\n> smaller than the range on the other. The numbers from the no-comparison\n> EXPLAIN ANALYZE indicate that that is happening for your case.) So the\n> comparison should happen for at most 2K rows too.\n\nAfter re-reading your explanation of what you're doing with the data,\nI thought of a possible explanation. Is the \"source\" value exactly\ncorrelated with the external_id_map primary key? What could be\nhappening is this:\n\n1. We can see from the EXPLAIN ANALYZE for the no-comparison case that\nthe merge join stops after fetching only 2175 rows from external_id_map.\nThis implies that the subject table joins to the first couple thousand\nentries in external_id_map and nothing beyond that. In particular, the\nmerge join must have observed that the join key in the 2175'th row (in\nindex order) of external_id_map was larger than the last (largest) join\nkey in subject.\n\n2. Let's suppose that source = 'SCH' is false for the 2175'th row of\nexternal_id_map and every one after that. Then what will happen is that\nthe index scan will vainly seek through the entire external_id_map,\nlooking for a row that its filter allows it to return, not knowing that\nthe merge join has no use for any of those rows.\n\nIf this is the story, and you need to make this sort of query fast,\nthen what you need to do is incorporate the \"source\" value into the\nexternal_id_map index key somehow. Then the index scan would be able to\nrealize that there is no possibility of finding another row with source\n= 'SCH'. The simplest way is just to make a 2-column index, but I\nwonder whether the source isn't actually redundant with the\nexternal_id_map primary key already ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 10:53:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
},
{
"msg_contents": "On Fri, 2005-11-11 at 10:53 -0500, Tom Lane wrote:\n> After re-reading your explanation of what you're doing with the data,\n> I thought of a possible explanation. Is the \"source\" value exactly\n> correlated with the external_id_map primary key?\n\nSort of. In this case, at the beginning of external_id_map, yes, though\nfurther down the table they're not. For example, if we got new subjects\nfrom 'SCH' at this point, they'd get assigned external_id_map.target_id\n(the primary key) values that are totally unrelated to what the current\nset are (the values in the external_id_map primary key just come off of\na sequence that we use for everything).\n\nRight now though, since the 'SCH' data came in a contiguous chunk right\nat the beginning and hasn't changed or grown since then, the correlation\nis pretty exact, I think. It's true that there are no 'SCH' rows in the\ntable after the first contiguous set (when I get back to work I'll check\nexactly what row that is). It's interesting that there are these\ncorrelations in the the data that didn't exist at all in my mental\nmodel.\n\n> what you need to do is incorporate the \"source\" value into the\n> external_id_map index key somehow. Then the index scan would be able to\n> realize that there is no possibility of finding another row with source\n> = 'SCH'. The simplest way is just to make a 2-column index\n\nI thought that's what I had done with the\nexternal_id_map_source_target_id index:\n\nstatgen=> \\d util.external_id_map\n Table \"util.external_id_map\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n source_id | character varying(32) | not null\n source | character(3) | not null\n target_id | bigint | not null\nIndexes:\n \"external_id_map_primary_key\" PRIMARY KEY, btree (target_id)\n \"external_id_map_source_source_id_unique\" UNIQUE, btree (source,\nsource_id)\n \"external_id_map_source\" btree (source)\n \"external_id_map_source_target_id\" btree (source, target_id)\nForeign-key constraints:\n \"external_id_map_source\" FOREIGN KEY (source) REFERENCES\nutil.source(id)\n\nSo if I understand your suggestion correctly, we're back to the \"why\nisn't this query using index foo\" FAQ. For the external_id_map table,\nthe statistics target for \"source\" is 200; the other two columns are at\nthe default level because I didn't think of them as being very\ninteresting statistics-wise. I suppose I should probably go ahead and\nraise the targets for every column of that table; I expect the planning\ntime is negligible, and our queries tend to be large data-wise. Beyond\nthat, I'm not sure how else to encourage the use of that index. If I\nchanged that index to be (target_id, source) would it make a difference?\n\nThanks for your help,\nMitch \n",
"msg_date": "Fri, 11 Nov 2005 08:57:35 -0800",
"msg_from": "Mitchell Skinner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower"
},
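A sketch of the per-column statistics bump mentioned above, in case it is useful; the column names are from the \d output earlier in the thread, and 200 simply mirrors the target already used for "source".

    ALTER TABLE util.external_id_map ALTER COLUMN source_id SET STATISTICS 200;
    ALTER TABLE util.external_id_map ALTER COLUMN target_id SET STATISTICS 200;
    ANALYZE util.external_id_map;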
{
"msg_contents": "Mitchell Skinner <[email protected]> writes:\n> On Fri, 2005-11-11 at 10:53 -0500, Tom Lane wrote:\n>> what you need to do is incorporate the \"source\" value into the\n>> external_id_map index key somehow. Then the index scan would be able to\n>> realize that there is no possibility of finding another row with source\n>> = 'SCH'. The simplest way is just to make a 2-column index\n\n> I thought that's what I had done with the\n> external_id_map_source_target_id index:\n> \"external_id_map_source_target_id\" btree (source, target_id)\n\n> If I changed that index to be (target_id, source) would it make a difference?\n\n[ fools around with a test case ... ] Seems like not :-(. PG is not\nbright enough to realize that an index on (source, target_id) can be\nused with a mergejoin on target_id, because the index sort order isn't\ncompatible. (Given the equality constraint on source, there is an\neffective compatibility. I had thought that 8.1 might be able to\ndetect this, but it seems not to in a simple test case --- there may be\na bug involved there. In any case 8.0 definitely won't see it.) An\nindex on (target_id, source) would be recognized as mergejoinable, but\nthat doesn't solve the problem because an index condition on the second\ncolumn doesn't provide enough information to know that the scan can stop\nearly.\n\nGiven your comment that the correlation is accidental, it may be that\nthere's not too much point in worrying. The planner is picking this\nplan only because it notices the asymmetry in key ranges, and as soon\nas some more rows get added with higher-numbered target_ids it will\nshift to something else (probably a hash join).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 12:16:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: same plan, add 1 condition, 1900x slower "
}
] |
[
{
"msg_contents": "> The point Gentlemen, was that Good Architecture is King. That's what\nI\n> was trying to emphasize by calling proper DB architecture step 0. All\n> other things being equal (and they usually aren't, this sort of stuff\nis\n> _very_ context dependent), the more of your critical schema that you\ncan\n> fit into RAM during normal operation the better.\n> \n> ...and it all starts with proper DB design. Otherwise, you are quite\n> right in stating that you risk wasting time, effort, and HW.\n> \n> Ron\n\n+1!\n\nI answer lots of question on this list that are in the form of 'query x\nis running to slow'. Often, the first thing that pops in my mind is\n'why are you running query x in the first place?' \n\nThe #1 indicator that something is not right is 'distinct' clause.\nDistinct (and its evil cousin, union) are often brought in to address\nproblems.\n\nThe human brain is the best optimizer. Even on old hardware the server\ncan handle a *lot* of data. It's just about where we add\ninefficiency...lousy database designs lead to lousy queries or (even\nworse) extra application code.\n\nMerlin\n",
"msg_date": "Thu, 10 Nov 2005 10:43:50 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Some help on buffers and other performance tricks"
}
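One common shape of the pattern being described, purely as an illustration (the tables below are invented, not taken from any post in this thread): DISTINCT gets added to hide the row multiplication caused by a join, where an EXISTS usually expresses the intent directly and needs no de-duplication step.

    -- DISTINCT papering over join fan-out:
    SELECT DISTINCT c.id, c.name
    FROM   customer c
    JOIN   orders o ON o.customer_id = c.id;

    -- What was usually meant:
    SELECT c.id, c.name
    FROM   customer c
    WHERE  EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);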
] |
[
{
"msg_contents": "My original post did not take into account VAT, I apologize for that oversight.\n\nHowever, unless you are naive, or made of gold, or have some sort of \"special\" relationship that requires you to, _NE VER_ buy RAM from your computer HW OEM. For at least two decades it's been a provable fact that OEMs like DEC, Sun, HP, Compaq, Dell, etc, etc charge far more per GB for the RAM they sell. Same goes for HDs. Buy your memory and HDs direct from reputable manufacturers, you'll get at least the same quality and pay considerably less.\n\nYour Dell example is evidence that supports my point. As of this writing, decent RAM should cost $75-$150 pr GB (not including VAT ;-) ). Don't let yourself be conned into paying more.\n\nI'm talking about decent RAM from reputable direct suppliers like Corsair and Kingston (_not_ their Value RAM, the actual Kingston branded stuff), OCZ, etc. Such companies sell via multiple channels, including repuatble websites like dealtime.com, pricewatch.com, newegg.com, etc, etc.\n\nYou are quite correct that there's poor quality junk out there. I was not talking about it, only reasonable quality components.\n\nRon\n\n\n-----Original Message-----\nFrom: Kurt De Grave <[email protected]>\nSent: Nov 10, 2005 5:40 AM\nTo: Ron Peacetree <[email protected]>\nCc: Charlie Savage <[email protected]>, [email protected]\nSubject: Re: [PERFORM] Sort performance on large tables\n\n\n\nOn Wed, 9 Nov 2005, Ron Peacetree wrote:\n\n> At this writing, 4 1GB DIMMs (4GB) should set you back ~$300 or less.\n> 4 2GB DIMMs (8GB) should cost ~$600. As of now, very few mainboards\n> support 4GB DIMMs and I doubt the D3000 has such a mainboard. If you\n> can use them, 4 4GB DIMMs (16GB) will currently set you back\n> ~$1600-$2400.\n\nSorry, but every time again I see unrealistic memory prices quoted when\nthe buy-more-memory argument passes by.\nWhat kind of memory are you buying for your servers? Non-ECC no-name\nmemory that doesn't even pass a one-hour memtest86 for 20% of the items\nyou buy?\n\nJust checked at Dell's web page: adding 4 1GB DIMMs to a PowerEdge 2850\nsets you back _1280 EURO_ excluding VAT. And that's after they already\ncharged you 140 euro for replacing the obsolete standard 4 512MB DIMMs\nwith the same capacity in 1GB DIMMs. So the 4GB upgrade actually costs\n1420 euro plus VAT, which is quite a bit more than $300.\n\nOkay, few people will happily buy at those prices. You can get the\nexact same goods much cheaper elsewhere, but it'll still cost you way\nmore than the number you gave, plus you'll have to drive to the server's\nlocation, open up the box yourself, and risk incompatibilities and\nsupport problems if there's ever something wrong with that memory.\n\nDisclaimers:\nI know that you're talking about a desktop in this particular case.\nI wouldn't see a need for ECC in a development box either.\nI know a Dell hasn't been the smartest choice for a database box lately\n(but politics...).\n\nkurt.\n\n\n",
"msg_date": "Thu, 10 Nov 2005 11:23:23 -0500 (EST)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort performance on large tables"
},
{
"msg_contents": "We use this memory in all our servers (well - the 512 sticks). 0\nproblems to date:\n\nhttp://www.newegg.com/Product/Product.asp?Item=N82E16820145513\n\n$163 for 1GB.\n\nThis stuff is probably better than the Samsung RAM dell is selling you\nfor 3 times the price.\n\nAlex\n\nOn 11/10/05, Ron Peacetree <[email protected]> wrote:\n> My original post did not take into account VAT, I apologize for that oversight.\n>\n> However, unless you are naive, or made of gold, or have some sort of \"special\" relationship that requires you to, _NE VER_ buy RAM from your computer HW OEM. For at least two decades it's been a provable fact that OEMs like DEC, Sun, HP, Compaq, Dell, etc, etc charge far more per GB for the RAM they sell. Same goes for HDs. Buy your memory and HDs direct from reputable manufacturers, you'll get at least the same quality and pay considerably less.\n>\n> Your Dell example is evidence that supports my point. As of this writing, decent RAM should cost $75-$150 pr GB (not including VAT ;-) ). Don't let yourself be conned into paying more.\n>\n> I'm talking about decent RAM from reputable direct suppliers like Corsair and Kingston (_not_ their Value RAM, the actual Kingston branded stuff), OCZ, etc. Such companies sell via multiple channels, including repuatble websites like dealtime.com, pricewatch.com, newegg.com, etc, etc.\n>\n> You are quite correct that there's poor quality junk out there. I was not talking about it, only reasonable quality components.\n>\n> Ron\n>\n>\n> -----Original Message-----\n> From: Kurt De Grave <[email protected]>\n> Sent: Nov 10, 2005 5:40 AM\n> To: Ron Peacetree <[email protected]>\n> Cc: Charlie Savage <[email protected]>, [email protected]\n> Subject: Re: [PERFORM] Sort performance on large tables\n>\n>\n>\n> On Wed, 9 Nov 2005, Ron Peacetree wrote:\n>\n> > At this writing, 4 1GB DIMMs (4GB) should set you back ~$300 or less.\n> > 4 2GB DIMMs (8GB) should cost ~$600. As of now, very few mainboards\n> > support 4GB DIMMs and I doubt the D3000 has such a mainboard. If you\n> > can use them, 4 4GB DIMMs (16GB) will currently set you back\n> > ~$1600-$2400.\n>\n> Sorry, but every time again I see unrealistic memory prices quoted when\n> the buy-more-memory argument passes by.\n> What kind of memory are you buying for your servers? Non-ECC no-name\n> memory that doesn't even pass a one-hour memtest86 for 20% of the items\n> you buy?\n>\n> Just checked at Dell's web page: adding 4 1GB DIMMs to a PowerEdge 2850\n> sets you back _1280 EURO_ excluding VAT. And that's after they already\n> charged you 140 euro for replacing the obsolete standard 4 512MB DIMMs\n> with the same capacity in 1GB DIMMs. So the 4GB upgrade actually costs\n> 1420 euro plus VAT, which is quite a bit more than $300.\n>\n> Okay, few people will happily buy at those prices. 
You can get the\n> exact same goods much cheaper elsewhere, but it'll still cost you way\n> more than the number you gave, plus you'll have to drive to the server's\n> location, open up the box yourself, and risk incompatibilities and\n> support problems if there's ever something wrong with that memory.\n>\n> Disclaimers:\n> I know that you're talking about a desktop in this particular case.\n> I wouldn't see a need for ECC in a development box either.\n> I know a Dell hasn't been the smartest choice for a database box lately\n> (but politics...).\n>\n> kurt.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n",
"msg_date": "Thu, 10 Nov 2005 11:34:03 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort performance on large tables"
}
] |
[
{
"msg_contents": "This is related to my post the other day about sort performance.\n\nPart of my problem seems to be that postgresql is greatly overestimating \nthe cost of index scans. As a result, it prefers query plans that \ninvolve seq scans and sorts versus query plans that use index scans.\nHere is an example query:\n\nSELECT tlid, count(tlid)\nFROM completechain\nGROUP BY tlid;\t\n\nApproach #1 - seq scan with sort:\n\n\"GroupAggregate (cost=10177594.61..11141577.89 rows=48199164 width=4) \n(actual time=7439085.877..8429628.234 rows=47599910 loops=1)\"\n\" -> Sort (cost=10177594.61..10298092.52 rows=48199164 width=4) \n(actual time=7439085.835..8082452.721 rows=48199165 loops=1)\"\n\" Sort Key: tlid\"\n\" -> Seq Scan on completechain (cost=0.00..2229858.64 \nrows=48199164 width=4) (actual time=10.788..768403.874 rows=48199165 \nloops=1)\"\n\"Total runtime: 8596987.505 ms\"\n\n\nApproach #2 - index scan (done by setting enable_seqscan to false and \nenable_sort to false):\n\n\"GroupAggregate (cost=0.00..113713861.43 rows=48199164 width=4) (actual \ntime=53.211..2652227.201 rows=47599910 loops=1)\"\n\" -> Index Scan using idx_completechain_tlid on completechain \n(cost=0.00..112870376.06 rows=48199164 width=4) (actual \ntime=53.168..2312426.321 rows=48199165 loops=1)\"\n\"Total runtime: 2795420.933 ms\"\n\nApproach #1 is estimated to be 10 times less costly, yet takes 3 times \nlonger to execute.\n\n\nMy questions:\n\n1. Postgresql estimates the index scan will be 50 times more costly \nthan the seq scan (112870376 vs 2229858) yet in fact it only takes 3 \ntimes longer to execute (2312426 s vs. 768403 s). My understanding is \nthat postgresql assumes, via the random_page_cost parameter, that an \nindex scan will take 4 times longer than a sequential scan. So why is \nthe analyzer estimating it is 50 times slower?\n\n2. In approach #1, the planner thinks the sort will take roughly 4 \ntimes longer [(10,298,092 - 2,229,858) / 2,229,858] than the sequential \nscan. Yet it really takes almost ten times longer. It seems as is the \nplanner is greatly underestimating the sort cost?\n\nDue to these two apparent miscalculations, postgresql is choosing the \nwrong query plan to execute this query. I've attached my \npostgresql.conf file below just in case this is due to some \nmisconfiguration on my part.\n\nSome setup notes:\n* All tables are vacuumed and analyzed\n* Running Postgresql 8.1 on Suse 10\n* Low end hardware - Dell Dimension 3000, 1GB ram, 1 built-in 80 GB IDE \ndrive, 1 SATA Seagate 400GB drive. The IDE drive has the OS and the WAL \nfiles, the SATA drive the database. 
From hdparm the max IO for the IDE \ndrive is about 50Mb/s and the SATA drive is about 65Mb/s.\n\n\n-------------------------------\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\nshared_buffers = 40000 # 40000 buffers * 8192 \nbytes/buffer = 327,680,000 bytes\n#shared_buffers = 1000 # min 16 or max_connections*2, 8KB each\n\ntemp_buffers = 5000\n#temp_buffers = 1000 # min 100, 8KB each\n#max_prepared_transactions = 5 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared \nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 16384 # in Kb\n#work_mem = 1024 # min 64, size in KB\n\nmaintenance_work_mem = 262144 # in kb\n#maintenance_work_mem = 16384 # min 1024, size in KB\n#max_stack_depth = 2048 # min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 60000\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n#bgwriter_lru_percent = 1.0 # 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5 # 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333 # 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = on # turns forced synchronization on or off\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\n\nwal_buffers = 128\n#wal_buffers = 8 # min 4, 8KB each\n\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 256 # 256 * 16Mb = 4,294,967,296 bytes\ncheckpoint_timeout = 1200 # 1200 seconds (20 minutes)\ncheckpoint_warning = 30 # in seconds, 0 is off\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a logfile\n # segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 80000 # 80000 * 8192 = 655,360,000 bytes\n#effective_cache_size = 1000 # typically 8KB each\n\nrandom_page_cost = 2.5 # units are one sequential page 
fetch\n#random_page_cost = 4 # units are one sequential page fetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100 # range 1-1000\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOINs\n\n\n#---------------------------------------------------------------------------\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = on\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on\n\n#stats_start_collector = on\n#stats_command_string = off\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = true\nautovacuum_naptime = 600\n\n#autovacuum = false # enable autovacuum subprocess?\n#autovacuum_naptime = 60 # time between autovacuum runs, in secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates before\n # vacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n # vacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n # analyze\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n # vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovac, -1 means use\n # vacuum_cost_\n\n\n----------------------\n-- Table: tiger.completechain\n\n-- DROP TABLE tiger.completechain;\n\nCREATE TABLE tiger.completechain\n(\n ogc_fid int4 NOT NULL DEFAULT \nnextval('completechain_ogc_fid_seq'::regclass),\n module varchar(8) NOT NULL,\n tlid int4 NOT NULL,\n side1 int4,\n source varchar(1) NOT NULL,\n fedirp varchar(2),\n fename varchar(30),\n fetype varchar(4),\n fedirs varchar(2),\n cfcc varchar(3) NOT NULL,\n fraddl varchar(11),\n toaddl varchar(11),\n fraddr varchar(11),\n toaddr varchar(11),\n friaddl varchar(1),\n toiaddl varchar(1),\n friaddr varchar(1),\n toiaddr varchar(1),\n zipl int4,\n zipr int4,\n aianhhfpl int4,\n aianhhfpr int4,\n aihhtlil varchar(1),\n aihhtlir varchar(1),\n census1 varchar(1),\n census2 varchar(1),\n statel int4,\n stater int4,\n countyl int4,\n countyr int4,\n cousubl int4,\n cousubr int4,\n submcdl int4,\n submcdr int4,\n placel int4,\n placer int4,\n tractl int4,\n tractr int4,\n blockl int4,\n blockr int4,\n wkb_geometry public.geometry NOT NULL,\n CONSTRAINT enforce_dims_wkb_geometry CHECK (ndims(wkb_geometry) = 2),\n CONSTRAINT enforce_geotype_wkb_geometry CHECK \n(geometrytype(wkb_geometry) = 'LINESTRING'::text OR wkb_geometry IS NULL),\n CONSTRAINT enforce_srid_wkb_geometry CHECK 
(srid(wkb_geometry) = 4269)\n)\nWITHOUT OIDS;\nALTER TABLE tiger.completechain OWNER TO postgres;\n\n\n-- Index: tiger.idx_completechain_tlid\n\n-- DROP INDEX tiger.idx_completechain_tlid;\n\nCREATE INDEX idx_completechain_tlid\n ON tiger.completechain\n USING btree\n (tlid);\n\n\n\n",
"msg_date": "Thu, 10 Nov 2005 09:59:21 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index Scan Costs versus Sort"
},
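A rough sketch of how the second plan above can be reproduced for comparison. The settings are session-local only and the query is the one from the post; nothing here is meant as a permanent configuration change.

-- Disable the relevant planner options for this session, re-run the
-- query under EXPLAIN ANALYZE, then put the defaults back.
SET enable_seqscan = off;
SET enable_sort = off;

EXPLAIN ANALYZE
SELECT tlid, count(tlid)
FROM completechain
GROUP BY tlid;

RESET enable_seqscan;
RESET enable_sort;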
{
"msg_contents": "Charlie Savage <[email protected]> writes:\n> 1. Postgresql estimates the index scan will be 50 times more costly \n> than the seq scan (112870376 vs 2229858) yet in fact it only takes 3 \n> times longer to execute (2312426 s vs. 768403 s). My understanding is \n> that postgresql assumes, via the random_page_cost parameter, that an \n> index scan will take 4 times longer than a sequential scan. So why is \n> the analyzer estimating it is 50 times slower?\n\nThe other factors that are likely to affect this are index correlation\nand effective cache size. It's fairly remarkable that a full-table\nindex scan only takes 3 times longer than a seqscan; you must have both\na high correlation and a reasonably large cache. You showed us your\neffective_cache_size setting, but what's the pg_stats entry for \ncompletechain.tlid contain? Can you quantify what the physical\nordering of tlid values is likely to be?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2005 13:04:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Scan Costs versus Sort "
},
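A minimal sketch of the pg_stats lookup being asked for here, using the schema, table and column names that appear later in the thread:

SELECT null_frac, avg_width, n_distinct, correlation
FROM pg_stats
WHERE schemaname = 'tiger'
  AND tablename = 'completechain'
  AND attname = 'tlid';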
{
"msg_contents": "Hi Tom,\n\n From pg_stats:\n\nschema = \"tiger\";\ntablename = \"completechain\";\nattname = \"tlid\";\nnull_frac = 0;\navg_width = 4;\nn_distinct = -1;\nmost_common_vals = ;\nmost_common_freqs = ;\ncorrelation = 0.155914;\n\nNote that I have default_statistics_target set to 100. Here is the \nfirst few values from histogram_bounds:\n\n\"{102450,2202250,4571797,6365754,8444936,10541593,12485818,14545727,16745594,18421868,20300549,22498643,24114709,26301001,28280632,30370123,32253657,33943046,35898115,37499478,39469054,41868498,43992143,45907830,47826340,49843926,52051798,54409298,56447416, \n\n\nThe tlid column is a US Census bureau ID assigned to each chain in the \nUS - where a chain is a road segment, river segment, railroad segment, \netc. The data is loaded on state-by-state basis, and then a \ncounty-by-county basis. There is no overall ordering to TLIDs, although \nperhaps there is some local ordering at the county level (but from a \nquick look at the data I don't see any, and the correlation factor \nindicates there isn't any if I am interpreting it correctly).\n\nAny other info that would be helpful to see?\n\n\nCharlie\n\n\nTom Lane wrote:\n> Charlie Savage <[email protected]> writes:\n>> 1. Postgresql estimates the index scan will be 50 times more costly \n>> than the seq scan (112870376 vs 2229858) yet in fact it only takes 3 \n>> times longer to execute (2312426 s vs. 768403 s). My understanding is \n>> that postgresql assumes, via the random_page_cost parameter, that an \n>> index scan will take 4 times longer than a sequential scan. So why is \n>> the analyzer estimating it is 50 times slower?\n> \n> The other factors that are likely to affect this are index correlation\n> and effective cache size. It's fairly remarkable that a full-table\n> index scan only takes 3 times longer than a seqscan; you must have both\n> a high correlation and a reasonably large cache. You showed us your\n> effective_cache_size setting, but what's the pg_stats entry for \n> completechain.tlid contain? Can you quantify what the physical\n> ordering of tlid values is likely to be?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n",
"msg_date": "Thu, 10 Nov 2005 12:00:51 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index Scan Costs versus Sort"
},
{
"msg_contents": "Following up with some additional information.\n\nThe machine has 1Gb physical RAM. When I run the query (with sort and \nseqscan enabled), top reports (numbers are fairly consistent):\n\nMem: 1,032,972k total, 1,019,516k used, 13,412k free, 17,132k buffers\n\nSwap: 2,032,140k total, 17,592k used, 2,014,548k free, 742,636k cached\n\nThe postmaster process is using 34.7% of RAM - 359m virt, 349 res, 319m. \n No other process is using more than 2% of the memory.\n\n From vmstat:\n\nr b swpd free buff cache\n1 0 17592 13568 17056 743676\n\nvmstat also shows no swapping going on.\n\nNote that I have part of the database, for just Colorado, on my Windows \nXP laptop (table size for completechain table in this case is 1Gb versus \n18Gb for the whole US) for development purposes. I see the same \nbehavior on it, which is a Dell D6100 laptop with 1Gb, running 8.1, and \na default postgres.conf file with three changes (shared_buffers set to \n7000, and work_mem set to 8192, effective_cache_size 2500).\n\nOut of curiosity, how much longer would an index_scan expected to be \nversus a seq scan? I was under the impression it would be about a facto \nof 4, or is that not usually the case?\n\nThanks for the help,\n\nCharlie\n",
"msg_date": "Thu, 10 Nov 2005 17:11:48 -0700",
"msg_from": "Charlie Savage <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index Scan Costs versus Sort"
},
{
"msg_contents": "Charlie Savage <[email protected]> writes:\n> Out of curiosity, how much longer would an index_scan expected to be \n> versus a seq scan? I was under the impression it would be about a facto \n> of 4, or is that not usually the case?\n\nNo, it can easily be dozens or even hundreds of times worse, in the\nworst case. The factor of 4 you are thinking of is the random_page_cost\nwhich is the assumed ratio between the cost of randomly fetching a page\nand the cost of fetching it in a sequential scan of the whole table.\nNot only is the sequential scan fetch normally much cheaper (due to less\nseeking and the kernel probably catching on and doing read-ahead), but\nif there are N tuples on a page then a seqscan reads them all with one\npage fetch. In the worst case an indexscan might fetch the page from\ndisk N separate times, if all its tuples are far apart in the index\norder. This is all on top of the extra cost to read the index itself,\ntoo.\n\nThe planner's estimate of 50x higher cost is not out of line for small\ntuples (large N) and a badly-out-of-order table. What's puzzling is\nthat you seem to be getting near best-case behavior in what does not\nseem to be a best-case scenario for an indexscan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Nov 2005 19:32:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index Scan Costs versus Sort "
}
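One possible follow-up, not discussed in the thread itself: if the low physical correlation is the dominant cost, the table can be physically reordered by the index and then re-analyzed. This is only a sketch; CLUSTER rewrites the whole table and takes an exclusive lock, and the syntax shown is the pre-8.3 form (CLUSTER indexname ON tablename), assumed to match the 8.1 server in question.

-- Reorder the table by idx_completechain_tlid, then refresh the statistics.
CLUSTER idx_completechain_tlid ON completechain;
ANALYZE completechain;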
] |
[
{
"msg_contents": "Hello,\n\nI'm perplexed. I'm trying to find out why some queries are taking a long \ntime, and have found that after running analyze, one particular query \nbecomes slow.\n\nThis query is based on a view that is based on multiple left outer joins \nto merge data from lots of tables.\n\nIf I drop the database and reload it from a dump, the query result is \ninstaneous (less than one second).\n\nBut after I run analyze, it then takes much longer to run -- about 10 \nseconds, give or take a few depending on the hardware I'm testing it on.\nEarlier today, it was taking almost 30 seconds on the actual production \nserver -- I restarted pgsql server and the time got knocked down to \nabout 10 seconds -- another thing I don't understand.\n\nI've run the query a number of times before and after running analyze, \nand the problem reproduces everytime. I also ran with \"explain\", and saw \nthat the costs go up dramatically after I run analyze.\n\nI'm fairly new to postgresql and not very experienced as a db admin to \nbegin with, but it looks like I'm going to have to get smarter about \nthis stuff fast, unless it's something the programmers need to deal with \nwhen constructing their code and queries or designing the databases.\n\nI've already learned that I've commited the cardinal sin of configuring \nmy new database server with RAID 5 instead of something more sensible \nfor databases like 0+1, but I've been testing out and replicating this \nproblem on different hardware, so I know that this issue is not the \ndirect cause of this.\n\nThanks for any info. I can supply more info (like config files, schemas, \netc.) if you think it might help. But I though I would just describe the \nproblem for starters.\n\n-DW\n\n",
"msg_date": "Fri, 11 Nov 2005 14:16:01 -0500",
"msg_from": "DW <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow queries after ANALYZE"
},
{
"msg_contents": "DW <[email protected]> writes:\n> I'm perplexed. I'm trying to find out why some queries are taking a long \n> time, and have found that after running analyze, one particular query \n> becomes slow.\n\nThis implies that the planner's default choice of plan (without any\nstatistics) is better than its choice when informed by statistics.\nThis is undesirable but not unheard of :-(\n\nIt would be interesting to see EXPLAIN ANALYZE results in both cases,\nplus the contents of the relevant pg_stats rows. (BTW, you need not\ndump and reload to get back to the virgin state --- just delete the\nrelevant rows from pg_statistic.) Also we'd want to know exactly what\nPG version this is, and on what sort of platform.\n\nYou might be able to fix things by increasing the statistics targets or\ntweaking planner cost parameters, but it'd be best to investigate before\ntrying to fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 14:32:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries after ANALYZE "
},
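A sketch of the "delete the relevant rows from pg_statistic" step mentioned above, covering all columns of a single table. It must be run as a superuser, and 'mytable' is a placeholder name rather than one taken from the report:

-- Remove the collected statistics for one table, returning it to the
-- un-analyzed state for planner purposes.
DELETE FROM pg_statistic
WHERE starelid = (SELECT oid FROM pg_class WHERE relname = 'mytable');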
{
"msg_contents": "Tom Lane wrote:\n\n> It would be interesting to see EXPLAIN ANALYZE results in both cases,\n> plus the contents of the relevant pg_stats rows. (BTW, you need not\n> dump and reload to get back to the virgin state --- just delete the\n> relevant rows from pg_statistic.) Also we'd want to know exactly what\n> PG version this is, and on what sort of platform.\n> \n\nThanks for replying. I've got a message into to my team asking if I need \nto de-identify some of the table names before I go submitting output to \na public mailing list.\n\nIn the meantime, again I'm new to this -- I got pg_stats; which rows are \n the relevent ones?\n\nAlso, I am running postgresql-server-7.4.9 from FreeBSD port (with \noptimized CFLAGS turned on during compiling)\n\nOS: FreeBSD 5.4 p8\n\nThanks,\nDW\n\n",
"msg_date": "Fri, 11 Nov 2005 15:48:16 -0500",
"msg_from": "DW <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries after ANALYZE"
},
{
"msg_contents": "DW <[email protected]> writes:\n> In the meantime, again I'm new to this -- I got pg_stats; which rows are \n> the relevent ones?\n\nThe ones for columns that are mentioned in the problem query.\nI don't think you need to worry about columns used only in the SELECT\noutput list, but anything used in WHERE, GROUP BY, etc is interesting.\n\n> Also, I am running postgresql-server-7.4.9 from FreeBSD port (with \n> optimized CFLAGS turned on during compiling)\n> OS: FreeBSD 5.4 p8\n\nThe hardware environment (particularly disks/filesystems) is relevant\ntoo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Nov 2005 16:25:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries after ANALYZE "
},
{
"msg_contents": "On 11/11/05, DW <[email protected]> wrote:\n>\n> I'm perplexed. I'm trying to find out why some queries are taking a long\n> time, and have found that after running analyze, one particular query\n> becomes slow.\n>\n\ni have had exactly the same problem very recently.\nwhat helped? increasing statistics on come column.\nwhich ones?\nmake:\nexplain analyze <your select>;\nand check in which situations you gget the biggest change of \"estiamted\nrows\" and \"actual rows\".\nthen check what this particular part of your statement is touching, and\nincrease appropriate statistics.\n\ndepesz\n\nOn 11/11/05, DW <[email protected]> wrote:\nI'm perplexed. I'm trying to find out why some queries are taking a longtime, and have found that after running analyze, one particular querybecomes slow.\ni have had exactly the same problem very recently.\nwhat helped? increasing statistics on come column.\nwhich ones?\nmake:\nexplain analyze <your select>;\nand check in which situations you gget the biggest change of \"estiamted rows\" and \"actual rows\".\nthen check what this particular part of your statement is touching, and increase appropriate statistics.\n\ndepesz",
"msg_date": "Sat, 12 Nov 2005 10:14:49 +0100",
"msg_from": "hubert depesz lubaczewski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow queries after ANALYZE"
},
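A sketch of the "increase appropriate statistics" step, with placeholder table and column names ('mytable', 'mycolumn'); the higher target only takes effect at the next ANALYZE:

ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 100;
ANALYZE mytable;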
{
"msg_contents": "DW wrote:\n> Hello,\n> \n> I'm perplexed. I'm trying to find out why some queries are taking a long \n> time, and have found that after running analyze, one particular query \n> becomes slow.\n> \n> This query is based on a view that is based on multiple left outer joins \n> to merge data from lots of tables.\n> \n> If I drop the database and reload it from a dump, the query result is \n> instaneous (less than one second).\n> \n> But after I run analyze, it then takes much longer to run -- about 10 \n> seconds, give or take a few depending on the hardware I'm testing it on.\n> Earlier today, it was taking almost 30 seconds on the actual production \n> server -- I restarted pgsql server and the time got knocked down to \n> about 10 seconds -- another thing I don't understand.\n> \n> I've run the query a number of times before and after running analyze, \n> and the problem reproduces everytime. I also ran with \"explain\", and saw \n> that the costs go up dramatically after I run analyze.\n> \n> I'm fairly new to postgresql and not very experienced as a db admin to \n> begin with, but it looks like I'm going to have to get smarter about \n> this stuff fast, unless it's something the programmers need to deal with \n> when constructing their code and queries or designing the databases.\n> \n> I've already learned that I've commited the cardinal sin of configuring \n> my new database server with RAID 5 instead of something more sensible \n> for databases like 0+1, but I've been testing out and replicating this \n> problem on different hardware, so I know that this issue is not the \n> direct cause of this.\n> \n> Thanks for any info. I can supply more info (like config files, schemas, \n> etc.) if you think it might help. But I though I would just describe the \n> problem for starters.\n> \n> -DW\n> \nWell, for whatever it's worth, on my test box, I upgraded from postgreql \n7.4.9 to 8.1, and that seems to make all the difference in the world.\n\nThese complex queries are instantaneous, and the query planner when I \nrun EXPLAIN ANALYZE both before and after running ANALYZE displays \nresults more in line with what is expected (< 60ms).\n\nWhatever changes were introduced in 8.x seems to make a huge improvment \nin query performance.\n\n\n\n\n> \n\n",
"msg_date": "Mon, 14 Nov 2005 13:53:40 -0500",
"msg_from": "DW <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: slow queries after ANALYZE"
}
] |
[
{
"msg_contents": "That sure seems to bolster the theory that performance is degrading\nbecause you exhaust the cache space and need to start reading\nindex pages. When inserting sequential data, you don't need to\nrandomly access pages all over the index tree.\n\n-Kevin\n\n\n>>> Kelly Burkhart <[email protected]> >>>\n\nI modified my original program to insert generated, sequential data.\nThe following graph shows the results to be flat:\n\n<http://kkcsm.net/pgcpy_20051111_1.jpg>\n\n",
"msg_date": "Fri, 11 Nov 2005 16:58:17 -0600",
"msg_from": "\"Kevin Grittner\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.x index insert performance"
}
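An illustrative way to contrast the two cases under discussion: the same keys inserted in sequential order versus random order. The table name and row count are made up for the sketch, and generate_series() is assumed to be available (8.0 and later):

CREATE TABLE ins_test (k integer PRIMARY KEY);

-- Sequential key order: new entries land at the right edge of the btree,
-- so the hot index pages stay cached and insert speed stays flat.
INSERT INTO ins_test SELECT g FROM generate_series(1, 1000000) g;

TRUNCATE ins_test;

-- Same keys presented in random order: inserts touch pages all over the
-- index, so throughput drops once the index no longer fits in cache.
INSERT INTO ins_test
SELECT g FROM generate_series(1, 1000000) g ORDER BY random();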
] |
[
{
"msg_contents": "ns30966:~# NOTICE: Executing SQL: update tblPrintjobs set \nApplicationType = 1 where ApplicationType is null and \nupper(DocumentName) like '%.DOC'\n\nns30966:~# NOTICE: Executing SQL: update tblPrintjobs set \nApplicationType = 1 where ApplicationType is null and \nupper(DocumentName) like 'DOCUMENT%'\n\nns30966:~#\nns30966:~# ERROR: could not read block 3231 of relation \n1663/165707259/173511769: Input/output error\nCONTEXT: SQL statement \"update tblPrintjobs set ApplicationType = 1 \nwhere ApplicationType is null and upper(DocumentName) like \n'DOCUMENT%'\"\nPL/pgSQL function \"fnapplicationtype\" line 30 at execute statement\n\n[1]+ Exit 1 psql -d kpmg -c \"select \nfnApplicationType()\"\n\n\nI get this error. Is this hardware related or could it be something \nwith the postgresql.conf settings.\nI changed them for performance reasons. (More memory, more wal \nbuffers).\nThere are 2 databases. One got the error yesterday, I dropped it (was \nbrand new), recreated it and the error was gone.\nNow the error is there again on another database.\n\nPersonally, I think it's a HD error.\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Sat, 12 Nov 2005 15:18:09 +0100",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "IO Error "
},
{
"msg_contents": "Yves Vindevogel <[email protected]> writes:\n> ns30966:~# ERROR: could not read block 3231 of relation \n> 1663/165707259/173511769: Input/output error\n\n> I get this error. Is this hardware related or could it be something \n> with the postgresql.conf settings.\n\nIt's a hardware failure --- bad disk block, likely. You might find more\ndetails in the kernel log (/var/log/messages or equivalent).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Nov 2005 10:53:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IO Error "
}
] |
[
{
"msg_contents": "We have a large DB with partitioned tables in postgres. We have had\ntrouble with a ORDER/LIMIT type query. The order and limit are not\npushed down to the sub-tables....\n \nCREATE TABLE base (\n foo int \n);\n \nCREATE TABLE bar_0\n extra int\n) INHERITS (base);\nALTER TABLE bar ADD PRIMARY KEY (foo);\n \n-- repeated for bar_0... bar_40\n \nSELECT foo FROM base ORDER BY foo LIMIT 10;\n \nis real slow. What is required to make the query planner generate the\nfollowing instead... (code change i know, but how hard would it be?)\n \nSELECT\n foo\nFROM\n(\n SELECT\n *\n FROM bar_0\n ORDER BY foo LIMIT 10\nUNION ALL\n SELECT\n *\n FROM bar_1\n ORDER BY foo LIMIT 10\n....\n) AS base\nORDER BY foo\nLIMIT 10;\n \n \n\n\n\n\n\nWe have a large DB \nwith partitioned tables in postgres. We have had trouble with a \nORDER/LIMIT type query. The order and limit are not pushed down to the \nsub-tables....\n \nCREATE TABLE base \n(\n foo int \n);\n \nCREATE TABLE \nbar_0\n extra int\n) INHERITS \n(base);\nALTER TABLE bar ADD \nPRIMARY KEY (foo);\n \n-- repeated for \nbar_0... bar_40\n \nSELECT foo FROM base \nORDER BY foo LIMIT 10;\n \nis real slow. What \nis required to make the query planner generate the following instead... (code \nchange i know, but how hard would it be?)\n \nSELECT\n foo\nFROM\n(\n \nSELECT\n \n*\n FROM bar_0\n ORDER BY foo LIMIT \n10\nUNION ALL\n SELECT\n \n*\n FROM bar_1\n ORDER BY foo LIMIT \n10\n....\n) AS base\nORDER BY foo\nLIMIT 10;",
"msg_date": "Mon, 14 Nov 2005 08:25:10 -0500",
"msg_from": "\"Marc Morin\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "sort/limit across union all"
}
] |
[
{
"msg_contents": "Does anyone know what factors affect the recovery time of postgres if it does not shutdown cleanly? With the same size database I've seen times from a few seconds to a few minutes. The longest time was 33 minutes. The 33 minutes was after a complete system crash and reboot so there are a lot of other things going on as well. 125 seconds was the longest time I could reproduce by just doing a kill -9 on postmaster. \n\nIs it the size of the transaction log? The dead space in files? \n\nI'm running postges 7.3.4 in Red Hat 8.0. Yes, yes I know it's crazy but for a variety of reasons upgrading is not currently feasible.\n\nJim\n\n\n\n\n\n\nPostgres recovery time\n\n\n\nDoes anyone know what factors affect the recovery time of postgres if it does not shutdown cleanly? With the same size database I've seen times from a few seconds to a few minutes. The longest time was 33 minutes. The 33 minutes was after a complete system crash and reboot so there are a lot of other things going on as well. 125 seconds was the longest time I could reproduce by just doing a kill -9 on postmaster. \nIs it the size of the transaction log? The dead space in files? \n\nI'm running postges 7.3.4 in Red Hat 8.0. Yes, yes I know it's crazy but for a variety of reasons upgrading is not currently feasible.\nJim",
"msg_date": "Mon, 14 Nov 2005 10:34:42 -0500",
"msg_from": "\"Piccarello, James (James)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres recovery time"
},
{
"msg_contents": "Piccarello, James (James) wrote:\n\n> Does anyone know what factors affect the recovery time of postgres if \n> it does not shutdown cleanly? With the same size database I've seen \n> times from a few seconds to a few minutes. The longest time was 33 \n> minutes. The 33 minutes was after a complete system crash and reboot \n> so there are a lot of other things going on as well. 125 seconds was \n> the longest time I could reproduce by just doing a kill -9 on postmaster.\n>\n> Is it the size of the transaction log? The dead space in files?\n>\nI don't know much about postgresql, but typically WAL mechanisms\nwill exhibit recovery times that are bounded by the amount of log record\ndata written since the last checkpoint. The 'worst' case will be where\nyou have continuous writes to the database and a long checkpoint\ninterval. In that case many log records must be replayed into the\ndata files upon recovery. The 'best' case would be zero write transactions\nsince the last checkpoint. In that case recovery would be swift since\nthere are no live records to recover. In your tests you are probably\nexercising this 'best' or near best case.\n\n\n\n\n\n\n\n\n\nPiccarello, James (James) wrote:\n\n\n\nPostgres recovery time\n\nDoes anyone know what factors affect\nthe recovery time of postgres if it does not shutdown cleanly? With the\nsame size database I've seen times from a few seconds to a few\nminutes. The longest time was 33 minutes. The 33 minutes was after a\ncomplete system crash and reboot so there are a lot of other things\ngoing on as well. 125 seconds was the longest time I could reproduce by\njust doing a kill -9 on postmaster. \nIs it the size of the transaction log?\nThe dead space in files? \n\n\nI don't know much about postgresql,\nbut typically WAL mechanisms\nwill exhibit recovery times that are bounded by the amount of log record\ndata written since the last checkpoint. The 'worst' case will be where \nyou have continuous writes to the database and a long checkpoint\ninterval. In that case many log records must be replayed into the\ndata files upon recovery. The 'best' case would be zero write\ntransactions\nsince the last checkpoint. In that case recovery would be swift since\nthere are no live records to recover. In your tests you are probably\nexercising this 'best' or near best case.",
"msg_date": "Mon, 14 Nov 2005 14:41:29 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres recovery time"
}
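For reference, the bound described above is governed by the checkpoint settings in postgresql.conf. The values below are only illustrative (they match the commented-out defaults shown in the configuration excerpt earlier in this archive); lowering them trades more frequent checkpoint I/O for less WAL to replay after a crash.

checkpoint_segments = 3     # logfile segments (16MB each) between forced checkpoints
checkpoint_timeout = 300    # force a checkpoint at least every 300 seconds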
] |
[
{
"msg_contents": "\nWe've got an older system in production (PG 7.2.4). Recently\none of the users has wanted to implement a selective delete,\nbut is finding that the time it appears to take exceeds her\npatience factor by several orders of magnitude. Here's\na synopsis of her report. It appears that the \"WHERE\nid IN ...\" is resulting in a seq scan that is causing\nthe problem, but we're not SQL expert enough to know\nwhat to do about it.\n\nCan someone point out what we're doing wrong, or how we\ncould get a (much) faster delete? Thanks!\n\nReport:\n============================================================\nThis command yields results in only a few seconds:\n\n# SELECT at.id FROM \"tmp_table2\" at, \"tmp_tabl2e\" a\n# WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n\nHowever, the following command does not seen to want to ever\ncomplete (the person running this killed it after 1/2 hour).\n\n# DELETE FROM \"tmp_table2\" WHERE id IN\n# (SELECT at.id FROM \"tmp_table2\" at, \"tmp_table2\" a\n# WHERE at.id=a.id and a.name='obsid' and a.value='oid080505');\n\n==============================================================\n\nThe table has four columns. There are 6175 rows satifying the condition\ngiven, and the table itself has 1539688 entries. Layout is:\n\nlab.devel.configdb=# \\d tmp_table2\n Table \"tmp_table2\"\n Column | Type | Modifiers\n--------+--------------------------+-----------\n id | character varying(64) |\n name | character varying(64) |\n units | character varying(32) |\n value | text |\n time | timestamp with time zone |\n\n==============================================================\n\nlab.devel.configdb=# EXPLAIN DELETE FROM \"tmp_table2\" WHERE id IN\nlab.devel.configdb-# (SELECT at.id FROM \"tmp_table2\" at, \"tmp_table2\" a\nlab.devel.configdb(# WHERE at.id=a.id AND a.name='obsid' AND a.value='oid080505');\nNOTICE: QUERY PLAN:\n\nSeq Scan on tmp_table2 (cost=0.00..154893452082.10 rows=769844 width=6)\n SubPlan\n -> Materialize (cost=100600.52..100600.52 rows=296330 width=100)\n -> Hash Join (cost=42674.42..100600.52 rows=296330 width=100)\n -> Seq Scan on tmp_table2 at (cost=0.00..34975.88 rows=1539688 width=50)\n -> Hash (cost=42674.32..42674.32 rows=38 width=50)\n -> Seq Scan on tmp_table2 a (cost=0.00..42674.32 rows=38 width=50)\nEXPLAIN\n\nlab.devel.configdb=# EXPLAIN (SELECT at.id FROM \"tmp_table2\" at, \"tmp_table2\" a\nlab.devel.configdb(# WHERE at.id=a.id AND a.name='obsid' AND a.value='oid080505');\nNOTICE: QUERY PLAN:\n\nHash Join (cost=42674.42..100600.52 rows=296330 width=100)\n -> Seq Scan on tmp_table2 at (cost=0.00..34975.88 rows=1539688 width=50)\n -> Hash (cost=42674.32..42674.32 rows=38 width=50)\n -> Seq Scan on tmp_table2 a (cost=0.00..42674.32 rows=38 width=50)\n\nEXPLAIN\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 14 Nov 2005 15:07:21 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Help speeding up delete"
},
{
"msg_contents": "On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:\n> # SELECT at.id FROM \"tmp_table2\" at, \"tmp_tabl2e\" a\n> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n\nIsn't this equivalent?\n\nselect id from tmp_table2 where name = 'obsid' and value = 'oid080505';\n\n> # DELETE FROM \"tmp_table2\" WHERE id IN\n> # (SELECT at.id FROM \"tmp_table2\" at, \"tmp_table2\" a\n> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505');\n\nand this?\n\ndelete from tmp_table2 where name = 'obsid' and value = 'oid080505';\n\nWhy are you doing a self-join using id, which I assume is a primary key?\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n",
"msg_date": "Mon, 14 Nov 2005 15:20:21 -0800",
"msg_from": "Scott Lamb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "Steve Wampler <[email protected]> writes:\n> We've got an older system in production (PG 7.2.4). Recently\n> one of the users has wanted to implement a selective delete,\n> but is finding that the time it appears to take exceeds her\n> patience factor by several orders of magnitude. Here's\n> a synopsis of her report. It appears that the \"WHERE\n> id IN ...\" is resulting in a seq scan that is causing\n> the problem, but we're not SQL expert enough to know\n> what to do about it.\n\n> Can someone point out what we're doing wrong, or how we\n> could get a (much) faster delete? Thanks!\n\nUpdate to 7.4 or later ;-)\n\nQuite seriously, if you're still using 7.2.4 for production purposes\nyou could justifiably be accused of negligence. There are three or four\ndata-loss-grade bugs fixed in the later 7.2.x releases, not to mention\nsecurity holes; and that was before we abandoned support for 7.2.\nYou *really* need to be thinking about an update.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 14 Nov 2005 18:42:43 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete "
},
{
"msg_contents": "Scott Lamb wrote:\n> On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:\n> \n>> # SELECT at.id FROM \"tmp_table2\" at, \"tmp_tabl2e\" a\n>> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n> \n> \n> Isn't this equivalent?\n> \n> select id from tmp_table2 where name = 'obsid' and value = 'oid080505';\n\nProbably, the user based the above on a query designed to find\nall rows with the same id as those rows that have a.name='obsid' and\na.value='oid080505'. However, I think the above would work to locate\nall the ids, which is all we need for the delete (see below)\n\n>> # DELETE FROM \"tmp_table2\" WHERE id IN\n>> # (SELECT at.id FROM \"tmp_table2\" at, \"tmp_table2\" a\n>> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505');\n> \n> \n> and this?\n> \n> delete from tmp_table2 where name = 'obsid' and value = 'oid080505';\n> \n> Why are you doing a self-join using id, which I assume is a primary key?\n\nBecause I think we need to. The above would only delete rows that have\nname = 'obsid' and value = 'oid080505'. We need to delete all rows that\nhave the same ids as those rows. However, from what you note, I bet\nwe could do:\n\n DELETE FROM \"tmp_table2\" WHERE id IN\n (SELECT id FROM \"temp_table2\" WHERE name = 'obsid' and value= 'oid080505');\n\nHowever, even that seems to have a much higher cost than I'd expect:\n\n lab.devel.configdb=# explain delete from \"tmp_table2\" where id in\n (select id from tmp_table2 where name='obsid' and value = 'oid080505');\n NOTICE: QUERY PLAN:\n\n Seq Scan on tmp_table2 (cost=0.00..65705177237.26 rows=769844 width=6)\n SubPlan\n -> Materialize (cost=42674.32..42674.32 rows=38 width=50)\n -> Seq Scan on tmp_table2 (cost=0.00..42674.32 rows=38 width=50)\n\n EXPLAIN\n\nAnd, sure enough, is taking an extrordinarily long time to run (more than\n10 minutes so far, compared to < 10seconds for the select). Is this\nreally typical of deletes? It appears (to me) to be the Seq Scan on tmp_table2\nthat is the killer here. If we put an index on, would it help? (The user\nclaims she tried that and it's EXPLAIN cost went even higher, but I haven't\nchecked that...)\n\nThanks!\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 14 Nov 2005 16:52:53 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help speeding up delete"
},
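For what it is worth, a workaround commonly suggested for releases before 7.4 was to recast the IN (subquery) as a correlated EXISTS, which the old planner treated very differently. The sketch below reuses the table and column names from this thread but is untested here, and it still benefits from a usable index (for example on id, or on name and value):

DELETE FROM tmp_table2
WHERE EXISTS (
    SELECT 1
    FROM tmp_table2 a
    WHERE a.id = tmp_table2.id
      AND a.name = 'obsid'
      AND a.value = 'oid080505'
);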
{
"msg_contents": "Tom Lane wrote:\n> Steve Wampler <[email protected]> writes:\n> \n>>We've got an older system in production (PG 7.2.4). Recently\n>>one of the users has wanted to implement a selective delete,\n>>but is finding that the time it appears to take exceeds her\n>>patience factor by several orders of magnitude. Here's\n>>a synopsis of her report. It appears that the \"WHERE\n>>id IN ...\" is resulting in a seq scan that is causing\n>>the problem, but we're not SQL expert enough to know\n>>what to do about it.\n> \n> \n>>Can someone point out what we're doing wrong, or how we\n>>could get a (much) faster delete? Thanks!\n> \n> \n> Update to 7.4 or later ;-)\n\nI was afraid you'd say that :-) I'm not officially involved in\nthis project anymore and was hoping for a fix that wouldn't drag\nme back in. The security issues aren't a concern because this\nDB is *well* hidden from the outside world (it's part of a telescope\ncontrol system behind several firewalls with no outside access).\nHowever, the data-loss-grade bugs issue *is* important. We'll\ntry to do the upgrade as soon as we get some cloudy days to\nactually do it!\n\nIs the performance behavior that we're experiencing a known\nproblem with 7.2 that has been addressed in 7.4? Or will the\nupgrade fix other problems while leaving this one?\n\n> Quite seriously, if you're still using 7.2.4 for production purposes\n> you could justifiably be accused of negligence. There are three or four\n> data-loss-grade bugs fixed in the later 7.2.x releases, not to mention\n> security holes; and that was before we abandoned support for 7.2.\n> You *really* need to be thinking about an update.\n\nThanks!\nSteve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 14 Nov 2005 17:00:35 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "On 11/14/05, Steve Wampler <[email protected]> wrote:\n\n> However, even that seems to have a much higher cost than I'd expect:\n>\n> lab.devel.configdb=# explain delete from \"tmp_table2\" where id in\n> (select id from tmp_table2 where name='obsid' and value = 'oid080505');\n> NOTICE: QUERY PLAN:\n>\n> Seq Scan on tmp_table2 (cost=0.00..65705177237.26 rows=769844 width=6)\n> SubPlan\n> -> Materialize (cost=42674.32..42674.32 rows=38 width=50)\n> -> Seq Scan on tmp_table2 (cost=0.00..42674.32 rows=38 width=50)\n>\n\nFor one reason or the other, the planner things a sequential scan is the\nbest solution. Try turning off seq_scan before the query and see if it\nchanges the plan (set enable_seqscan off;).\n\nI've seen this problem with sub queries and that usually solves it.\n\n--\nThis E-mail is covered by the Electronic Communications Privacy Act, 18\nU.S.C. 2510-2521 and is legally privileged.\n\nThis information is confidential information and is intended only for the\nuse of the individual or entity named above. If the reader of this message\nis not the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this communication is strictly\nprohibited.\n\nOn 11/14/05, Steve Wampler <[email protected]> wrote:\nHowever, even that seems to have a much higher cost than I'd expect: lab.devel.configdb=# explain delete from \"tmp_table2\" where id in (select id from tmp_table2 where name='obsid' and value = 'oid080505');\n NOTICE: QUERY PLAN: Seq Scan on tmp_table2 (cost=0.00..65705177237.26 rows=769844 width=6) SubPlan -> Materialize (cost=42674.32..42674.32 rows=38 width=50) \n-> Seq Scan on tmp_table2 (cost=0.00..42674.32\nrows=38 width=50)\nFor one reason or the other, the planner things a sequential scan is\nthe best solution. Try turning off seq_scan before the query and see if\nit changes the plan (set enable_seqscan off;). \n\nI've seen this problem with sub queries and that usually solves it.\n-- This E-mail is covered by the Electronic Communications Privacy Act, 18 U.S.C. 2510-2521 and is legally privileged.This\ninformation is confidential information and is intended only for the\nuse of the individual or entity named above. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\ndissemination, distribution or copying of this communication is\nstrictly prohibited.",
"msg_date": "Mon, 14 Nov 2005 17:10:14 -0700",
"msg_from": "Joshua Marsh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "Joshua Marsh wrote:\n> \n> \n> On 11/14/05, *Steve Wampler* <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> However, even that seems to have a much higher cost than I'd expect:\n> \n> lab.devel.configdb=# explain delete from \"tmp_table2\" where id in\n> (select id from tmp_table2 where name='obsid' and value =\n> 'oid080505');\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on tmp_table2 (cost=0.00..65705177237.26 rows=769844\n> width=6)\n> SubPlan\n> -> Materialize (cost=42674.32..42674.32 rows=38 width=50)\n> -> Seq Scan on tmp_table2 (cost=0.00..42674.32\n> rows=38 width=50)\n> \n> \n> For one reason or the other, the planner things a sequential scan is the\n> best solution. Try turning off seq_scan before the query and see if it\n> changes the plan (set enable_seqscan off;). \n> \n> I've seen this problem with sub queries and that usually solves it.\n> \n\nHmmm, not only does it still use sequential scans, it thinks it'll take\neven longer:\n\n set enable_seqscan to off;\n SET VARIABLE\n explain delete from \"tmp_table2\" where id in\n (select id from tmp_table2 where name='obsid' and value = 'oid080505');\n NOTICE: QUERY PLAN:\n\n Seq Scan on tmp_table2 (cost=100000000.00..160237039405992.50 rows=800836 width=6)\n SubPlan\n -> Materialize (cost=100043604.06..100043604.06 rows=45 width=26)\n -> Seq Scan on tmp_table2 (cost=100000000.00..100043604.06 rows=45 width=26)\n\n EXPLAIN\n\nBut the advice sounds like it *should* have helped...\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 14 Nov 2005 17:28:19 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "On Nov 14, 2005, at 3:52 PM, Steve Wampler wrote:\n> Scott Lamb wrote:\n>> On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:\n>>\n>>> # SELECT at.id FROM \"tmp_table2\" at, \"tmp_tabl2e\" a\n>>> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n>>\n>>\n>> Isn't this equivalent?\n>>\n>> select id from tmp_table2 where name = 'obsid' and value = \n>> 'oid080505';\n>\n> Probably, the user based the above on a query designed to find\n> all rows with the same id as those rows that have a.name='obsid' and\n> a.value='oid080505'.\n\nWell, this indirection is only significant if those two sets can \ndiffer. If (A) you meant \"tmp_table2\" when you wrote \"tmp_tabl2e\", so \nthis is a self-join, and (B) there is a primary key on \"id\", I don't \nthink that can ever happen.\n\n> It appears (to me) to be the Seq Scan on tmp_table2\n> that is the killer here. If we put an index on, would it help?\n\nOn...tmp_table2.id? If it is a primary key, there already is one. If \nnot, yeah, I expect it would help.\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n",
"msg_date": "Mon, 14 Nov 2005 17:08:03 -0800",
"msg_from": "Scott Lamb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "Steve Wampler wrote:\n> \n> Is the performance behavior that we're experiencing a known\n> problem with 7.2 that has been addressed in 7.4? Or will the\n> upgrade fix other problems while leaving this one?\n\nI'm pretty sure that in versions earlier than 7.4, IN clauses that use a \nsubquery will always use a seqscan, regardless of what indexes are \navailable. If you try an IN using explicit values though, it should use \nthe index.\n\nThanks\nLeigh\n",
"msg_date": "Tue, 15 Nov 2005 12:18:48 +1100",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
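A sketch of the two-step approach that observation points to: fetch the matching ids first, then delete them with an explicit value list built by the client, possibly in batches. The literal ids shown are placeholders only:

SELECT id FROM tmp_table2 WHERE name = 'obsid' AND value = 'oid080505';

-- then, from the client, something along the lines of:
DELETE FROM tmp_table2 WHERE id IN ('id0001', 'id0002', 'id0003');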
{
"msg_contents": "Scott Lamb wrote:\n> On Nov 14, 2005, at 3:52 PM, Steve Wampler wrote:\n> \n>> Scott Lamb wrote:\n>>\n>>> On Nov 14, 2005, at 2:07 PM, Steve Wampler wrote:\n>>>\n>>>> # SELECT at.id FROM \"tmp_table2\" at, \"tmp_tabl2e\" a\n>>>> # WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n>>>\n>>>\n>>>\n>>> Isn't this equivalent?\n>>>\n>>> select id from tmp_table2 where name = 'obsid' and value = 'oid080505';\n>>\n>>\n>> Probably, the user based the above on a query designed to find\n>> all rows with the same id as those rows that have a.name='obsid' and\n>> a.value='oid080505'.\n> \n> \n> Well, this indirection is only significant if those two sets can \n> differ. If (A) you meant \"tmp_table2\" when you wrote \"tmp_tabl2e\", so \n> this is a self-join, and (B) there is a primary key on \"id\", I don't \n> think that can ever happen.\n\nI wasn't clear. The original query was:\n\n SELECT at.* FROM \"tmp_table2\" at, \"tmp_table2\" a\n WHERE at.id=a.id and a.name='obsid' and a.value='oid080505';\n\nwhich is significantly different than:\n\n SELECT * FROM \"tmp_table2\" WHERE name='obsid' and value='oid080505';\n\nThe user had adapted that query for her needs, but it would have been\nbetter to just use the query that you suggested (as the subselect in\nthe DELETE FROM...). Unfortunately, that only improves performance\nslightly - it is still way too slow on deletes.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 14 Nov 2005 21:03:56 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "On Mon, 2005-11-14 at 18:42 -0500, Tom Lane wrote:\n> Steve Wampler <[email protected]> writes:\n> > We've got an older system in production (PG 7.2.4). \n\n> \n> Update to 7.4 or later ;-)\n> \n> Quite seriously, if you're still using 7.2.4 for production purposes\n> you could justifiably be accused of negligence. There are three or four\n> data-loss-grade bugs fixed in the later 7.2.x releases, not to mention\n> security holes; and that was before we abandoned support for 7.2.\n> You *really* need to be thinking about an update.\n\nPerhaps we should put a link on the home page underneath LATEST RELEASEs\nsaying\n\t7.2: de-supported\n\nwith a link to a scary note along the lines of the above.\n\nISTM that there are still too many people on older releases.\n\nWe probably need an explanation of why we support so many releases (in\ncomparison to licenced software) and a note that this does not imply the\nlatest releases are not yet production (in comparison to MySQL or Sybase\nwho have been in beta for a very long time).\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Wed, 16 Nov 2005 20:00:06 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": ">>Update to 7.4 or later ;-)\n>>\n>>Quite seriously, if you're still using 7.2.4 for production purposes\n>>you could justifiably be accused of negligence. There are three or four\n>>data-loss-grade bugs fixed in the later 7.2.x releases, not to mention\n>>security holes; and that was before we abandoned support for 7.2.\n>>You *really* need to be thinking about an update.\n> \n> \n> Perhaps we should put a link on the home page underneath LATEST RELEASEs\n> saying\n> \t7.2: de-supported\n> \n> with a link to a scary note along the lines of the above.\n\nI strongly support an explicit desupported notice for 7.2 and below on \nthe website...\n\nChris\n\n",
"msg_date": "Thu, 17 Nov 2005 09:38:42 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": "> Perhaps we should put a link on the home page underneath LATEST RELEASEs\n> saying\n> \t7.2: de-supported\n> \n> with a link to a scary note along the lines of the above.\n> \n> ISTM that there are still too many people on older releases.\n> \n> We probably need an explanation of why we support so many releases (in\n> comparison to licenced software) and a note that this does not imply the\n> latest releases are not yet production (in comparison to MySQL or Sybase\n> who have been in beta for a very long time).\n\nBy the way, is anyone interested in creating some sort of online \nrepository on pgsql.org or pgfoundry where we can keep statically \ncompiled pg_dump/all for several platforms for 8.1?\n\nThat way if someone wanted to upgrade from 7.2 to 8.1, they can just \ngrab the latest dumper from the website, dump their old database, then \nupgrade easily.\n\nIn my experience not many pgsql admins have test servers or the skills \nto build up test machines with the latest pg_dump, etc. (Seriously.) \nIn fact, few realise at all that they should use the 8.1 dumper.\n\nChris\n\n",
"msg_date": "Thu, 17 Nov 2005 09:40:42 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": "On Thu, Nov 17, 2005 at 09:40:42AM +0800, Christopher Kings-Lynne wrote:\n> In my experience not many pgsql admins have test servers or the skills \n> to build up test machines with the latest pg_dump, etc. (Seriously.) \n> In fact, few realise at all that they should use the 8.1 dumper.\n\nIsn't your distribution supposed to do this for you? Mine does these days...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 17 Nov 2005 12:19:27 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "> Isn't your distribution supposed to do this for you? Mine does these days...\n\nA distribution that tries to automatically do a major postgresql update \nis doomed to fail - spectacularly...\n\nChris\n",
"msg_date": "Thu, 17 Nov 2005 23:07:42 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "On Wed, 2005-11-16 at 19:40, Christopher Kings-Lynne wrote:\n> > Perhaps we should put a link on the home page underneath LATEST RELEASEs\n> > saying\n> > \t7.2: de-supported\n> > \n> > with a link to a scary note along the lines of the above.\n> > \n> > ISTM that there are still too many people on older releases.\n> > \n> > We probably need an explanation of why we support so many releases (in\n> > comparison to licenced software) and a note that this does not imply the\n> > latest releases are not yet production (in comparison to MySQL or Sybase\n> > who have been in beta for a very long time).\n> \n> By the way, is anyone interested in creating some sort of online \n> repository on pgsql.org or pgfoundry where we can keep statically \n> compiled pg_dump/all for several platforms for 8.1?\n> \n> That way if someone wanted to upgrade from 7.2 to 8.1, they can just \n> grab the latest dumper from the website, dump their old database, then \n> upgrade easily.\n> \n> In my experience not many pgsql admins have test servers or the skills \n> to build up test machines with the latest pg_dump, etc. (Seriously.) \n> In fact, few realise at all that they should use the 8.1 dumper.\n\nI would especially like such a thing available as an RPM. A\npgsql-8.1-clienttools.rpm or something like that, with psql, pg_dump,\npg_restore, and what other command line tools you can think of that\nwould help.\n",
"msg_date": "Thu, 17 Nov 2005 09:56:32 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": "On Thu, Nov 17, 2005 at 11:07:42PM +0800, Christopher Kings-Lynne wrote:\n>> Isn't your distribution supposed to do this for you? Mine does these \n>> days...\n> A distribution that tries to automatically do a major postgresql update \n> is doomed to fail - spectacularly...\n\nAutomatically? Well, you can install the two versions side-by-side, and do\npg_upgradecluster, which ports your configuration to the new version and does\na pg_dump between the two versions; exactly what a system administrator would\ndo. Of course, stuff _can_ fail, but it works for the simple cases, and a\ngreat deal of the not-so-simple cases. I did this for our cluster the other\nday (130 wildly different databases, from 7.4 to 8.1) and it worked\nflawlessly.\n\nI do not really see why all the distributions could do something like this,\ninstead of mucking around with special statically compiled pg_dumps and the\nlike...\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 17 Nov 2005 18:25:11 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>>>\n>>> Quite seriously, if you're still using 7.2.4 for production purposes\n>>> you could justifiably be accused of negligence....\n>>\n>> Perhaps we should put a link on the home page underneath LATEST RELEASEs\n>> saying\n>> 7.2: de-supported\n>> with a link to a scary note along the lines of the above.\n> \n> I strongly support an explicit desupported notice for 7.2 and below on \n> the website...\n\n\nI'd go so far as to say the version #s of supported versions\nis one of pieces of information I'd most expect to see on\nthe main support page ( http://www.postgresql.org/support/ ).\n\nPerhaps it'd be nice to even show a table like\n Version Released On Support Ends\n 7.1 4 BC Sep 3 1752\n 7.2 Feb 31 1900 Jan 0 2000\n 7.4 2003-11-17 At least 2005-x-x\n 8.0 2005-01-19 At least 2006-x-x\nwith a footnote saying that only the most recent dot release\nof each family is considered supported.\n\nIt also might be nice to have a footnote saying that any\nof the commercical support companies might support the older\nversions for longer periods of time.\n",
"msg_date": "Thu, 17 Nov 2005 17:39:48 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "> I do not really see why all the distributions could do something like this,\n> instead of mucking around with special statically compiled pg_dumps and the\n> like...\n\nContrib modules and tablespaces.\n\nPlus, no version of pg_dump before 8.0 is able to actually perform such \nreliable dumps and reloads (due to bugs). However, that's probably moot \nthese days.\n\nChris\n\n",
"msg_date": "Fri, 18 Nov 2005 09:56:28 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
}
] |
[
{
"msg_contents": "Does anyone have recommendations for hardware and/or OS to work with\naround 5TB datasets? \n \nThe data is for analysis, so there is virtually no inserting besides a\nbig bulk load. Analysis involves full-database aggregations - mostly\nbasic arithmetic and grouping. In addition, much smaller subsets of data\nwould be pulled and stored to separate databases.\n \nI have been working with datasets no bigger than around 30GB, and that\n(I'm afraid to admit) has been in MSSQL.\n \nThanks,\n \nAdam\n\n\n\n\n\nDoes anyone have \nrecommendations for hardware and/or OS to work with around 5TB datasets? \n\n \nThe data is for \nanalysis, so there is virtually no inserting besides a big bulk \nload. \nAnalysis involves full-database aggregations - mostly basic arithmetic and \ngrouping. In addition, much smaller subsets of data would be pulled and \nstored to separate databases.\n \nI have been \nworking with datasets no bigger than around 30GB, and that (I'm \nafraid to admit) has been in MSSQL.\n \nThanks,\n \nAdam",
"msg_date": "Mon, 14 Nov 2005 18:46:41 -0500",
"msg_from": "\"Adam Weisberg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware/OS recommendations for large databases (5TB)"
},
{
"msg_contents": "> Does anyone have recommendations for hardware and/or OS to work with around\n> 5TB datasets?\n\nHardware-wise I'd say dual core opterons. One dual-core-opteron\nperforms better than two single-core at the same speed. Tyan makes\nsome boards that have four sockets, thereby giving you 8 cpu's (if you\nneed that many). Sun and HP also makes nice hardware although the Tyan\nboard is more competetive priced.\n\nOS wise I would choose the FreeBSD amd64 port but partititions larger\nthan 2 TB needs some special care, using gpt rather than disklabel\netc., tools like fsck may not be able to completely check partitions\nlarger than 2 TB. Linux or Solaris with either LVM or Veritas FS\nsounds like candidates.\n\n> I have been working with datasets no bigger than around 30GB, and that (I'm\n> afraid to admit) has been in MSSQL.\n\nWell, our data are just below 30 GB so I can't help you there :-)\n\nregards\nClaus\n",
"msg_date": "Tue, 15 Nov 2005 09:28:30 +0100",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases (5TB)"
},
{
"msg_contents": "\nOn Nov 15, 2005, at 3:28 AM, Claus Guttesen wrote:\n\n> Hardware-wise I'd say dual core opterons. One dual-core-opteron\n> performs better than two single-core at the same speed. Tyan makes\n\nat 5TB data, i'd vote that the application is disk I/O bound, and the \ndifference in CPU speed at the level of dual opteron vs. dual-core \nopteron is not gonna be noticed.\n\nto maximize disk, try getting a dedicated high-end disk system like \nnstor or netapp file servers hooked up to fiber channel, then use a \ngood high-end fiber channel controller like one from LSI.\n\nand go with FreeBSD amd64 port. It is *way* fast, especially the \nFreeBSD 6.0 disk system.\n\n",
"msg_date": "Wed, 16 Nov 2005 14:24:38 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases (5TB)"
},
{
"msg_contents": "> at 5TB data, i'd vote that the application is disk I/O bound, and the\n> difference in CPU speed at the level of dual opteron vs. dual-core\n> opteron is not gonna be noticed.\n>\n> to maximize disk, try getting a dedicated high-end disk system like\n> nstor or netapp file servers hooked up to fiber channel, then use a\n> good high-end fiber channel controller like one from LSI.\n>\n> and go with FreeBSD amd64 port. It is *way* fast, especially the\n> FreeBSD 6.0 disk system.\n\nI'm (also) FreeBSD-biased but I'm not shure whether the 5 TB fs will\nwork so well if tools like fsck are needed. Gvinum could be one option\nbut I don't have any experience in that area.\n\nregards\nClaus\n",
"msg_date": "Wed, 16 Nov 2005 22:50:19 +0100",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases (5TB)"
},
{
"msg_contents": "\nOn Nov 16, 2005, at 4:50 PM, Claus Guttesen wrote:\n\n> I'm (also) FreeBSD-biased but I'm not shure whether the 5 TB fs will\n> work so well if tools like fsck are needed. Gvinum could be one option\n> but I don't have any experience in that area.\n\nThen look into an external filer and mount via NFS. Then it is not \nFreeBSD's responsibility to manage the volume.\n\n",
"msg_date": "Thu, 17 Nov 2005 09:34:52 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases (5TB)"
}
] |
[
{
"msg_contents": "> Because I think we need to. The above would only delete rows \n> that have name = 'obsid' and value = 'oid080505'. We need to \n> delete all rows that have the same ids as those rows. \n> However, from what you note, I bet we could do:\n> \n> DELETE FROM \"tmp_table2\" WHERE id IN\n> (SELECT id FROM \"temp_table2\" WHERE name = 'obsid' and \n> value= 'oid080505');\n> \n> However, even that seems to have a much higher cost than I'd expect:\n> \n> lab.devel.configdb=# explain delete from \"tmp_table2\" where id in\n> (select id from tmp_table2 where name='obsid' and \n> value = 'oid080505');\n> NOTICE: QUERY PLAN:\n> \n> Seq Scan on tmp_table2 (cost=0.00..65705177237.26 \n> rows=769844 width=6)\n> SubPlan\n> -> Materialize (cost=42674.32..42674.32 rows=38 width=50)\n> -> Seq Scan on tmp_table2 (cost=0.00..42674.32 \n> rows=38 width=50)\n> \n> EXPLAIN\n> \n> And, sure enough, is taking an extrordinarily long time to \n> run (more than 10 minutes so far, compared to < 10seconds for \n> the select). Is this really typical of deletes? It appears \n> (to me) to be the Seq Scan on tmp_table2 that is the killer \n> here. If we put an index on, would it help? (The user \n> claims she tried that and it's EXPLAIN cost went even higher, \n> but I haven't checked that...)\n\n\nEarlier pg versions have always been bad at dealing with IN subqueries.\nTry rewriting it as (with fixing any broken syntax, I'm not actually\ntesting this :P)\n\nDELETE FROM tmp_table2 WHERE EXISTS \n (SELECT * FROM tmp_table2 t2 WHERE t2.id=tmp_table2.id AND\nt2.name='obsid' AND t2.value='oid080505')\n\n\nI assume you do have an index on tmp_table2.id :-) And that it's\nnon-unique? (If it was unique, the previous simplification of the query\nreally should've worked..)\n\nDo you also have an index on \"name,value\" or something like that, so you\nget an index scan from it?\n\n//Magnus\n",
"msg_date": "Tue, 15 Nov 2005 09:47:37 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "Magnus Hagander wrote:\n>>Because I think we need to. The above would only delete rows \n>>that have name = 'obsid' and value = 'oid080505'. We need to \n>>delete all rows that have the same ids as those rows. \n>>However, from what you note, I bet we could do:\n>>\n>> DELETE FROM \"tmp_table2\" WHERE id IN\n>> (SELECT id FROM \"temp_table2\" WHERE name = 'obsid' and \n>>value= 'oid080505');\n>>\n>>However, even that seems to have a much higher cost than I'd expect:\n>>\n>> lab.devel.configdb=# explain delete from \"tmp_table2\" where id in\n>> (select id from tmp_table2 where name='obsid' and \n>>value = 'oid080505');\n>> NOTICE: QUERY PLAN:\n>>\n>> Seq Scan on tmp_table2 (cost=0.00..65705177237.26 \n>>rows=769844 width=6)\n>> SubPlan\n>> -> Materialize (cost=42674.32..42674.32 rows=38 width=50)\n>> -> Seq Scan on tmp_table2 (cost=0.00..42674.32 \n>>rows=38 width=50)\n>>\n>> EXPLAIN\n...\n> \n> Earlier pg versions have always been bad at dealing with IN subqueries.\n> Try rewriting it as (with fixing any broken syntax, I'm not actually\n> testing this :P)\n> \n> DELETE FROM tmp_table2 WHERE EXISTS \n> (SELECT * FROM tmp_table2 t2 WHERE t2.id=tmp_table2.id AND\n> t2.name='obsid' AND t2.value='oid080505')\n\nThanks - that looks *significantly* better:\n\n lab.devel.configdb=# explain delete from tmp_table2 where exists\n (select 1 from tmp_table2 t2 where\n t2.id=tmp_table2.id and\n t2.name='obsid' and t2.value='oid080505');\n NOTICE: QUERY PLAN:\n\n Seq Scan on tmp_table2 (cost=0.00..9297614.80 rows=769844 width=6)\n SubPlan\n -> Index Scan using inv_index_2 on tmp_table2 t2 (cost=0.00..6.02 rows=1 width=0)\n\n EXPLAIN\n\n(This is after putting an index on the (id,name,value) tuple.) That outer seq scan\nis still annoying, but maybe this will be fast enough.\n\nI've passed this on, along with the (strong) recommendation that they\nupgrade PG.\n\nThanks!!\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Tue, 15 Nov 2005 07:18:12 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
{
"msg_contents": "On 15-11-2005 15:18, Steve Wampler wrote:\n> Magnus Hagander wrote:\n> (This is after putting an index on the (id,name,value) tuple.) That outer seq scan\n> is still annoying, but maybe this will be fast enough.\n> \n> I've passed this on, along with the (strong) recommendation that they\n> upgrade PG.\n\nHave you tried with an index on (name,value) and of course one on id ?\n\nBest regards,\n\nArjen\n",
"msg_date": "Wed, 16 Nov 2005 21:22:17 +0100",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
},
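As a rough illustration of the indexing suggestion above, here is a minimal sketch against the tmp_table2(id, name, value) layout discussed in this thread; the index names are hypothetical, and the EXISTS form of the delete is the rewrite Magnus proposed earlier:

    -- Hypothetical index names; the columns come from the thread's tmp_table2 example.
    CREATE INDEX tmp_table2_name_value_idx ON tmp_table2 (name, value);
    CREATE INDEX tmp_table2_id_idx ON tmp_table2 (id);
    ANALYZE tmp_table2;

    -- The EXISTS-style delete suggested earlier in the thread; which index the
    -- planner picks for the correlated lookup depends on the version and statistics.
    DELETE FROM tmp_table2
     WHERE EXISTS (SELECT 1
                     FROM tmp_table2 t2
                    WHERE t2.id = tmp_table2.id
                      AND t2.name = 'obsid'
                      AND t2.value = 'oid080505');

Checking the result with EXPLAIN on the target system is still the only real test of whether the indexes get used.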
{
"msg_contents": "Arjen van der Meijden wrote:\n> On 15-11-2005 15:18, Steve Wampler wrote:\n> \n>> Magnus Hagander wrote:\n>> (This is after putting an index on the (id,name,value) tuple.) That\n>> outer seq scan\n>> is still annoying, but maybe this will be fast enough.\n>>\n>> I've passed this on, along with the (strong) recommendation that they\n>> upgrade PG.\n> \n> \n> Have you tried with an index on (name,value) and of course one on id ?\n\nYes, although not with a unique index on (name,value) [possible, but not\nso on the just-id index]. Anyway, it turns out the latest incarnation\nis 'fast enough' for the user's need, so she's not doing any more with\nit until after an upgrade.\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 16 Nov 2005 13:29:44 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Help speeding up delete"
}
] |
[
{
"msg_contents": "Hi,\nI am trying to use Explain Analyze to trace a slow SQL statement called from \nJDBC.\nThe SQL statement with the parameters taked 11 seconds. When I run a explain \nanalyze from psql, it takes < 50 ms with a reasonable explain plan. However \nwhen I try to run an explain analyze from JDBC with the parameters, I get \nerror :\nERROR: no value found for parameter 1\n\nHere is sample code which causes this exception ...\n pst=prodconn.prepareStatement(\"explain analyze select count(*) from \njam_heaprel r where heap_id = ? and parentaddr = ?\");\n pst.setInt(1,1);\n pst.setInt(2,0);\n rs=pst.executeQuery();\n\njava.sql.SQLException: ERROR: no value found for parameter 1\n\tat \norg.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:1471)\n\tat \norg.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1256)\n\tat \norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:175)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:389)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:330)\n\tat \norg.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:240)\n\tat jsp._testexplain_2ejsp._jspService(_testexplain_2ejsp.java:82)\n\tat org.gjt.jsp.HttpJspPageImpl.service(HttpJspPageImpl.java:75)\n\nRegards,\n\nVirag\n\n\n",
"msg_date": "Tue, 15 Nov 2005 10:06:33 +0000",
"msg_from": "\"Virag Saksena\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: no value found for parameter 1 with JDBC and Explain Analyze"
},
{
"msg_contents": "\"Virag Saksena\" <[email protected]> writes:\n> ERROR: no value found for parameter 1\n\n> Here is sample code which causes this exception ...\n> pst=prodconn.prepareStatement(\"explain analyze select count(*) from \n> jam_heaprel r where heap_id = ? and parentaddr = ?\");\n\nI don't think EXPLAIN can take parameters (most of the \"utility\"\nstatements don't take parameters).\n\nThe usual workaround is to use PREPARE:\n\n\tPREPARE foo(paramtype,paramtype) AS SELECT ...;\n\tEXPLAIN EXECUTE foo(x,y);\n\nThis will generate the same parameterized plan as you'd get from the\nother way, so it's a reasonable approximation to the behavior with\nJDBC parameters.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Nov 2005 19:06:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: no value found for parameter 1 with JDBC and Explain\n\tAnalyze"
}
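Putting the PREPARE workaround together with the query from the original post, a rough psql sketch might look like this; the statement name is made up, and the parameter types are assumed to be integer to match the setInt() calls in the JDBC snippet:

    PREPARE jam_count(integer, integer) AS
        SELECT count(*) FROM jam_heaprel r
         WHERE heap_id = $1 AND parentaddr = $2;

    EXPLAIN EXECUTE jam_count(1, 0);    -- or EXPLAIN ANALYZE EXECUTE, if the server accepts it

    DEALLOCATE jam_count;

This should approximate the parameterized plan the JDBC driver gets, rather than the plan psql builds when the literal values are written directly into the query.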
] |
[
{
"msg_contents": "Adam,\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Claus Guttesen\n> Sent: Tuesday, November 15, 2005 12:29 AM\n> To: Adam Weisberg\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Hardware/OS recommendations for large \n> databases ( 5TB)\n> \n> > Does anyone have recommendations for hardware and/or OS to \n> work with \n> > around 5TB datasets?\n> \n> Hardware-wise I'd say dual core opterons. One \n> dual-core-opteron performs better than two single-core at the \n> same speed. Tyan makes some boards that have four sockets, \n> thereby giving you 8 cpu's (if you need that many). Sun and \n> HP also makes nice hardware although the Tyan board is more \n> competetive priced.\n> \n> OS wise I would choose the FreeBSD amd64 port but \n> partititions larger than 2 TB needs some special care, using \n> gpt rather than disklabel etc., tools like fsck may not be \n> able to completely check partitions larger than 2 TB. Linux \n> or Solaris with either LVM or Veritas FS sounds like candidates.\n\nI agree - you can get a very good one from www.acmemicro.com or\nwww.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\nRAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\non a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\nperformance on these (with tuning) on Linux using the xfs filesystem,\nwhich is one of the most critical factors for large databases. \n\nNote that you want to have your DBMS use all of the CPU and disk channel\nbandwidth you have on each query, which takes a parallel database like\nBizgres MPP to achieve.\n\nRegards,\n\n- Luke\n\n",
"msg_date": "Tue, 15 Nov 2005 07:09:56 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke,\n\nHave you tried the areca cards, they are slightly faster yet.\n\nDave\nOn 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n\n>\n> I agree - you can get a very good one from www.acmemicro.com or\n> www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX \n> SATA\n> RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n> on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n> performance on these (with tuning) on Linux using the xfs filesystem,\n> which is one of the most critical factors for large databases.\n>\n> Note that you want to have your DBMS use all of the CPU and disk \n> channel\n> bandwidth you have on each query, which takes a parallel database like\n> Bizgres MPP to achieve.\n>\n> Regards,\n\n\nLuke,Have you tried the areca cards, they are slightly faster yet.DaveOn 15-Nov-05, at 7:09 AM, Luke Lonergan wrote: I agree - you can get a very good one from www.acmemicro.com or www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read performance on these (with tuning) on Linux using the xfs filesystem, which is one of the most critical factors for large databases. Note that you want to have your DBMS use all of the CPU and disk channel bandwidth you have on each query, which takes a parallel database like Bizgres MPP to achieve. Regards,",
"msg_date": "Tue, 15 Nov 2005 09:15:04 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 11/15/05, Luke Lonergan <[email protected]> wrote:\n> Adam,\n>\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of\n> > Claus Guttesen\n> > Sent: Tuesday, November 15, 2005 12:29 AM\n> > To: Adam Weisberg\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] Hardware/OS recommendations for large\n> > databases ( 5TB)\n> >\n> > > Does anyone have recommendations for hardware and/or OS to\n> > work with\n> > > around 5TB datasets?\n> >\n> > Hardware-wise I'd say dual core opterons. One\n> > dual-core-opteron performs better than two single-core at the\n> > same speed. Tyan makes some boards that have four sockets,\n> > thereby giving you 8 cpu's (if you need that many). Sun and\n> > HP also makes nice hardware although the Tyan board is more\n> > competetive priced.\n> >\n> > OS wise I would choose the FreeBSD amd64 port but\n> > partititions larger than 2 TB needs some special care, using\n> > gpt rather than disklabel etc., tools like fsck may not be\n> > able to completely check partitions larger than 2 TB. Linux\n> > or Solaris with either LVM or Veritas FS sounds like candidates.\n>\n> I agree - you can get a very good one from www.acmemicro.com or\n> www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n> RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n> on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n> performance on these (with tuning) on Linux using the xfs filesystem,\n> which is one of the most critical factors for large databases.\n>\n\nSpend a fortune on dual core CPUs and then buy crappy disks... I bet\nfor most applications this system will be IO bound, and you will see a\nnice lot of drive failures in the first year of operation with\nconsumer grade drives.\n\nSpend your money on better Disks, and don't bother with Dual Core IMHO\nunless you can prove the need for it.\n\nAlex\n",
"msg_date": "Wed, 16 Nov 2005 00:15:36 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Not at random access in RAID 10 they aren't, and anyone with their\nhead screwed on right is using RAID 10. The 9500S will still beat the\nAreca cards at RAID 10 database access patern.\n\nAlex.\n\nOn 11/15/05, Dave Cramer <[email protected]> wrote:\n> Luke,\n>\n> Have you tried the areca cards, they are slightly faster yet.\n>\n> Dave\n>\n> On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n>\n>\n>\n>\n>\n> I agree - you can get a very good one from www.acmemicro.com or\n>\n> www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n>\n> RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n>\n> on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n>\n> performance on these (with tuning) on Linux using the xfs filesystem,\n>\n> which is one of the most critical factors for large databases.\n>\n>\n>\n>\n> Note that you want to have your DBMS use all of the CPU and disk channel\n>\n> bandwidth you have on each query, which takes a parallel database like\n>\n> Bizgres MPP to achieve.\n>\n>\n>\n>\n> Regards,\n>\n",
"msg_date": "Wed, 16 Nov 2005 00:16:58 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alex Turner wrote:\n> Not at random access in RAID 10 they aren't, and anyone with their\n> head screwed on right is using RAID 10. The 9500S will still beat the\n> Areca cards at RAID 10 database access patern.\n\nThe max 256MB onboard for 3ware cards is disappointing though. While \ngood enough for 95% of cases, there's that 5% that could use a gig or \ntwo of onboard ram for ultrafast updates. For example, I'm specing out \nan upgrade to our current data processing server. Instead of the \ntraditional 6xFast-Server-HDs, we're gonna go for broke and do \n32xConsumer-HDs. This will give us mega I/O bandwidth but we're \nvulnerable to random access since consumer-grade HDs don't have the RPMs \nor the queueing-smarts. This means we're very dependent on the \ncontroller using onboard RAM to do I/O scheduling. 256MB divided over \n4/6/8 drives -- OK. 256MB divided over 32 drives -- ugh, the HD's \nbuffers are bigger than the RAM alotted to it.\n\nAt least this is how it seems it would work from thinking through all \nthe factors. Unfortunately, I haven't found anybody else who has gone \nthis route and reported their results so I guess we're the guinea pig.\n",
"msg_date": "Wed, 16 Nov 2005 04:51:49 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\n>> I agree - you can get a very good one from www.acmemicro.com or\n>> www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n>> RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n>> on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n>> performance on these (with tuning) on Linux using the xfs filesystem,\n>> which is one of the most critical factors for large databases.\n>>\n>> \n>\n> Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> for most applications this system will be IO bound, and you will see a\n> nice lot of drive failures in the first year of operation with\n> consumer grade drives.\n> \nThere is nothing wrong with using SATA disks and they perform very well. \nThe catch is, make sure\nyou have a battery back up on the raid controller.\n\n> Spend your money on better Disks, and don't bother with Dual Core IMHO\n> unless you can prove the need for it.\n> \nThe reason you want the dual core cpus is that PostgreSQL can only \nexecute 1 query per cpu\nat a time, so the application will see a big boost in overall \ntransactional velocity if you push two\ndual-core cpus into the machine.\n\n\nJoshua D. Drake\n\n\n> Alex\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n",
"msg_date": "Wed, 16 Nov 2005 05:58:48 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Got some hard numbers to back your statement up? IME, the Areca \n1160's with >= 1GB of cache beat any other commodity RAID \ncontroller. This seems to be in agreement with at least one \nindependent testing source:\n\nhttp://print.tweakers.net/?reviews/557\n\nRAID HW from Xyratex, Engino, or Dot Hill will _destroy_ any \ncommodity HW solution, but their price point is considerably higher.\n\n...on another note, I completely agree with the poster who says we \nneed more cache on RAID controllers. We should all be beating on the \nRAID HW manufacturers to use standard DIMMs for their caches and to \nprovide 2 standard DIMM slots in their full height cards (allowing \nfor up to 8GB of cache using 2 4GB DIMMs as of this writing).\n\nIt should also be noted that 64 drive chassis' are going to become \npossible once 2.5\" 10Krpm SATA II and FC HDs become the standard next \nyear (48's are the TOTL now). We need controller technology to keep up.\n\nRon\n\nAt 12:16 AM 11/16/2005, Alex Turner wrote:\n>Not at random access in RAID 10 they aren't, and anyone with their\n>head screwed on right is using RAID 10. The 9500S will still beat the\n>Areca cards at RAID 10 database access patern.\n>\n>Alex.\n>\n>On 11/15/05, Dave Cramer <[email protected]> wrote:\n> > Luke,\n> >\n> > Have you tried the areca cards, they are slightly faster yet.\n> >\n> > Dave\n> >\n> > On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n> >\n> >\n> >\n> >\n> >\n> > I agree - you can get a very good one from www.acmemicro.com or\n> >\n> > www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n> >\n> > RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n> >\n> > on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n> >\n> > performance on these (with tuning) on Linux using the xfs filesystem,\n> >\n> > which is one of the most critical factors for large databases.\n> >\n> >\n> >\n> >\n> > Note that you want to have your DBMS use all of the CPU and disk channel\n> >\n> > bandwidth you have on each query, which takes a parallel database like\n> >\n> > Bizgres MPP to achieve.\n> >\n> >\n> >\n> >\n> > Regards,\n> >\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Wed, 16 Nov 2005 08:58:56 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "On 16 Nov 2005, at 12:51, William Yu wrote:\n\n> Alex Turner wrote:\n>\n>> Not at random access in RAID 10 they aren't, and anyone with their\n>> head screwed on right is using RAID 10. The 9500S will still beat \n>> the\n>> Areca cards at RAID 10 database access patern.\n>>\n>\n> The max 256MB onboard for 3ware cards is disappointing though. \n> While good enough for 95% of cases, there's that 5% that could use \n> a gig or two of onboard ram for ultrafast updates. For example, I'm \n> specing out an upgrade to our current data processing server. \n> Instead of the traditional 6xFast-Server-HDs, we're gonna go for \n> broke and do 32xConsumer-HDs. This will give us mega I/O bandwidth \n> but we're vulnerable to random access since consumer-grade HDs \n> don't have the RPMs or the queueing-smarts. This means we're very \n> dependent on the controller using onboard RAM to do I/O scheduling. \n> 256MB divided over 4/6/8 drives -- OK. 256MB divided over 32 drives \n> -- ugh, the HD's buffers are bigger than the RAM alotted to it.\n>\n> At least this is how it seems it would work from thinking through \n> all the factors. Unfortunately, I haven't found anybody else who \n> has gone this route and reported their results so I guess we're the \n> guinea pig.\n>\n\nYour going to have to factor in the increased failure rate in your \ncost measurements, including any downtime or performance degradation \nwhilst rebuilding parts of your RAID array. It depends on how long \nyour planning for this system to be operational as well of course.\n\nPick two: Fast, cheap, reliable.\n",
"msg_date": "Wed, 16 Nov 2005 14:13:26 +0000",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Joshua D. Drake wrote:\n> The reason you want the dual core cpus is that PostgreSQL can only\n> execute 1 query per cpu at a time,...\n\nIs that true? I knew that PG only used one cpu per query, but how\ndoes PG know how many CPUs there are to limit the number of queries?\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 16 Nov 2005 07:29:44 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Steve Wampler wrote:\n\n>Joshua D. Drake wrote:\n> \n>\n>>The reason you want the dual core cpus is that PostgreSQL can only\n>>execute 1 query per cpu at a time,...\n>> \n>>\n>\n>Is that true? I knew that PG only used one cpu per query, but how\n>does PG know how many CPUs there are to limit the number of queries?\n>\n> \n>\nHe means only one query can be executing on each cpu at any particular \ninstant.\n\n\n",
"msg_date": "Wed, 16 Nov 2005 07:35:23 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alex Stapleton wrote:\n> Your going to have to factor in the increased failure rate in your cost \n> measurements, including any downtime or performance degradation whilst \n> rebuilding parts of your RAID array. It depends on how long your \n> planning for this system to be operational as well of course.\n\nIf we go 32xRAID10, rebuild time should be the same as rebuild time in a \n4xRAID10 system. Only the hard drive that was replaced needs rebuild -- \nnot the entire array.\n\nAnd yes, definitely need a bunch of drives lying around as spares.\n",
"msg_date": "Wed, 16 Nov 2005 06:37:48 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "David Boreham wrote:\n> Steve Wampler wrote:\n> \n>> Joshua D. Drake wrote:\n>> \n>>\n>>> The reason you want the dual core cpus is that PostgreSQL can only\n>>> execute 1 query per cpu at a time,...\n>>> \n>>\n>>\n>> Is that true? I knew that PG only used one cpu per query, but how\n>> does PG know how many CPUs there are to limit the number of queries?\n>>\n>> \n>>\n> He means only one query can be executing on each cpu at any particular\n> instant.\n\nGot it - the cpu is only acting on one query in any instant but may be\nswitching between many 'simultaneous' queries. PG isn't really involved\nin the decision. That makes sense.\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Wed, 16 Nov 2005 07:38:52 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": " >Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n >for most applications this system will be IO bound, and you will see a\n >nice lot of drive failures in the first year of operation with\n >consumer grade drives.\n\nI guess I've never bought into the vendor story that there are\ntwo reliability grades. Why would they bother making two\ndifferent kinds of bearing, motor etc ? Seems like it's more\nlikely an excuse to justify higher prices. In my experience the\nexpensive SCSI drives I own break frequently while the cheapo\ndesktop drives just keep chunking along (modulo certain products\nthat have a specific known reliability problem).\n\nI'd expect that a larger number of hotter drives will give a less reliable\nsystem than a smaller number of cooler ones.\n\n\n",
"msg_date": "Wed, 16 Nov 2005 07:51:31 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alex Turner wrote:\n> Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> for most applications this system will be IO bound, and you will see a\n> nice lot of drive failures in the first year of operation with\n> consumer grade drives.\n> \n> Spend your money on better Disks, and don't bother with Dual Core IMHO\n> unless you can prove the need for it.\n\nI would say the opposite -- you always want Dual Core nowadays. DC \nOpterons simply give you better bang for the buck than single core \nOpterons. Price out a 1xDC system against a 2x1P system -- the 1xDC will \nbe cheaper. Do the same for 2xDC versus 4x1P, 4xDC versus 8x1P, 8xDC \nversus 16x1P, etc. -- DC gets cheaper by wider and wider margins because \nthose mega-CPU motherboards are astronomically expensive.\n\nDC also gives you a better upgrade path. Let's say you do testing and \nfigure 2x246 is the right setup to handle the load. Well instead of \ngetting 2x1P, use the same 2P motherboard but only populate 1 CPU w/ a \nDC/270. Now you have a server that can be upgraded to +80% more CPU by \npopping in another DC/270 versus throwing out the entire thing to get a \n4x1P setup.\n\nThe only questions would be:\n(1) Do you need a SMP server at all? I'd claim yes -- you always need 2+ \ncores whether it's DC or 2P to avoid IO interrupts blocking other \nprocesses from running.\n\n(2) Does a DC system perform better than it's Nx1P cousin? My experience \nis yes. Did some rough tests in a drop-in-replacement 1x265 versus 2x244 \nand saw about +10% for DC. All the official benchmarks (Spec, Java, SAP, \netc) from AMD/Sun/HP/IBM show DCs outperforming the Nx1P setups.\n\n(3) Do you need an insane amount of memory? Well here's the case where \nthe more expensive motherboard will serve you better since each CPU slot \nhas its own bank of memory. Spend more money on memory, get cheaper \nsingle-core CPUs.\n\nOf course, this doesn't apply if you are an Intel/Dell-only shop. Xeon \nDCs, while cheaper than their corresponding single-core SMPs, don't have \nthe same performance profile of Opteron DCs. Basically, you're paying a \nbit extra so your server can generate a ton more heat.\n",
"msg_date": "Wed, 16 Nov 2005 07:33:39 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "David Boreham wrote:\n> >Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> >for most applications this system will be IO bound, and you will see a\n> >nice lot of drive failures in the first year of operation with\n> >consumer grade drives.\n> \n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices. In my experience the\n> expensive SCSI drives I own break frequently while the cheapo\n> desktop drives just keep chunking along (modulo certain products\n> that have a specific known reliability problem).\n> \n> I'd expect that a larger number of hotter drives will give a less reliable\n> system than a smaller number of cooler ones.\n\nOur SCSI drives have failed maybe a little less than our IDE drives. \nHell, some of the SCSIs even came bad when we bought them. Of course, \nthe IDE drive failure % is inflated by all the IBM Deathstars we got -- ugh.\n\nBasically, I've found it's cooling that's most important. Packing the \ndrives together into really small rackmounts? Good for your density, not \ngood for the drives. Now we do larger rackmounts -- drives have more \nspace in between each other plus fans in front and back of the drives.\n",
"msg_date": "Wed, 16 Nov 2005 07:41:10 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nAMD added quad-core processors to their public roadmap for 2007.\n\nBeyond 2007, the quad-cores will scale up to 32 sockets!!!!!!!!\n(using Direct Connect Architecture 2.0)\n\nExpect Intel to follow.\n\n\tdouglas\n\nOn Nov 16, 2005, at 9:38 AM, Steve Wampler wrote:\n\n> [...]\n>\n> Got it - the cpu is only acting on one query in any instant but may be\n> switching between many 'simultaneous' queries. PG isn't really \n> involved\n> in the decision. That makes sense.\n",
"msg_date": "Wed, 16 Nov 2005 10:45:15 -0500",
"msg_from": "\"Douglas J. Trainor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "OT Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": ">\n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices. In my experience the\n> expensive SCSI drives I own break frequently while the cheapo\n> desktop drives just keep chunking along (modulo certain products\n> that have a specific known reliability problem).\n\nI don't know if the reliability grade is true or not but what I can tell\nyou is that I have scsi drives that are 5+ years old that still work without\nissue.\n\nI have never had an IDE drive last longer than 3 years (when used in \nproduction).\n\nThat being said, so what. That is what raid is for. You loose a drive \nand hot swap\nit back in. Heck keep a hotspare in the trays.\n\nJoshua D. Drake\n\n\n>\n> I'd expect that a larger number of hotter drives will give a less \n> reliable\n> system than a smaller number of cooler ones.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Wed, 16 Nov 2005 07:47:53 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": ">\n> The only questions would be:\n> (1) Do you need a SMP server at all? I'd claim yes -- you always need \n> 2+ cores whether it's DC or 2P to avoid IO interrupts blocking other \n> processes from running.\n\nI would back this up. Even for smaller installations (single raid 1, 1 \ngig of ram). Why? Well because many applications are going to be CPU \nbound. For example\nwe have a PHP application that is a CMS. On a single CPU machine, RAID 1 \nit takes about 300ms to deliver a single page, point to point. We are \nnot IO bound.\nSo what happens is that under reasonable load we are actually waiting \nfor the CPU to process the code.\n\nA simple upgrade to an SMP machine literally doubles our performance \nbecause we are still not IO bound. I strongly suggest that everyone use \nat least a single dual core because of this experience.\n\n>\n> (3) Do you need an insane amount of memory? Well here's the case where \n> the more expensive motherboard will serve you better since each CPU \n> slot has its own bank of memory. Spend more money on memory, get \n> cheaper single-core CPUs.\nAgreed. A lot of times the slowest dual-core is 5x what you actually \nneed. So get the slowest, and bulk up on memory. If nothing else memory \nis cheap today and it might not be tomorrow.\n\n> Of course, this doesn't apply if you are an Intel/Dell-only shop. Xeon \n> DCs, while cheaper than their corresponding single-core SMPs, don't \n> have the same performance profile of Opteron DCs. Basically, you're \n> paying a bit extra so your server can generate a ton more heat.\n>\nWell if you are an Intel/Dell shop running PostgreSQL you have bigger \nproblems ;)\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n",
"msg_date": "Wed, 16 Nov 2005 08:03:37 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Wed, 2005-11-16 at 08:51, David Boreham wrote:\n> >Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> >for most applications this system will be IO bound, and you will see a\n> >nice lot of drive failures in the first year of operation with\n> >consumer grade drives.\n> \n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices. In my experience the\n> expensive SCSI drives I own break frequently while the cheapo\n> desktop drives just keep chunking along (modulo certain products\n> that have a specific known reliability problem).\n> \n> I'd expect that a larger number of hotter drives will give a less reliable\n> system than a smaller number of cooler ones.\n\nMy experience has mirrored this.\n\nAnyone remember back when HP made their SureStore drives? We built 8\ndrive RAID arrays to ship to customer sites, pre-filled with data. Not\na single one arrived fully operational. The failure rate on those\ndrives was something like 60% in the first year, and HP quit making hard\ndrives because of it.\n\nThose were SCSI Server class drives, supposedly built to last 5 years.\n\nOTOH, I remember putting a pair of 60 Gig IDEs into a server that had\nlots of ventilation and fans and such, and having no problems\nwhatsoever.\n\nThere was a big commercial EMC style array in the hosting center at the\nsame place that had something like a 16 wide by 16 tall array of IDE\ndrives for storing pdf / tiff stuff on it, and we had at least one\nfailure a month in it. Of course, that's 256 drives, so you're gonna\nhave failures, and it was configured with a spare on every other row or\nsome such. We just had a big box of hard drives and it was smart enough\nto rebuild automagically when you put a new one in, so the maintenance\nwasn't really that bad. The performance was quite impressive too.\n",
"msg_date": "Wed, 16 Nov 2005 11:06:25 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,\nthe 3ware controllers beat the Areca card.\n\nAlex.\n\nOn 11/16/05, Ron <[email protected]> wrote:\n> Got some hard numbers to back your statement up? IME, the Areca\n> 1160's with >= 1GB of cache beat any other commodity RAID\n> controller. This seems to be in agreement with at least one\n> independent testing source:\n>\n> http://print.tweakers.net/?reviews/557\n>\n> RAID HW from Xyratex, Engino, or Dot Hill will _destroy_ any\n> commodity HW solution, but their price point is considerably higher.\n>\n> ...on another note, I completely agree with the poster who says we\n> need more cache on RAID controllers. We should all be beating on the\n> RAID HW manufacturers to use standard DIMMs for their caches and to\n> provide 2 standard DIMM slots in their full height cards (allowing\n> for up to 8GB of cache using 2 4GB DIMMs as of this writing).\n>\n> It should also be noted that 64 drive chassis' are going to become\n> possible once 2.5\" 10Krpm SATA II and FC HDs become the standard next\n> year (48's are the TOTL now). We need controller technology to keep up.\n>\n> Ron\n>\n> At 12:16 AM 11/16/2005, Alex Turner wrote:\n> >Not at random access in RAID 10 they aren't, and anyone with their\n> >head screwed on right is using RAID 10. The 9500S will still beat the\n> >Areca cards at RAID 10 database access patern.\n> >\n> >Alex.\n> >\n> >On 11/15/05, Dave Cramer <[email protected]> wrote:\n> > > Luke,\n> > >\n> > > Have you tried the areca cards, they are slightly faster yet.\n> > >\n> > > Dave\n> > >\n> > > On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > I agree - you can get a very good one from www.acmemicro.com or\n> > >\n> > > www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n> > >\n> > > RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n> > >\n> > > on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n> > >\n> > > performance on these (with tuning) on Linux using the xfs filesystem,\n> > >\n> > > which is one of the most critical factors for large databases.\n> > >\n> > >\n> > >\n> > >\n> > > Note that you want to have your DBMS use all of the CPU and disk channel\n> > >\n> > > bandwidth you have on each query, which takes a parallel database like\n> > >\n> > > Bizgres MPP to achieve.\n> > >\n> > >\n> > >\n> > >\n> > > Regards,\n> > >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: Don't 'kill -9' the postmaster\n>\n>\n>\n>\n",
"msg_date": "Wed, 16 Nov 2005 12:08:37 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "On Wed, 2005-11-16 at 09:33, William Yu wrote:\n> Alex Turner wrote:\n> > Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> > for most applications this system will be IO bound, and you will see a\n> > nice lot of drive failures in the first year of operation with\n> > consumer grade drives.\n> > \n> > Spend your money on better Disks, and don't bother with Dual Core IMHO\n> > unless you can prove the need for it.\n> \n> I would say the opposite -- you always want Dual Core nowadays. DC \n> Opterons simply give you better bang for the buck than single core \n> Opterons. Price out a 1xDC system against a 2x1P system -- the 1xDC will \n> be cheaper. Do the same for 2xDC versus 4x1P, 4xDC versus 8x1P, 8xDC \n> versus 16x1P, etc. -- DC gets cheaper by wider and wider margins because \n> those mega-CPU motherboards are astronomically expensive.\n\nThe biggest gain is going from 1 to 2 CPUs (real cpus, like the DC\nOpterons or genuine dual CPU mobo, not \"hyperthreaded\"). Part of the\nissue isn't just raw CPU processing power. The second CPU allows the\nmachine to be more responsive because it doesn't have to context switch\nas much.\n\nWhile I've seen plenty of single CPU servers start to bog under load\nrunning one big query, the dual CPU machines always seem more than just\ntwice as snappy under similar loads.\n",
"msg_date": "Wed, 16 Nov 2005 11:09:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Scott,\n\nOn 11/16/05 9:09 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> The biggest gain is going from 1 to 2 CPUs (real cpus, like the DC\n> Opterons or genuine dual CPU mobo, not \"hyperthreaded\"). Part of the\n> issue isn't just raw CPU processing power. The second CPU allows the\n> machine to be more responsive because it doesn't have to context switch\n> as much.\n> \n> While I've seen plenty of single CPU servers start to bog under load\n> running one big query, the dual CPU machines always seem more than just\n> twice as snappy under similar loads.\n> \nI agree, 2 CPUs are better than one in most cases.\n\nThe discussion was kicked off by the suggestion to get 8 dual core CPUs to\nprocess a large database with postgres. Say your decision support query\ntakes 15 minutes to run with one CPU. Add another and it still takes 15\nminutes. Add 15 and the same ...\n\nOLTP is so different from Business intelligence and Decision Support that\nvery little of this thread¹s discussion is relevant IMO.\n\nThe job is to design a system that can process sequential scan as fast as\npossible and uses all resources (CPUs, mem, disk channels) on each query.\nSequential scan is 100x more important than random seeks.\n\nHere are the facts so far:\n* Postgres can only use 1 CPU on each query\n* Postgres I/O for sequential scan is CPU limited to 110-120 MB/s on the\nfastest modern CPUs\n* Postgres disk-based sort speed is 1/10 or more slower than commercial\ndatabases and memory doesn¹t improve it (much)\n\nThese are the conclusions that follow about decision support / BI system\narchitecture for normal Postgres:\n* I/O hardware with more than 110MB/s of read bandwidth is not useful\n* More than 1 CPU is not useful\n* More RAM than a nominal amount for small table caching is not useful\n\nIn other words, big SMP doesn¹t address the problem at all. By contrast,\nhaving all CPUs on multiple machines, or even on a big SMP with lots of I/O\nchannels, solves all of the above issues.\n\nRegards,\n\n- Luke \n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nScott,\n\nOn 11/16/05 9:09 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\nThe biggest gain is going from 1 to 2 CPUs (real cpus, like the DC\nOpterons or genuine dual CPU mobo, not \"hyperthreaded\"). Part of the\nissue isn't just raw CPU processing power. The second CPU allows the\nmachine to be more responsive because it doesn't have to context switch\nas much.\n\nWhile I've seen plenty of single CPU servers start to bog under load\nrunning one big query, the dual CPU machines always seem more than just\ntwice as snappy under similar loads.\n\nI agree, 2 CPUs are better than one in most cases.\n\nThe discussion was kicked off by the suggestion to get 8 dual core CPUs to process a large database with postgres. Say your decision support query takes 15 minutes to run with one CPU. Add another and it still takes 15 minutes. Add 15 and the same ...\n\nOLTP is so different from Business intelligence and Decision Support that very little of this thread’s discussion is relevant IMO.\n\nThe job is to design a system that can process sequential scan as fast as possible and uses all resources (CPUs, mem, disk channels) on each query. 
Sequential scan is 100x more important than random seeks.\n\nHere are the facts so far:\nPostgres can only use 1 CPU on each query\nPostgres I/O for sequential scan is CPU limited to 110-120 MB/s on the fastest modern CPUs\nPostgres disk-based sort speed is 1/10 or more slower than commercial databases and memory doesn’t improve it (much)\n\nThese are the conclusions that follow about decision support / BI system architecture for normal Postgres:\nI/O hardware with more than 110MB/s of read bandwidth is not useful \nMore than 1 CPU is not useful\nMore RAM than a nominal amount for small table caching is not useful\n\nIn other words, big SMP doesn’t address the problem at all. By contrast, having all CPUs on multiple machines, or even on a big SMP with lots of I/O channels, solves all of the above issues.\n\nRegards,\n\n- Luke",
"msg_date": "Wed, 16 Nov 2005 09:47:26 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Oops,\n\nLast point should be worded: ³All CPUs on all machines used by a parallel\ndatabase²\n\n- Luke \n\n\nOn 11/16/05 9:47 AM, \"Luke Lonergan\" <[email protected]> wrote:\n\n> Scott,\n> \n> On 11/16/05 9:09 AM, \"Scott Marlowe\" <[email protected]> wrote:\n> \n>> The biggest gain is going from 1 to 2 CPUs (real cpus, like the DC\n>> Opterons or genuine dual CPU mobo, not \"hyperthreaded\"). Part of the\n>> issue isn't just raw CPU processing power. The second CPU allows the\n>> machine to be more responsive because it doesn't have to context switch\n>> as much.\n>> \n>> While I've seen plenty of single CPU servers start to bog under load\n>> running one big query, the dual CPU machines always seem more than just\n>> twice as snappy under similar loads.\n>> \n> I agree, 2 CPUs are better than one in most cases.\n> \n> The discussion was kicked off by the suggestion to get 8 dual core CPUs to\n> process a large database with postgres. Say your decision support query takes\n> 15 minutes to run with one CPU. Add another and it still takes 15 minutes.\n> Add 15 and the same ...\n> \n> OLTP is so different from Business intelligence and Decision Support that very\n> little of this thread¹s discussion is relevant IMO.\n> \n> The job is to design a system that can process sequential scan as fast as\n> possible and uses all resources (CPUs, mem, disk channels) on each query.\n> Sequential scan is 100x more important than random seeks.\n> \n> Here are the facts so far:\n> * Postgres can only use 1 CPU on each query\n> * Postgres I/O for sequential scan is CPU limited to 110-120 MB/s on the\n> fastest modern CPUs\n> * Postgres disk-based sort speed is 1/10 or more slower than commercial\n> databases and memory doesn¹t improve it (much)\n> \n> These are the conclusions that follow about decision support / BI system\n> architecture for normal Postgres:\n> * I/O hardware with more than 110MB/s of read bandwidth is not useful\n> * More than 1 CPU is not useful\n> * More RAM than a nominal amount for small table caching is not useful\n> \n> In other words, big SMP doesn¹t address the problem at all. By contrast,\n> having all CPUs on multiple machines, or even on a big SMP with lots of I/O\n> channels, solves all of the above issues.\n> \n> Regards,\n> \n- Luke \n\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nOops,\n\nLast point should be worded: “All CPUs on all machines used by a parallel database”\n\n- Luke \n\n\nOn 11/16/05 9:47 AM, \"Luke Lonergan\" <[email protected]> wrote:\n\nScott,\n\nOn 11/16/05 9:09 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\nThe biggest gain is going from 1 to 2 CPUs (real cpus, like the DC\nOpterons or genuine dual CPU mobo, not \"hyperthreaded\"). Part of the\nissue isn't just raw CPU processing power. The second CPU allows the\nmachine to be more responsive because it doesn't have to context switch\nas much.\n\nWhile I've seen plenty of single CPU servers start to bog under load\nrunning one big query, the dual CPU machines always seem more than just\ntwice as snappy under similar loads.\n\nI agree, 2 CPUs are better than one in most cases.\n\nThe discussion was kicked off by the suggestion to get 8 dual core CPUs to process a large database with postgres. Say your decision support query takes 15 minutes to run with one CPU. Add another and it still takes 15 minutes. 
"msg_date": "Wed, 16 Nov 2005 09:49:28 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Wed, 2005-11-16 at 11:47, Luke Lonergan wrote:\n> Scott,\n\nSome cutting for clarity... I agree on the OLTP versus OLAP\ndiscussion. \n\n> Here are the facts so far:\n> * Postgres can only use 1 CPU on each query\n> * Postgres I/O for sequential scan is CPU limited to 110-120\n> MB/s on the fastest modern CPUs\n> * Postgres disk-based sort speed is 1/10 or more slower than\n> commercial databases and memory doesn’t improve it (much)\n\nBut PostgreSQL only spills to disk if the data set won't fit into the\namount of memory allocated by working_mem / sort_mem. And for most\nBusiness analysis stuff, this can be quite large, and you can even crank\nit up for a single query. \n\nI've written reports that were horrifically slow, hitting the disk and\nall, and I upped sort_mem to hundreds of megabytes until it fit into\nmemory, and suddenly, a slow query is running factors faster than\nbefore.\n\nOr did you mean something else by \"disk base sort speed\"???\n",
"msg_date": "Wed, 16 Nov 2005 11:54:34 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 11/16/05, David Boreham <[email protected]> wrote:\n> >Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> >for most applications this system will be IO bound, and you will see a\n> >nice lot of drive failures in the first year of operation with\n> >consumer grade drives.\n>\n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices. In my experience the\n> expensive SCSI drives I own break frequently while the cheapo\n> desktop drives just keep chunking along (modulo certain products\n> that have a specific known reliability problem).\n>\n> I'd expect that a larger number of hotter drives will give a less reliable\n> system than a smaller number of cooler ones.\n\nOf all the SCSI and IDE drives I've used, and I've used a lot, there\nis a definite difference in quality. The SCSI drives primarily use\nhigher quality components that are intended to last longer under 24/7\nwork loads. I've taken several SCSI and IDE drives apart and you can\ntell from the guts that the SCSI drives are built with sturdier\ncomponents.\n\nI haven't gotten my hands on the Raptor line of ATA drives yet, but\nI've heard they share this in common with the SCSI drives - they are\nbuilt with components made to be used day and night for years straight\nwithout ending.\n\nThat doesn't mean they will last longer than IDE drives, that just\nmeans they've been designed to withstand higher amounts of heat and\nsustained activity. I've got some IDE drives that have lasted years++\nand I've got some IDE drives that have lasted months. However, my SCSI\ndrives I've had over the years all lasted longer than the server they\nwere installed in.\n\nI will say that in the last 10 years, the MTBF of IDE/ATA drives has\nimproved dramatically, so I regularly use them in servers, however I\nhave also shifted my ideology so that a server should be replaced\nafter 3 years, where before I aimed for 5.\n\nIt seems to me that the least reliable components in servers these\ndays are the fans.\n\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 16 Nov 2005 12:51:25 -0600",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:\n> There was a big commercial EMC style array in the hosting center at the\n> same place that had something like a 16 wide by 16 tall array of IDE\n> drives for storing pdf / tiff stuff on it, and we had at least one\n> failure a month in it. Of course, that's 256 drives, so you're gonna\n> have failures, and it was configured with a spare on every other row or\n> some such. We just had a big box of hard drives and it was smart enough\n> to rebuild automagically when you put a new one in, so the maintenance\n> wasn't really that bad. The performance was quite impressive too.\n\nIf you have a cool SAN, it alerts you and removes all data off a disk\n_before_ it starts giving hard failures :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 16 Nov 2005 19:51:38 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 11/16/05, Steinar H. Gunderson <[email protected]> wrote:\n> If you have a cool SAN, it alerts you and removes all data off a disk\n> _before_ it starts giving hard failures :-)\n>\n> /* Steinar */\n> --\n> Homepage: http://www.sesse.net/\n\nGood point. I have avoided data loss *twice* this year by using SMART\nhard drive monitoring software.\n\nI can't tell you how good it feels to replace a drive that is about to\ndie, as compared to restoring data because a drive died.\n--\nMatthew Nuzum\nwww.bearfruit.org\n",
"msg_date": "Wed, 16 Nov 2005 13:03:46 -0600",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Wed, 2005-11-16 at 12:51, Steinar H. Gunderson wrote:\n> On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:\n> > There was a big commercial EMC style array in the hosting center at the\n> > same place that had something like a 16 wide by 16 tall array of IDE\n> > drives for storing pdf / tiff stuff on it, and we had at least one\n> > failure a month in it. Of course, that's 256 drives, so you're gonna\n> > have failures, and it was configured with a spare on every other row or\n> > some such. We just had a big box of hard drives and it was smart enough\n> > to rebuild automagically when you put a new one in, so the maintenance\n> > wasn't really that bad. The performance was quite impressive too.\n> \n> If you have a cool SAN, it alerts you and removes all data off a disk\n> _before_ it starts giving hard failures :-)\n\nYeah, I forget who made the unit we used, but it was pretty much fully\nautomated. IT was something like a large RAID 5+0 (0+5???) and would\nsend an alert when a drive died or started getting errors, and the bad\ndrive's caddy would be flashing read instead of steady green.\n\nI just remember thinking that I'd never used a drive array that was\ntaller than I was before that.\n",
"msg_date": "Wed, 16 Nov 2005 14:08:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "You _ARE_ kidding right? In what hallucination?\n\nThe performance numbers for the 1GB cache version of the Areca 1160 \nare the _grey_ line in the figures, and were added after the original \narticle was published:\n\n\"Note: Since the original Dutch article was published in late \nJanuary, we have finished tests of the 16-port Areca ARC-1160 using \n128MB, 512MB and 1GB cache configurations and RAID 5 arrays of up to \n12 drives. The ARC-1160 was using the latest 1.35 beta firmware. The \nperformance graphs have been updated to include the ARC-1160 results. \nDiscussions of the results have not been updated, however. \"\n\nWith 1GB of cache, the 1160's beat everything else in almost all of \nthe tests they participated in. For the few where they do not win \nhands down, the Escalade's (very occasionally) essentially tie.\n\nThese are very easy to read full color graphs where higher is better \nand the grey line representing the 1GB 1160's is almost always higher \non the graph than anything else. Granted the Escalades seem to give \nthem the overall best run for their money, but they still are clearly \nsecond best when looking at all the graphs and the CPU utilization \nnumbers in aggregate.\n\nRon\n\n\n\nAt 12:08 PM 11/16/2005, Alex Turner wrote:\n>Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,\n>the 3ware controllers beat the Areca card.\n>\n>Alex.\n>\n>On 11/16/05, Ron <[email protected]> wrote:\n> > Got some hard numbers to back your statement up? IME, the Areca\n> > 1160's with >= 1GB of cache beat any other commodity RAID\n> > controller. This seems to be in agreement with at least one\n> > independent testing source:\n> >\n> > http://print.tweakers.net/?reviews/557\n> >\n> > RAID HW from Xyratex, Engino, or Dot Hill will _destroy_ any\n> > commodity HW solution, but their price point is considerably higher.\n> >\n> > ...on another note, I completely agree with the poster who says we\n> > need more cache on RAID controllers. We should all be beating on the\n> > RAID HW manufacturers to use standard DIMMs for their caches and to\n> > provide 2 standard DIMM slots in their full height cards (allowing\n> > for up to 8GB of cache using 2 4GB DIMMs as of this writing).\n> >\n> > It should also be noted that 64 drive chassis' are going to become\n> > possible once 2.5\" 10Krpm SATA II and FC HDs become the standard next\n> > year (48's are the TOTL now). We need controller technology to keep up.\n> >\n> > Ron\n> >\n> > At 12:16 AM 11/16/2005, Alex Turner wrote:\n> > >Not at random access in RAID 10 they aren't, and anyone with their\n> > >head screwed on right is using RAID 10. The 9500S will still beat the\n> > >Areca cards at RAID 10 database access patern.\n> > >\n> > >Alex.\n> > >\n> > >On 11/15/05, Dave Cramer <[email protected]> wrote:\n> > > > Luke,\n> > > >\n> > > > Have you tried the areca cards, they are slightly faster yet.\n> > > >\n> > > > Dave\n> > > >\n> > > > On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n> > > >\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > I agree - you can get a very good one from www.acmemicro.com or\n> > > >\n> > > > www.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\n> > > >\n> > > > RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n> > > >\n> > > > on a Tyan 2882 motherboard. 
We get about 400MB/s sustained disk read\n> > > >\n> > > > performance on these (with tuning) on Linux using the xfs filesystem,\n> > > >\n> > > > which is one of the most critical factors for large databases.\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > Note that you want to have your DBMS use all of the CPU and \n> disk channel\n> > > >\n> > > > bandwidth you have on each query, which takes a parallel database like\n> > > >\n> > > > Bizgres MPP to achieve.\n> > > >\n> > > >\n> > > >\n> > > >\n> > > > Regards,\n> > > >\n> > >\n> > >---------------------------(end of broadcast)---------------------------\n> > >TIP 2: Don't 'kill -9' the postmaster\n> >\n> >\n> >\n> >\n\n\n\n",
"msg_date": "Wed, 16 Nov 2005 15:57:20 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "Yeah those big disks arrays are real sweet.\n\nOne day last week I was in a data center in Arizona when the big LSI/Storagetek\narray in the cage next to mine had a hard drive failure. So the alarm shrieked\nat like 13225535 decibles continuously for hours. BEEEP BEEEEP BEEEEP BEEEEP. \nOf course since this was a colo facility it wasn't staffed on site by the idiots\nwho own the array. BEEEEP BEEEEEEEP BEEEEEEEP for hours. So I had to stand\nnext to this thing--only separated by a few feet and a little wire mesh--while\nit shrieked for hours until a knuckle-dragger arrived on site to swap the drive.\n\nYay.\n\nSo if you're going to get a fancy array (they're worth it if somebody else is\npaying) then make sure to *turn off the @#%@#SF'ing audible alarm* if you deploy\nit in a colo facility.\n\nQuoting Scott Marlowe <[email protected]>:\n\n> On Wed, 2005-11-16 at 12:51, Steinar H. Gunderson wrote:\n> > On Wed, Nov 16, 2005 at 11:06:25AM -0600, Scott Marlowe wrote:\n> > > There was a big commercial EMC style array in the hosting center at the\n> > > same place that had something like a 16 wide by 16 tall array of IDE\n> > > drives for storing pdf / tiff stuff on it, and we had at least one\n> > > failure a month in it. Of course, that's 256 drives, so you're gonna\n> > > have failures, and it was configured with a spare on every other row or\n> > > some such. We just had a big box of hard drives and it was smart\n> enough\n> > > to rebuild automagically when you put a new one in, so the maintenance\n> > > wasn't really that bad. The performance was quite impressive too.\n> > \n> > If you have a cool SAN, it alerts you and removes all data off a disk\n> > _before_ it starts giving hard failures :-)\n> \n> Yeah, I forget who made the unit we used, but it was pretty much fully\n> automated. IT was something like a large RAID 5+0 (0+5???) and would\n> send an alert when a drive died or started getting errors, and the bad\n> drive's caddy would be flashing read instead of steady green.\n> \n> I just remember thinking that I'd never used a drive array that was\n> taller than I was before that.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Wed, 16 Nov 2005 13:03:37 -0800",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "William Yu wrote:\n> \n> Our SCSI drives have failed maybe a little less than our IDE drives. \n\nMicrosoft in their database showcase terraserver project has\nhad the same experience. They studied multiple configurations\nincluding a SCSI/SAN solution as well as a cluster of SATA boxes.\n\nThey measured a\n 6.4% average annual failure rate of their SATA version and a\n 5.5% average annual failure rate on their SCSI implementation.\n\nftp://ftp.research.microsoft.com/pub/tr/TR-2004-107.pdf\n\n \"We lost 9 drives out of 140 SATA drives on the Web and\n Storage Bricks in one year. This is a 6.4% annual failure rate.\n In contrast, the Compaq Storageworks SAN and Web servers lost\n approximately 32 drives in three years out of a total of 194\n drives.13 This is a 5.5% annual failure rate.\n\n The failure rates indicate that SCSI drives are more\n reliable than SATA. SATA drives are substantially\n cheaper than SCSI drives. Because the SATA failure rate\n is so close to the SCSI failure rate gives SATA a\n substantial return on investment advantage.\"\n\nSo unless your system is extremely sensitive to single drive\nfailures, the difference is pretty small. And for the cost\nit seems you can buy enough extra spindles of SATA drives to\neasily make up for the performance difference.\n\n\n> Basically, I've found it's cooling that's most important. Packing the \n> drives together into really small rackmounts? Good for your density, not \n> good for the drives.\n\nIndeed that was their guess for their better-than-expected\nlife of their SATA drives as well. From the same paper:\n\n \"We were careful about disk cooling � SATA\n drives are rarely cooled with the same care that a SCSI\n array receives.\"\n",
"msg_date": "Wed, 16 Nov 2005 13:08:58 -0800",
"msg_from": "Ron Mayer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Amendment: there are graphs where the 1GB Areca 1160's do not do as \nwell. Given that they are mySQL specific and that similar usage \nscenarios not involving mySQL (as well as most of the usage scenarios \ninvolving mySQL; as I said these did not follow the pattern of the \nrest of the benchmarks) show the usual pattern of the 1GB 1160's in \n1st place or tied for 1st place, it seems reasonable that mySQL has \nsomething to due with the aberrant results in those 2 (IIRC) cases.\n\nRon\n\nAt 03:57 PM 11/16/2005, Ron wrote:\n>You _ARE_ kidding right? In what hallucination?\n>\n>The performance numbers for the 1GB cache version of the Areca 1160 \n>are the _grey_ line in the figures, and were added after the \n>original article was published:\n>\n>\"Note: Since the original Dutch article was published in late \n>January, we have finished tests of the 16-port Areca ARC-1160 using \n>128MB, 512MB and 1GB cache configurations and RAID 5 arrays of up to \n>12 drives. The ARC-1160 was using the latest 1.35 beta firmware. The \n>performance graphs have been updated to include the ARC-1160 \n>results. Discussions of the results have not been updated, however. \"\n>\n>With 1GB of cache, the 1160's beat everything else in almost all of \n>the tests they participated in. For the few where they do not win \n>hands down, the Escalade's (very occasionally) essentially tie.\n>\n>These are very easy to read full color graphs where higher is better \n>and the grey line representing the 1GB 1160's is almost always \n>higher on the graph than anything else. Granted the Escalades seem \n>to give them the overall best run for their money, but they still \n>are clearly second best when looking at all the graphs and the CPU \n>utilization numbers in aggregate.\n>\n>Ron\n>\n>\n>\n>At 12:08 PM 11/16/2005, Alex Turner wrote:\n>>Yes - that very benchmark shows that for a MySQL Datadrive in RAID 10,\n>>the 3ware controllers beat the Areca card.\n>>\n>>Alex.\n>>\n>>On 11/16/05, Ron <[email protected]> wrote:\n>> > Got some hard numbers to back your statement up? IME, the Areca\n>> > 1160's with >= 1GB of cache beat any other commodity RAID\n>> > controller. This seems to be in agreement with at least one\n>> > independent testing source:\n>> >\n>> > http://print.tweakers.net/?reviews/557\n>> >\n>> > RAID HW from Xyratex, Engino, or Dot Hill will _destroy_ any\n>> > commodity HW solution, but their price point is considerably higher.\n>> >\n>> > ...on another note, I completely agree with the poster who says we\n>> > need more cache on RAID controllers. We should all be beating on the\n>> > RAID HW manufacturers to use standard DIMMs for their caches and to\n>> > provide 2 standard DIMM slots in their full height cards (allowing\n>> > for up to 8GB of cache using 2 4GB DIMMs as of this writing).\n>> >\n>> > It should also be noted that 64 drive chassis' are going to become\n>> > possible once 2.5\" 10Krpm SATA II and FC HDs become the standard next\n>> > year (48's are the TOTL now). We need controller technology to keep up.\n>> >\n>> > Ron\n>> >\n>> > At 12:16 AM 11/16/2005, Alex Turner wrote:\n>> > >Not at random access in RAID 10 they aren't, and anyone with their\n>> > >head screwed on right is using RAID 10. 
The 9500S will still beat the\n>> > >Areca cards at RAID 10 database access patern.\n>> > >\n>> > >Alex.\n>> > >\n>> > >On 11/15/05, Dave Cramer <[email protected]> wrote:\n>> > > > Luke,\n>> > > >\n>> > > > Have you tried the areca cards, they are slightly faster yet.\n>> > > >\n>> > > > Dave\n>> > > >\n>> > > > On 15-Nov-05, at 7:09 AM, Luke Lonergan wrote:\n>> > > >\n>> > > >\n>> > > >\n>> > > >\n>> > > >\n>> > > > I agree - you can get a very good one from www.acmemicro.com or\n>> > > >\n>> > > > www.rackable.com with 8x 400GB SATA disks and the new 3Ware \n>> 9550SX SATA\n>> > > >\n>> > > > RAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\n>> > > >\n>> > > > on a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\n>> > > >\n>> > > > performance on these (with tuning) on Linux using the xfs filesystem,\n>> > > >\n>> > > > which is one of the most critical factors for large databases.\n>> > > >\n>> > > >\n>> > > >\n>> > > >\n>> > > > Note that you want to have your DBMS use all of the CPU and \n>> disk channel\n>> > > >\n>> > > > bandwidth you have on each query, which takes a parallel database like\n>> > > >\n>> > > > Bizgres MPP to achieve.\n>> > > >\n>> > > >\n>> > > >\n>> > > >\n>> > > > Regards,\n>> > > >\n>> > >\n>> > >---------------------------(end of broadcast)---------------------------\n>> > >TIP 2: Don't 'kill -9' the postmaster\n>> >\n>> >\n>> >\n>> >\n>\n>\n>\n\n\n\n",
"msg_date": "Wed, 16 Nov 2005 16:21:25 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "On 11/16/05, William Yu <[email protected]> wrote:\n> Alex Turner wrote:\n> > Spend a fortune on dual core CPUs and then buy crappy disks... I bet\n> > for most applications this system will be IO bound, and you will see a\n> > nice lot of drive failures in the first year of operation with\n> > consumer grade drives.\n> >\n> > Spend your money on better Disks, and don't bother with Dual Core IMHO\n> > unless you can prove the need for it.\n>\n> I would say the opposite -- you always want Dual Core nowadays. DC\n> Opterons simply give you better bang for the buck than single core\n> Opterons. Price out a 1xDC system against a 2x1P system -- the 1xDC will\n> be cheaper. Do the same for 2xDC versus 4x1P, 4xDC versus 8x1P, 8xDC\n> versus 16x1P, etc. -- DC gets cheaper by wider and wider margins because\n> those mega-CPU motherboards are astronomically expensive.\n>\n\nOpteron 242 - $178.00\nOpteron 242 - $178.00\nTyan S2882 - $377.50\nTotal: $733.50\n\nOpteron 265 - $719.00\nTyan K8E - $169.00\nTotal: $888.00\n\nTyan K8E - doesn't have any PCI-X, so it's also not apples to apples. \nInfact I couldn't find a single CPU slot board that did, so you pretty\nmuch have to buy a dual CPU board to get PCI-X.\n\n1xDC is _not_ cheaper.\n\nOur DB application does about 5 queries/second peak, plus a heavy\ninsert job once per day. We only _need_ two CPUs, which is true for a\ngreat many DB applications. Unless you like EJB of course, which will\nthrash the crap out of your system.\n\nConsider the two most used regions for DBs:\n\na) OLTP - probably IO bound, large number of queries/sec updating info\non _disks_, not requiring much CPU activity except to retrieve item\ninfomration which is well indexed and normalized.\n\nb) Data wharehouse - needs CPU, but probably still IO bound, large\ndata set that won't fit in RAM will required large amounts of disk\nreads. CPU can easily keep up with disk reads.\n\nI have yet to come across a DB system that wasn't IO bound.\n\n> DC also gives you a better upgrade path. Let's say you do testing and\n> figure 2x246 is the right setup to handle the load. Well instead of\n> getting 2x1P, use the same 2P motherboard but only populate 1 CPU w/ a\n> DC/270. Now you have a server that can be upgraded to +80% more CPU by\n> popping in another DC/270 versus throwing out the entire thing to get a\n> 4x1P setup.\n\nNo argument there. But it's pointless if you are IO bound.\n\n>\n> The only questions would be:\n> (1) Do you need a SMP server at all? I'd claim yes -- you always need 2+\n> cores whether it's DC or 2P to avoid IO interrupts blocking other\n> processes from running.\n\nAt least 2CPUs is always good for precisely those reasons. More than\n2CPUs gives diminishing returns.\n\n>\n> (2) Does a DC system perform better than it's Nx1P cousin? My experience\n> is yes. Did some rough tests in a drop-in-replacement 1x265 versus 2x244\n> and saw about +10% for DC. All the official benchmarks (Spec, Java, SAP,\n> etc) from AMD/Sun/HP/IBM show DCs outperforming the Nx1P setups.\n\nMaybe true, but the 265 does have a 25% faster FSB than the 244, which\nmight perhaps play a role.\n\n>\n> (3) Do you need an insane amount of memory? Well here's the case where\n> the more expensive motherboard will serve you better since each CPU slot\n> has its own bank of memory. Spend more money on memory, get cheaper\n> single-core CPUs.\n\nRemember - large DB is going to be IO bound. 
Memory will get thrashed\nfor file block buffers, even if you have large amounts, it's all gonna\nbe cycled in and out again.\n\n>\n> Of course, this doesn't apply if you are an Intel/Dell-only shop. Xeon\n> DCs, while cheaper than their corresponding single-core SMPs, don't have\n> the same performance profile of Opteron DCs. Basically, you're paying a\n> bit extra so your server can generate a ton more heat.\n\nDell/Xeon/Postgres is just a bad combination any day of the week ;)\n\nAlex.\n",
"msg_date": "Thu, 17 Nov 2005 14:48:38 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 11/16/05, Joshua D. Drake <[email protected]> wrote:\n> >\n> > The only questions would be:\n> > (1) Do you need a SMP server at all? I'd claim yes -- you always need\n> > 2+ cores whether it's DC or 2P to avoid IO interrupts blocking other\n> > processes from running.\n>\n> I would back this up. Even for smaller installations (single raid 1, 1\n> gig of ram). Why? Well because many applications are going to be CPU\n> bound. For example\n> we have a PHP application that is a CMS. On a single CPU machine, RAID 1\n> it takes about 300ms to deliver a single page, point to point. We are\n> not IO bound.\n> So what happens is that under reasonable load we are actually waiting\n> for the CPU to process the code.\n>\n\nThis is the performance profile for PHP, not for Postgresql. This is\nthe postgresql mailing list.\n\n> A simple upgrade to an SMP machine literally doubles our performance\n> because we are still not IO bound. I strongly suggest that everyone use\n> at least a single dual core because of this experience.\n>\n\nPerformance of PHP, not postgresql.\n\n> >\n> > (3) Do you need an insane amount of memory? Well here's the case where\n> > the more expensive motherboard will serve you better since each CPU\n> > slot has its own bank of memory. Spend more money on memory, get\n> > cheaper single-core CPUs.\n> Agreed. A lot of times the slowest dual-core is 5x what you actually\n> need. So get the slowest, and bulk up on memory. If nothing else memory\n> is cheap today and it might not be tomorrow.\n[snip]\n\nRunning postgresql on a single drive RAID 1 with PHP on the same\nmachine is not a typical installation.\n\n300ms for PHP in CPU time? wow dude - that's quite a page. PHP\ntypical can handle up to 30-50 pages per second for a typical OLTP\napplication on a single CPU box. Something is really wrong with that\nsystem if it takes 300ms per page.\n\nAlex.\n",
"msg_date": "Thu, 17 Nov 2005 14:54:54 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alex Turner wrote:\n> Opteron 242 - $178.00\n> Opteron 242 - $178.00\n> Tyan S2882 - $377.50\n> Total: $733.50\n> \n> Opteron 265 - $719.00\n> Tyan K8E - $169.00\n> Total: $888.00\n\nYou're comparing the wrong CPUs. The 265 is the 2x of the 244 so you'll \nhave to bump up the price more although not enough to make a difference.\n\nLooks like the price of the 2X MBs have dropped since I last looked at \nit. Just a few months back, Tyan duals were $450-$500 which is what I \nwas basing my \"priced less\" statement from.\n\n> Tyan K8E - doesn't have any PCI-X, so it's also not apples to apples. \n> Infact I couldn't find a single CPU slot board that did, so you pretty\n> much have to buy a dual CPU board to get PCI-X.\n\nYou can get single CPU boards w/ PCIe and use PCIe controller cards. \nProbably expensive right now because they're so bleeding-edge new but \ndefinitely on the downswing.\n\n> a) OLTP - probably IO bound, large number of queries/sec updating info\n> on _disks_, not requiring much CPU activity except to retrieve item\n> infomration which is well indexed and normalized.\n\nNot in my experience. I find on our OLTP servers, we run 98% in RAM and \nhence are 100% CPU-bound. Our DB is about 50GB in size now, servers run \nw/ 8GB of RAM. We were *very* CPU limited running 2x244. During busy \nhours of the day, our avg \"user transaction\" time were jumping from \n0.8sec to 1.3+sec. Did the 2x265 and now we're always in the 0.7sec to \n0.9sec range.\n\n>>DC also gives you a better upgrade path. Let's say you do testing and\n>>figure 2x246 is the right setup to handle the load. Well instead of\n>>getting 2x1P, use the same 2P motherboard but only populate 1 CPU w/ a\n>>DC/270. Now you have a server that can be upgraded to +80% more CPU by\n>>popping in another DC/270 versus throwing out the entire thing to get a\n>>4x1P setup.\n> \n> \n> No argument there. But it's pointless if you are IO bound.\n\nWhy would you just accept \"we're IO bound, nothing we can do\"? I'd do \neverything in my power to make my app go from IO bound to CPU bound -- \nwhether by optimizing my code or buying more hardware. I can tell you if \nour OLTP servers were IO bound, it would run like crap. Instead of < 1 \nsec, we'd be looking at 5-10 seconds per \"user transaction\" and our \nusers would be screaming bloody murder.\n\nIn theory, you can always convert your IO bound DB to CPU bound by \nstuffing more and more RAM into your server. (Or partitioning the DB \nacross multiple servers.) Whether it's cost effective depends on the DB \nand how much your users are paying you -- and that's a case-by-case \nanalysis. Not a global statement of \"IO-bound, pointless\".\n\n>>(2) Does a DC system perform better than it's Nx1P cousin? My experience\n>>is yes. Did some rough tests in a drop-in-replacement 1x265 versus 2x244\n>>and saw about +10% for DC. All the official benchmarks (Spec, Java, SAP,\n>>etc) from AMD/Sun/HP/IBM show DCs outperforming the Nx1P setups.\n> \n> \n> Maybe true, but the 265 does have a 25% faster FSB than the 244, which\n> might perhaps play a role.\n\nNope. There's no such thing as FSB on Opterons. On-die memory controller \nruns @ CPU speed and hence connects at whatever the memory runs at \n(rounded off to some multiplier math). There's the HT speed that \ncontrols the max IO bandwidth but that's based on the motherboard, not \nthe CPU. Plus the 265 and 244 both run at 1.8Ghz so the memory \nmultiplier & HT IO are both the same.\n",
"msg_date": "Thu, 17 Nov 2005 12:38:11 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\n>> So what happens is that under reasonable load we are actually waiting\n>> for the CPU to process the code.\n>>\n>> \n>\n> This is the performance profile for PHP, not for Postgresql. This is\n> the post\nAnd your point? PostgreSQL benefits directly from what I am speaking \nabout as well.\n\n>> Performance of PHP, not postgresql.\n>>\n>> \nActually both.\n\n> [snip]\n>\n> Running postgresql on a single drive RAID 1 with PHP on the same\n> machine is not a typical installation.\n> \nWant to bet? What do you think the majority of people hosting at \nrackshack, rackspace,\nsuperrack etc... are doing? Or how about all those virtual hosts?\n\n> 300ms for PHP in CPU time? wow dude - that's quite a page. PHP\n> typical can handle up to 30-50 pages per second for a typical OLTP\n> application on a single CPU box. Something is really wrong with that\n> system if it takes 300ms per page.\n> \nThere is wait time associated with that because we are hitting it with \n50-100 connections at a time.\n\nJoshua D. Drake\n\n> Alex.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n\n",
"msg_date": "Thu, 17 Nov 2005 12:39:55 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 11/17/05, William Yu <[email protected]> wrote:\n>\n> > No argument there. But it's pointless if you are IO bound.\n>\n> Why would you just accept \"we're IO bound, nothing we can do\"? I'd do\n> everything in my power to make my app go from IO bound to CPU bound --\n> whether by optimizing my code or buying more hardware. I can tell you if\n> our OLTP servers were IO bound, it would run like crap. Instead of < 1\n> sec, we'd be looking at 5-10 seconds per \"user transaction\" and our\n> users would be screaming bloody murder.\n>\n> In theory, you can always convert your IO bound DB to CPU bound by\n> stuffing more and more RAM into your server. (Or partitioning the DB\n> across multiple servers.) Whether it's cost effective depends on the DB\n> and how much your users are paying you -- and that's a case-by-case\n> analysis. Not a global statement of \"IO-bound, pointless\".\n\n\nWe all want our systems to be CPU bound, but it's not always possible.\nRemember, he is managing a 5 TB Databse. That's quite a bit different than a\n100 GB or even 500 GB database.\n\nOn 11/17/05, William Yu <[email protected]> wrote:\n> No argument there. But it's pointless if you are IO bound.Why would you just accept \"we're IO bound, nothing we can do\"? I'd doeverything in my power to make my app go from IO bound to CPU bound --\nwhether by optimizing my code or buying more hardware. I can tell you ifour OLTP servers were IO bound, it would run like crap. Instead of < 1sec, we'd be looking at 5-10 seconds per \"user transaction\" and our\nusers would be screaming bloody murder.In theory, you can always convert your IO bound DB to CPU bound bystuffing more and more RAM into your server. (Or partitioning the DBacross multiple servers.) Whether it's cost effective depends on the DB\nand how much your users are paying you -- and that's a case-by-caseanalysis. Not a global statement of \"IO-bound, pointless\".\nWe all want our systems to be CPU bound, but it's not always\npossible. Remember, he is managing a 5 TB Databse. That's\nquite a bit different than a 100 GB or even 500 GB database.",
"msg_date": "Thu, 17 Nov 2005 13:58:46 -0700",
"msg_from": "Joshua Marsh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Joshua Marsh wrote:\n> \n> On 11/17/05, *William Yu* <[email protected] <mailto:[email protected]>> wrote:\n> \n> > No argument there. But it's pointless if you are IO bound.\n> \n> Why would you just accept \"we're IO bound, nothing we can do\"? I'd do\n> everything in my power to make my app go from IO bound to CPU bound --\n> whether by optimizing my code or buying more hardware. I can tell you if\n> our OLTP servers were IO bound, it would run like crap. Instead of < 1\n> sec, we'd be looking at 5-10 seconds per \"user transaction\" and our\n> users would be screaming bloody murder.\n> \n> In theory, you can always convert your IO bound DB to CPU bound by\n> stuffing more and more RAM into your server. (Or partitioning the DB\n> across multiple servers.) Whether it's cost effective depends on the DB\n> and how much your users are paying you -- and that's a case-by-case\n> analysis. Not a global statement of \"IO-bound, pointless\".\n> \n> \n> We all want our systems to be CPU bound, but it's not always possible. \n> Remember, he is managing a 5 TB Databse. That's quite a bit different \n> than a 100 GB or even 500 GB database.\n\nI did say \"in theory\". :) I'm pretty sure google is more CPU bound than \nIO bound -- they just spread their DB over 50K servers or whatever. Not \neverybody is willing to pay for that but it's always in the realm of \nplausibility.\n\nPlus we have to go back to the statement I was replying to which was \"I \nhave yet to come across a DB system that wasn't IO bound\".\n\n",
"msg_date": "Thu, 17 Nov 2005 13:22:47 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nJoshua Marsh <[email protected]> writes:\n\n> We all want our systems to be CPU bound, but it's not always possible.\n\nSure it is, let me introduce you to my router, a 486DX100...\n\n\n\nOk, I guess that wasn't very helpful, I admit.\n\n\n-- \ngreg\n\n",
"msg_date": "17 Nov 2005 23:58:45 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nJoshua Marsh <[email protected]> writes:\n\n> We all want our systems to be CPU bound, but it's not always possible.\n> Remember, he is managing a 5 TB Databse. That's quite a bit different than a\n> 100 GB or even 500 GB database.\n\nOk, a more productive point: it's not really the size of the database that\ncontrols whether you're I/O bound or CPU bound. It's the available I/O\nbandwidth versus your CPU speed. \n\nIf your I/O subsystem can feed data to your CPU as fast as it can consume it\nthen you'll be CPU bound no matter how much data you have in total. It's\nharder to scale up I/O subsystems than CPUs, instead of just replacing a CPU\nit tends to mean replacing the whole system to get a better motherboard with a\nfaster, better bus, as well as adding more controllers and more disks.\n\n-- \ngreg\n\n",
"msg_date": "18 Nov 2005 00:17:15 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Greg,\n\n\nOn 11/17/05 9:17 PM, \"Greg Stark\" <[email protected]> wrote:\n\n> Ok, a more productive point: it's not really the size of the database that\n> controls whether you're I/O bound or CPU bound. It's the available I/O\n> bandwidth versus your CPU speed.\n\nPostgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound after\n110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.\n\nA $1,000 system with one CPU and two SATA disks in a software RAID0 will\nperform exactly the same as a $80,000 system with 8 dual core CPUs and the\nworld's best SCSI RAID hardware on a large database for decision support\n(what the poster asked about).\n\nRegards,\n\n- Luke\n\n\n",
"msg_date": "Thu, 17 Nov 2005 22:07:54 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nOn 18-Nov-05, at 1:07 AM, Luke Lonergan wrote:\n\n> Greg,\n>\n>\n> On 11/17/05 9:17 PM, \"Greg Stark\" <[email protected]> wrote:\n>\n>> Ok, a more productive point: it's not really the size of the \n>> database that\n>> controls whether you're I/O bound or CPU bound. It's the available \n>> I/O\n>> bandwidth versus your CPU speed.\n>\n> Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound \n> after\n> 110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.\n>\n> A $1,000 system with one CPU and two SATA disks in a software RAID0 \n> will\n> perform exactly the same as a $80,000 system with 8 dual core CPUs \n> and the\n> world's best SCSI RAID hardware on a large database for decision \n> support\n> (what the poster asked about).\n\nNow there's an interesting line drawn in the sand. I presume you have \nnumbers to back this up ?\n\nThis should draw some interesting posts.\n\nDave\n>\n> Regards,\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Fri, 18 Nov 2005 08:00:17 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Dave Cramer wrote:\n> \n> On 18-Nov-05, at 1:07 AM, Luke Lonergan wrote:\n> \n>> Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound after\n>> 110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.\n>>\n>> A $1,000 system with one CPU and two SATA disks in a software RAID0 will\n>> perform exactly the same as a $80,000 system with 8 dual core CPUs \n>> and the\n>> world's best SCSI RAID hardware on a large database for decision support\n>> (what the poster asked about).\n> \n> \n> Now there's an interesting line drawn in the sand. I presume you have \n> numbers to back this up ?\n> \n> This should draw some interesting posts.\n\nWell, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000 \nsystem if he's got one going :-)\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 18 Nov 2005 13:22:41 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Richard,\n\nOn 11/18/05 5:22 AM, \"Richard Huxton\" <[email protected]> wrote:\n\n> Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000\n> system if he's got one going :-)\n\nFinally, a game worth playing!\n\nExcept it¹s backward I¹ll show you 80 $1,000 systems performing 80 times\nfaster than one $80,000 system.\n\nOn your proposition I don¹t have any $80,000 systems for trade, do you?\n\n- Luke\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nRichard,\n\nOn 11/18/05 5:22 AM, \"Richard Huxton\" <[email protected]> wrote:\n\nWell, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000\nsystem if he's got one going :-)\n\nFinally, a game worth playing!\n\nExcept it’s backward – I’ll show you 80 $1,000 systems performing 80 times faster than one $80,000 system.\n\nOn your proposition – I don’t have any $80,000 systems for trade, do you?\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 05:30:34 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Richard Huxton wrote:\n> Dave Cramer wrote:\n>>\n>> On 18-Nov-05, at 1:07 AM, Luke Lonergan wrote:\n>>\n>>> Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound \n>>> after\n>>> 110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.\n>>>\n>>> A $1,000 system with one CPU and two SATA disks in a software RAID0 \n>>> will\n>>> perform exactly the same as a $80,000 system with 8 dual core CPUs \n>>> and the\n>>> world's best SCSI RAID hardware on a large database for decision \n>>> support\n>>> (what the poster asked about).\n>>\n>>\n>> Now there's an interesting line drawn in the sand. I presume you \n>> have numbers to back this up ?\n>>\n>> This should draw some interesting posts.\n\nThat's interesting, as I occasionally see more than 110MB/s of \npostgresql IO on our system. I'm using a 32KB block size, which has \nbeen a huge win in performance for our usage patterns. 300GB database \nwith a lot of turnover. A vacuum analyze now takes about 3 hours, which \nis much shorter than before. Postgresql 8.1, dual opteron, 8GB memory, \nLinux 2.6.11, FC drives.\n\n-- Alan\n",
"msg_date": "Fri, 18 Nov 2005 08:41:58 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nOn 11/18/05 5:41 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> \n> That's interesting, as I occasionally see more than 110MB/s of\n> postgresql IO on our system. I'm using a 32KB block size, which has\n> been a huge win in performance for our usage patterns. 300GB database\n> with a lot of turnover. A vacuum analyze now takes about 3 hours, which\n> is much shorter than before. Postgresql 8.1, dual opteron, 8GB memory,\n> Linux 2.6.11, FC drives.\n\n300GB / 3 hours = 27MB/s.\n\nIf you are using the 2.6 linux kernel, you may be fooled into thinking you\nburst more than you actually get in net I/O because the I/O stats changed in\ntools like iostat and vmstat.\n\nThe only meaningful stats are (size of data) / (time to process data). Do a\nsequential scan of one of your large tables that you know the size of, then\ndivide by the run time and report it.\n\nI'm compiling some new test data to make my point now.\n\nRegards,\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 05:46:58 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On 18-Nov-05, at 8:30 AM, Luke Lonergan wrote:\n\n> Richard,\n>\n> On 11/18/05 5:22 AM, \"Richard Huxton\" <[email protected]> wrote:\n>\n>> Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000\n>> system if he's got one going :-)\n>\n> Finally, a game worth playing!\n>\n> Except it�s backward � I�ll show you 80 $1,000 systems performing \n> 80 times faster than one $80,000 system.\nNow you wouldn't happen to be selling a system that would enable this \nfor postgres, now would ya ?\n>\n> On your proposition � I don�t have any $80,000 systems for trade, \n> do you?\n>\n> - Luke\n\n\nOn 18-Nov-05, at 8:30 AM, Luke Lonergan wrote: Richard, On 11/18/05 5:22 AM, \"Richard Huxton\" <[email protected]> wrote: Well, I'm prepared to swap Luke *TWO* $1000 systems for one $80,000 system if he's got one going :-) Finally, a game worth playing! Except it’s backward – I’ll show you 80 $1,000 systems performing 80 times faster than one $80,000 system.Now you wouldn't happen to be selling a system that would enable this for postgres, now would ya ? On your proposition – I don’t have any $80,000 systems for trade, do you? - Luke",
"msg_date": "Fri, 18 Nov 2005 08:47:43 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Dave,\n\n\nOn 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> \n> Now there's an interesting line drawn in the sand. I presume you have\n> numbers to back this up ?\n> \n> This should draw some interesting posts.\n\nOK, here we go:\n\nThe $1,000 system (System A):\n\n- I bought 16 of these in 2003 for $1,200 each. They have Intel or Asus\nmotherboards, Intel P4 3.0GHz CPUs with an 800MHz FSB. They have a system\ndrive and two RAID0 SATA drives, the Western Digital 74GB Raptor (10K RPM).\nThey have 1GB of RAM.\n\n* A test of write and read performance on the RAID0:\n\n> [llonergan@kite4 raid0]$ time dd if=/dev/zero of=bigfile bs=8k count=250000\n> 250000+0 records in\n> 250000+0 records out\n> \n> real 0m17.453s\n> user 0m0.249s\n> sys 0m10.246s\n\n> [llonergan@kite4 raid0]$ time dd if=bigfile of=/dev/null bs=8k\n> 250000+0 records in\n> 250000+0 records out\n> \n> real 0m18.930s\n> user 0m0.130s\n> sys 0m3.590s\n\n> So, the write performance is 114MB/s and read performance is 106MB/s.\n\nThe $6,000 system (System B):\n\n* I just bought 5 of these systems for $6,000 each. They are dual Opteron\nsystems with 8GB of RAM and 2x 250 model CPUs, which are close to the\nfastest. They have the new 3Ware 9550SX SATA RAID adapters coupled to\nWestern Digital 400GB RE2 model hard drives. They are organized as a RAID5.\n\n* A test of write and read performance on the RAID5:\n\n> [root@modena2 dbfast1]# time dd if=/dev/zero of=bigfile bs=8k count=2000000\n> 2000000+0 records in\n> 2000000+0 records out\n> \n> real 0m51.441s\n> user 0m0.288s\n> sys 0m29.119s\n> \n> [root@modena2 dbfast1]# time dd if=bigfile of=/dev/null bs=8k\n> 2000000+0 records in\n> 2000000+0 records out\n> \n> real 0m39.605s\n> user 0m0.244s\n> sys 0m19.207s\n> \n> So, the write performance is 314MB/s and read performance is 404MB/s (!) This\n> is the fastest I¹ve seen 8 disk drives perform.\n> \nSo, the question is: which of these systems (A or B) can scan a large table\nfaster using non-MPP postgres? How much faster would you wager?\n\nSend your answer, and I¹ll post the result.\n\nRegards,\n\n- Luke\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nDave,\n\n\nOn 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> \n> Now there's an interesting line drawn in the sand. I presume you have \n> numbers to back this up ?\n> \n> This should draw some interesting posts.\n\nOK, here we go:\n\nThe $1,000 system (System A):\n\n- I bought 16 of these in 2003 for $1,200 each. They have Intel or Asus motherboards, Intel P4 3.0GHz CPUs with an 800MHz FSB. They have a system drive and two RAID0 SATA drives, the Western Digital 74GB Raptor (10K RPM). They have 1GB of RAM.\n\nA test of write and read performance on the RAID0:\n\n[llonergan@kite4 raid0]$ time dd if=/dev/zero of=bigfile bs=8k count=250000\n250000+0 records in\n250000+0 records out\n\nreal 0m17.453s\nuser 0m0.249s\nsys 0m10.246s\n\n[llonergan@kite4 raid0]$ time dd if=bigfile of=/dev/null bs=8k\n250000+0 records in\n250000+0 records out\n\nreal 0m18.930s\nuser 0m0.130s\nsys 0m3.590s\n\nSo, the write performance is 114MB/s and read performance is 106MB/s.\n\nThe $6,000 system (System B):\n\nI just bought 5 of these systems for $6,000 each. They are dual Opteron systems with 8GB of RAM and 2x 250 model CPUs, which are close to the fastest. They have the new 3Ware 9550SX SATA RAID adapters coupled to Western Digital 400GB RE2 model hard drives. 
"msg_date": "Fri, 18 Nov 2005 06:04:24 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alan,\n>\n> On 11/18/05 5:41 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> \n>> That's interesting, as I occasionally see more than 110MB/s of\n>> postgresql IO on our system. I'm using a 32KB block size, which has\n>> been a huge win in performance for our usage patterns. 300GB database\n>> with a lot of turnover. A vacuum analyze now takes about 3 hours, which\n>> is much shorter than before. Postgresql 8.1, dual opteron, 8GB memory,\n>> Linux 2.6.11, FC drives.\n>> \n>\n> 300GB / 3 hours = 27MB/s.\n> \nThat's 3 hours under load, with 80 compute clients beating on the \ndatabase at the same time. We have the stats turned way up, so the \nanalyze tends to read a big chunk of the tables a second time as \nwell. We typically don't have three hours a day of idle time.\n\n-- Alan\n",
"msg_date": "Fri, 18 Nov 2005 09:46:56 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "While I agree with you in principle that pg becomes CPU bound \nrelatively easily compared to other DB products (at ~110-120MBps \naccording to a recent thread), there's a bit of hyperbole in your post.\n\na. There's a big difference between the worst performing 1C x86 ISA \nCPU available and the best performing 2C one (IIRC, that's the \n2.4GHz, 1MB L2 cache AMDx2 4800+ as of this writing)\n\nb. Two 2C CPU's vs one 1C CPU means that a pg process will almost \nnever be waiting on other non pg processes. It also means that 3-4 \npg processes, CPU bound or not, can execute in parallel. Not an \noption with one 1C CPU.\n\nc. Mainboards with support for multiple CPUs and lots' of RAM are \n_not_ the cheap ones.\n\nd. No one should ever use RAID 0 for valuable data. Ever. So at \nthe least you need 4 HD's for a RAID 10 set (RAID 5 is not a good \noption unless write performance is unimportant. 4HD RAID 5 is \nparticularly not a good option.)\n\ne. The server usually needs to talk to things over a network \nconnection. Often performance here matters. Mainboards with 2 1GbE \nNICs and/or PCI-X (or PCI-E) slots for 10GbE cards are not the cheap ones.\n\nf. Trash HDs mean poor IO performance and lower reliability. While \nTOTL 15Krpm 4Gb FC HDs are usually overkill (Not always. It depends \non context.),\nyou at least want SATA II HDs with NCQ or TCQ support. And you want \nthem to have a decent media warranty- preferably a 5 year one if you \ncan get it. Again, these are not the cheapest HD's available.\n\ng. Throughput limitations say nothing about latency \nconsiderations. OLTP-like systems _want_ HD spindles. AMAP. Even \nnon OLTP-like systems need a fair number of spindles to optimize HD \nIO: dedicated WAL set, multiple dedicated DB sets, dedicated OS and \nswap space set, etc, etc. At 50MBps ASTR, you need 16 HD's operating \nin parallel to saturate the bandwidth of a PCI-X channel.\nThat's ~8 independent pg tasks (queries using different tables, \ndedicated WAL IO, etc) running in parallel. Regardless of application domain.\n\nh. Decent RAID controllers and HBAs are not cheap either. Even SW \nRAID benefits from having a big dedicated RAM buffer to talk to.\n\nWhile the above may not cost you $80K, it sure isn't costing you $1K either.\nMaybe ~$15-$20K, but not $1K.\n\nRon\n\n\nAt 01:07 AM 11/18/2005, Luke Lonergan wrote:\n>Greg,\n>\n>\n>On 11/17/05 9:17 PM, \"Greg Stark\" <[email protected]> wrote:\n>\n> > Ok, a more productive point: it's not really the size of the database that\n> > controls whether you're I/O bound or CPU bound. It's the available I/O\n> > bandwidth versus your CPU speed.\n>\n>Postgres + Any x86 CPU from 2.4GHz up to Opteron 280 is CPU bound after\n>110MB/s of I/O. This is true of Postgres 7.4, 8.0 and 8.1.\n>\n>A $1,000 system with one CPU and two SATA disks in a software RAID0 will\n>perform exactly the same as a $80,000 system with 8 dual core CPUs and the\n>world's best SCSI RAID hardware on a large database for decision support\n>(what the poster asked about).\n>\n>Regards,\n>\n>- Luke\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\n\n",
"msg_date": "Fri, 18 Nov 2005 10:00:56 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "Dave,\n\nOn 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> \n> Now there's an interesting line drawn in the sand. I presume you have\n> numbers to back this up ?\n> \n> This should draw some interesting posts.\n\nPart 2: The answer\n\nSystem A:\n> This system is running RedHat 3 Update 4, with a Fedora 2.6.10 Linux kernel.\n> \n> On a single table with 15 columns (the Bizgres IVP) at a size double memory\n> (2.12GB), Postgres 8.0.3 with Bizgres enhancements takes 32 seconds to scan\n> the table: that¹s 66 MB/s. Not the efficiency I¹d hope from the onboard SATA\n> controller that I¹d like, I would have expected to get 85% of the 100MB/s raw\n> read performance.\n> \n> So that¹s $1,200 / 66 MB/s (without adjusting for 2003 price versus now) =\n> 18.2 $/MB/s\n> \n> Raw data:\n> [llonergan@kite4 IVP]$ cat scan.sh\n> #!/bin/bash\n> \n> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n> [llonergan@kite4 IVP]$ cat sysout1\n> count \n> ----------\n> 10000000\n> (1 row)\n> \n> \n> real 0m32.565s\n> user 0m0.002s\n> sys 0m0.003s\n> \n> Size of the table data:\n> [llonergan@kite4 IVP]$ du -sk dgtestdb/base\n> 2121648 dgtestdb/base\n> \nSystem B:\n> This system is running an XFS filesystem, and has been tuned to use very large\n> (16MB) readahead. It¹s running the Centos 4.1 distro, which uses a Linux\n> 2.6.9 kernel.\n> \n> Same test as above, but with 17GB of data takes 69.7 seconds to scan (!)\n> That¹s 244.2MB/s, which is obviously double my earlier point of 110-120MB/s.\n> This system is running with a 16MB Linux readahead setting, let¹s try it with\n> the default (I think) setting of 256KB AHA! Now we get 171.4 seconds or\n> 99.3MB/s.\n> \n> So, using the tuned setting of ³blockdev setra 16384² we get $6,000 / 244MB/s\n> = 24.6 $/MB/s\n> If we use the default Linux setting it¹s 2.5x worse.\n> \n> Raw data:\n> [llonergan@modena2 IVP]$ cat scan.sh\n> #!/bin/bash\n> \n> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n> [llonergan@modena2 IVP]$ cat sysout3\n> count \n> ----------\n> 80000000\n> (1 row)\n> \n> \n> real 1m9.875s\n> user 0m0.000s\n> sys 0m0.004s\n> [llonergan@modena2 IVP]$ !du\n> du -sk dgtestdb/base\n> 17021260 dgtestdb/base\n\nSummary:\n\n<cough, cough> OK you can get more I/O bandwidth out of the current I/O\npath for sequential scan if you tune the filesystem for large readahead.\nThis is a cheap alternative to overhauling the executor to use asynch I/O.\n\nStill, there is a CPU limit here this is not I/O bound, it is CPU limited\nas evidenced by the sensitivity to readahead settings. If the filesystem\ncould do 1GB/s, you wouldn¹t go any faster than 244MB/s.\n\n- Luke\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nDave,\n\nOn 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> \n> Now there's an interesting line drawn in the sand. I presume you have \n> numbers to back this up ?\n> \n> This should draw some interesting posts.\n\nPart 2: The answer\n\nSystem A:\nThis system is running RedHat 3 Update 4, with a Fedora 2.6.10 Linux kernel.\n\nOn a single table with 15 columns (the Bizgres IVP) at a size double memory (2.12GB), Postgres 8.0.3 with Bizgres enhancements takes 32 seconds to scan the table: that’s 66 MB/s. 
Not the efficiency I’d hope from the onboard SATA controller that I’d like, I would have expected to get 85% of the 100MB/s raw read performance.\n\nSo that’s $1,200 / 66 MB/s (without adjusting for 2003 price versus now) = 18.2 $/MB/s\n\nRaw data:\n[llonergan@kite4 IVP]$ cat scan.sh \n#!/bin/bash\n\ntime psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n[llonergan@kite4 IVP]$ cat sysout1\n count \n----------\n 10000000\n(1 row)\n\n\nreal 0m32.565s\nuser 0m0.002s\nsys 0m0.003s\n\nSize of the table data:\n[llonergan@kite4 IVP]$ du -sk dgtestdb/base\n2121648 dgtestdb/base\n\nSystem B:\nThis system is running an XFS filesystem, and has been tuned to use very large (16MB) readahead. It’s running the Centos 4.1 distro, which uses a Linux 2.6.9 kernel.\n\nSame test as above, but with 17GB of data takes 69.7 seconds to scan (!) That’s 244.2MB/s, which is obviously double my earlier point of 110-120MB/s. This system is running with a 16MB Linux readahead setting, let’s try it with the default (I think) setting of 256KB – AHA! Now we get 171.4 seconds or 99.3MB/s.\n\nSo, using the tuned setting of “blockdev —setra 16384” we get $6,000 / 244MB/s = 24.6 $/MB/s\nIf we use the default Linux setting it’s 2.5x worse.\n\nRaw data:\n[llonergan@modena2 IVP]$ cat scan.sh \n#!/bin/bash\n\ntime psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n[llonergan@modena2 IVP]$ cat sysout3\n count \n----------\n 80000000\n(1 row)\n\n\nreal 1m9.875s\nuser 0m0.000s\nsys 0m0.004s\n[llonergan@modena2 IVP]$ !du\ndu -sk dgtestdb/base\n17021260 dgtestdb/base\n\nSummary:\n\n<cough, cough> OK – you can get more I/O bandwidth out of the current I/O path for sequential scan if you tune the filesystem for large readahead. This is a cheap alternative to overhauling the executor to use asynch I/O.\n\nStill, there is a CPU limit here – this is not I/O bound, it is CPU limited as evidenced by the sensitivity to readahead settings. If the filesystem could do 1GB/s, you wouldn’t go any faster than 244MB/s.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 07:13:42 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
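The readahead tuning in the benchmark above can be reproduced with a short script along these lines; the device, database and table names are placeholders, and the cache-dropping step assumes a 2.6.16+ kernel (older kernels need a remount or reboot between runs).

```bash
#!/bin/bash
# Sketch: set Linux device readahead and time a sequential scan, as in the
# benchmark above. /dev/sda, dgtestdb and ivp.bigtable1 are placeholders.
DEV=/dev/sda
DB=dgtestdb
TABLE=ivp.bigtable1

echo "current readahead: $(blockdev --getra $DEV) sectors"
blockdev --setra 16384 "$DEV"     # readahead is given in 512-byte sectors

# Make sure the scan hits the disk rather than the page cache
# (drop_caches exists in 2.6.16 and later kernels).
sync
echo 3 > /proc/sys/vm/drop_caches

time psql -c "select count(*) from $TABLE" "$DB"
```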
{
"msg_contents": "Luke,\n\nInteresting numbers. I'm a little concerned about the use of blockdev \n�setra 16384. If I understand this correctly it assumes that the \ntable is contiguous on the disk does it not ?\n\n\nDave\nOn 18-Nov-05, at 10:13 AM, Luke Lonergan wrote:\n\n> Dave,\n>\n> On 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> >\n> > Now there's an interesting line drawn in the sand. I presume you \n> have\n> > numbers to back this up ?\n> >\n> > This should draw some interesting posts.\n>\n> Part 2: The answer\n>\n> System A:\n>> This system is running RedHat 3 Update 4, with a Fedora 2.6.10 \n>> Linux kernel.\n>>\n>> On a single table with 15 columns (the Bizgres IVP) at a size \n>> double memory (2.12GB), Postgres 8.0.3 with Bizgres enhancements \n>> takes 32 seconds to scan the table: that�s 66 MB/s. Not the \n>> efficiency I�d hope from the onboard SATA controller that I�d \n>> like, I would have expected to get 85% of the 100MB/s raw read \n>> performance.\n>>\n>> So that�s $1,200 / 66 MB/s (without adjusting for 2003 price \n>> versus now) = 18.2 $/MB/s\n>>\n>> Raw data:\n>> [llonergan@kite4 IVP]$ cat scan.sh\n>> #!/bin/bash\n>>\n>> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n>> [llonergan@kite4 IVP]$ cat sysout1\n>> count\n>> ----------\n>> 10000000\n>> (1 row)\n>>\n>>\n>> real 0m32.565s\n>> user 0m0.002s\n>> sys 0m0.003s\n>>\n>> Size of the table data:\n>> [llonergan@kite4 IVP]$ du -sk dgtestdb/base\n>> 2121648 dgtestdb/base\n>>\n> System B:\n>> This system is running an XFS filesystem, and has been tuned to \n>> use very large (16MB) readahead. It�s running the Centos 4.1 \n>> distro, which uses a Linux 2.6.9 kernel.\n>>\n>> Same test as above, but with 17GB of data takes 69.7 seconds to \n>> scan (!) That�s 244.2MB/s, which is obviously double my earlier \n>> point of 110-120MB/s. This system is running with a 16MB Linux \n>> readahead setting, let�s try it with the default (I think) setting \n>> of 256KB � AHA! Now we get 171.4 seconds or 99.3MB/s.\n>>\n>> So, using the tuned setting of �blockdev �setra 16384� we get \n>> $6,000 / 244MB/s = 24.6 $/MB/s\n>> If we use the default Linux setting it�s 2.5x worse.\n>>\n>> Raw data:\n>> [llonergan@modena2 IVP]$ cat scan.sh\n>> #!/bin/bash\n>>\n>> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n>> [llonergan@modena2 IVP]$ cat sysout3\n>> count\n>> ----------\n>> 80000000\n>> (1 row)\n>>\n>>\n>> real 1m9.875s\n>> user 0m0.000s\n>> sys 0m0.004s\n>> [llonergan@modena2 IVP]$ !du\n>> du -sk dgtestdb/base\n>> 17021260 dgtestdb/base\n>\n> Summary:\n>\n> <cough, cough> OK � you can get more I/O bandwidth out of the \n> current I/O path for sequential scan if you tune the filesystem for \n> large readahead. This is a cheap alternative to overhauling the \n> executor to use asynch I/O.\n>\n> Still, there is a CPU limit here � this is not I/O bound, it is CPU \n> limited as evidenced by the sensitivity to readahead settings. If \n> the filesystem could do 1GB/s, you wouldn�t go any faster than \n> 244MB/s.\n>\n> - Luke\n\n\nLuke,Interesting numbers. I'm a little concerned about the use of blockdev —setra 16384. If I understand this correctly it assumes that the table is contiguous on the disk does it not ?DaveOn 18-Nov-05, at 10:13 AM, Luke Lonergan wrote: Dave, On 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote: > > Now there's an interesting line drawn in the sand. I presume you have > numbers to back this up ? > > This should draw some interesting posts. 
Part 2: The answer System A: This system is running RedHat 3 Update 4, with a Fedora 2.6.10 Linux kernel. On a single table with 15 columns (the Bizgres IVP) at a size double memory (2.12GB), Postgres 8.0.3 with Bizgres enhancements takes 32 seconds to scan the table: that’s 66 MB/s. Not the efficiency I’d hope from the onboard SATA controller that I’d like, I would have expected to get 85% of the 100MB/s raw read performance. So that’s $1,200 / 66 MB/s (without adjusting for 2003 price versus now) = 18.2 $/MB/s Raw data: [llonergan@kite4 IVP]$ cat scan.sh #!/bin/bash time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb [llonergan@kite4 IVP]$ cat sysout1 count ---------- 10000000 (1 row) real 0m32.565s user 0m0.002s sys 0m0.003s Size of the table data: [llonergan@kite4 IVP]$ du -sk dgtestdb/base 2121648 dgtestdb/base System B: This system is running an XFS filesystem, and has been tuned to use very large (16MB) readahead. It’s running the Centos 4.1 distro, which uses a Linux 2.6.9 kernel. Same test as above, but with 17GB of data takes 69.7 seconds to scan (!) That’s 244.2MB/s, which is obviously double my earlier point of 110-120MB/s. This system is running with a 16MB Linux readahead setting, let’s try it with the default (I think) setting of 256KB – AHA! Now we get 171.4 seconds or 99.3MB/s. So, using the tuned setting of “blockdev —setra 16384” we get $6,000 / 244MB/s = 24.6 $/MB/s If we use the default Linux setting it’s 2.5x worse. Raw data: [llonergan@modena2 IVP]$ cat scan.sh #!/bin/bash time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb [llonergan@modena2 IVP]$ cat sysout3 count ---------- 80000000 (1 row) real 1m9.875s user 0m0.000s sys 0m0.004s [llonergan@modena2 IVP]$ !du du -sk dgtestdb/base 17021260 dgtestdb/base Summary: <cough, cough> OK – you can get more I/O bandwidth out of the current I/O path for sequential scan if you tune the filesystem for large readahead. This is a cheap alternative to overhauling the executor to use asynch I/O. Still, there is a CPU limit here – this is not I/O bound, it is CPU limited as evidenced by the sensitivity to readahead settings. If the filesystem could do 1GB/s, you wouldn’t go any faster than 244MB/s. - Luke",
"msg_date": "Fri, 18 Nov 2005 10:25:52 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nOn 11/18/05 6:46 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> That's 3 hours under load, with 80 compute clients beating on the\n> database at the same time. We have the stats turned way up, so the\n> analyze tends to read a big chunk of the tables a second time as\n> well. We typically don't have three hours a day of idle time.\n\nSo I guess you¹re saying you don¹t know what your I/O rate is?\n\n- Luke\n\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nAlan,\n\nOn 11/18/05 6:46 AM, \"Alan Stange\" <[email protected]> wrote:\n\nThat's 3 hours under load, with 80 compute clients beating on the\ndatabase at the same time. We have the stats turned way up, so the\nanalyze tends to read a big chunk of the tables a second time as\nwell. We typically don't have three hours a day of idle time.\n\nSo I guess you’re saying you don’t know what your I/O rate is?\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 07:27:42 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Dave,\n\nOn 11/18/05 7:25 AM, \"Dave Cramer\" <[email protected]> wrote:\n\n> Luke,\n> \n> Interesting numbers. I'm a little concerned about the use of blockdev setra\n> 16384. If I understand this correctly it assumes that the table is contiguous\n> on the disk does it not ?\n\nFor optimum performance, yes it does. Remember that the poster is asking\nabout a 5TB warehouse. Decision support applications deal with large tables\nand sequential scans a lot, and the data is generally contiguous on disk.\nIf delete gaps are there, they will generally vacuum them away.\n\n- Luke\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nDave,\n\nOn 11/18/05 7:25 AM, \"Dave Cramer\" <[email protected]> wrote:\n\nLuke,\n\nInteresting numbers. I'm a little concerned about the use of blockdev —setra 16384. If I understand this correctly it assumes that the table is contiguous on the disk does it not ?\n\nFor optimum performance, yes it does. Remember that the poster is asking about a 5TB warehouse. Decision support applications deal with large tables and sequential scans a lot, and the data is generally contiguous on disk. If delete gaps are there, they will generally vacuum them away.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 07:30:31 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
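Whether a table's heap file really is contiguous, which is what Dave is asking about, can be checked directly on the filesystem. A minimal sketch, assuming a superuser connection and the filefrag tool (xfs_bmap gives similar output on XFS); database and table names are placeholders.

```bash
#!/bin/bash
# Sketch: find a table's underlying data file and report its fragmentation.
# dgtestdb and bigtable1 are placeholders; PGDATA must point at the cluster.
# Relations larger than 1GB are split into segment files (.1, .2, ...).
DB=dgtestdb
TABLE=bigtable1

DBOID=$(psql -At -c "select oid from pg_database where datname='$DB'" "$DB")
RELFILE=$(psql -At -c "select relfilenode from pg_class where relname='$TABLE'" "$DB")

# A low extent count relative to the file size means the data is close to
# contiguous on disk.
filefrag -v "$PGDATA/base/$DBOID/$RELFILE"
```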
{
"msg_contents": "On Nov 18, 2005, at 08:00, Dave Cramer wrote:\n\n>> A $1,000 system with one CPU and two SATA disks in a software RAID0 \n>> will\n>> perform exactly the same as a $80,000 system with 8 dual core CPUs \n>> and the\n>> world's best SCSI RAID hardware on a large database for decision \n>> support\n>> (what the poster asked about).\n>\n> Now there's an interesting line drawn in the sand. I presume you have \n> numbers to back this up ?\n> This should draw some interesting posts.\n\nThere is some truth to it. For an app I'm currently running (full-text \nsearch using tsearch2 on ~100MB of data) on:\n\nDev System:\nAsus bare-bones bookshelf case/mobo\n3GHz P4 w/ HT\n800MHz memory Bus\nFedora Core 3 (nightly update)\n1GB RAM\n1 SATA Seagate disk (7200RPM, 8MB Cache)\n$800\nworst-case query: 7.2 seconds\n\nnow, the machine I'm deploying to:\n\nDell SomthingOrOther\n(4) 2.4GHz Xeons\n533MHz memory bus\nRedHat Enterprise 3.6\n1GB RAM\n(5) 150000 RPM Ultra SCSI 320 on an Adaptec RAID 5 controller\n > $10000\nsame worst-case query: 9.6 seconds\n\nNow it's not apples-to-apples. There's a kernel 2.4 vs. 2.6 difference \nand the memory bus is much faster and I'm not sure what kind of context \nswitching hit you get with the Xeon MP memory controller. On a \nprevious postgresql app I did I ran nearly identically spec'ed machines \nexcept for the memory bus and saw about a 30% boost in performance just \nwith the 800MHz bus. I imagine the Opteron bus does even better.\n\nSo the small machine is probably slower on disk but makes up for it in \nsingle-threaded access to CPU and memory speed. But if this app were to \nbe scaled it would make much more sense to cluster several $800 \nmachines than it would to buy 'big-iron'.\n\n-Bill\n-----\nBill McGonigle, Owner Work: 603.448.4440\nBFC Computing, LLC Home: 603.448.1668\[email protected] Mobile: 603.252.2606\nhttp://www.bfccomputing.com/ Pager: 603.442.1833\nJabber: [email protected] Text: [email protected]\nBlog: http://blog.bfccomputing.com/\n\n",
"msg_date": "Fri, 18 Nov 2005 10:55:19 -0500",
"msg_from": "Bill McGonigle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Nov 18, 2005, at 1:07 AM, Luke Lonergan wrote:\n\n> A $1,000 system with one CPU and two SATA disks in a software RAID0 \n> will\n> perform exactly the same as a $80,000 system with 8 dual core CPUs \n> and the\n> world's best SCSI RAID hardware on a large database for decision \n> support\n> (what the poster asked about).\n\nHahahahahahahahahahahahaha! Whooo... needed to fall out of my chair \nlaughing this morning.\n\nI can tell you from direct personal experience that you're just plain \nwrong.\n\nI've had to move my primary DB server from a dual P3 1GHz with 4-disk \nRAID10 SCSI, to Dual P3 2GHz with 14-disk RAID10 and faster drives, \nto Dual Opteron 2GHz with 8-disk RAID10 and even faster disks to keep \nup with my load on a 60+ GB database. The Dual opteron system has \njust a little bit of extra capacity if I offload some of the \nreporting operations to a replicated copy (via slony1). If I run all \nthe queries on the one DB it can't keep up.\n\nOne most telling point about the difference in speed is that the 14- \ndisk array system cannot keep up with the replication being generated \nby the dual opteron, even when it is no doing any other queries of \nits own. The I/O system just ain't fast enough.\n\n",
"msg_date": "Fri, 18 Nov 2005 11:05:16 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:\n\n> Still, there is a CPU limit here � this is not I/O bound, it is CPU \n> limited as evidenced by the sensitivity to readahead settings. If \n> the filesystem could do 1GB/s, you wouldn�t go any faster than \n> 244MB/s.\n\nYeah, and mysql would probably be faster on your trivial queries. \nTry concurrent large joins and updates and see which system is faster.\n\n\nOn Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:Still, there is a CPU limit here – this is not I/O bound, it is CPU limited as evidenced by the sensitivity to readahead settings. If the filesystem could do 1GB/s, you wouldn’t go any faster than 244MB/s.Yeah, and mysql would probably be faster on your trivial queries. Try concurrent large joins and updates and see which system is faster.",
"msg_date": "Fri, 18 Nov 2005 11:07:04 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alan,\n>\n> On 11/18/05 6:46 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> That's 3 hours under load, with 80 compute clients beating on the\n> database at the same time. We have the stats turned way up, so the\n> analyze tends to read a big chunk of the tables a second time as\n> well. We typically don't have three hours a day of idle time.\n>\n>\n> So I guess you�re saying you don�t know what your I/O rate is?\nNo, I'm say *you* don't know what my IO rate is.\n\nI told you in my initial post that I was observing numbers in excess of \nwhat you claiming, but you seemed to think I didn't know how to measure \nan IO rate.\n\nI should note too that our system uses about 20% of a single cpu when \nperforming a table scan at >100MB/s of IO. I think you claimed the \nsystem would be cpu bound at this low IO rate.\n\nCheers,\n\n-- Alan\n\n",
"msg_date": "Fri, 18 Nov 2005 11:13:44 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Vivek,\n\nOn 11/18/05 8:07 AM, \"Vivek Khera\" <[email protected]> wrote:\n\n> \n> On Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:\n> \n>> Still, there is a CPU limit here this is not I/O bound, it is CPU limited\n>> as evidenced by the sensitivity to readahead settings. If the filesystem\n>> could do 1GB/s, you wouldn¹t go any faster than 244MB/s.\n> \n> Yeah, and mysql would probably be faster on your trivial queries. Try\n> concurrent large joins and updates and see which system is faster.\n\nThat¹s what we do to make a living. And it¹s Oracle that a lot faster\nbecause they implemented a much tighter, optimized I/O path to disk than\nPostgres.\n\nSince you asked, we bought the 5 systems as a cluster and with Bizgres MPP\nwe get close to 400MB/s per machine on complex queries.\n\n- Luke \n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nVivek,\n\nOn 11/18/05 8:07 AM, \"Vivek Khera\" <[email protected]> wrote:\n\n\nOn Nov 18, 2005, at 10:13 AM, Luke Lonergan wrote:\n\nStill, there is a CPU limit here – this is not I/O bound, it is CPU limited as evidenced by the sensitivity to readahead settings. If the filesystem could do 1GB/s, you wouldn’t go any faster than 244MB/s.\n\nYeah, and mysql would probably be faster on your trivial queries. Try concurrent large joins and updates and see which system is faster.\n\nThat’s what we do to make a living. And it’s Oracle that a lot faster because they implemented a much tighter, optimized I/O path to disk than Postgres.\n\nSince you asked, we bought the 5 systems as a cluster – and with Bizgres MPP we get close to 400MB/s per machine on complex queries.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 08:16:39 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nOn 11/18/05 8:13 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> I told you in my initial post that I was observing numbers in excess of\n> what you claiming, but you seemed to think I didn't know how to measure\n> an IO rate.\n> \nProve me wrong, post your data.\n\n> I should note too that our system uses about 20% of a single cpu when\n> performing a table scan at >100MB/s of IO. I think you claimed the\n> system would be cpu bound at this low IO rate.\n\nSee above.\n\n- Luke\n> \n\n\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nAlan,\n\nOn 11/18/05 8:13 AM, \"Alan Stange\" <[email protected]> wrote:\n\nI told you in my initial post that I was observing numbers in excess of\nwhat you claiming, but you seemed to think I didn't know how to measure\nan IO rate.\n\nProve me wrong, post your data.\n\nI should note too that our system uses about 20% of a single cpu when\nperforming a table scan at >100MB/s of IO. I think you claimed the\nsystem would be cpu bound at this low IO rate.\n\nSee above.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 08:17:48 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Vivek, \n\nOn 11/18/05 8:05 AM, \"Vivek Khera\" <[email protected]> wrote:\n\n> I can tell you from direct personal experience that you're just plain\n> wrong.\n> \n> up with my load on a 60+ GB database. The Dual opteron system has\n\nI¹m always surprised by what passes for a large database. The poster is\ntalking about 5,000GB, or almost 100 times the data you have.\n\nPost your I/O numbers on sequential scan. Sequential scan is critical for\nDecision Support / Data Warehousing.\n\n- Luke \n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nVivek, \n\nOn 11/18/05 8:05 AM, \"Vivek Khera\" <[email protected]> wrote:\n\nI can tell you from direct personal experience that you're just plain \nwrong.\n\nup with my load on a 60+ GB database. The Dual opteron system has \n\nI’m always surprised by what passes for a large database. The poster is talking about 5,000GB, or almost 100 times the data you have.\n\nPost your I/O numbers on sequential scan. Sequential scan is critical for Decision Support / Data Warehousing.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 08:20:11 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Ok - so I ran the same test on my system and get a total speed of\n113MB/sec. Why is this? Why is the system so limited to around just\n110MB/sec? I tuned read ahead up a bit, and my results improve a\nbit..\n\nAlex\n\n\nOn 11/18/05, Luke Lonergan <[email protected]> wrote:\n> Dave,\n>\n> On 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> >\n> > Now there's an interesting line drawn in the sand. I presume you have\n> > numbers to back this up ?\n> >\n> > This should draw some interesting posts.\n>\n> Part 2: The answer\n>\n> System A:\n>\n> This system is running RedHat 3 Update 4, with a Fedora 2.6.10 Linux kernel.\n>\n> On a single table with 15 columns (the Bizgres IVP) at a size double memory\n> (2.12GB), Postgres 8.0.3 with Bizgres enhancements takes 32 seconds to scan\n> the table: that's 66 MB/s. Not the efficiency I'd hope from the onboard\n> SATA controller that I'd like, I would have expected to get 85% of the\n> 100MB/s raw read performance.\n>\n> So that's $1,200 / 66 MB/s (without adjusting for 2003 price versus now) =\n> 18.2 $/MB/s\n>\n> Raw data:\n> [llonergan@kite4 IVP]$ cat scan.sh\n> #!/bin/bash\n>\n> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n> [llonergan@kite4 IVP]$ cat sysout1\n> count\n> ----------\n> 10000000\n> (1 row)\n>\n>\n> real 0m32.565s\n> user 0m0.002s\n> sys 0m0.003s\n>\n> Size of the table data:\n> [llonergan@kite4 IVP]$ du -sk dgtestdb/base\n> 2121648 dgtestdb/base\n>\n> System B:\n>\n> This system is running an XFS filesystem, and has been tuned to use very\n> large (16MB) readahead. It's running the Centos 4.1 distro, which uses a\n> Linux 2.6.9 kernel.\n>\n> Same test as above, but with 17GB of data takes 69.7 seconds to scan (!)\n> That's 244.2MB/s, which is obviously double my earlier point of 110-120MB/s.\n> This system is running with a 16MB Linux readahead setting, let's try it\n> with the default (I think) setting of 256KB – AHA! Now we get 171.4 seconds\n> or 99.3MB/s.\n>\n> So, using the tuned setting of \"blockdev —setra 16384\" we get $6,000 /\n> 244MB/s = 24.6 $/MB/s\n> If we use the default Linux setting it's 2.5x worse.\n>\n> Raw data:\n> [llonergan@modena2 IVP]$ cat scan.sh\n> #!/bin/bash\n>\n> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n> [llonergan@modena2 IVP]$ cat sysout3\n> count\n> ----------\n> 80000000\n> (1 row)\n>\n>\n> real 1m9.875s\n> user 0m0.000s\n> sys 0m0.004s\n> [llonergan@modena2 IVP]$ !du\n> du -sk dgtestdb/base\n> 17021260 dgtestdb/base\n>\n> Summary:\n>\n> <cough, cough> OK – you can get more I/O bandwidth out of the current I/O\n> path for sequential scan if you tune the filesystem for large readahead.\n> This is a cheap alternative to overhauling the executor to use asynch I/O.\n>\n> Still, there is a CPU limit here – this is not I/O bound, it is CPU limited\n> as evidenced by the sensitivity to readahead settings. If the filesystem\n> could do 1GB/s, you wouldn't go any faster than 244MB/s.\n>\n> - Luke\n",
"msg_date": "Fri, 18 Nov 2005 11:28:40 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Bill,\n\nOn 11/18/05 7:55 AM, \"Bill McGonigle\" <[email protected]> wrote:\n> \n> There is some truth to it. For an app I'm currently running (full-text\n> search using tsearch2 on ~100MB of data) on:\n\nDo you mean 100GB? Sounds like you are more like a decision support\n/warehousing application.\n \n> Dev System:\n> Asus bare-bones bookshelf case/mobo\n> 3GHz P4 w/ HT\n> 800MHz memory Bus\n> Fedora Core 3 (nightly update)\n> 1GB RAM\n> 1 SATA Seagate disk (7200RPM, 8MB Cache)\n> $800\n> worst-case query: 7.2 seconds\n\nAbout the same machine I posted results for, except I had two faster disks.\n\n> now, the machine I'm deploying to:\n> \n> Dell SomthingOrOther\n> (4) 2.4GHz Xeons\n> 533MHz memory bus\n> RedHat Enterprise 3.6\n> 1GB RAM\n> (5) 150000 RPM Ultra SCSI 320 on an Adaptec RAID 5 controller\n>> $10000\n> same worst-case query: 9.6 seconds\n\nYour problem here is the HW RAID controller - if you dump it and use the\nonboard SCSI channels and Linux RAID you will see a jump from 40MB/s to\nabout 220MB/s in read performance and from 20MB/s to 110MB/s write\nperformance. It will use less CPU too.\n \n> Now it's not apples-to-apples. There's a kernel 2.4 vs. 2.6 difference\n> and the memory bus is much faster and I'm not sure what kind of context\n> switching hit you get with the Xeon MP memory controller. On a\n> previous postgresql app I did I ran nearly identically spec'ed machines\n> except for the memory bus and saw about a 30% boost in performance just\n> with the 800MHz bus. I imagine the Opteron bus does even better.\n\nMemory bandwidth is so high on both that it's not a factor. Context\nswitching / memory bus contention isn't either.\n \n> So the small machine is probably slower on disk but makes up for it in\n> single-threaded access to CPU and memory speed. But if this app were to\n> be scaled it would make much more sense to cluster several $800\n> machines than it would to buy 'big-iron'.\n\nYes it does - by a lot too. Also, having a multiprocessing executor gets\nall of each machine by having multiple CPUs scan simultaneously.\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 08:31:00 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
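Luke's suggestion to drop the hardware RAID 5 controller in favour of Linux software RAID would look roughly like the sketch below. The device names, chunk size and mount point are assumptions, and the mdadm command destroys whatever is on the listed disks, so this is only an illustration of the incantation, not a recommendation for any particular box.

```bash
#!/bin/bash
# Sketch: build a 4-disk Linux software RAID 10 array, format it with XFS
# and do a quick streaming-read check. All device names are placeholders;
# this wipes the member disks.
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde

mkfs.xfs /dev/md0
mkdir -p /data/pgsql
mount -o noatime /dev/md0 /data/pgsql

# Raw sequential read baseline before handing the volume to Postgres
dd if=/dev/md0 of=/dev/null bs=1M count=4096
```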
{
"msg_contents": "Alex,\n\nOn 11/18/05 8:28 AM, \"Alex Turner\" <[email protected]> wrote:\n\n> Ok - so I ran the same test on my system and get a total speed of\n113MB/sec.\n> Why is this? Why is the system so limited to around just\n110MB/sec? I\n> tuned read ahead up a bit, and my results improve a\nbit..\n\nOK! Now we're on the same page. Finally someone who actually tests!\n\nCheck the CPU usage while it's doing the scan. Know what it's doing?\nMemory copies. We've profiled it extensively.\n\nSo - that's the suckage - throwing more CPU power helps a bit, but the\nunderlying issue is poorly optimized code in the Postgres executor and lack\nof I/O asynchrony.\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 08:33:35 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alan,\n>\n> On 11/18/05 8:13 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> I told you in my initial post that I was observing numbers in\n> excess of\n> what you claiming, but you seemed to think I didn't know how to\n> measure\n> an IO rate.\n>\n> Prove me wrong, post your data.\n>\n> I should note too that our system uses about 20% of a single cpu when\n> performing a table scan at >100MB/s of IO. I think you claimed the\n> system would be cpu bound at this low IO rate.\n>\n>\n> See above.\nHere's the output from one iteration of iostat -k 60 while the box is \ndoing a select count(1) on a 238GB table.\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.99 0.00 17.97 32.40 48.64\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsdd 345.95 130732.53 0.00 7843952 0\n\nWe're reading 130MB/s for a full minute. About 20% of a single cpu was \nbeing used. The remainder being idle.\n\nWe've done nothing fancy and achieved results you claim shouldn't be \npossible. This is a system that was re-installed yesterday, no tuning \nwas done to the file systems, kernel or storage array.\n\nWhat am I doing wrong?\n\n9 years ago I co-designed a petabyte data store with a goal of 1GB/s IO \n(for a DOE lab). And now I don't know what I'm doing,\n\nCheers,\n\n-- Alan\n",
"msg_date": "Fri, 18 Nov 2005 12:31:33 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
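Much of the disagreement that follows is about whether "20% of a single cpu" means one core is pegged or there is genuine idle time; a per-CPU view taken while the scan runs settles that. A sketch assuming the sysstat tools (mpstat/iostat) are installed and the scan is started in another session.

```bash
#!/bin/bash
# Sketch: record per-CPU utilization and extended per-device I/O stats
# while a sequential scan runs elsewhere. Requires the sysstat package.
INTERVAL=10
COUNT=6

# Per-CPU breakdown: shows whether one core is saturated while others idle.
mpstat -P ALL $INTERVAL $COUNT > mpstat.out &

# Extended device stats: %util, queue depth and average request size.
iostat -x -k $INTERVAL $COUNT > iostat.out &

wait
echo "results in mpstat.out and iostat.out"
```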
{
"msg_contents": "Alan,\n\nOn 11/18/05 9:31 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> Here's the output from one iteration of iostat -k 60 while the box is\n> doing a select count(1) on a 238GB table.\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 0.99 0.00 17.97 32.40 48.64\n> \n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sdd 345.95 130732.53 0.00 7843952 0\n> \n> We're reading 130MB/s for a full minute. About 20% of a single cpu was\n> being used. The remainder being idle.\n\nCool - thanks for the results. Is that % of one CPU, or of 2? Was the\nsystem otherwise idle?\n \n> We've done nothing fancy and achieved results you claim shouldn't be\n> possible. This is a system that was re-installed yesterday, no tuning\n> was done to the file systems, kernel or storage array.\n\nAre you happy with 130MB/s? How much did you pay for that? Is it more than\n$2,000, or double my 2003 PC?\n \n> What am I doing wrong?\n> \n> 9 years ago I co-designed a petabyte data store with a goal of 1GB/s IO\n> (for a DOE lab). And now I don't know what I'm doing,\n\nCool. Would that be Sandia?\n\nWe routinely sustain 2,000 MB/s from disk on 16x 2003 era machines on\ncomplex queries.\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 09:54:07 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alan,\n>\n> On 11/18/05 9:31 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> \n>> Here's the output from one iteration of iostat -k 60 while the box is\n>> doing a select count(1) on a 238GB table.\n>>\n>> avg-cpu: %user %nice %sys %iowait %idle\n>> 0.99 0.00 17.97 32.40 48.64\n>>\n>> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n>> sdd 345.95 130732.53 0.00 7843952 0\n>>\n>> We're reading 130MB/s for a full minute. About 20% of a single cpu was\n>> being used. The remainder being idle.\n>> \n>\n> Cool - thanks for the results. Is that % of one CPU, or of 2? Was the\n> system otherwise idle?\n> \nActually, this was dual cpu and there was other activity during the full \nminute, but it was on other file devices, which I didn't include in the \nabove output. Given that, and given what I see on the box now I'd \nraise the 20% to 30% just to be more conservative. It's all in the \nkernel either way; using a different scheduler or file system would \nchange that result. Even better would be using direct IO to not flush \neverything else from memory and avoid some memory copies from kernel to \nuser space. Note that almost none of the time is user time. Changing \npostgresql won't change the cpu useage.\n\nOne IMHO obvious improvement would be to have vacuum and analyze only do \ndirect IO. Now they appear to be very effective memory flushing tools. \nTable scans on tables larger than say 4x memory should probably also use \ndirect IO for reads.\n\n> \n> \n>> We've done nothing fancy and achieved results you claim shouldn't be\n>> possible. This is a system that was re-installed yesterday, no tuning\n>> was done to the file systems, kernel or storage array.\n>> \n>\n> Are you happy with 130MB/s? How much did you pay for that? Is it more than\n> $2,000, or double my 2003 PC?\n> \nI don't know what the system cost. It was part of block of dual \nopterons from Sun that we got some time ago. I think the 130MB/s is \nslow given the hardware, but it's acceptable. I'm not too price \nsensitive; I care much more about reliability, uptime, etc. \n\n> \n> \n>> What am I doing wrong?\n>>\n>> 9 years ago I co-designed a petabyte data store with a goal of 1GB/s IO\n>> (for a DOE lab). And now I don't know what I'm doing,\n>> \n> Cool. Would that be Sandia?\n>\n> We routinely sustain 2,000 MB/s from disk on 16x 2003 era machines on\n> complex queries.\nDisk?! 4 StorageTek tape silos. That would be .002 TB/s. One has to \nchange how you think when you have that much data. And hope you don't \nhave a fire, because there's no backup. That work was while I was at \nBNL. I believe they are now at 4PB of tape and 150TB of disk.\n\n-- Alan\n",
"msg_date": "Fri, 18 Nov 2005 13:30:06 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
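Alan's cache-pollution point can be explored from the shell before touching Postgres at all, by comparing a buffered read of a large file with an O_DIRECT read of the same file. The path is a placeholder, and iflag=direct assumes a reasonably recent GNU coreutils dd.

```bash
#!/bin/bash
# Sketch: buffered vs O_DIRECT streaming read of one large file.
# The path is a placeholder for some big heap file or data set.
FILE=/data/pgsql/base/16384/16385

# Buffered read: goes through the page cache and evicts other cached data.
dd if="$FILE" of=/dev/null bs=1M

# Direct read: bypasses the page cache, leaving existing cached data alone.
dd if="$FILE" of=/dev/null bs=1M iflag=direct
```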
{
"msg_contents": "Alan,\n\nOn 11/18/05 10:30 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> Actually, this was dual cpu and there was other activity during the full\n> minute, but it was on other file devices, which I didn't include in the\n> above output. Given that, and given what I see on the box now I'd\n> raise the 20% to 30% just to be more conservative. It's all in the\n> kernel either way; using a different scheduler or file system would\n> change that result. Even better would be using direct IO to not flush\n> everything else from memory and avoid some memory copies from kernel to\n> user space. Note that almost none of the time is user time. Changing\n> postgresql won't change the cpu useage.\n\nThese are all things that help on the IO wait side possibly, however, there\nis a producer/consumer problem in postgres that goes something like this:\n\n- Read some (small number of, sometimes 1) 8k pages\n- Do some work on those pages, including lots of copies\n- repeat\n\nThis back and forth without threading (like AIO, or a multiprocessing\nexecutor) causes cycling and inefficiency that limits throughput.\nOptimizing some of the memcopies and other garbage out, plus increasing the\ninternal (postgres) readahead would probably double the disk bandwidth.\n\nBut to be disk-bound (meaning that the disk subsystem is running at full\nspeed), requires asynchronous I/O. We do this now with Bizgres MPP, and we\nget fully saturated disk channels on every machine. That means that even on\none machine, we run many times faster than non-MPP postgres.\n\n> One IMHO obvious improvement would be to have vacuum and analyze only do\n> direct IO. Now they appear to be very effective memory flushing tools.\n> Table scans on tables larger than say 4x memory should probably also use\n> direct IO for reads.\n\nThat's been suggested many times prior - I agree, but this also needs AIO to\nbe maximally effective.\n\n> I don't know what the system cost. It was part of block of dual\n> opterons from Sun that we got some time ago. I think the 130MB/s is\n> slow given the hardware, but it's acceptable. I'm not too price\n> sensitive; I care much more about reliability, uptime, etc.\n\nThen I know what they cost - we have them too (V20z and V40z). You should\nbe getting 400MB/s+ with external RAID.\n\n>>> What am I doing wrong?\n>>> \n>>> 9 years ago I co-designed a petabyte data store with a goal of 1GB/s IO\n>>> (for a DOE lab). And now I don't know what I'm doing,\n>>> \n>> Cool. Would that be Sandia?\n>> \n>> We routinely sustain 2,000 MB/s from disk on 16x 2003 era machines on\n>> complex queries.\n> Disk?! 4 StorageTek tape silos. That would be .002 TB/s. One has to\n> change how you think when you have that much data. And hope you don't\n> have a fire, because there's no backup. That work was while I was at\n> BNL. I believe they are now at 4PB of tape and 150TB of disk.\n\nWe had 1.5 Petabytes on 2 STK Silos at NAVO from 1996-1998 where I ran R&D.\nWe also had a Cray T932 an SGI Origin 3000 with 256 CPUs, a Cray T3E with\n1280 CPUs, 2 Cray J916s with 1 TB of shared disk, a Cray C90-16, a Sun E10K,\netc etc, along with clusters of Alpha machines and lots of SGIs. It's nice\nto work with a $40M annual budget.\n\nLater, working with FSL we implemented a weather forecasting cluster that\nultimately became the #5 fastest computer on the TOP500 supercomputing list\nfrom 512 Alpha cluster nodes. 
That machine had a 10-way shared SAN, tape\nrobotics and a Myrinet interconnect and ran 64-bit Linux (in 1998).\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 10:52:35 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan Stange <[email protected]> writes:\n\n> Luke Lonergan wrote:\n> > Alan,\n> >\n> > On 11/18/05 9:31 AM, \"Alan Stange\" <[email protected]> wrote:\n> >\n> >\n> >> Here's the output from one iteration of iostat -k 60 while the box is\n> >> doing a select count(1) on a 238GB table.\n> >>\n> >> avg-cpu: %user %nice %sys %iowait %idle\n> >> 0.99 0.00 17.97 32.40 48.64\n> >>\n> >> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> >> sdd 345.95 130732.53 0.00 7843952 0\n> >>\n> >> We're reading 130MB/s for a full minute. About 20% of a single cpu was\n> >> being used. The remainder being idle.\n> >>\n> >\n> > Cool - thanks for the results. Is that % of one CPU, or of 2? Was the\n> > system otherwise idle?\n> >\n> Actually, this was dual cpu \n\nI hate to agree with him but that looks like a dual machine with one CPU\npegged. Yes most of the time is being spent in the kernel, but you're still\nbasically cpu limited.\n\nThat said, 130MB/s is nothing to sneeze at, that's maxing out two high end\ndrives and quite respectable for a 3-disk stripe set, even reasonable for a\n4-disk stripe set. If you're using 5 or more disks in RAID-0 or RAID 1+0 and\nonly getting 130MB/s then it does seem likely the cpu is actually holding you\nback here.\n\nStill it doesn't show Postgres being nearly so CPU wasteful as the original\nposter claimed.\n\n> It's all in the kernel either way; using a different scheduler or file\n> system would change that result. Even better would be using direct IO to not\n> flush everything else from memory and avoid some memory copies from kernel\n> to user space. Note that almost none of the time is user time. Changing\n> postgresql won't change the cpu useage.\n\nWell changing to direct i/o would still be changing Postgres so that's\nunclear. And there are plenty of more mundane ways that Postgres is\nresponsible for how efficiently or not the kernel is used. Just using fewer\nsyscalls to do the same amount of reading would reduce cpu consumption.\n\n\n> One IMHO obvious improvement would be to have vacuum and analyze only do direct\n> IO. Now they appear to be very effective memory flushing tools. Table scans\n> on tables larger than say 4x memory should probably also use direct IO for\n> reads.\n\n-- \ngreg\n\n",
"msg_date": "18 Nov 2005 14:07:34 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
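Greg's point that fewer, larger reads cost less CPU for the same bytes is easy to demonstrate with a block-size sweep; this sketch only measures dd's own CPU time against one large file (the path is a placeholder) and says nothing about Postgres internals.

```bash
#!/bin/bash
# Sketch: CPU cost of reading the same file with different read() sizes.
# Smaller blocks mean more syscalls per megabyte and more system CPU time.
FILE=/data/pgsql/base/16384/16385     # placeholder large file

for BS in 8k 32k 256k 1M 16M; do
    sync; echo 3 > /proc/sys/vm/drop_caches    # needs root and a 2.6.16+ kernel
    echo "block size $BS:"
    ( time dd if="$FILE" of=/dev/null bs=$BS ) 2>&1 | grep -E 'real|user|sys'
done
```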
{
"msg_contents": "Greg,\n\nOn 11/18/05 11:07 AM, \"Greg Stark\" <[email protected]> wrote:\n\n> That said, 130MB/s is nothing to sneeze at, that's maxing out two high end\n> drives and quite respectable for a 3-disk stripe set, even reasonable for a\n> 4-disk stripe set. If you're using 5 or more disks in RAID-0 or RAID 1+0 and\n> only getting 130MB/s then it does seem likely the cpu is actually holding you\n> back here.\n\nWith an FC array, it's undoubtedly more like 14 drives, in which case\n130MB/s is laughable. On the other hand, I wouldn't be surprised if it were\na single 200MB/s Fibre Channel attachment.\n\nIt does make you wonder why people keep recommending 15K RPM drives, like it\nwould help *not*.\n\n> Still it doesn't show Postgres being nearly so CPU wasteful as the original\n> poster claimed.\n\nIt's partly about waste, and partly about lack of a concurrent I/O\nmechanism. We've profiled it for the waste, we've implemented concurrent\nI/O to prove the other point.\n \n>> It's all in the kernel either way; using a different scheduler or file\n>> system would change that result. Even better would be using direct IO to not\n>> flush everything else from memory and avoid some memory copies from kernel\n>> to user space. Note that almost none of the time is user time. Changing\n>> postgresql won't change the cpu useage.\n> \n> Well changing to direct i/o would still be changing Postgres so that's\n> unclear. And there are plenty of more mundane ways that Postgres is\n> responsible for how efficiently or not the kernel is used. Just using fewer\n> syscalls to do the same amount of reading would reduce cpu consumption.\n\nBingo.\n \n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 11:24:48 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Greg Stark wrote:\n> Alan Stange <[email protected]> writes:\n>\n> \n>> Luke Lonergan wrote:\n>> \n>>> Alan,\n>>>\n>>> On 11/18/05 9:31 AM, \"Alan Stange\" <[email protected]> wrote:\n>>>\n>>>\n>>> \n>>>> Here's the output from one iteration of iostat -k 60 while the box is\n>>>> doing a select count(1) on a 238GB table.\n>>>>\n>>>> avg-cpu: %user %nice %sys %iowait %idle\n>>>> 0.99 0.00 17.97 32.40 48.64\n>>>>\n>>>> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n>>>> sdd 345.95 130732.53 0.00 7843952 0\n>>>>\n>>>> We're reading 130MB/s for a full minute. About 20% of a single cpu was\n>>>> being used. The remainder being idle.\n>>>>\n>>>> \n>>> Cool - thanks for the results. Is that % of one CPU, or of 2? Was the\n>>> system otherwise idle?\n>>>\n>>> \n>> Actually, this was dual cpu \n>> \n>\n> I hate to agree with him but that looks like a dual machine with one CPU\n> pegged. Yes most of the time is being spent in the kernel, but you're still\n> basically cpu limited.\n>\n> That said, 130MB/s is nothing to sneeze at, that's maxing out two high end\n> drives and quite respectable for a 3-disk stripe set, even reasonable for a\n> 4-disk stripe set. If you're using 5 or more disks in RAID-0 or RAID 1+0 and\n> only getting 130MB/s then it does seem likely the cpu is actually holding you\n> back here.\n>\n> Still it doesn't show Postgres being nearly so CPU wasteful as the original\n> poster claimed.\n> \nYes and no. The one cpu is clearly idle. The second cpu is 40% busy \nand 60% idle (aka iowait in the above numbers).\nOf that 40%, other things were happening as well during the 1 minute \nsnapshot. During some iostat outputs that I didn't post the cpu time \nwas ~ 20%.\n\nSo, you can take your pick. The single cpu usage is somewhere between \n20% and 40%. As I can't remove other users of the system, it's the best \nmeasurement that I can make right now.\n\nEither way, it's not close to being cpu bound. This is with Opteron \n248, 2.2Ghz cpus.\n\nNote that the storage system has been a bit disappointing: it's an IBM \nFast T600 with a 200MB/s fiber attachment. It could be better, but \nit's not been the bottleneck in our work, so we haven't put any energy \ninto it. \n\n>> It's all in the kernel either way; using a different scheduler or file\n>> system would change that result. Even better would be using direct IO to not\n>> flush everything else from memory and avoid some memory copies from kernel\n>> to user space. Note that almost none of the time is user time. Changing\n>> postgresql won't change the cpu useage.\n>> \n> Well changing to direct i/o would still be changing Postgres so that's\n> unclear. And there are plenty of more mundane ways that Postgres is\n> responsible for how efficiently or not the kernel is used. Just using fewer\n> syscalls to do the same amount of reading would reduce cpu consumption.\nAbsolutely. This is why we're using a 32KB block size and also switched \nto using O_SYNC for the WAL syncing method. That's many MB/s that \ndon't need to be cached in the kernel (thus evicting other data), and we \navoid all the fysnc/fdatasync syscalls.\n\nThe purpose of direct IO isn't to make the vacuum or analyze faster, but \nto lessen their impact on queries with someone waiting for the \nresults. 
That's our biggest hit: running a sequential scan on 240GB \nof data and flushing everything else out of memory.\n\nNow that I'm think about this a bit, a big chunk of time is probably \nbeing lost in TLB misses and other virtual memory events that would be \navoided if a larger page size was being used.\n\n-- Alan \n\n",
"msg_date": "Fri, 18 Nov 2005 14:39:30 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
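The WAL half of Alan's tuning is only a configuration change; the 32KB block size requires rebuilding the server with a different BLCKSZ, so it is left as a comment. A minimal sketch, assuming an 8.x cluster with PGDATA set and permission to restart it.

```bash
#!/bin/bash
# Sketch: switch WAL syncing to O_SYNC and confirm the cluster's block size.
# Changing the data block size itself (e.g. to 32KB) requires recompiling
# the server with a larger BLCKSZ; it cannot be changed on a live cluster.

echo "wal_sync_method = open_sync" >> "$PGDATA/postgresql.conf"
pg_ctl -D "$PGDATA" restart

psql -c "show wal_sync_method"
pg_controldata "$PGDATA" | grep -i "block size"
```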
{
"msg_contents": "Luke Lonergan wrote:\n>> opterons from Sun that we got some time ago. I think the 130MB/s is\n>> slow given the hardware, but it's acceptable. I'm not too price\n>> sensitive; I care much more about reliability, uptime, etc.\n>> \n> I don't know what the system cost. It was part of block of dual\n>\n> Then I know what they cost - we have them too (V20z and V40z). You should\n> be getting 400MB/s+ with external RAID.\nYes, but we don't. This is where I would normally begin a rant on how \ncraptacular Linux can be at times. But, for the sake of this \ndiscussion, postgresql isn't reading the data any more slowly than does \nany other program.\n\nAnd we don't have the time to experiment with the box.\n\nI know it should be better, but it's good enough for our purposes at \nthis time.\n\n-- Alan\n\n",
"msg_date": "Fri, 18 Nov 2005 14:39:50 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Breaking the ~120MBps pg IO ceiling by any means \nis an important result. Particularly when you \nget a ~2x improvement. I'm curious how far we \ncan get using simple approaches like this.\n\nAt 10:13 AM 11/18/2005, Luke Lonergan wrote:\n>Dave,\n>\n>On 11/18/05 5:00 AM, \"Dave Cramer\" <[email protected]> wrote:\n> >\n> > Now there's an interesting line drawn in the sand. I presume you have\n> > numbers to back this up ?\n> >\n> > This should draw some interesting posts.\n>\n>Part 2: The answer\n>\n>System A:\n>This system is running RedHat 3 Update 4, with a Fedora 2.6.10 Linux kernel.\n>\n>On a single table with 15 columns (the Bizgres \n>IVP) at a size double memory (2.12GB), Postgres \n>8.0.3 with Bizgres enhancements takes 32 seconds \n>to scan the table: thats 66 MB/s. Not the \n>efficiency Id hope from the onboard SATA \n>controller that Id like, I would have expected \n>to get 85% of the 100MB/s raw read performance.\nHave you tried the large read ahead trick with \nthis system? It would be interesting to see how \nmuch it would help. It might even be worth it to \ndo the experiment at all of [default, 2x default, \n4x default, 8x default, etc] read ahead until \neither a) you run out of resources to support the \ndesired read ahead, or b) performance levels \noff. I can imagine the results being very enlightening.\n\n\n>System B:\n>This system is running an XFS filesystem, and \n>has been tuned to use very large (16MB) \n>readahead. Its running the Centos 4.1 distro, \n>which uses a Linux 2.6.9 kernel.\n>\n>Same test as above, but with 17GB of data takes \n>69.7 seconds to scan (!) Thats 244.2MB/s, \n>which is obviously double my earlier point of \n>110-120MB/s. This system is running with a 16MB \n>Linux readahead setting, lets try it with the \n>default (I think) setting of 256KB AHA! Now we get 171.4 seconds or 99.3MB/s.\nThe above experiment would seem useful here as well.\n\n\n>Summary:\n>\n><cough, cough> OK you can get more I/O \n>bandwidth out of the current I/O path for \n>sequential scan if you tune the filesystem for \n>large readahead. This is a cheap alternative to \n>overhauling the executor to use asynch I/O.\n>\n>Still, there is a CPU limit here this is not \n>I/O bound, it is CPU limited as evidenced by the \n>sensitivity to readahead settings. If the \n>filesystem could do 1GB/s, you wouldnt go any faster than 244MB/s.\n>\n>- Luke\n\nI respect your honesty in reporting results that \nwere different then your expectations or \npreviously taken stance. Alan Stange's comment \nre: the use of direct IO along with your comments \nre: async IO and mem copies plus the results of \nthese experiments could very well point us \ndirectly at how to most easily solve pg's CPU boundness during IO.\n\n[HACKERS] are you watching this?\n\nRon\n\n\n",
"msg_date": "Fri, 18 Nov 2005 15:29:11 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases "
},
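The sweep Ron proposes, doubling readahead until performance levels off, is easy to script; device, database and table names below are placeholders and each run is started cold.

```bash
#!/bin/bash
# Sketch of the suggested experiment: time the same sequential scan at
# default, 2x, 4x, ... readahead. Names are placeholders.
DEV=/dev/sda
DB=dgtestdb
TABLE=ivp.bigtable1
DEFAULT_RA=$(blockdev --getra $DEV)

for MULT in 1 2 4 8 16 32 64; do
    RA=$(( DEFAULT_RA * MULT ))
    blockdev --setra $RA "$DEV"
    sync; echo 3 > /proc/sys/vm/drop_caches      # 2.6.16+ kernels
    echo "readahead = $RA sectors"
    ( time psql -c "select count(*) from $TABLE" "$DB" > /dev/null ) 2>&1 | grep real
done

blockdev --setra $DEFAULT_RA "$DEV"              # restore the original setting
```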
{
"msg_contents": "Luke Lonergan wrote:\n\n> (mass snippage) \n> time psql -c \"select count(*) from ivp.bigtable1\" dgtestdb\n> [llonergan@modena2 IVP]$ cat sysout3\n> count \n> ----------\n> 80000000\n> (1 row)\n> \n> \n> real 1m9.875s\n> user 0m0.000s\n> sys 0m0.004s\n> [llonergan@modena2 IVP]$ !du\n> du -sk dgtestdb/base\n> 17021260 dgtestdb/base\n> \n> \n> Summary:\n> \n> <cough, cough> OK � you can get more I/O bandwidth out of the current \n> I/O path for sequential scan if you tune the filesystem for large \n> readahead. This is a cheap alternative to overhauling the executor to \n> use asynch I/O.\n> \n> Still, there is a CPU limit here � this is not I/O bound, it is CPU \n> limited as evidenced by the sensitivity to readahead settings. If the \n> filesystem could do 1GB/s, you wouldn�t go any faster than 244MB/s.\n> \n> \n\nLuke,\n\nInteresting - but possibly only representative for a workload consisting \nentirely of one executor doing \"SELECT ... FROM my_single_table\".\n\nIf you alter this to involve more complex joins (e.g 4. way star) and \n(maybe add a small number of concurrent executors too) - is it still the \ncase?\n\nCheers\n\nMark\n",
"msg_date": "Sat, 19 Nov 2005 12:46:54 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\nOn 11/18/05 3:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n \n> If you alter this to involve more complex joins (e.g 4. way star) and\n> (maybe add a small number of concurrent executors too) - is it still the\n> case?\n\n4-way star, same result, that's part of my point. With Bizgres MPP, the\n4-way star uses 4 concurrent scanners, though not all are active all the\ntime. And that's per segment instance - we normally use one segment\ninstance per CPU, so our concurrency is NCPUs plus some.\n\nThe trick is the \"small number of concurrent executors\" part. The only way\nto get this with normal postgres is to have concurrent users, and normally\nthey are doing different things, scanning different parts of the disk.\nThese are competing things, and for concurrency enhancement something like\n\"sync scan\" would be an effective optimization.\n\nBut in reporting, business analytics and warehousing in general, there are\nreports that take hours to run. If you can knock that down by factors of 10\nusing parallelism, it's a big win. That's the reason that Teradata did $1.5\nBillion in business last year.\n\nMore importantly - that's the kind of work that everyone using internet data\nfor analytics wants right now.\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Nov 2005 16:04:00 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\nOn 11/18/05 3:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> If you alter this to involve more complex joins (e.g 4. way star) and\n> (maybe add a small number of concurrent executors too) - is it still the\n> case?\n\nI may not have listened to you - are you asking about whether the readahead\nworks for these cases?\n\nI¹ll be running some massive TPC-H benchmarks on these machines soon we¹ll\nsee then.\n\n- Luke\n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases (\n\n\nMark,\n\nOn 11/18/05 3:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\nIf you alter this to involve more complex joins (e.g 4. way star) and\n(maybe add a small number of concurrent executors too) - is it still the\ncase?\n\nI may not have listened to you - are you asking about whether the readahead works for these cases?\n\nI’ll be running some massive TPC-H benchmarks on these machines soon – we’ll see then.\n\n- Luke",
"msg_date": "Fri, 18 Nov 2005 16:05:59 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n> On 11/18/05 3:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n> \n> If you alter this to involve more complex joins (e.g 4. way star) and\n> (maybe add a small number of concurrent executors too) - is it still the\n> case?\n> \n> \n> I may not have listened to you - are you asking about whether the \n> readahead works for these cases?\n> \n> I�ll be running some massive TPC-H benchmarks on these machines soon � \n> we�ll see then.\n\n\nThat too, meaning the business of 1 executor random reading a given \nrelation file whilst another is sequentially scanning (some other) part \nof it....\n\nCheers\n\nMark\n",
"msg_date": "Sat, 19 Nov 2005 15:27:49 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nOn 11/18/05 11:39 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> Yes and no. The one cpu is clearly idle. The second cpu is 40% busy\n> and 60% idle (aka iowait in the above numbers).\n\nThe \"aka iowait\" is the problem here - iowait is not idle (otherwise it\nwould be in the \"idle\" column).\n\nIowait is time spent waiting on blocking io calls. As another poster\npointed out, you have a two CPU system, and during your scan, as predicted,\none CPU went 100% busy on the seq scan. During iowait periods, the CPU can\nbe context switched to other users, but as I pointed out earlier, that's not\nuseful for getting response on decision support queries.\n\nThanks for your data, it exemplifies many of the points brought up:\n- Lots of disks and expensive I/O hardware does not help improve performance\non large table queries because I/O bandwidth does not scale beyond\n110-120MB/s on the fastest CPUs\n- OLTP performance optimizations are different than decision support\n\nRegards,\n\n- Luke\n\n\n",
"msg_date": "Sat, 19 Nov 2005 08:13:09 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\nOn 11/18/05 6:27 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> That too, meaning the business of 1 executor random reading a given\n> relation file whilst another is sequentially scanning (some other) part\n> of it....\n\nI think it should actually improve things - each I/O will read 16MB into the\nI/O cache, then the next scanner will seek for 10ms to get the next 16MB\ninto cache, etc. It should minimize the seek/data ratio nicely. As long as\nthe tables are contiguous it should rock and roll.\n\n- Luke\n\n\n",
"msg_date": "Sat, 19 Nov 2005 08:15:29 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n> On 11/18/05 3:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n> \n> \n>>If you alter this to involve more complex joins (e.g 4. way star) and\n>>(maybe add a small number of concurrent executors too) - is it still the\n>>case?\n> \n> \n> 4-way star, same result, that's part of my point. With Bizgres MPP, the\n> 4-way star uses 4 concurrent scanners, though not all are active all the\n> time. And that's per segment instance - we normally use one segment\n> instance per CPU, so our concurrency is NCPUs plus some.\n>\n\nLuke - I don't think I was clear enough about what I was asking, sorry.\n\nI added the more \"complex joins\" comment because:\n\n- I am happy that seqscan is cpu bound after ~110M/s (It's cpu bound on \nmy old P3 system even earlier than that....)\n- I am curious if the *other* access methods (indexscan, nested loop, \nhash, merge, bitmap) also suffer then same fate.\n\nI'm guessing from your comment that you have tested this too, but I \nthink its worth clarifying!\n\nWith respect to Bizgres MPP, scan parallelism is a great addition... \nvery nice! (BTW - is that in 0.8, or are we talking a new product variant?)\n\nregards\n\nMark\n\n\n",
"msg_date": "Sun, 20 Nov 2005 10:28:57 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Alan,\n>\n> On 11/18/05 11:39 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> \n>> Yes and no. The one cpu is clearly idle. The second cpu is 40% busy\n>> and 60% idle (aka iowait in the above numbers).\n>> \n>\n> The \"aka iowait\" is the problem here - iowait is not idle (otherwise it\n> would be in the \"idle\" column).\n>\n> Iowait is time spent waiting on blocking io calls. As another poster\n> pointed out, you have a two CPU system, and during your scan, as predicted,\n> one CPU went 100% busy on the seq scan. During iowait periods, the CPU can\n> be context switched to other users, but as I pointed out earlier, that's not\n> useful for getting response on decision support queries.\n> \niowait time is idle time. Period. This point has been debated \nendlessly for Solaris and other OS's as well.\n\nHere's the man page:\n %iowait\n Show the percentage of time that the CPU or \nCPUs were\n idle during which the system had an outstanding \ndisk I/O\n request.\n\nIf the system had some other cpu bound work to perform you wouldn't ever \nsee any iowait time. Anyone claiming the cpu was 100% busy on the \nsequential scan using the one set of numbers I posted is \nmisunderstanding the actual metrics.\n\n> Thanks for your data, it exemplifies many of the points brought up:\n> - Lots of disks and expensive I/O hardware does not help improve performance\n> on large table queries because I/O bandwidth does not scale beyond\n> 110-120MB/s on the fastest CPUs\n> \nI don't think that is the conclusion from anecdotal numbers I posted. \nThis file subsystem doesn't perform as well as expected for any tool. \nBonnie, dd, star, etc., don't get a better data rate either. In fact, \nthe storage system wasn't built for performance; it was build to \nreliably hold a big chunk of data. Even so, postgresql is reading at \n130MB/s on it, using about 30% of a single cpu, almost all of which was \nsystem time. I would get the same 130MB/s on a system with cpus that \nwere substantially slower; the limitation isn't the cpus, or \npostgresql. It's the IO system that is poorly configured for this test, \nnot postgresqls ability to use it.\n\nIn fact, given the numbers I posted, it's clear this system could \nhandily generate more than 120 MB/s using a single cpu given a better IO \nsubsystem; it has cpu time to spare. A simple test can be done: \nbuild the database in /dev/shm and time the scans. It's the same read() \nsystem call being used and now one has made the IO system \"infinitely \nfast\". The claim is being made that standard postgresql is unable to \ngenerate more than 120MB/s of IO on any IO system due to an inefficient \nuse of the kernel API and excessive memory copies, etc. Having the \ndatabase be on a ram based file system is an example of \"expensive IO \nhardware\" and all else would be the same. Hmmm, now that I think about \nthis, I could throw a medium sized table onto /dev/shm using \ntablespaces on one of our 8GB linux boxes. So why is this experiment \nnot valid, or what is it about the above assertion that I am missing?\n\n\nAnyway, if one cares about high speed sequential IO, then one should use \na much larger block size to start. Using 8KB IOs is inappropriate for \nsuch a configuration. We happen to be using 32KB blocks on our largest \ndatabase and it's been the best move for us.\n\n-- Alan\n",
"msg_date": "Sat, 19 Nov 2005 21:43:48 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
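A minimal sketch of the /dev/shm experiment proposed in the message above, to take the disks out of the picture and see what scan rate the executor alone can sustain. The tmpfs size, directory, tablespace name, and table names are illustrative placeholders, not taken from the poster's system; it assumes a tablespace-capable 8.x server.

    # as root: make sure tmpfs has room for the test table (size assumed here)
    $ mount -o remount,size=6g /dev/shm
    $ mkdir /dev/shm/pgtest && chown postgres /dev/shm/pgtest

    -- in psql, as a superuser
    CREATE TABLESPACE ramts LOCATION '/dev/shm/pgtest';
    SET default_tablespace = ramts;
    CREATE TABLE bigtable_ram AS SELECT * FROM bigtable;
    \timing
    SELECT count(1) FROM bigtable_ram;   -- "infinitely fast" I/O: any ceiling left is the executor

If the scan against the tmpfs copy runs far above 120MB/s, the claim that the executor itself caps sequential reads at that figure does not hold.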
{
"msg_contents": "Another data point. \n\nWe had some down time on our system today to complete some maintenance \nwork. It took the opportunity to rebuild the 700GB file system using \nXFS instead of Reiser.\n\nOne iostat output for 30 seconds is\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.58 0.00 19.69 31.94 46.78\n\nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\nsdd 343.73 175035.73 277.55 5251072 8326\n\nwhile doing a select count(1) on the same large table as before. \nSubsequent iostat output all showed that this data rate was being \nmaintained. The system is otherwise mostly idle during this measurement.\n\nThe sequential read rate is 175MB/s. The system is the same as earlier, \none cpu is idle and the second is ~40% busy doing the scan and ~60% \nidle. This is postgresql 8.1rc1, 32KB block size. No tuning except \nfor using a 1024KB read ahead.\n\nThe peak speed of the attached storage is 200MB/s (a 2Gb/s fiber channel \ncontroller). I see no reason why this configuration wouldn't generate \nhigher IO rates if a faster IO connection were available.\n\nCan you explain again why you think there's an IO ceiling of 120MB/s \nbecause I really don't understand?\n\n-- Alan\n\n\n",
"msg_date": "Sat, 19 Nov 2005 23:43:28 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
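For anyone trying to reproduce this kind of measurement, a sketch of the capture procedure follows; the device name, database, and table names are assumptions, and the readahead value matches the 1024KB mentioned above (blockdev counts 512-byte sectors, so 2048 sectors = 1MB):

    # set and verify the block-device readahead
    $ blockdev --setra 2048 /dev/sdd
    $ blockdev --getra /dev/sdd

    # sample per-device throughput and CPU every 30 seconds while the scan runs
    $ iostat -x -k 30 &
    $ vmstat 30 &
    $ time psql -d mydb -c 'SELECT count(1) FROM bigtable;'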
{
"msg_contents": "Mark Kirkwood wrote:\n\n> \n> - I am happy that seqscan is cpu bound after ~110M/s (It's cpu bound on \n> my old P3 system even earlier than that....)\n\nAhem - after reading Alan's postings I am not so sure, ISTM that there \nis some more investigation required here too :-).\n\n\n",
"msg_date": "Sun, 20 Nov 2005 21:55:59 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan Stange wrote:\n> Luke Lonergan wrote:\n>> The \"aka iowait\" is the problem here - iowait is not idle (otherwise it\n>> would be in the \"idle\" column).\n>>\n>> Iowait is time spent waiting on blocking io calls. As another poster\n>> pointed out, you have a two CPU system, and during your scan, as \n> \n> iowait time is idle time. Period. This point has been debated \n> endlessly for Solaris and other OS's as well.\n\nI'm sure the the theory is nice but here's my experience with iowait \njust a minute ago. I run Linux/XFce as my desktop -- decided I wanted to \nlookup some stuff in Wikipedia under Mozilla and my computer system \nbecame completely unusable for nearly a minute while who knows what \nMozilla was doing. (Probably loading all the language packs.) I could \nnot even switch to IRC (already loaded) to chat with other people while \nMozilla was chewing up all my disk I/O.\n\nSo I went to another computer, connected to mine remotely (slow...) and \nchecked top. 90% in the \"wa\" column which I assume is the iowait column. \nIt may be idle in theory but it's not a very useful idle -- wasn't able \nto switch to any programs already running, couldn't click on the XFce \nlaunchbar to run any new programs.\n",
"msg_date": "Sun, 20 Nov 2005 04:55:14 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Sat, Nov 19, 2005 at 08:13:09AM -0800, Luke Lonergan wrote:\n> Iowait is time spent waiting on blocking io calls. \n\nTo be picky, iowait is time spent in the idle task while the I/O queue is not\nempty. It does not matter if the I/O is blocking or not (from userspace's\npoint of view), and if the I/O was blocking (say, PIO) from the kernel's\npoint of view, it would be counted in system.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 20 Nov 2005 14:04:38 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "William Yu wrote:\n> Alan Stange wrote:\n>> Luke Lonergan wrote:\n>>> The \"aka iowait\" is the problem here - iowait is not idle (otherwise it\n>>> would be in the \"idle\" column).\n>>>\n>>> Iowait is time spent waiting on blocking io calls. As another poster\n>>> pointed out, you have a two CPU system, and during your scan, as \n>>\n>> iowait time is idle time. Period. This point has been debated \n>> endlessly for Solaris and other OS's as well.\n>\n> I'm sure the the theory is nice but here's my experience with iowait \n> just a minute ago. I run Linux/XFce as my desktop -- decided I wanted \n> to lookup some stuff in Wikipedia under Mozilla and my computer system \n> became completely unusable for nearly a minute while who knows what \n> Mozilla was doing. (Probably loading all the language packs.) I could \n> not even switch to IRC (already loaded) to chat with other people \n> while Mozilla was chewing up all my disk I/O.\n>\n> So I went to another computer, connected to mine remotely (slow...) \n> and checked top. 90% in the \"wa\" column which I assume is the iowait \n> column. It may be idle in theory but it's not a very useful idle -- \n> wasn't able to switch to any programs already running, couldn't click \n> on the XFce launchbar to run any new programs.\n\nSo, you have a sucky computer. I'm sorry, but iowait is still idle \ntime, whether you believe it or not.\n\n-- Alan\n\n",
"msg_date": "Sun, 20 Nov 2005 08:42:09 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nAlan Stange <[email protected]> writes:\n\n> > Iowait is time spent waiting on blocking io calls. As another poster\n> > pointed out, you have a two CPU system, and during your scan, as predicted,\n> > one CPU went 100% busy on the seq scan. During iowait periods, the CPU can\n> > be context switched to other users, but as I pointed out earlier, that's not\n> > useful for getting response on decision support queries.\n\nI don't think that's true. If the syscall was preemptable then it wouldn't\nshow up under \"iowait\", but rather \"idle\". The time spent in iowait is time in\nuninterruptable sleeps where no other process can be scheduled.\n\n> iowait time is idle time. Period. This point has been debated endlessly for\n> Solaris and other OS's as well.\n> \n> Here's the man page:\n> %iowait\n> Show the percentage of time that the CPU or CPUs were\n> idle during which the system had an outstanding disk I/O\n> request.\n> \n> If the system had some other cpu bound work to perform you wouldn't ever see\n> any iowait time. Anyone claiming the cpu was 100% busy on the sequential scan\n> using the one set of numbers I posted is misunderstanding the actual metrics.\n\nThat's easy to test. rerun the test with another process running a simple C\nprogram like \"main() {while(1);}\" (or two invocations of that on your system\nbecause of the extra processor). I bet you'll see about half the percentage of\niowait because postres will get half as much opportunity to schedule i/o. If\nwhat you are saying were true then you should get 0% iowait.\n\n-- \ngreg\n\n",
"msg_date": "20 Nov 2005 09:22:41 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Sun, Nov 20, 2005 at 09:22:41AM -0500, Greg Stark wrote:\n> I don't think that's true. If the syscall was preemptable then it wouldn't\n> show up under \"iowait\", but rather \"idle\". The time spent in iowait is time in\n> uninterruptable sleeps where no other process can be scheduled.\n\nYou are confusing userspace with kernel space. When a process is stuck in\nuninterruptable sleep, it means _that process_ can't be interrupted (say,\nby a signal). The kernel can preempt it without problems.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 20 Nov 2005 15:29:35 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Greg Stark wrote:\n> Alan Stange <[email protected]> writes:\n>\n> \n>>> Iowait is time spent waiting on blocking io calls. As another poster\n>>> pointed out, you have a two CPU system, and during your scan, as predicted,\n>>> one CPU went 100% busy on the seq scan. During iowait periods, the CPU can\n>>> be context switched to other users, but as I pointed out earlier, that's not\n>>> useful for getting response on decision support queries.\n>>> \n>\n> I don't think that's true. If the syscall was preemptable then it wouldn't\n> show up under \"iowait\", but rather \"idle\". The time spent in iowait is time in\n> uninterruptable sleeps where no other process can be scheduled.\n> \nThat would be wrong. The time spent in iowait is idle time. The \niowait stat would be 0 on a machine with a compute bound runnable \nprocess available for each cpu.\n\nCome on people, read the man page or look at the source code. Just \nstop making stuff up.\n\n\n> \n>> iowait time is idle time. Period. This point has been debated endlessly for\n>> Solaris and other OS's as well.\n>>\n>> Here's the man page:\n>> %iowait\n>> Show the percentage of time that the CPU or CPUs were\n>> idle during which the system had an outstanding disk I/O\n>> request.\n>>\n>> If the system had some other cpu bound work to perform you wouldn't ever see\n>> any iowait time. Anyone claiming the cpu was 100% busy on the sequential scan\n>> using the one set of numbers I posted is misunderstanding the actual metrics.\n>> \n>\n> That's easy to test. rerun the test with another process running a simple C\n> program like \"main() {while(1);}\" (or two invocations of that on your system\n> because of the extra processor). I bet you'll see about half the percentage of\n> iowait because postres will get half as much opportunity to schedule i/o. If\n> what you are saying were true then you should get 0% iowait.\nYes, I did this once about 10 years ago. But instead of saying \"I bet\" \nand guessing at the result, you should try it yourself. Without \nguessing, I can tell you that the iowait time will go to 0%. You can do \nthis loop in the shell, so there's no code to write. Also, it helps to \ndo this with the shell running at a lower priority.\n\n-- Alan\n\n\n",
"msg_date": "Sun, 20 Nov 2005 13:09:06 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
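A sketch of the experiment described above, with the busy loops run from the shell at low priority; the database and table names are placeholders:

    # one low-priority spinner per CPU on a 2-CPU box
    $ nice -n 19 sh -c 'while :; do :; done' &
    $ nice -n 19 sh -c 'while :; do :; done' &

    $ vmstat 5 &
    $ time psql -d mydb -c 'SELECT count(1) FROM bigtable;'

    # per the argument above, the "wa" column should drop toward 0 because the
    # spinners soak up the otherwise-idle time, while the scan's elapsed time
    # stays roughly the same if iowait really was idle time.

    $ kill %1 %2 %3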
{
"msg_contents": "Alan Stange wrote:\n> Another data point.\n> We had some down time on our system today to complete some maintenance \n> work. It took the opportunity to rebuild the 700GB file system using \n> XFS instead of Reiser.\n> \n> One iostat output for 30 seconds is\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 1.58 0.00 19.69 31.94 46.78\n> \n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sdd 343.73 175035.73 277.55 5251072 8326\n> \n> while doing a select count(1) on the same large table as before. \n> Subsequent iostat output all showed that this data rate was being \n> maintained. The system is otherwise mostly idle during this measurement.\n> \n> The sequential read rate is 175MB/s. The system is the same as earlier, \n> one cpu is idle and the second is ~40% busy doing the scan and ~60% \n> idle. This is postgresql 8.1rc1, 32KB block size. No tuning except \n> for using a 1024KB read ahead.\n> \n> The peak speed of the attached storage is 200MB/s (a 2Gb/s fiber channel \n> controller). I see no reason why this configuration wouldn't generate \n> higher IO rates if a faster IO connection were available.\n> \n> Can you explain again why you think there's an IO ceiling of 120MB/s \n> because I really don't understand?\n> \n\nI think what is going on here is that Luke's observation of the 120 Mb/s \nrate is taken from data using 8K block size - it looks like we can get \nhigher rates with 32K.\n\nA quick test on my P3 system seems to support this (the numbers are a \nbit feeble, but the difference is interesting):\n\nThe test is SELECT 1 FROM table, stopping Pg and unmounting the file \nsystem after each test.\n\n8K blocksize:\n25 s elapsed\n48 % idle from vmstat (dual cpu system)\n70 % busy from gstat (Freebsd GEOM io monitor)\n181819 pages in relation\n56 Mb/s effective IO throughput\n\n\n32K blocksize:\n23 s elapsed\n44 % idle from vmstat\n80 % busy from gstat\n45249 pages in relation\n60 Mb/s effective IO throughput\n\n\nI re-ran these several times - very repeatable (+/- 0.25 seconds).\n\nThis is Freebsd 6.0 with the readahead set to 16 blocks, UFS2 filesystem \ncreated with 32K blocksize (both cases). It might be interesting to see \nthe effect of using 16K (the default) with the 8K Pg block size, I would \nexpect this to widen the gap.\n\nCheers\n\nMark\n\n",
"msg_date": "Mon, 21 Nov 2005 13:11:15 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
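The per-run procedure described above (stop Postgres, unmount to flush the file cache, remount, rerun) looks roughly like this; PGDATA, the mount point, and the database/table names are assumptions:

    $ pg_ctl -D "$PGDATA" stop
    # unmounting is what guarantees the relation pages are gone from RAM
    $ umount /data && mount /data
    $ pg_ctl -D "$PGDATA" start -l /tmp/pg.log
    $ time psql -d test -c 'SELECT count(1) FROM bigtable;'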
{
"msg_contents": "Mark Kirkwood wrote:\n\n> The test is SELECT 1 FROM table\n\nThat should read \"The test is SELECT count(1) FROM table....\"\n",
"msg_date": "Mon, 21 Nov 2005 14:40:54 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\n\nOn 11/19/05 8:43 PM, \"Alan Stange\" <[email protected]> wrote:\n\n> Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\n> sdd 343.73 175035.73 277.55 5251072 8326\n> \n> while doing a select count(1) on the same large table as before.\n> Subsequent iostat output all showed that this data rate was being\n> maintained. The system is otherwise mostly idle during this measurement.\n\nYes - interesting. Note the other result using XFS that I posted earlier\nwhere I got 240+MB/s. XFS has more aggressive readahead, which is why I\nused it.\n \n> Can you explain again why you think there's an IO ceiling of 120MB/s\n> because I really don't understand?\n\nOK - slower this time:\n\nWe've seen between 110MB/s and 120MB/s on a wide variety of fast CPU\nmachines with fast I/O subsystems that can sustain 250MB/s+ using dd, but\nwhich all are capped at 120MB/s when doing sequential scans with different\nversions of Postgres.\n\nUnderstand my point: It doesn't matter that there is idle or iowait on the\nCPU, the postgres executor is not able to drive the I/O rate for two\nreasons: there is a lot of CPU used for the scan (the 40% you reported) and\na lack of asynchrony (the iowait time). That means that by speeding up the\nCPU you only reduce the first part, but you don't fix the second and v.v.\n\nWith more aggressive readahead, the second problem (the I/O asynchrony) is\nhandled better by the Linux kernel and filesystem. That's what we're seeing\nwith XFS.\n\n- Luke \n\n\n",
"msg_date": "Mon, 21 Nov 2005 00:12:55 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> OK - slower this time:\n\n> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU\n> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but\n> which all are capped at 120MB/s when doing sequential scans with different\n> versions of Postgres.\n\nLuke, sometime it would be nice if you would post your raw evidence\nand let other people do their own analysis. I for one have gotten\ntired of reading sweeping generalizations unbacked by any data.\n\nI find the notion of a magic 120MB/s barrier, independent of either\nCPU or disk speed, to be pretty dubious to say the least. I would\nlike to know exactly what the \"wide variety\" of data points you\nhaven't shown us are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Nov 2005 09:56:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( "
},
{
"msg_contents": "Luke Lonergan wrote:\n> OK - slower this time:\n> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU\n> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but\n> which all are capped at 120MB/s when doing sequential scans with different\n> versions of Postgres.\n> \nPostgresql issues the exact same sequence of read() calls as does dd. \nSo why is dd so much faster?\n\nI'd be careful with the dd read of a 16GB file on an 8GB system. Make \nsure you umount the file system first, to make sure all of the file is \nflushed from memory. Some systems use a freebehind on sequential reads \nto avoid flushing memory...and you'd find that 1/2 of your 16GB file is \nstill in memory. The same point also holds for the writes: when dd \nfinishes not all the data is on disk. You need to issue a sync() call \nto make that happen. Use lmdd to ensure that the data is actually all \nwritten. In other words, I think your dd results are possibly misleading.\n\nIt's trivial to demonstrate:\n\n$ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=800000\n800000+0 records in\n800000+0 records out\n\nreal 0m13.780s\nuser 0m0.134s\nsys 0m13.510s\n\nOops. I just wrote 470MB/s to a file system that has peak write speed \nof 200MB/s peak.\n\nNow, you might say that you wrote a 16GB file on an 8 GB machine so this \nisn't an issue. It does make your dd numbers look fast as some of the \ndata will be unwritten.\n\n\nI'd also suggest running dd on the same files as postgresql. I suspect \nyou'd find that the layout of the postgresql files isn't that good as \nthey are grown bit by bit, unlike the file created by simply dd'ing a \nlarge file.\n\n> Understand my point: It doesn't matter that there is idle or iowait on the\n> CPU, the postgres executor is not able to drive the I/O rate for two\n> reasons: there is a lot of CPU used for the scan (the 40% you reported) and\n> a lack of asynchrony (the iowait time). That means that by speeding up the\n> CPU you only reduce the first part, but you don't fix the second and v.v.\n>\n> With more aggressive readahead, the second problem (the I/O asynchrony) is\n> handled better by the Linux kernel and filesystem. That's what we're seeing\n> with XFS.\n\nI think your point doesn't hold up. Every time you make it, I come away \nposting another result showing it to be incorrect.\n\nThe point your making doesn't match my experience with *any* storage or \nprogram I've ever used, including postgresql. Your point suggests that \nthe storage system is idle and that postgresql is broken because it \nisn't able to use the resources available...even when the cpu is very \nidle. How can that make sense? The issue here is that the storage \nsystem is very active doing reads on the files...which might be somewhat \npoorly allocated on disk because postgresql grows the tables bit by bit.\n\nI had the same readahead in Reiser and in XFS. The XFS performance was \nbetter because XFS does a better job of large file allocation on disk, \nthus resulting in many fewer seeks (generated by the file system itself) \nto read the files back in. 
As an example, some file systems like UFS \npurposely scatter large files across cylinder groups to avoid forcing \nlarge seeks on small files; one can tune this behavior so that large \nfiles are more tightly allocated.\n\n\n\nOf course, because this is engineering, I have another obligatory data \npoint: This time it's a 4.2GB table using 137,138 32KB pages with \nnearly 41 million rows.\n\nA \"select count(1)\" on the table completes in 14.6 seconds, for an \naverage read rate of 320 MB/s. \n\nOne cpu was idle, the other averaged 32% system time and 68% user time \nfor the 14 second period. This is on a 2.2GHz Opteron. A faster cpu \nwould show increased performance as I really am cpu bound finally. \n\nPostgresql is clearly able to issue the relevant sequential read() \nsystem calls and sink the resulting data without a problem if the file \nsystem is capable of providing the data. It can do this up to a speed \nof ~300MB/s on this class of system. Now it should be fairly simple to \ntweak the few spots where some excess memory copies are being done and \nup this result substantially. I hope postgresql is always using the \nlibc memcpy as that's going to be a lot faster than some private routine.\n\n-- Alan\n\n\n",
"msg_date": "Mon, 21 Nov 2005 09:57:59 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
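The write-caching effect pointed out above is easy to demonstrate by timing the same dd with and without forcing the data to disk; the file name and size are placeholders, and conv=fsync assumes GNU dd:

    # apparent speed: much of the data can still be sitting in the page cache at exit
    $ time dd if=/dev/zero of=/data/bigfile bs=8k count=800000

    # honest speed: dd fsync()s the output file before it exits
    $ time dd if=/dev/zero of=/data/bigfile bs=8k count=800000 conv=fsync

    # equivalent form used elsewhere in this thread
    $ time bash -c "(dd if=/dev/zero of=/data/bigfile bs=8k count=800000 && sync)"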
{
"msg_contents": "Alan,\n\nOn 11/21/05 6:57 AM, \"Alan Stange\" <[email protected]> wrote:\n\n> $ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=800000\n> 800000+0 records in\n> 800000+0 records out\n> \n> real 0m13.780s\n> user 0m0.134s\n> sys 0m13.510s\n> \n> Oops. I just wrote 470MB/s to a file system that has peak write speed\n> of 200MB/s peak.\n\nHow much RAM on this machine?\n \n> Now, you might say that you wrote a 16GB file on an 8 GB machine so this\n> isn't an issue. It does make your dd numbers look fast as some of the\n> data will be unwritten.\n\nThis simple test, at 2x memory correlates very closely to Bonnie++ numbers\nfor sequential scan. What's more, we see close to the same peak in practice\nwith multiple scanners. Furthermore, if you run two of them simultaneously\n(on two filesystems), you can also see the I/O limited.\n \n> I'd also suggest running dd on the same files as postgresql. I suspect\n> you'd find that the layout of the postgresql files isn't that good as\n> they are grown bit by bit, unlike the file created by simply dd'ing a\n> large file.\n\nCan happen if you're not careful with filesystems (see above).\n\nThere's nothing \"wrong\" with the dd test.\n \n> I think your point doesn't hold up. Every time you make it, I come away\n> posting another result showing it to be incorrect.\n\nProve it - your Reiserfs number was about the same.\n\nI also posted an XFS number that was substantially higher than 110-120.\n\n> The point your making doesn't match my experience with *any* storage or\n> program I've ever used, including postgresql. Your point suggests that\n> the storage system is idle and that postgresql is broken because it\n> isn't able to use the resources available...even when the cpu is very\n> idle. How can that make sense? The issue here is that the storage\n> system is very active doing reads on the files...which might be somewhat\n> poorly allocated on disk because postgresql grows the tables bit by bit.\n\nThen you've made my point - if the problem is contiguity of files on disk,\nthen larger allocation blocks would help on the CPU side.\n\nThe objective is clear: given a high performance filesystem, how much of the\navailable bandwidth can Postgres achieve? I think what we're seeing is that\nXFS is dramatically improving that objective.\n \n> I had the same readahead in Reiser and in XFS. The XFS performance was\n> better because XFS does a better job of large file allocation on disk,\n> thus resulting in many fewer seeks (generated by the file system itself)\n> to read the files back in. As an example, some file systems like UFS\n> purposely scatter large files across cylinder groups to avoid forcing\n> large seeks on small files; one can tune this behavior so that large\n> files are more tightly allocated.\n\nOur other tests have used ext3, reiser and Solaris 10 UFS, so this might\nmake some sense.\n\n> Of course, because this is engineering, I have another obligatory data\n> point: This time it's a 4.2GB table using 137,138 32KB pages with\n> nearly 41 million rows.\n> \n> A \"select count(1)\" on the table completes in 14.6 seconds, for an\n> average read rate of 320 MB/s.\n\nSo, assuming that the net memory scan rate is about 2GB/s, and two copies\n(one from FS cache to buffer cache, one from buffer cache to the agg node),\nyou have a 700MB/s filesystem with the equivalent of DirectIO (no FS cache)\nbecause you are reading directly from the I/O cache. 
You got half of that\nbecause the I/O processing in the executor is limited to 320MB/s on that\nfast CPU. \n\nMy point is this: if you were to decrease the filesystem speed to say\n400MB/s and still use the equivalent of DirectIO, I think Postgres would not\ndeliver 320MB/s, but rather something like 220MB/s due to the\nproducer/consumer arch of the executor. If you get that part, then we're on\nthe same track, otherwise we disagree.\n\n> One cpu was idle, the other averaged 32% system time and 68% user time\n> for the 14 second period. This is on a 2.2GHz Opteron. A faster cpu\n> would show increased performance as I really am cpu bound finally.\n\nYep, with the equivalent of DirectIO you are.\n \n- Luke\n\n\n",
"msg_date": "Mon, 21 Nov 2005 10:06:48 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Tom,\n\nOn 11/21/05 6:56 AM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n>> OK - slower this time:\n> \n>> We've seen between 110MB/s and 120MB/s on a wide variety of fast CPU\n>> machines with fast I/O subsystems that can sustain 250MB/s+ using dd, but\n>> which all are capped at 120MB/s when doing sequential scans with different\n>> versions of Postgres.\n> \n> Luke, sometime it would be nice if you would post your raw evidence\n> and let other people do their own analysis. I for one have gotten\n> tired of reading sweeping generalizations unbacked by any data.\n\nThis has partly been a challenge to get others to post their results.\n \n> I find the notion of a magic 120MB/s barrier, independent of either\n> CPU or disk speed, to be pretty dubious to say the least. I would\n> like to know exactly what the \"wide variety\" of data points you\n> haven't shown us are.\n\nI'll try to put up some of them, they've occurred over the last 3 years on\nvarious platforms including:\n- Dual 3.2GHz Xeon, 2 x Adaptec U320 SCSI attached to 6 x 10K RPM disks,\nLinux 2.6.4(?) - 2.6.10 kernel, ext2/3 and Reiser filesystems\n120-130MB/s Postgres seq scan rate on 7.4 and 8.0.\n\n- Dual 1.8 GHz Opteron, 2 x LSI U320 SCSI attached to 6 x 10K RPM disks,\nLinux 2.6.10 kernel, ext2/3 and Reiser filesystems\n110-120MB/s Postgres seq scan rate on 8.0\n\n- Same machine as above running Solaris 10, with UFS filesystem. When I/O\ncaching is tuned, we reach the same 110-120MB/s Postgres seq scan rate\n\n- Sam machine as above with 7 x 15K RPM 144GB disks in an external disk\ntray, same scan rate\n\nOnly when we got these new SATA systems and tried out XFS with large\nreadahead have we been able to break past the 120-130MB/s. After Alan's\npost, it seems that XFS might be a big part of that. I think we'll test\next2/3 against XFS on the same machine to find out.\n\nIt may have to wait a week, as many of us are on vacation.\n\n- Luke\n\n\n",
"msg_date": "Mon, 21 Nov 2005 10:14:29 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nAlan Stange <[email protected]> writes:\n\n> The point your making doesn't match my experience with *any* storage or program\n> I've ever used, including postgresql. Your point suggests that the storage\n> system is idle and that postgresql is broken because it isn't able to use the\n> resources available...even when the cpu is very idle. How can that make sense?\n\nWell I think what he's saying is that Postgres is issuing a read, then waiting\nfor the data to return. Then it does some processing, and goes back to issue\nanother read. The CPU is idle half the time because Postgres isn't capable of\ndoing any work while waiting for i/o, and the i/o system is idle half the time\nwhile the CPU intensive part happens.\n\n(Consider as a pathological example a program that reads 8k then sleeps for\n10ms, and loops doing that 1,000 times. Now consider the same program\noptimized to read 8M asynchronously and sleep for 10s. By the time it's\nfinished sleeping it has probably read in all 8M. Whereas the program that\nread 8k in little chunks interleaved with small sleeps would probably take\ntwice as long and appear to be entirely i/o-bound with 50% iowait and 50%\nidle.)\n\nIt's a reasonable theory and it's not inconsistent with the results you sent.\nBut it's not exactly proven either. Nor is it clear how to improve matters.\nAdding additional threads to handle the i/o adds an enormous amount of\ncomplexity and creates lots of opportunity for other contention that could\neasily eat all of the gains.\n\nI also fear that heading in that direction could push Postgres even further\nfrom the niche of software that works fine even on low end hardware into the\nrealm of software that only works on high end hardware. It's already suffering\na bit from that.\n\n-- \ngreg\n\n",
"msg_date": "21 Nov 2005 14:01:26 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
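Greg's pathological example can be approximated as a throwaway shell toy; the file name and counts are arbitrary, and on a real system kernel readahead and caching will blunt the difference, so treat it only as an illustration of request batching and overlap, not as a benchmark:

    # 1,000 small synchronous reads, each followed by 10ms of "think" time
    $ time for i in $(seq 0 999); do
          dd if=/data/bigfile of=/dev/null bs=8k count=1 skip=$i 2>/dev/null
          sleep 0.01
      done

    # the same data requested up front in one large read that overlaps the think time
    $ time { dd if=/data/bigfile of=/dev/null bs=8M count=1 2>/dev/null & sleep 10; wait; }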
{
"msg_contents": "Greg Stark wrote:\n\n> I also fear that heading in that direction could push Postgres even further\n> from the niche of software that works fine even on low end hardware into the\n> realm of software that only works on high end hardware. It's already suffering\n> a bit from that.\n\nWhat's high end hardware for you? I do development on a Celeron 533\nmachine with 448 MB of RAM and I find it to work well (for a \"slow\"\nvalue of \"well\", certainly.) If you're talking about embedded hardware,\nthat's another matter entirely and I don't think we really support the\nidea of running Postgres on one of those things.\n\nThere's certainly true in that the memory requirements have increased a\nbit, but I don't think it really qualifies as \"high end\" even on 8.1.\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org\nJude: I wish humans laid eggs\nRinglord: Why would you want humans to lay eggs?\nJude: So I can eat them\n",
"msg_date": "Mon, 21 Nov 2005 16:51:47 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Would it be worth first agreeing on a common set of criteria to \nmeasure? I see many data points going back and forth but not much \nagreement on what's worth measuring and how to measure.\n\nI'm not necessarily trying to herd cats, but it sure would be swell to \nhave the several knowledgeable minds here come up with something that \ncould uniformly tested on a range of machines, possibly even integrated \ninto pg_bench or something. Disagreements on criteria or methodology \nshould be methodically testable.\n\nThen I have dreams of a new pg_autotune that would know about these \nkinds of system-level settings.\n\nI haven't been on this list for long, and only using postgres for a \nhandful of years, so forgive it if this has been hashed out before.\n\n-Bill\n-----\nBill McGonigle, Owner Work: 603.448.4440\nBFC Computing, LLC Home: 603.448.1668\[email protected] Mobile: 603.252.2606\nhttp://www.bfccomputing.com/ Pager: 603.442.1833\nJabber: [email protected] Text: [email protected]\nBlog: http://blog.bfccomputing.com/\n\n",
"msg_date": "Mon, 21 Nov 2005 14:58:18 -0500",
"msg_from": "Bill McGonigle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Mon, Nov 21, 2005 at 02:01:26PM -0500, Greg Stark wrote:\n>I also fear that heading in that direction could push Postgres even further\n>from the niche of software that works fine even on low end hardware into the\n>realm of software that only works on high end hardware. It's already suffering\n>a bit from that.\n\nWell, there are are alread a bunch of open source DB's that can handle\nthe low end. postgres is the closest thing to being able to handle the\nhigh end.\n\nMike Stone\n",
"msg_date": "Mon, 21 Nov 2005 14:59:09 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke,\n\nit's time to back yourself up with some numbers. You're claiming the \nneed for a significant rewrite of portions of postgresql and you haven't \ndone the work to make that case. \n\nYou've apparently made some mistakes on the use of dd to benchmark a \nstorage system. Use lmdd and umount the file system before the read \nand post your results. Using a file 2x the size of memory doesn't work \ncorectly. You can quote any other numbers you want, but until you use \nlmdd correctly you should be ignored. Ideally, since postgresql uses \n1GB files, you'll want to use 1GB files for dd as well.\n\nLuke Lonergan wrote:\n> Alan,\n>\n> On 11/21/05 6:57 AM, \"Alan Stange\" <[email protected]> wrote:\n>\n> \n>> $ time dd if=/dev/zero of=/fidb1/bigfile bs=8k count=800000\n>> 800000+0 records in\n>> 800000+0 records out\n>>\n>> real 0m13.780s\n>> user 0m0.134s\n>> sys 0m13.510s\n>>\n>> Oops. I just wrote 470MB/s to a file system that has peak write speed\n>> of 200MB/s peak.\n>> \n> How much RAM on this machine?\n> \nDoesn't matter. The result will always be wrong without a call to \nsync() or fsync() before the close() if you're trying to measure the \nspeed of the disk subsystem. Add that sync() and the result will be \ncorrect for any memory size. Just for completeness: Solaris implicitly \ncalls sync() as part of close. Bonnie used to get this wrong, so \nquoting Bonnie isn't any good. Note that on some systems using 2x \nmemory for these tests is almost OK. For example, Solaris used to have \na hiwater mark that would throttle processes and not allow more than a \nfew 100K of writes to be outstanding on a file. Linux/XFS clearly \nallows a lot of write data to be outstanding. It's best to understand \nthe tools and know what they do and why they can be wrong than simply \nquoting some other tool that makes the same mistakes.\n\nI find that postgresql is able to achieve about 175MB/s on average from \na system capable of delivering 200MB/s peak and it does this with a lot \nof cpu time to spare. Maybe dd can do a little better and deliver \n185MB/s. If I were to double the speed of my IO system, I might find \nthat a single postgresql instance can sink about 300MB/s of data (based \non the last numbers I posted). That's why I have multi-cpu opterons and \nmore than one query/client as they soak up the remaining IO capacity.\n\nIt is guaranteed that postgresql will hit some threshold of performance \nin the future and possible rewrites of some core functionality will be \nneeded, but no numbers posted here so far have made the case that \npostgresql is in trouble now. In the mean time, build balanced \nsystems with cpus that match the capabilities of the storage subsystems, \nuse 32KB block sizes for large memory databases that are doing lots of \nsequential scans, use file systems tuned for large files, use opterons, etc.\n\n\nAs always, one has to post some numbers. Here's an example of how dd \ndoesn't do what you might expect:\n\nmite02:~ # lmdd if=internal of=/fidb2/bigfile bs=8k count=2k\n16.7772 MB in 0.0235 secs, 714.5931 MB/sec\n\nmite02:~ # lmdd if=internal of=/fidb2/bigfile bs=8k count=2k sync=1\n16.7772 MB in 0.1410 secs, 118.9696 MB/sec\n\nBoth numbers are \"correct\". But one measures the kernels ability to \nabsorb 2000 8KB writes with no guarantee that the data is on disk and \nthe second measures the disk subsystems ability to write 16MB of data. \ndd is equivalent to the first result. 
You can't use the first type of \nresult and complain that postgresql is slow. If you wrote 16G of data \non a machine with 8G memory then your dd result is possibly too fast by \na factor of two as 8G of the data might not be on disk yet. We won't \nknow until you post some results.\n\nCheers,\n\n-- Alan\n\n",
"msg_date": "Mon, 21 Nov 2005 16:53:41 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
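Putting the procedural points above together, one run that avoids both pitfalls (unsynced writes, and reading back data that is still cached) might look like this; the device, mount point, and file size are assumptions, with 1GB chosen to match Postgres' own segment size as suggested above:

    # write test: sync=1 makes lmdd flush the data before reporting the rate
    $ lmdd if=internal of=/data/bigfile bs=8k count=131072 sync=1

    # unmount/remount so the read test cannot be served from memory
    $ umount /data && mount /data

    # read test
    $ lmdd if=/data/bigfile of=/dev/null bs=8k count=131072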
{
"msg_contents": "Alan,\n\nUnless noted otherwise all results posted are for block device readahead set\nto 16M using \"blockdev --setra=16384 <block_device>\". All are using the\n2.6.9-11 Centos 4.1 kernel.\n\nFor those who don't have lmdd, here is a comparison of two results on an\next2 filesystem:\n\n============================================================================\n[root@modena1 dbfast1]# time bash -c \"(dd if=/dev/zero of=/dbfast1/bigfile\nbs=8k count=800000 && sync)\"\n800000+0 records in\n800000+0 records out\n\nreal 0m33.057s\nuser 0m0.116s\nsys 0m13.577s\n\n[root@modena1 dbfast1]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\ncount=800000 sync=1\n6553.6000 MB in 31.2957 secs, 209.4092 MB/sec\n\nreal 0m33.032s\nuser 0m0.087s\nsys 0m13.129s\n============================================================================\n\nSo lmdd with sync=1 is apparently equivalent to a sync after a dd.\n\nI use 2x memory with dd for the *READ* performance testing, but let's make\nsure things are synced on both sides for this set of comparisons.\n\nFirst, let's test ext2 versus \"ext3, data=ordered\", versus reiserfs versus\nxfs:\n\n\n\n\n",
"msg_date": "Mon, 21 Nov 2005 15:44:35 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Mon, Nov 21, 2005 at 10:14:29AM -0800, Luke Lonergan wrote:\n>This has partly been a challenge to get others to post their results.\n\nYou'll find that people respond better if you don't play games with\nthem.\n",
"msg_date": "Mon, 21 Nov 2005 18:54:44 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nLooks like Postgres gets sensible scan rate scaling as the filesystem speed\nincreases, as shown below. I'll drop my 120MB/s observation - perhaps CPUs\ngot faster since I last tested this.\n\nThe scaling looks like 64% of the I/O subsystem speed is available to the\nexecutor - so as the I/O subsystem increases in scan rate, so does Postgres'\nexecutor scan speed.\n\nSo that leaves the question - why not more than 64% of the I/O scan rate?\nAnd why is it a flat 64% as the I/O subsystem increases in speed from\n333-400MB/s?\n\n- Luke\n \n================= Results ===================\n\nUnless noted otherwise all results posted are for block device readahead set\nto 16M using \"blockdev --setra=16384 <block_device>\". All are using the\n2.6.9-11 Centos 4.1 kernel.\n\nFor those who don't have lmdd, here is a comparison of two results on an\next2 filesystem:\n\n============================================================================\n[root@modena1 dbfast1]# time bash -c \"(dd if=/dev/zero of=/dbfast1/bigfile\nbs=8k count=800000 && sync)\"\n800000+0 records in\n800000+0 records out\n\nreal 0m33.057s\nuser 0m0.116s\nsys 0m13.577s\n\n[root@modena1 dbfast1]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\ncount=800000 sync=1\n6553.6000 MB in 31.2957 secs, 209.4092 MB/sec\n\nreal 0m33.032s\nuser 0m0.087s\nsys 0m13.129s\n============================================================================\n\nSo lmdd with sync=1 is equivalent to a sync after a dd.\n\nI use 2x memory with dd for the *READ* performance testing, but let's make\nsure things are synced on both write and read for this set of comparisons.\n\nFirst, let's test ext2 versus \"ext3, data=ordered\", versus xfs:\n\n============================================================================\n16GB write, then read\n============================================================================\n-----------------------\next2:\n-----------------------\n[root@modena1 dbfast1]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 144.2670 secs, 113.5672 MB/sec\n\n[root@modena1 dbfast1]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 49.3766 secs, 331.8170 MB/sec\n\n-----------------------\next3, data=ordered:\n-----------------------\n[root@modena1 ~]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 137.1607 secs, 119.4511 MB/sec\n\n[root@modena1 ~]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 48.7398 secs, 336.1527 MB/sec\n\n-----------------------\nxfs:\n-----------------------\n[root@modena1 ~]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 52.6141 secs, 311.3994 MB/sec\n\n[root@modena1 ~]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\ncount=2000000 sync=1\n16384.0000 MB in 40.2807 secs, 406.7453 MB/sec\n============================================================================\n\nI'm liking xfs! Something about the way files are layed out, as Alan\nsuggested seems to dramatically improve write performance and perhaps\nconsequently the read also improves. There doesn't seem to be a difference\nbetween ext3 and ext2, as expected.\n\nNow on to the Postgres 8 tests. We'll do a 16GB table size to ensure that\nwe aren't reading from the read cache. I'll write this file through\nPostgres COPY to be sure that the file layout is as Postgres creates it. 
The\nalternative would be to use COPY once, then tar/untar onto different\nfilesystems, but that may not duplicate the real world results.\n\nThese tests will use Bizgres 0_8_1, which is an augmented 8.0.3. None of\nthe augmentations act to improve the executor I/O though, so for these\npurposes it should be the same as 8.0.3.\n\n============================================================================\n26GB of DBT-3 data from the lineitem table\n============================================================================\nllonergan=# select relpages from pg_class where relname='lineitem';\n relpages \n----------\n 3159138\n(1 row)\n\n3159138*8192/1000000\n25879 Million Bytes, or 25.9GB\n\n-----------------------\nxfs:\n-----------------------\nllonergan=# \\timing\nTiming is on.\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 394908.501 ms\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 99425.223 ms\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 99187.205 ms\n\n-----------------------\next2:\n-----------------------\nllonergan=# select relpages from pg_class where relname='lineitem';\n relpages \n----------\n 3159138\n(1 row)\n\nllonergan=# \\timing\nTiming is on.\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 395286.475 ms\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 195756.381 ms\nllonergan=# select count(1) from lineitem;\n count \n-----------\n 119994608\n(1 row)\n\nTime: 122822.090 ms\n============================================================================\nAnalysis of Postgres 8.0.3 results\n============================================================================\n ext2 xfs\nWrite Speed 114 311\nRead Speed 332 407\nPostgres Seq Scan Speed 212 263\nScan % of lmdd Read Speed 63.9% 64.6%\n\nWell - looks like we get linear scaling with disk/file subsystem speedup.\n\n- Luke\n\n\n",
"msg_date": "Mon, 21 Nov 2005 20:35:26 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
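For reference, the derived rates in the summary table above come straight from relpages and the \timing output; a small helper for redoing that arithmetic on other runs (the 8192-byte block size and the sample values are taken from the results above):

    $ awk -v pages=3159138 -v blk=8192 -v ms=99187.205 \
        'BEGIN { mb = pages*blk/1e6; s = ms/1000;
                 printf "%.0f MB in %.1f s = %.0f MB/s\n", mb, s, mb/s }'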
{
"msg_contents": "Luke Lonergan wrote:\n\n> So that leaves the question - why not more than 64% of the I/O scan rate?\n> And why is it a flat 64% as the I/O subsystem increases in speed from\n> 333-400MB/s?\n> \n\nIt might be interesting to see what effect reducing the cpu consumption \n entailed by the count aggregation has - by (say) writing a little bit \nof code to heap scan the desired relation (sample attached).\n\nCheers\n\nMark",
"msg_date": "Tue, 22 Nov 2005 18:10:24 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke,\n\n- XFS will probably generate better data rates with larger files. You \nreally need to use the same file size as does postgresql. Why compare \nthe speed to reading a 16G file and the speed to reading a 1G file. \nThey won't be the same. If need be, write some code that does the test \nor modify lmdd to read a sequence of 1G files. Will this make a \ndifference? You don't know until you do it. Any time you cross a \ncouple of 2^ powers in computing, you should expect some differences.\n\n- you did umount the file system before reading the 16G file back in? \nBecause if you didn't then your read numbers are possibly garbage. \nWhen the read began, 8G of the file was in memory. You'd be very naive \nto think that somehow the read of the first 8GB somehow flushed that \ncached data out of memory. After all, why would the kernel flush pages \nfrom file X when you're in the middle of a sequential read of...file \nX? I'm not sure how Linux handles this, but Solaris would've found the \n8G still in memory.\n\n- What was the hardware and disk configuration on which these numbers \nwere generated? For example, if you have a U320 controller, how did \nthe read rate become larger than 320MB/s?\n\n- how did the results change from before? Just posting the new results \nis misleading given all the boasting we've had to read about your past \nresults.\n\n- there are two results below for writing to ext2: one at 209 MB/s and \none at 113MB/s. Why are they different?\n\n- what was the cpu usage during these tests? We see postgresql doing \n200+MB/s of IO. You've claimed many times that the machine would be \ncompute bound at lower IO rates, so how much idle time does the cpu \nstill have?\n\n- You wrote: \"We'll do a 16GB table size to ensure that we aren't \nreading from the read cache. \" Do you really believe that?? You have \nto umount the file system before each test to ensure you're really \nmeasuring the disk IO rate. If I'm reading your results correctly, it \nlooks like you have three results for ext and xfs, each of which is \nfaster than the prior one. If I'm reading this correctly, then it looks \nlike one is clearly reading from the read cache.\n\n- Gee, it's so nice of you to drop your 120MB/s observation. I guess my \nreading at 300MB/s wasn't convincing enough. Yeah, I think it was the \ncpus too...\n\n- I wouldn't focus on the flat 64% of the data rate number. It'll \nprobably be different on other systems.\n\nI'm all for testing and testing. It seems you still cut a corner \nwithout umounting the file system first. Maybe I'm a little too old \nschool on this, but I wouldn't spend a dime until you've done the \nmeasurements correctly. \n\nGood Luck. \n\n-- Alan\n\n\n\nLuke Lonergan wrote:\n> Alan,\n>\n> Looks like Postgres gets sensible scan rate scaling as the filesystem speed\n> increases, as shown below. I'll drop my 120MB/s observation - perhaps CPUs\n> got faster since I last tested this.\n>\n> The scaling looks like 64% of the I/O subsystem speed is available to the\n> executor - so as the I/O subsystem increases in scan rate, so does Postgres'\n> executor scan speed.\n>\n> So that leaves the question - why not more than 64% of the I/O scan rate?\n> And why is it a flat 64% as the I/O subsystem increases in speed from\n> 333-400MB/s?\n>\n> - Luke\n> \n> ================= Results ===================\n>\n> Unless noted otherwise all results posted are for block device readahead set\n> to 16M using \"blockdev --setra=16384 <block_device>\". 
All are using the\n> 2.6.9-11 Centos 4.1 kernel.\n>\n> For those who don't have lmdd, here is a comparison of two results on an\n> ext2 filesystem:\n>\n> ============================================================================\n> [root@modena1 dbfast1]# time bash -c \"(dd if=/dev/zero of=/dbfast1/bigfile\n> bs=8k count=800000 && sync)\"\n> 800000+0 records in\n> 800000+0 records out\n>\n> real 0m33.057s\n> user 0m0.116s\n> sys 0m13.577s\n>\n> [root@modena1 dbfast1]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\n> count=800000 sync=1\n> 6553.6000 MB in 31.2957 secs, 209.4092 MB/sec\n>\n> real 0m33.032s\n> user 0m0.087s\n> sys 0m13.129s\n> ============================================================================\n>\n> So lmdd with sync=1 is equivalent to a sync after a dd.\n>\n> I use 2x memory with dd for the *READ* performance testing, but let's make\n> sure things are synced on both write and read for this set of comparisons.\n>\n> First, let's test ext2 versus \"ext3, data=ordered\", versus xfs:\n>\n> ============================================================================\n> 16GB write, then read\n> ============================================================================\n> -----------------------\n> ext2:\n> -----------------------\n> [root@modena1 dbfast1]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 144.2670 secs, 113.5672 MB/sec\n>\n> [root@modena1 dbfast1]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 49.3766 secs, 331.8170 MB/sec\n>\n> -----------------------\n> ext3, data=ordered:\n> -----------------------\n> [root@modena1 ~]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 137.1607 secs, 119.4511 MB/sec\n>\n> [root@modena1 ~]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 48.7398 secs, 336.1527 MB/sec\n>\n> -----------------------\n> xfs:\n> -----------------------\n> [root@modena1 ~]# time lmdd if=/dev/zero of=/dbfast1/bigfile bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 52.6141 secs, 311.3994 MB/sec\n>\n> [root@modena1 ~]# time lmdd if=/dbfast1/bigfile of=/dev/null bs=8k\n> count=2000000 sync=1\n> 16384.0000 MB in 40.2807 secs, 406.7453 MB/sec\n> ============================================================================\n>\n> I'm liking xfs! Something about the way files are layed out, as Alan\n> suggested seems to dramatically improve write performance and perhaps\n> consequently the read also improves. There doesn't seem to be a difference\n> between ext3 and ext2, as expected.\n>\n> Now on to the Postgres 8 tests. We'll do a 16GB table size to ensure that\n> we aren't reading from the read cache. I'll write this file through\n> Postgres COPY to be sure that the file layout is as Postgres creates it. The\n> alternative would be to use COPY once, then tar/untar onto different\n> filesystems, but that may not duplicate the real world results.\n>\n> These tests will use Bizgres 0_8_1, which is an augmented 8.0.3. 
None of\n> the augmentations act to improve the executor I/O though, so for these\n> purposes it should be the same as 8.0.3.\n>\n> ============================================================================\n> 26GB of DBT-3 data from the lineitem table\n> ============================================================================\n> llonergan=# select relpages from pg_class where relname='lineitem';\n> relpages \n> ----------\n> 3159138\n> (1 row)\n>\n> 3159138*8192/1000000\n> 25879 Million Bytes, or 25.9GB\n>\n> -----------------------\n> xfs:\n> -----------------------\n> llonergan=# \\timing\n> Timing is on.\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 394908.501 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 99425.223 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 99187.205 ms\n>\n> -----------------------\n> ext2:\n> -----------------------\n> llonergan=# select relpages from pg_class where relname='lineitem';\n> relpages \n> ----------\n> 3159138\n> (1 row)\n>\n> llonergan=# \\timing\n> Timing is on.\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 395286.475 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 195756.381 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> -----------\n> 119994608\n> (1 row)\n>\n> Time: 122822.090 ms\n> ============================================================================\n> Analysis of Postgres 8.0.3 results\n> ============================================================================\n> ext2 xfs\n> Write Speed 114 311\n> Read Speed 332 407\n> Postgres Seq Scan Speed 212 263\n> Scan % of lmdd Read Speed 63.9% 64.6%\n>\n> Well - looks like we get linear scaling with disk/file subsystem speedup.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Tue, 22 Nov 2005 09:26:38 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Greg Stark wrote:\n> \n> Alan Stange <[email protected]> writes:\n> \n> > The point your making doesn't match my experience with *any* storage or program\n> > I've ever used, including postgresql. Your point suggests that the storage\n> > system is idle and that postgresql is broken because it isn't able to use the\n> > resources available...even when the cpu is very idle. How can that make sense?\n> \n> Well I think what he's saying is that Postgres is issuing a read, then waiting\n> for the data to return. Then it does some processing, and goes back to issue\n> another read. The CPU is idle half the time because Postgres isn't capable of\n> doing any work while waiting for i/o, and the i/o system is idle half the time\n> while the CPU intensive part happens.\n> \n> (Consider as a pathological example a program that reads 8k then sleeps for\n> 10ms, and loops doing that 1,000 times. Now consider the same program\n> optimized to read 8M asynchronously and sleep for 10s. By the time it's\n> finished sleeping it has probably read in all 8M. Whereas the program that\n> read 8k in little chunks interleaved with small sleeps would probably take\n> twice as long and appear to be entirely i/o-bound with 50% iowait and 50%\n> idle.)\n> \n> It's a reasonable theory and it's not inconsistent with the results you sent.\n> But it's not exactly proven either. Nor is it clear how to improve matters.\n> Adding additional threads to handle the i/o adds an enormous amount of\n> complexity and creates lots of opportunity for other contention that could\n> easily eat all of the gains.\n\nPerfect summary. We have a background writer now. Ideally we would\nhave a background reader, that reads-ahead blocks into the buffer cache.\nThe problem is that while there is a relatively long time between a\nbuffer being dirtied and the time it must be on disk (checkpoint time),\nthe read-ahead time is much shorter, requiring some kind of quick\n\"create a thread\" approach that could easily bog us down as outlined\nabove.\n\nRight now the file system will do read-ahead for a heap scan (but not an\nindex scan), but even then, there is time required to get that kernel\nblock into the PostgreSQL shared buffers, backing up Luke's observation\nof heavy memcpy() usage.\n\nSo what are our options? mmap()? I have no idea. Seems larger page\nsize does help.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Nov 2005 19:13:20 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Bruce Momjian wrote:\n> Greg Stark wrote:\n> \n>> Alan Stange <[email protected]> writes:\n>>\n>> \n>>> The point your making doesn't match my experience with *any* storage or program\n>>> I've ever used, including postgresql. Your point suggests that the storage\n>>> system is idle and that postgresql is broken because it isn't able to use the\n>>> resources available...even when the cpu is very idle. How can that make sense?\n>>> \n>> Well I think what he's saying is that Postgres is issuing a read, then waiting\n>> for the data to return. Then it does some processing, and goes back to issue\n>> another read. The CPU is idle half the time because Postgres isn't capable of\n>> doing any work while waiting for i/o, and the i/o system is idle half the time\n>> while the CPU intensive part happens.\n>>\n>> (Consider as a pathological example a program that reads 8k then sleeps for\n>> 10ms, and loops doing that 1,000 times. Now consider the same program\n>> optimized to read 8M asynchronously and sleep for 10s. By the time it's\n>> finished sleeping it has probably read in all 8M. Whereas the program that\n>> read 8k in little chunks interleaved with small sleeps would probably take\n>> twice as long and appear to be entirely i/o-bound with 50% iowait and 50%\n>> idle.)\n>>\n>> It's a reasonable theory and it's not inconsistent with the results you sent.\n>> But it's not exactly proven either. Nor is it clear how to improve matters.\n>> Adding additional threads to handle the i/o adds an enormous amount of\n>> complexity and creates lots of opportunity for other contention that could\n>> easily eat all of the gains.\n>> \n>\n> Perfect summary. We have a background writer now. Ideally we would\n> have a background reader, that reads-ahead blocks into the buffer cache.\n> The problem is that while there is a relatively long time between a\n> buffer being dirtied and the time it must be on disk (checkpoint time),\n> the read-ahead time is much shorter, requiring some kind of quick\n> \"create a thread\" approach that could easily bog us down as outlined\n> above.\n>\n> Right now the file system will do read-ahead for a heap scan (but not an\n> index scan), but even then, there is time required to get that kernel\n> block into the PostgreSQL shared buffers, backing up Luke's observation\n> of heavy memcpy() usage.\n>\n> So what are our options? mmap()? I have no idea. Seems larger page\n> size does help.\nFor sequential scans, you do have a background reader. It's the \nkernel. As long as you don't issue a seek() between read() calls, the \nkernel will get the hint about sequential IO and begin to perform a read \nahead for you. This is where the above analysis isn't quite right: \nwhile postgresql is processing the returned data from the read() call, \nthe kernel has also issued reads as part of the read ahead, keeping the \ndevice busy while the cpu is busy. (I'm assuming these details for \nLinux; Solaris/UFS does work this way). Issue one seek on the file and \nthe read ahead algorithm will back off for a while. This was my point \nabout some descriptions of how the system works not being sensible.\n\nIf your goal is sequential IO, then one must use larger block sizes. \nNo one would use 8KB IO for achieving high sequential IO rates. Simply \nput, read() is about the slowest way to get 8KB of data. Switching \nto 32KB blocks reduces all the system call overhead by a large margin. \nLarger blocks would be better still, up to the stripe size of your \nmirror. 
(Of course, you're using a mirror and not raid5 if you care \nabout performance.)\n\nI don't think the memcpy of data from the kernel to userspace is that \nbig of an issue right now. dd and all the high end network interfaces \nmanage OK doing it, so I'd expect postgresql to do all right with it now \nyet too. Direct IO will avoid that memcpy, but then you also don't get \nany caching of the files in memory. I'd be more concerned about any \nmemcpy calls or general data management within postgresql. Does \npostgresql use the platform specific memcpy() in libc? Some care might \nbe needed to ensure that the memory blocks within postgresql are all \nproperly aligned to make sure that one isn't ping-ponging cache lines \naround (usually done by padding the buffer sizes by an extra 32 bytes or \nL1 line size). Whatever you do, all the usual high performance \ncomputing tricks should be used prior to considering any rewriting of \nmajor code sections.\n\nPersonally, I'd like to see some detailed profiling being done using \nhardware counters for cpu cycles and cache misses, etc. Given the poor \nquality of work that has been discussed here in this thread, I don't \nhave much confidence in any other additional results at this time. \nNone of the analysis would be acceptable in any environment in which \nI've worked. Be sure to take a look at Sun's free Workshop tools as \nthey are excellent for this sort of profiling and one doesn't need to \nrecompile to use them. If I get a little time in the next week or two \nI might take a crack at this.\n\nCheers,\n\n-- Alan\n\n",
"msg_date": "Tue, 22 Nov 2005 22:57:16 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nAlan Stange <[email protected]> writes:\n\n> For sequential scans, you do have a background reader. It's the kernel. As\n> long as you don't issue a seek() between read() calls, the kernel will get the\n> hint about sequential IO and begin to perform a read ahead for you. This is\n> where the above analysis isn't quite right: while postgresql is processing the\n> returned data from the read() call, the kernel has also issued reads as part of\n> the read ahead, keeping the device busy while the cpu is busy. (I'm assuming\n> these details for Linux; Solaris/UFS does work this way). Issue one seek on\n> the file and the read ahead algorithm will back off for a while. This was my\n> point about some descriptions of how the system works not being sensible.\n\nWell that's certainly the hope. But we don't know that this is actually as\neffective as you assume it is. It's awfully hard in the kernel to make much\nmore than a vague educated guess about what kind of readahead would actually\nhelp. \n\nThis is especially true when a file isn't really being accessed in a\nsequential fashion as Postgres may well do if, for example, multiple backends\nare reading the same file. And as you pointed out it doesn't help at all for\nrandom access index scans.\n\n> If your goal is sequential IO, then one must use larger block sizes. No one\n> would use 8KB IO for achieving high sequential IO rates. Simply put, read()\n> is about the slowest way to get 8KB of data. Switching to 32KB blocks\n> reduces all the system call overhead by a large margin. Larger blocks would be\n> better still, up to the stripe size of your mirror. (Of course, you're using\n> a mirror and not raid5 if you care about performance.)\n\nSwitching to 32kB blocks throughout Postgres has pros but also major cons, not\nthe least is *extra* i/o for random access read patterns. One of the possible\nadvantages of the suggestions that were made, the ones you're shouting down,\nwould actually be the ability to use 32kB scatter/gather reads without\nnecessarily switching block sizes.\n\n(Incidentally, your parenthetical comment is a bit confused. By \"mirror\" I\nimagine you're referring to raid1+0 since mirrors alone, aka raid1, aren't a\npopular way to improve performance. But raid5 actually performs better than\nraid1+0 for sequential reads.)\n\n> Does postgresql use the platform specific memcpy() in libc? Some care might\n> be needed to ensure that the memory blocks within postgresql are all\n> properly aligned to make sure that one isn't ping-ponging cache lines around\n> (usually done by padding the buffer sizes by an extra 32 bytes or L1 line\n> size). Whatever you do, all the usual high performance computing tricks\n> should be used prior to considering any rewriting of major code sections.\n\nSo your philosophy is to worry about microoptimizations before worrying about\narchitectural issues?\n\n\n-- \ngreg\n\n",
"msg_date": "22 Nov 2005 23:21:36 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan Stange wrote:\n> Bruce Momjian wrote:\n> > Right now the file system will do read-ahead for a heap scan (but not an\n> > index scan), but even then, there is time required to get that kernel\n> > block into the PostgreSQL shared buffers, backing up Luke's observation\n> > of heavy memcpy() usage.\n> >\n> > So what are our options? mmap()? I have no idea. Seems larger page\n> > size does help.\n\n> For sequential scans, you do have a background reader. It's the \n> kernel. As long as you don't issue a seek() between read() calls, the \n\nI guess you missed my text of \"Right now the file system will do\nread-ahead\", meaning the kernel.\n\n> I don't think the memcpy of data from the kernel to userspace is that \n> big of an issue right now. dd and all the high end network interfaces \n> manage OK doing it, so I'd expect postgresql to do all right with it now \n> yet too. Direct IO will avoid that memcpy, but then you also don't get \n> any caching of the files in memory. I'd be more concerned about any \n> memcpy calls or general data management within postgresql. Does \n> postgresql use the platform specific memcpy() in libc? Some care might \n> be needed to ensure that the memory blocks within postgresql are all \n> properly aligned to make sure that one isn't ping-ponging cache lines \n> around (usually done by padding the buffer sizes by an extra 32 bytes or \n> L1 line size). Whatever you do, all the usual high performance \n> computing tricks should be used prior to considering any rewriting of \n> major code sections.\n\nWe have dealt with alignment and MemCpy is what we used for small-sized\ncopies to reduce function call overhead. If you want to improve it,\nfeel free to take a look.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 22 Nov 2005 23:53:16 -0500 (EST)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Bruce,\n\nOn 11/22/05 4:13 PM, \"Bruce Momjian\" <[email protected]> wrote:\n\n> Perfect summary. We have a background writer now. Ideally we would\n> have a background reader, that reads-ahead blocks into the buffer cache.\n> The problem is that while there is a relatively long time between a\n> buffer being dirtied and the time it must be on disk (checkpoint time),\n> the read-ahead time is much shorter, requiring some kind of quick\n> \"create a thread\" approach that could easily bog us down as outlined\n> above.\n\nYes, the question is \"how much read-ahead buffer is needed to equate to the\n38% of I/O wait time in the current executor profile?\"\n\nThe idea of asynchronous buffering would seem appropriate if the executor\nwould use the 38% of time as useful work.\n\nA background reader is an interesting approach - it would require admin\nmanagement of buffers where AIO would leave that in the kernel. The\nadvantage over AIO would be more universal platform support I suppose?\n \n> Right now the file system will do read-ahead for a heap scan (but not an\n> index scan), but even then, there is time required to get that kernel\n> block into the PostgreSQL shared buffers, backing up Luke's observation\n> of heavy memcpy() usage.\n\nAs evidenced by the 16MB readahead setting still resulting in only 36% IO\nwait.\n\n> So what are our options? mmap()? I have no idea. Seems larger page\n> size does help.\n\nNot sure about that, we used to run with 32KB page size and I didn't see a\nbenefit on seq scan at all. I haven't seen tests in this thread that\ncompare 8K to 32K. \n\n- Luke\n\n\n",
"msg_date": "Wed, 23 Nov 2005 09:51:06 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nWhy not contribute something - put up proof of your stated 8KB versus 32KB\npage size improvement.\n\n- Luke",
"msg_date": "Wed, 23 Nov 2005 09:53:04 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Why not contribute something - put up proof of your stated 8KB versus \n> 32KB page size improvement.\n\nI did observe that 32KB block sizes were a significant win \"for our \nusage patterns\". It might be a win for any of the following reasons:\n\n0) The preliminaries: ~300GB database with about ~50GB daily \nturnover. Our data is fairly reasonably grouped. If we're getting one \nitem on a page we're usually looking at the other items as well.\n\n1) we can live with a smaller FSM size. We were often leaking pages \nwith a 10M page FSM setting. With 32K pages, a 10M FSM size is \nsufficient. Yes, the solution to this is \"run vacuum more often\", but \nwhen the vacuum was taking 10 hours at a time, that was hard to do.\n\n2) The typical datum size in our largest table is about 2.8KB, which is \nmore than 1/4 page size thus resulting in the use of a toast table. \nSwitching to 32KB pages allows us to get a decent storage of this data \ninto the main tables, thus avoiding another table and associated large \nindex. Not having the extra index in memory for a table with 90M rows \nis probably beneficial.\n\n3) vacuum time has been substantially reduced. Vacuum analyze now run \nin the 2 to 3 hour range depending on load.\n\n4) less cpu time spent in the kernel. We're basically doing 1/4 as many \nsystem calls. \n\nOverall the system has now been working well. We used to see the \ndatabase being a bottleneck at times, but now it's keeping up nicely.\n\nHope this helps.\n\nHappy Thanksgiving!\n\n-- Alan\n",
"msg_date": "Wed, 23 Nov 2005 17:00:37 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan,\n\nOn 11/23/05 2:00 PM, \"Alan Stange\" <[email protected]> wrote:\n\n> Luke Lonergan wrote:\n>> Why not contribute something - put up proof of your stated 8KB versus\n>> 32KB page size improvement.\n> \n> I did observe that 32KB block sizes were a significant win \"for our\n> usage patterns\". It might be a win for any of the following reasons:\n> (* big snip *)\n\nThough all of what you relate is interesting, it seems irrelevant to your\nearlier statement here:\n\n>> Alan Stange <[email protected]> writes:\n>> If your goal is sequential IO, then one must use larger block sizes.\n>> No one would use 8KB IO for achieving high sequential IO rates. Simply\n>> put, read() is about the slowest way to get 8KB of data. Switching\n>> to 32KB blocks reduces all the system call overhead by a large margin.\n>> Larger blocks would be better still, up to the stripe size of your\n>> mirror. (Of course, you're using a mirror and not raid5 if you care\n>> about performance.)\n\nAnd I am interested in seeing if your statement is correct. Do you have any\nproof of this to share?\n\n- Luke\n\n\n",
"msg_date": "Wed, 23 Nov 2005 17:50:57 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\nThis is an excellent idea – unfortunately I’m in Maui right now (Mahalo!)\nand I’m not getting to testing with this. My first try was with 8.0.3 and\nit’s an 8.1 function I presume.\n\nNot to be lazy – but any hint as to how to do the same thing for 8.0?\n\n- Luke\n\n\nOn 11/21/05 9:10 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Luke Lonergan wrote:\n> \n>> > So that leaves the question - why not more than 64% of the I/O scan rate?\n>> > And why is it a flat 64% as the I/O subsystem increases in speed from\n>> > 333-400MB/s?\n>> >\n> \n> It might be interesting to see what effect reducing the cpu consumption\n> entailed by the count aggregation has - by (say) writing a little bit\n> of code to heap scan the desired relation (sample attached).\n> \n> Cheers\n> \n> Mark\n> \n> \n> \n> \n> \n> \n> /*\n> * fastcount.c\n> *\n> * Do a count that uses considerably less CPU time than an aggregate.\n> */\n> \n> #include \"postgres.h\"\n> \n> #include \"funcapi.h\"\n> #include \"access/heapam.h\"\n> #include \"catalog/namespace.h\"\n> #include \"utils/builtins.h\"\n> \n> \n> extern Datum fastcount(PG_FUNCTION_ARGS);\n> \n> \n> PG_FUNCTION_INFO_V1(fastcount);\n> Datum\n> fastcount(PG_FUNCTION_ARGS)\n> {\n> text *relname = PG_GETARG_TEXT_P(0);\n> RangeVar *relrv;\n> Relation rel;\n> HeapScanDesc scan;\n> HeapTuple tuple;\n> int64 result = 0;\n> \n> /* Use the name to get a suitable range variable and open the relation. */\n> relrv = makeRangeVarFromNameList(textToQualifiedNameList(relname));\n> rel = heap_openrv(relrv, AccessShareLock);\n> \n> /* Start a heap scan on the relation. */\n> scan = heap_beginscan(rel, SnapshotNow, 0, NULL);\n> while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)\n> {\n> result++;\n> }\n> \n> /* End the scan and close up the relation. */\n> heap_endscan(scan);\n> heap_close(rel, AccessShareLock);\n> \n> \n> PG_RETURN_INT64(result);\n> }",
"msg_date": "Wed, 23 Nov 2005 18:29:49 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n> This is an excellent idea – unfortunately I’m in Maui right now \n> (Mahalo!) and I’m not getting to testing with this. My first try was \n> with 8.0.3 and it’s an 8.1 function I presume.\n> \n> Not to be lazy – but any hint as to how to do the same thing for 8.0?\n> \n\nYeah, it's 8.1 - I didn't think to check against 8.0. The attached \nvariant works with 8.0.4 (textToQualifiedNameList needs 2 args)\n\ncheers\n\nMark\n\nP.s. Maui eh, sounds real nice.",
"msg_date": "Thu, 24 Nov 2005 18:34:03 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\nSee the results below and analysis - the pure HeapScan gets 94.1% of the max\navailable read bandwidth (cool!). Nothing wrong with heapscan in the\npresence of large readahead, which is good news.\n\nThat says it's something else in the path. As you probably know there is a\npage lock taken, a copy of the tuple from the page, lock removed, count\nincremented for every iteration of the agg node on a count(*). Is the same\ntrue of a count(1)?\n\nI recall that the profile is full of memcpy and memory context calls.\n\nIt would be nice to put some tracers into the executor and see where the\ntime is going. I'm also curious about the impact of the new 8.1 virtual\ntuples in reducing the executor overhead. In this case my bet's on the agg\nnode itself, what do you think?\n\n- Luke\n\nOn 11/21/05 9:10 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Luke Lonergan wrote:\n> \n>> So that leaves the question - why not more than 64% of the I/O scan rate?\n>> And why is it a flat 64% as the I/O subsystem increases in speed from\n>> 333-400MB/s?\n>> \n> \n> It might be interesting to see what effect reducing the cpu consumption\n> entailed by the count aggregation has - by (say) writing a little bit\n> of code to heap scan the desired relation (sample attached).\n\nOK - here are results for a slightly smaller (still bigger than RAM)\nlineitem on the same machine, using the same xfs filesystem that achieved\n407MB/s:\n\n============================================================================\n12.9GB of DBT-3 data from the lineitem table\n============================================================================\nllonergan=# select relpages from pg_class where relname='lineitem';\n relpages \n----------\n 1579270\n(1 row)\n\n1579270*8192/1000000\n12937 Million Bytes or 12.9GB\n\nllonergan=# \\timing\nTiming is on.\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 197870.105 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 49912.164 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 49218.739 ms\n\nllonergan=# select fastcount('lineitem');\n fastcount \n-----------\n 59986052\n(1 row)\n\nTime: 33752.778 ms\nllonergan=# select fastcount('lineitem');\n fastcount \n-----------\n 59986052\n(1 row)\n\nTime: 34543.646 ms\nllonergan=# select fastcount('lineitem');\n fastcount \n-----------\n 59986052\n(1 row)\n\nTime: 34528.053 ms\n\n============================================================================\nAnalysis:\n============================================================================\n Bandwidth Percent of max\ndd Read 407MB/s 100%\nCount(1) 263MB/s 64.6%\nHeapScan 383MB/s 94.1%\n\nWow - looks like the HeapScan gets almost all of the available bandwidth!\n\n- Luke\n\n\n",
"msg_date": "Thu, 24 Nov 2005 00:17:06 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n\n> ============================================================================\n> 12.9GB of DBT-3 data from the lineitem table\n> ============================================================================\n> llonergan=# select relpages from pg_class where relname='lineitem';\n> relpages \n> ----------\n> 1579270\n> (1 row)\n> \n> 1579270*8192/1000000\n> 12937 Million Bytes or 12.9GB\n> \n> llonergan=# \\timing\n> Timing is on.\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 197870.105 ms\n\nSo 198 seconds is the uncached read time with count (Just for clarity, \ndid you clear the Pg and filesystem caches or unmount / remount the \nfilesystem?)\n\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 49912.164 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 49218.739 ms\n> \n\nand ~50 seconds is the (partially) cached read time with count\n\n> llonergan=# select fastcount('lineitem');\n> fastcount \n> -----------\n> 59986052\n> (1 row)\n> \n> Time: 33752.778 ms\n> llonergan=# select fastcount('lineitem');\n> fastcount \n> -----------\n> 59986052\n> (1 row)\n> \n> Time: 34543.646 ms\n> llonergan=# select fastcount('lineitem');\n> fastcount \n> -----------\n> 59986052\n> (1 row)\n> \n> Time: 34528.053 ms\n> \n\nso ~34 seconds is the (partially) cached read time for fastcount -\nI calculate this to give ~362Mb/s effective IO rate (I'm doing / by \n1024*1024 not 1000*1000) FWIW.\n\nWhile this is interesting, you probably want to stop Pg, unmount the \nfilesystem, and restart Pg to get the uncached time for fastcount too \n(and how does this compare to uncached read with dd using the same block \nsize?).\n\nBut at this stage it certainly looks the the heapscan code is pretty \nefficient - great!\n\nOh - and do you want to try out 32K block size, I'm interested to see \nwhat level of improvement you get (as my system is hopelessly cpu bound...)!\n\n> ============================================================================\n> Analysis:\n> ============================================================================\n> Bandwidth Percent of max\n> dd Read 407MB/s 100%\n> Count(1) 263MB/s 64.6%\n> HeapScan 383MB/s 94.1%\n\n\nCheers\n\nMark\n",
"msg_date": "Thu, 24 Nov 2005 21:53:16 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n>\n> It would be nice to put some tracers into the executor and see where the\n> time is going. I'm also curious about the impact of the new 8.1 virtual\n> tuples in reducing the executor overhead. In this case my bet's on the agg\n> node itself, what do you think?\n>\n\nYeah - it's pretty clear that the count aggregate is fairly expensive \nwrt cpu - However, I am not sure if all agg nodes suffer this way (guess \nwe could try a trivial aggregate that does nothing for all tuples bar \nthe last and just reports the final value it sees).\n\nCheers\n\nMark\n\n",
"msg_date": "Thu, 24 Nov 2005 22:11:36 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark,\n\n>> Time: 197870.105 ms\n> \n> So 198 seconds is the uncached read time with count (Just for clarity,\n> did you clear the Pg and filesystem caches or unmount / remount the\n> filesystem?)\n\nNope - the longer time is due to the \"second write\" known issue with\nPostgres - it writes the data to the table, but all of the pages are marked\ndirty? So, always on the first scan after loading they are written again.\nThis is clear as you watch vmstat - the pattern on the first seq scan is\nhalf read / half write.\n\n>> Time: 49218.739 ms\n>> \n> \n> and ~50 seconds is the (partially) cached read time with count\n\nAgain - the pattern here is pure read and completely non-cached. You see a\nvery nearly constant I/O rate when watching vmstat for the entire scan.\n\n>> Time: 34528.053 ms\n\n> so ~34 seconds is the (partially) cached read time for fastcount -\n> I calculate this to give ~362Mb/s effective IO rate (I'm doing / by\n> 1024*1024 not 1000*1000) FWIW.\n\nThe dd number uses 1000*1000, so I maintained it for the percentage of max.\n \n> While this is interesting, you probably want to stop Pg, unmount the\n> filesystem, and restart Pg to get the uncached time for fastcount too\n> (and how does this compare to uncached read with dd using the same block\n> size?).\n\nI'll do it again sometime, but I've already deleted the file. I've done the\nfollowing in the past to validate this though:\n\n- Reboot machine\n- Rerun scan\n\nAnd we get identical results.\n \n> But at this stage it certainly looks the the heapscan code is pretty\n> efficient - great!\n\nYep.\n \n> Oh - and do you want to try out 32K block size, I'm interested to see\n> what level of improvement you get (as my system is hopelessly cpu bound...)!\n\nYah - done so in the past and not seen any - was waiting for Alan to post\nhis results.\n \n>> ============================================================================\n>> Analysis:\n>> ============================================================================\n>> Bandwidth Percent of max\n>> dd Read 407MB/s 100%\n>> Count(1) 263MB/s 64.6%\n>> HeapScan 383MB/s 94.1%\n\nNote these are all in consistent 1000x1000 units.\n\nThanks for the test - neat trick! We'll use it to do some more profiling\nsome time soon...\n\n- Luke\n\n\n",
"msg_date": "Thu, 24 Nov 2005 01:22:03 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n\n> That says it's something else in the path. As you probably know there is a\n> page lock taken, a copy of the tuple from the page, lock removed, count\n> incremented for every iteration of the agg node on a count(*). Is the same\n> true of a count(1)?\n> \n\nSorry Luke - message 3 - I seem to be suffering from a very small \nworking memory buffer myself right now, I think it's after a day of \nworking with DB2 ... :-)\n\nAnyway, as I read src/backend/parser/gram.y:6542 - count(*) is \ntransformed into count(1), so these two are identical.\n\nCheers (last time tonight, promise!)\n\nMark\n",
"msg_date": "Thu, 24 Nov 2005 22:24:35 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Luke Lonergan wrote:\n> Mark,\n> \n> \n>>>Time: 197870.105 ms\n>>\n>>So 198 seconds is the uncached read time with count (Just for clarity,\n>>did you clear the Pg and filesystem caches or unmount / remount the\n>>filesystem?)\n> \n> \n> Nope - the longer time is due to the \"second write\" known issue with\n> Postgres - it writes the data to the table, but all of the pages are marked\n> dirty? So, always on the first scan after loading they are written again.\n> This is clear as you watch vmstat - the pattern on the first seq scan is\n> half read / half write.\n>\n\nAh - indeed - first access after a COPY no? I should have thought of \nthat, sorry!\n\n",
"msg_date": "Thu, 24 Nov 2005 22:26:44 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n\n> Yeah - it's pretty clear that the count aggregate is fairly expensive wrt cpu -\n> However, I am not sure if all agg nodes suffer this way (guess we could try a\n> trivial aggregate that does nothing for all tuples bar the last and just\n> reports the final value it sees).\n\nAs you mention count(*) and count(1) are the same thing.\n\nLast I heard the reason count(*) was so expensive was because its state\nvariable was a bigint. That means it doesn't fit in a Datum and has to be\nalloced and stored as a pointer. And because of the Aggregate API that means\nit has to be allocated and freed for every tuple processed.\n\nThere was some talk of having a special case API for count(*) and maybe\nsum(...) to avoid having to do this.\n\nThere was also some talk of making Datum 8 bytes wide on platforms where that\nwas natural (I guess AMD64, Sparc64, Alpha, Itanic).\n\nAfaik none of these items have happened but I don't know for sure.\n\n-- \ngreg\n\n",
"msg_date": "24 Nov 2005 11:00:25 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Last I heard the reason count(*) was so expensive was because its state\n> variable was a bigint. That means it doesn't fit in a Datum and has to be\n> alloced and stored as a pointer. And because of the Aggregate API that means\n> it has to be allocated and freed for every tuple processed.\n\nThere's a hack in 8.1 to avoid the palloc overhead (courtesy of Neil\nConway IIRC).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2005 11:25:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> Greg Stark <[email protected]> writes:\n> > Last I heard the reason count(*) was so expensive was because its state\n> > variable was a bigint. That means it doesn't fit in a Datum and has to be\n> > alloced and stored as a pointer. And because of the Aggregate API that means\n> > it has to be allocated and freed for every tuple processed.\n> \n> There's a hack in 8.1 to avoid the palloc overhead (courtesy of Neil\n> Conway IIRC).\n\nah, cool, missed that.\n\n-- \ngreg\n\n",
"msg_date": "24 Nov 2005 11:40:21 -0500",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "The same 12.9GB distributed across 4 machines using Bizgres MPP fits into\nI/O cache. The interesting result is that the query \"select count(1)\" is\nlimited in speed to 280 MB/s per CPU when run on the lineitem table. So\nwhen I run it spread over 4 machines, one CPU per machine I get this:\n\n======================================================\nBizgres MPP, 4 data segments, 1 per 2 CPUs\n======================================================\nllonergan=# explain select count(1) from lineitem;\n QUERY PLAN\n----------------------------------------------------------------------------\n----------\n Aggregate (cost=582452.00..582452.00 rows=1 width=0)\n -> Gather Motion (cost=582452.00..582452.00 rows=1 width=0)\n -> Aggregate (cost=582452.00..582452.00 rows=1 width=0)\n -> Seq Scan on lineitem (cost=0.00..544945.00 rows=15002800\nwidth=0)\n(4 rows)\n\nllonergan=# \\timing\nTiming is on.\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 12191.435 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 11986.109 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 11448.941 ms\n======================================================\n\nThat's 12,937 MB in 11.45 seconds, or 1,130 MB/s. When you divide out the\nnumber of Postgres instances (4), that's 283MB/s per Postgres instance.\n\nTo verify that this has nothing to do with MPP, I ran it in a special\ninternal mode on one instance and got the same result.\n\nSo - we should be able to double this rate by running one segment per CPU,\nor two per host:\n\n======================================================\nBizgres MPP, 8 data segments, 1 per CPU\n======================================================\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 6484.594 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 6156.729 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 6063.416 ms\n======================================================\nThat's 12,937 MB in 11.45 seconds, or 2,134 MB/s. When you divide out the\nnumber of Postgres instances (8), that's 267MB/s per Postgres instance.\n\nSo, if you want to \"select count(1)\", using more CPUs is a good idea! For\nmost complex queries, having lots of CPUs + MPP is a good combo.\n\nHere is an example of a sorting plan - this should probably be done with a\nhash aggregation, but using 8 CPUs makes it go 8x faster:\n\n\n- Luke\n\n\n",
"msg_date": "Thu, 24 Nov 2005 09:07:40 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Tom Lane wrote:\n> Greg Stark <[email protected]> writes:\n> \n>>Last I heard the reason count(*) was so expensive was because its state\n>>variable was a bigint. That means it doesn't fit in a Datum and has to be\n>>alloced and stored as a pointer. And because of the Aggregate API that means\n>>it has to be allocated and freed for every tuple processed.\n> \n> \n> There's a hack in 8.1 to avoid the palloc overhead (courtesy of Neil\n> Conway IIRC).\n>\n\nIt certainly makes quite a difference as I measure it:\n\ndoing select count(1) from a 181000 page table (completely uncached) on my PIII:\n\n8.0 : 32 s\n8.1 : 25 s\n\nNote that the 'fastcount()' function takes 21 s in both cases - so all \nthe improvement seems to be from the count overhead reduction.\n\nCheers\n\nMark\n\n\n\n\n\n\n\n",
"msg_date": "Fri, 25 Nov 2005 11:07:50 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "> Hardware-wise I'd say dual core opterons. One dual-core-opteron\n> performs better than two single-core at the same speed. Tyan makes\n> some boards that have four sockets, thereby giving you 8 cpu's (if you\n> need that many). Sun and HP also makes nice hardware although the Tyan\n> board is more competetive priced.\n\njust FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\nhttp://www.swt.com/vx50.html\n\nIt can be loaded with up to 128 gb memory if all the sockets are filled\n:).\n\nMerlin\n",
"msg_date": "Tue, 15 Nov 2005 09:09:21 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases (5TB)"
}
] |
[
{
"msg_contents": "Dave,\n\n\n________________________________\n\n\tFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Dave Cramer\n\tSent: Tuesday, November 15, 2005 6:15 AM\n\tTo: Luke Lonergan\n\tCc: Adam Weisberg; [email protected]\n\tSubject: Re: [PERFORM] Hardware/OS recommendations for large\ndatabases (\n\t\n\t\n\tLuke, \n\t\n\t\n\tHave you tried the areca cards, they are slightly faster yet. \n\nNo, I've been curious since I read an earlier posting here. I've had a\nlot more experience with the 3Ware cards, mostly good, and they've been\ndoing a lot of volume with Rackable/Yahoo which gives me some more\nconfidence.\n \nThe new 3Ware 9550SX cards use a PowerPC for checksumming, so their\nwrite performance is now up to par with the best cards I believe. We\nfind that you still need to set Linux readahead to at least 8MB\n(blockdev --setra) to get maximum read performance on them, is that your\nexperience with the Arecas? We get about 260MB/s read on 8 drives in\nRAID5 without the readahead tuning and about 400MB/s with it.\n \n- Luke",
"msg_date": "Tue, 15 Nov 2005 09:33:25 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:\n>write performance is now up to par with the best cards I believe. We\n>find that you still need to set Linux readahead to at least 8MB\n>(blockdev --setra) to get maximum read performance on them, is that your\n\nWhat on earth does that do to your seek performance?\n\nMike Stone\n",
"msg_date": "Tue, 15 Nov 2005 09:55:45 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Mike, \n\nOn 11/15/05 6:55 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Tue, Nov 15, 2005 at 09:33:25AM -0500, Luke Lonergan wrote:\n>> >write performance is now up to par with the best cards I believe. We\n>> >find that you still need to set Linux readahead to at least 8MB\n>> >(blockdev --setra) to get maximum read performance on them, is that your\n> \n> What on earth does that do to your seek performance?\n\nWe’re in decision support, as is our poster here, so seek isn’t the issue,\nit’s sustained sequential transfer rate that we need. At 8MB, I’d not\nexpect too much damage though – the default is 1.5MB.\n\n- Luke\n> \n",
"msg_date": "Tue, 15 Nov 2005 10:47:59 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "Merlin, \n\n> just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> http://www.swt.com/vx50.html\n> \n> It can be loaded with up to 128 gb memory if all the sockets \n> are filled :).\n\nCool!\n\nJust remember that you can't get more than 1 CPU working on a query at a\ntime without a parallel database.\n\n- Luke\n\n",
"msg_date": "Tue, 15 Nov 2005 09:36:21 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "Merlin,\n\n> > just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> > http://www.swt.com/vx50.html\n> > \n> > It can be loaded with up to 128 gb memory if all the sockets are \n> > filled :).\n\nAnother thought - I priced out a maxed out machine with 16 cores and\n128GB of RAM and 1.5TB of usable disk - $71,000.\n\nYou could instead buy 8 machines that total 16 cores, 128GB RAM and 28TB\nof disk for $48,000, and it would be 16 times faster in scan rate, which\nis the most important factor for large databases. The size would be 16\nrack units instead of 5, and you'd have to add a GigE switch for $1500.\n\nScan rate for above SMP: 200MB/s\n\nScan rate for above cluster: 3,200Mb/s\n\nYou could even go dual core and double the memory on the cluster and\nyou'd about match the price of the \"god box\".\n\n- Luke\n\n",
"msg_date": "Tue, 15 Nov 2005 09:49:26 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "> Merlin,\n> \n> > > just FYI: tyan makes a 8 socket motherboard (up to 16 cores!):\n> > > http://www.swt.com/vx50.html\n> > >\n> > > It can be loaded with up to 128 gb memory if all the sockets are\n> > > filled :).\n> \n> Another thought - I priced out a maxed out machine with 16 cores and\n> 128GB of RAM and 1.5TB of usable disk - $71,000.\n> \n> You could instead buy 8 machines that total 16 cores, 128GB RAM and\n28TB\n> of disk for $48,000, and it would be 16 times faster in scan rate,\nwhich\n> is the most important factor for large databases. The size would be\n16\n> rack units instead of 5, and you'd have to add a GigE switch for\n$1500.\n \nIt's hard to say what would be better. My gut says the 5u box would be\na lot better at handling high cpu/high concurrency problems...like your\ntypical business erp backend. This is pure speculation of course...I'll\ndefer to the experts here.\n\nMerlin\n\n",
"msg_date": "Tue, 15 Nov 2005 10:20:05 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( 5TB)"
},
{
"msg_contents": "Merlin,\n\nOn 11/15/05 7:20 AM, \"Merlin Moncure\" <[email protected]> wrote:\n> \n> It's hard to say what would be better. My gut says the 5u box would be\n> a lot better at handling high cpu/high concurrency problems...like your\n> typical business erp backend. This is pure speculation of course...I'll\n> defer to the experts here.\n\nWith Oracle RAC, which is optimized for OLTP and uses a shared memory\ncaching model, maybe or maybe not. I’d put my money on the SMP in that case\nas you suggest, but what happens when the OS dies?\n\nFor data warehousing, OLAP and decision support applications, RAC and other\nshared memory/disk architectures don’t do you any good and the SMP machine\nis better by a bit.\n\nHowever, if you have an MPP database, where disk and memory are not shared,\nthen the SMP machine is tens or hundreds of times slower than the cluster of\nthe same price.\n\n- Luke \n",
"msg_date": "Tue, 15 Nov 2005 10:46:12 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Merlin Moncure wrote:\n>>You could instead buy 8 machines that total 16 cores, 128GB RAM and\n> \n> It's hard to say what would be better. My gut says the 5u box would be\n> a lot better at handling high cpu/high concurrency problems...like your\n> typical business erp backend. This is pure speculation of course...I'll\n> defer to the experts here.\n\nIn this specific case (data warehouse app), multiple machines is the \nbetter bet. Load data on 1 machine, copy to other servers and then use a \nmiddleman to spread out SQL statements to each machine.\n\nI was going to suggest pgpool as the middleman but I believe it's \nlimited to 2 machines max at this time. I suppose you could daisy chain \npgpools running on every machine.\n",
"msg_date": "Tue, 15 Nov 2005 10:57:05 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( 5TB)"
}
] |
[
{
"msg_contents": "Luke,\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Tuesday, November 15, 2005 7:10 AM\nTo: Adam Weisberg\nCc: [email protected]\nSubject: RE: [PERFORM] Hardware/OS recommendations for large databases (\n5TB)\n\nAdam,\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Claus \n> Guttesen\n> Sent: Tuesday, November 15, 2005 12:29 AM\n> To: Adam Weisberg\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Hardware/OS recommendations for large databases\n\n> ( 5TB)\n> \n> > Does anyone have recommendations for hardware and/or OS to\n> work with\n> > around 5TB datasets?\n> \n> Hardware-wise I'd say dual core opterons. One dual-core-opteron \n> performs better than two single-core at the same speed. Tyan makes \n> some boards that have four sockets, thereby giving you 8 cpu's (if you\n\n> need that many). Sun and HP also makes nice hardware although the Tyan\n\n> board is more competetive priced.\n> \n> OS wise I would choose the FreeBSD amd64 port but partititions larger \n> than 2 TB needs some special care, using gpt rather than disklabel \n> etc., tools like fsck may not be able to completely check partitions \n> larger than 2 TB. Linux or Solaris with either LVM or Veritas FS \n> sounds like candidates.\n\nI agree - you can get a very good one from www.acmemicro.com or\nwww.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\nRAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\non a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\nperformance on these (with tuning) on Linux using the xfs filesystem,\nwhich is one of the most critical factors for large databases. \n\nNote that you want to have your DBMS use all of the CPU and disk channel\nbandwidth you have on each query, which takes a parallel database like\nBizgres MPP to achieve.\n\nRegards,\n\n- Luke\n\n\nThe What's New FAQ for PostgreSQL 8.1 says \"the buffer manager for 8.1\nhas been enhanced to scale almost linearly with the number of\nprocessors, leading to significant performance gains on 8-way, 16-way,\ndual-core, and multi-core CPU servers.\"\n\nWhy not just use it as-is?\n\nCheers,\n\nAdam\n",
"msg_date": "Tue, 15 Nov 2005 10:40:53 -0500",
"msg_from": "\"Adam Weisberg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( 5TB)"
}
] |
[
{
"msg_contents": "Because only 1 cpu is used on each query.\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: Adam Weisberg <[email protected]>\r\nTo: Luke Lonergan <[email protected]>\r\nCC: [email protected] <[email protected]>\r\nSent: Tue Nov 15 10:40:53 2005\r\nSubject: RE: [PERFORM] Hardware/OS recommendations for large databases ( 5TB)\r\n\r\nLuke,\r\n\r\n-----Original Message-----\r\nFrom: Luke Lonergan [mailto:[email protected]] \r\nSent: Tuesday, November 15, 2005 7:10 AM\r\nTo: Adam Weisberg\r\nCc: [email protected]\r\nSubject: RE: [PERFORM] Hardware/OS recommendations for large databases (\r\n5TB)\r\n\r\nAdam,\r\n\r\n> -----Original Message-----\r\n> From: [email protected]\r\n> [mailto:[email protected]] On Behalf Of Claus \r\n> Guttesen\r\n> Sent: Tuesday, November 15, 2005 12:29 AM\r\n> To: Adam Weisberg\r\n> Cc: [email protected]\r\n> Subject: Re: [PERFORM] Hardware/OS recommendations for large databases\r\n\r\n> ( 5TB)\r\n> \r\n> > Does anyone have recommendations for hardware and/or OS to\r\n> work with\r\n> > around 5TB datasets?\r\n> \r\n> Hardware-wise I'd say dual core opterons. One dual-core-opteron \r\n> performs better than two single-core at the same speed. Tyan makes \r\n> some boards that have four sockets, thereby giving you 8 cpu's (if you\r\n\r\n> need that many). Sun and HP also makes nice hardware although the Tyan\r\n\r\n> board is more competetive priced.\r\n> \r\n> OS wise I would choose the FreeBSD amd64 port but partititions larger \r\n> than 2 TB needs some special care, using gpt rather than disklabel \r\n> etc., tools like fsck may not be able to completely check partitions \r\n> larger than 2 TB. Linux or Solaris with either LVM or Veritas FS \r\n> sounds like candidates.\r\n\r\nI agree - you can get a very good one from www.acmemicro.com or\r\nwww.rackable.com with 8x 400GB SATA disks and the new 3Ware 9550SX SATA\r\nRAID controller for about $6K with two Opteron 272 CPUs and 8GB of RAM\r\non a Tyan 2882 motherboard. We get about 400MB/s sustained disk read\r\nperformance on these (with tuning) on Linux using the xfs filesystem,\r\nwhich is one of the most critical factors for large databases. \r\n\r\nNote that you want to have your DBMS use all of the CPU and disk channel\r\nbandwidth you have on each query, which takes a parallel database like\r\nBizgres MPP to achieve.\r\n\r\nRegards,\r\n\r\n- Luke\r\n\r\n\r\nThe What's New FAQ for PostgreSQL 8.1 says \"the buffer manager for 8.1\r\nhas been enhanced to scale almost linearly with the number of\r\nprocessors, leading to significant performance gains on 8-way, 16-way,\r\ndual-core, and multi-core CPU servers.\"\r\n\r\nWhy not just use it as-is?\r\n\r\nCheers,\r\n\r\nAdam\r\n\r\n",
"msg_date": "Tue, 15 Nov 2005 10:50:49 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "Unless there was a way to guarantee consistency, it would be hard at\nbest to make this work. Convergence on large data sets across boxes is\nnon-trivial, and diffing databases is difficult at best. Unless there\nwas some form of automated way to ensure consistency, going 8 ways into\nseparate boxes is *very* hard. I do suppose that if you have fancy\nstorage (EMC, Hitachi) you could do BCV or Shadow copies. But in terms\nof commodity stuff, I'd have to agree with Merlin. \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of William Yu\nSent: Tuesday, November 15, 2005 10:57 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Hardware/OS recommendations for large databases (\n5TB)\n\nMerlin Moncure wrote:\n>>You could instead buy 8 machines that total 16 cores, 128GB RAM and\n> \n> It's hard to say what would be better. My gut says the 5u box would \n> be a lot better at handling high cpu/high concurrency problems...like \n> your typical business erp backend. This is pure speculation of \n> course...I'll defer to the experts here.\n\nIn this specific case (data warehouse app), multiple machines is the\nbetter bet. Load data on 1 machine, copy to other servers and then use a\nmiddleman to spread out SQL statements to each machine.\n\nI was going to suggest pgpool as the middleman but I believe it's\nlimited to 2 machines max at this time. I suppose you could daisy chain\npgpools running on every machine.\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n",
"msg_date": "Tue, 15 Nov 2005 11:07:40 -0800",
"msg_from": "\"James Mello\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( 5TB)"
},
{
"msg_contents": "James,\n\n\nOn 11/15/05 11:07 AM, \"James Mello\" <[email protected]> wrote:\n\n> Unless there was a way to guarantee consistency, it would be hard at\n> best to make this work. Convergence on large data sets across boxes is\n> non-trivial, and diffing databases is difficult at best. Unless there\n> was some form of automated way to ensure consistency, going 8 ways into\n> separate boxes is *very* hard. I do suppose that if you have fancy\n> storage (EMC, Hitachi) you could do BCV or Shadow copies. But in terms\n> of commodity stuff, I'd have to agree with Merlin.\n\nIt¹s a matter of good software that handles the distribution / parallel\nquery optimization / distributed transactions and management features.\nCombine that with a gigabit ethernet switch and it works we routinely get\n50x speedup over SMP on OLAP / Decision Support workloads.\n\nRegards,\n\n- Luke \n\n\n\nRe: [PERFORM] Hardware/OS recommendations for large databases ( 5TB)\n\n\nJames,\n\n\nOn 11/15/05 11:07 AM, \"James Mello\" <[email protected]> wrote:\n\nUnless there was a way to guarantee consistency, it would be hard at\nbest to make this work. Convergence on large data sets across boxes is\nnon-trivial, and diffing databases is difficult at best. Unless there\nwas some form of automated way to ensure consistency, going 8 ways into\nseparate boxes is *very* hard. I do suppose that if you have fancy\nstorage (EMC, Hitachi) you could do BCV or Shadow copies. But in terms\nof commodity stuff, I'd have to agree with Merlin.\n\nIt’s a matter of good software that handles the distribution / parallel query optimization / distributed transactions and management features. Combine that with a gigabit ethernet switch and it works – we routinely get 50x speedup over SMP on OLAP / Decision Support workloads.\n\nRegards,\n\n- Luke",
"msg_date": "Tue, 15 Nov 2005 22:11:39 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "James Mello wrote:\n> Unless there was a way to guarantee consistency, it would be hard at\n> best to make this work. Convergence on large data sets across boxes is\n> non-trivial, and diffing databases is difficult at best. Unless there\n> was some form of automated way to ensure consistency, going 8 ways into\n> separate boxes is *very* hard. I do suppose that if you have fancy\n> storage (EMC, Hitachi) you could do BCV or Shadow copies. But in terms\n> of commodity stuff, I'd have to agree with Merlin. \n\nIf you're talking about data consistency, I don't see why that's an \nissue in a bulk-load/read-only setup. Either bulk load on 1 server and \nthen do a file copy to all the others -- or simultaneously bulk load on \nall servers.\n\nIf you're talking about consistency in directly queries to the \nappropriate servers, I agree that's a more complicated issue but not \nunsurmountable. If you don't use persistent connections, you can \nprobably get pretty good routing using DNS -- monitor servers by looking \nat top/iostat/memory info/etc and continually change the DNS zonemaps to \ndirect traffic to less busy servers. (I use this method for our global \nload balancers -- pretty easy to script via Perl/Python/etc.) Mind you \nsince you need a Dual Processor motherboard anyways to get PCI-X, that \nmeans every machine would be a 2xDual Core so there's enough CPU power \nto handle the cases where 2 or 3 queries get sent to the same server \nback-to-back. Of course, I/O would take a hit in this case -- but I/O \nwould take a hit in every case on a single 16-core mega system.\n\nIf use persistent connections, it'll definitely require extra \nprogramming beyond simple scripting. Take one of the opensource projects \nlike PgPool or SQLRelay and alter it so it monitors all servers to see \nwhat server is least busy before passing a query on.\n",
"msg_date": "Wed, 16 Nov 2005 05:08:50 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ( 5TB)"
}
] |
[
{
"msg_contents": "I have a query that's making the planner do the wrong thing (for my \ndefinition of wrong) and I'm looking for advice on what to tune to make \nit do what I want.\n\nThe query consists or SELECT'ing a few fields from a table for a large \nnumber of rows. The table has about seventy thousand rows and the user \nis selecting some subset of them. I first do a SELECT...WHERE to \ndetermine the unique identifiers I want (works fine) and then I do a \nSELECT WHERE IN giving the list of id's I need additional data on \n(which I see from EXPLAIN just gets translated into a very long list of \nOR's).\n\nEverything works perfectly until I get to 65301 rows. At 65300 rows, \nit does an index scan and takes 2197.193 ms. At 65301 rows it switches \nto a sequential scan and takes 778951.556 ms. Values known not to \naffect this are: work_mem, effective_cache_size. Setting \nrandom_page_cost from 4 to 1 helps (79543.214 ms) but I'm not really \nsure what '1' means, except it's relative. Of course, setting \n'enable_seqscan false' helps immensely (2337.289 ms) but that's as \ninelegant of a solution as I've found - if there were other databases \non this install that wouldn't be the right approach.\n\nNow I can break this down into multiple SELECT's in code, capping each \nquery at 65300 rows, and that's a usable workaround, but academically \nI'd like to know how to convince the planner to do it my way. It's \nmaking a bad guess about something but I'm not sure what. I didn't see \nany hard-coded limits grepping through the source (though it is close \nto the 16-bit unsigned boundry - probably coincidental) so if anyone \nhas ideas or pointers to how I might figure out what's going wrong that \nwould be helpful.\n\nThanks,\n-Bill\n\n-----\nBill McGonigle, Owner Work: 603.448.4440\nBFC Computing, LLC Home: 603.448.1668\[email protected] Mobile: 603.252.2606\nhttp://www.bfccomputing.com/ Pager: 603.442.1833\nJabber: [email protected] Text: [email protected]\nBlog: http://blog.bfccomputing.com/\n\n",
"msg_date": "Tue, 15 Nov 2005 14:12:23 -0500",
"msg_from": "Bill McGonigle <[email protected]>",
"msg_from_op": true,
"msg_subject": "Too Many OR's?"
},
{
"msg_contents": "On Tue, 2005-11-15 at 13:12, Bill McGonigle wrote:\n> I have a query that's making the planner do the wrong thing (for my \n> definition of wrong) and I'm looking for advice on what to tune to make \n> it do what I want.\n> \n> The query consists or SELECT'ing a few fields from a table for a large \n> number of rows. The table has about seventy thousand rows and the user \n> is selecting some subset of them. I first do a SELECT...WHERE to \n> determine the unique identifiers I want (works fine) and then I do a \n> SELECT WHERE IN giving the list of id's I need additional data on \n> (which I see from EXPLAIN just gets translated into a very long list of \n> OR's).\n> \n> Everything works perfectly until I get to 65301 rows. At 65300 rows, \n> it does an index scan and takes 2197.193 ms. At 65301 rows it switches \n> to a sequential scan and takes 778951.556 ms. Values known not to \n> affect this are: work_mem, effective_cache_size. Setting \n> random_page_cost from 4 to 1 helps (79543.214 ms) but I'm not really \n> sure what '1' means, except it's relative. Of course, setting \n> 'enable_seqscan false' helps immensely (2337.289 ms) but that's as \n> inelegant of a solution as I've found - if there were other databases \n> on this install that wouldn't be the right approach.\n> \n> Now I can break this down into multiple SELECT's in code, capping each \n> query at 65300 rows, and that's a usable workaround, but academically \n> I'd like to know how to convince the planner to do it my way. It's \n> making a bad guess about something but I'm not sure what. I didn't see \n> any hard-coded limits grepping through the source (though it is close \n> to the 16-bit unsigned boundry - probably coincidental) so if anyone \n> has ideas or pointers to how I might figure out what's going wrong that \n> would be helpful.\n\nOK, there IS a point at which switching to a sequential scan will be\nfast. I.e. when you're getting everything in the table. But the\ndatabase is picking a number where to switch that is too low.\n\nFirst, we need to know if the statistics are giving the query planner a\ngood enough idea of how many rows it's really gonna get versus how many\nit expects.\n\nDo an explain <your query here> and see how many it thinks it's gonna\nget. Since you've actually run it, you know how many it really is going\nto get, so there's no need for an explain analyze <your query here> just\nyet.\n\nNow, as long as the approximation is pretty close, fine. But if it's\noff by factors, then we need to increase the statistics target on that\ncolumn, with:\n\nALTER TABLE name ALTER columnname SET STATISTICS xxx\n\nwhere xxx is the new number. The default is set in your postgresql.conf\nfile, and is usually pretty low, say 10. You can go up to 1000, but\nthat makes query planning take longer. Try some incremental increase to\nsay 20 or 40 or even 100, and run analyze on that table then do an\nexplain on it again until the estimate is close.\n\nOnce the estimate is close, you use change random_page_cost to get the\nquery planner to switch at the \"right\" time. Change the number of in()\nnumbers and play with random_page_cost and see where that sweet spot\nis. note that what seems right on a single table for a single user may\nnot be best as you increase load or access other tables.\n\nrandom_page_cost represents the increase in a random access versus a\nsequential access. As long as your data fit into ram, the difference is\npretty much none (i.e. 
random_page_cost=1) so don't set it too low, or\naccessing REALLY large data sets could become REALLY slow, as it uses\nindexes when it should have been sequentially scanning.\n\nAlso, check what you've got effective_cache set to. This tells\npostgresql how much memory your kernel is using for cache, and so lets\nit know about how likely it is that your current data set under your\nquery is to be in there.\n\nAlso, read this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n",
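A minimal sketch of the tuning loop described above, using a made-up table and column name (substitute the real ones from the problem query):

    -- Raise the per-column statistics target, then re-gather statistics
    ALTER TABLE widgets ALTER COLUMN widget_id SET STATISTICS 100;
    ANALYZE widgets;

    -- Check whether the planner's row estimate is now close to reality
    EXPLAIN SELECT * FROM widgets WHERE widget_id IN (1, 2, 3);

    -- Experiment with the cost/cache settings for this session only
    SET random_page_cost = 2;
    SET effective_cache_size = 50000;  -- unit is 8kB pages; size it to the OS cache

The numbers here are placeholders; the right statistics target and cost settings have to be found by comparing estimates against actual row counts as described above.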
"msg_date": "Tue, 15 Nov 2005 14:01:48 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Too Many OR's?"
}
] |
[
{
"msg_contents": ">\n>I suggest you read this on the difference between enterprise/SCSI and\n>desktop/IDE drives:\n>\n>\thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n>\n> \n>\nThis is exactly the kind of vendor propaganda I was talking about\nand it proves my point quite well : that there's nothing specific relating\nto reliability that is different between SCSI and SATA drives cited in \nthat paper.\nIt does have a bunch of FUD such as 'oh yeah we do a lot more\ndrive characterization during manufacturing'.\n\n\n\n\n\n\n\n\n\n\n\n\nI suggest you read this on the difference between enterprise/SCSI and\ndesktop/IDE drives:\n\n\thttp://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n\n \n\nThis is exactly the kind of vendor propaganda I was talking about\nand it proves my point quite well : that there's nothing specific\nrelating\nto reliability that is different between SCSI and SATA drives cited in\nthat paper.\nIt does have a bunch of FUD such as 'oh yeah we do a lot more\ndrive characterization during manufacturing'.",
"msg_date": "Wed, 16 Nov 2005 09:00:12 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Just pick up a SCSI drive and a consumer ATA drive.\n\nFeel their weight.\n\nYou don't have to look inside to tell the difference.\n\nAlex\n\nOn 11/16/05, David Boreham <[email protected]> wrote:\n>\n>\n> I suggest you read this on the difference between enterprise/SCSI and\n> desktop/IDE drives:\n>\n> http://www.seagate.com/content/docs/pdf/whitepaper/D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n>\n>\n> This is exactly the kind of vendor propaganda I was talking about\n> and it proves my point quite well : that there's nothing specific relating\n> to reliability that is different between SCSI and SATA drives cited in that\n> paper.\n> It does have a bunch of FUD such as 'oh yeah we do a lot more\n> drive characterization during manufacturing'.\n>\n>\n>\n>\n",
"msg_date": "Thu, 17 Nov 2005 14:50:10 -0500",
"msg_from": "Alex Turner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alex Turner wrote:\n\n>Just pick up a SCSI drive and a consumer ATA drive.\n>\n>Feel their weight.\n> \n>\nNot sure I get your point. We would want the lighter one,\nall things being equal, right ? (lower shipping costs, less likely\nto break when dropped on the floor....)\n\n\n\n\n\n",
"msg_date": "Thu, 17 Nov 2005 13:29:59 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "David Boreham wrote:\n> Alex Turner wrote:\n>\n>> Just pick up a SCSI drive and a consumer ATA drive.\n>>\n>> Feel their weight.\n>> \n>>\n> Not sure I get your point. We would want the lighter one,\n> all things being equal, right ? (lower shipping costs, less likely\n> to break when dropped on the floor....)\nWhy would the lighter one be less likely to break when dropped on the floor?\n\n-- Alan\n",
"msg_date": "Thu, 17 Nov 2005 15:40:53 -0500",
"msg_from": "Alan Stange <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Alan Stange wrote:\n\n>> Not sure I get your point. We would want the lighter one,\n>> all things being equal, right ? (lower shipping costs, less likely\n>> to break when dropped on the floor....)\n>\n> Why would the lighter one be less likely to break when dropped on the \n> floor?\n\nThey'd have less kinetic energy upon impact.\n\n\n",
"msg_date": "Thu, 17 Nov 2005 14:02:51 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "\nOn 17-Nov-05, at 2:50 PM, Alex Turner wrote:\n\n> Just pick up a SCSI drive and a consumer ATA drive.\n>\n> Feel their weight.\n>\n> You don't have to look inside to tell the difference.\nAt one point stereo manufacturers put weights in the case just to \nmake them heavier.\nThe older ones weighed more and the consumer liked heavy stereos.\n\nBe careful what you measure.\n\nDave\n>\n> Alex\n>\n> On 11/16/05, David Boreham <[email protected]> wrote:\n>>\n>>\n>> I suggest you read this on the difference between enterprise/SCSI \n>> and\n>> desktop/IDE drives:\n>>\n>> http://www.seagate.com/content/docs/pdf/whitepaper/ \n>> D2c_More_than_Interface_ATA_vs_SCSI_042003.pdf\n>>\n>>\n>> This is exactly the kind of vendor propaganda I was talking about\n>> and it proves my point quite well : that there's nothing specific \n>> relating\n>> to reliability that is different between SCSI and SATA drives \n>> cited in that\n>> paper.\n>> It does have a bunch of FUD such as 'oh yeah we do a lot more\n>> drive characterization during manufacturing'.\n>>\n>>\n>>\n>>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n",
"msg_date": "Fri, 18 Nov 2005 07:58:43 -0500",
"msg_from": "Dave Cramer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "David Boreham wrote:\n> I guess I've never bought into the vendor story that there are\n> two reliability grades. Why would they bother making two\n> different kinds of bearing, motor etc ? Seems like it's more\n> likely an excuse to justify higher prices.\n\nthen how to account for the fact that bleeding edge SCSI drives\nturn at twice the rpms of bleeding edge consumer drives?\n\nrichard\n",
"msg_date": "Wed, 16 Nov 2005 11:07:00 -0500",
"msg_from": "\"Welty, Richard\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Welty, Richard wrote:\n> David Boreham wrote:\n> \n>>I guess I've never bought into the vendor story that there are\n>>two reliability grades. Why would they bother making two\n>>different kinds of bearing, motor etc ? Seems like it's more\n>>likely an excuse to justify higher prices.\n> \n> \n> then how to account for the fact that bleeding edge SCSI drives\n> turn at twice the rpms of bleeding edge consumer drives?\n\nThe motors spin twice as fast?\n\nI'm pretty sure the original comment was based on drives w/ similar \nspecs. E.g. 7200RPM \"enterprise\" drives versus 7200RPM \"consumer\" drives.\n\nNext time one of my 7200RPM SCSIs fail, I'll take it apart and compare \nthe insides to an older 7200RPM IDE from roughly the same era.\n",
"msg_date": "Thu, 17 Nov 2005 04:28:38 -0800",
"msg_from": "William Yu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "Hi, I just have a little question, does PgPool keeps the same session\nbetween different connections? I say it cuz I have a server with the\nfollowing specifications:\n\nP4 3.2 ghz\n80 gig sata drives x 2\n1 gb ram\n5 ips\n1200 gb bandwidth\n100 mbit/s port speed.\n\nI am running a PgSQL 8.1 server with 100 max connection, pgpool with\nnum_init_children = 25 and max_pool = 4. I do the same queries all the time\n(just a bunch of sps, but they are always the same). Using explain analyze I\nget the fact that the sps are using a lot of time the first time they\nexecute (I guess preparing the plan and the sps I wrote en plpgsql) so I\nwould like to reuse the session the most possible. I need to serve 10M of\nconnection per day. Is this possible? (the client is a webapplication, I\nrepeat again, the queries are always the same).\n\nThanks a lot for your help...\n\n",
"msg_date": "Wed, 16 Nov 2005 16:56:42 -0600",
"msg_from": "\"Cristian Prieto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PgPool and Postgresql sessions..."
}
] |
[
{
"msg_contents": "i have the following query involving a view that i really need to optimise:\n\nSELECT *\nFROM\n\ttokens.ta_tokenhist h INNER JOIN\n\ttokens.vw_tokens t ON h.token_id = t.token_id\nWHERE\n\th.sarreport_id = 9\n;\n\nwhere vw_tokens is defined as\n\nCREATE VIEW tokens.vw_tokens AS SELECT\n\t-- too many columns to mention\nFROM\n\ttokens.ta_tokens t LEFT JOIN\n\ttokens.ta_tokenhist i ON t.token_id = i.token_id AND\n i.status = 'issued' LEFT JOIN\n\ttokens.ta_tokenhist s ON t.token_id = s.token_id AND\n s.status = 'sold' LEFT JOIN\n\ttokens.ta_tokenhist r ON t.token_id = r.token_id AND\n r.status = 'redeemed'\n;\n\nthis gives me the following query plan:\n\nMerge Join (cost=18276278.45..31793043.16 rows=55727 width=322)\n Merge Cond: ((\"outer\".token_id)::integer = \"inner\".\"?column23?\")\n -> Merge Left Join (cost=18043163.64..31639175.71 rows=4228018 width=76)\n Merge Cond: ((\"outer\".token_id)::integer = \"inner\".\"?column3?\")\n -> Merge Left Join (cost=13649584.94..27194793.37 rows=4228018 width=48)\n Merge Cond: ((\"outer\".token_id)::integer = \"inner\".\"?column3?\")\n -> Merge Left Join (cost=7179372.62..20653326.29 rows=4228018 width=44)\n Merge Cond: ((\"outer\".token_id)::integer = \"inner\".\"?column3?\")\n -> Index Scan using ta_tokens_pkey on ta_tokens t (cost=0.00..13400398.89 rows=4053805 width=27)\n -> Sort (cost=7179372.62..7189942.67 rows=4228018 width=21)\n Sort Key: (i.token_id)::integer\n -> Index Scan using fkx_tokenhist__status on ta_tokenhist i (cost=0.00..6315961.47 rows=4228018 width=21)\n Index Cond: ((status)::text = 'issued'::text)\n -> Sort (cost=6470212.32..6479909.69 rows=3878949 width=8)\n Sort Key: (s.token_id)::integer\n -> Index Scan using fkx_tokenhist__status on ta_tokenhist s (cost=0.00..5794509.99 rows=3878949 width=8)\n Index Cond: ((status)::text = 'sold'::text)\n -> Sort (cost=4393578.70..4400008.00 rows=2571718 width=32)\n Sort Key: (r.token_id)::integer\n -> Index Scan using fkx_tokenhist__status on ta_tokenhist r (cost=0.00..3841724.02 rows=2571718 width=32)\n Index Cond: ((status)::text = 'redeemed'::text)\n -> Sort (cost=233114.81..233248.38 rows=53430 width=246)\n Sort Key: (h.token_id)::integer\n -> Index Scan using fkx_tokenhist__sarreports on ta_tokenhist h (cost=0.00..213909.12 rows=53430 width=246)\n Index Cond: ((sarreport_id)::integer = 9)\n\n\nHowever, the following query (which i believe should be equivalent)\n\nSELECT *\nFROM\n\ttokens.ta_tokenhist h INNER JOIN\n\ttokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN\n\ttokens.ta_tokenhist i ON t.token_id = i.token_id AND\n i.status = 'issued' LEFT JOIN\n\ttokens.ta_tokenhist s ON t.token_id = s.token_id AND\n s.status = 'sold' LEFT JOIN\n\ttokens.ta_tokenhist r ON t.token_id = r.token_id AND\n r.status = 'redeemed'\nWHERE\n\th.sarreport_id = 9\n;\n\ngives the following query plan:\n\n Nested Loop Left Join (cost=0.00..3475785.52 rows=55727 width=1011)\n -> Nested Loop Left Join (cost=0.00..2474425.17 rows=55727 width=765)\n -> Nested Loop Left Join (cost=0.00..1472368.23 rows=55727 width=519)\n -> Nested Loop (cost=0.00..511614.87 rows=53430 width=273)\n -> Index Scan using fkx_tokenhist__sarreports on ta_tokenhist h (cost=0.00..213909.12 rows=53430 width=246)\n Index Cond: ((sarreport_id)::integer = 9)\n -> Index Scan using ta_tokens_pkey on ta_tokens t (cost=0.00..5.56 rows=1 width=27)\n Index Cond: ((\"outer\".token_id)::integer = (t.token_id)::integer)\n -> Index Scan using fkx_tokenhist__tokens on ta_tokenhist i (cost=0.00..17.96 rows=2 width=246)\n Index 
Cond: ((\"outer\".token_id)::integer = (i.token_id)::integer)\n Filter: ((status)::text = 'issued'::text)\n -> Index Scan using fkx_tokenhist__tokens on ta_tokenhist s (cost=0.00..17.96 rows=2 width=246)\n Index Cond: ((\"outer\".token_id)::integer = (s.token_id)::integer)\n Filter: ((status)::text = 'sold'::text)\n -> Index Scan using fkx_tokenhist__tokens on ta_tokenhist r (cost=0.00..17.96 rows=1 width=246)\n Index Cond: ((\"outer\".token_id)::integer = (r.token_id)::integer)\n Filter: ((status)::text = 'redeemed'::text)\n\nThis query returns a lot quicker than the plan would suggest, as the\nplanner is over-estimating the amount of rows where\n((sarreport_id)::integer = 9). it thinks there are 53430 when in fact\nthere are only 7 (despite a vacuum and analyse).\n\nCan anyone give me any suggestions? are the index stats the cause of\nmy problem, or is it the rewrite of the query?\n\nCheers\n\n\nVersion: PostgreSQL 8.0.3 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.0.2 20050821 (prerelease) (Debian 4.0.1-6)\n\n\n-- \n\n - Rich Doughty\n",
"msg_date": "Thu, 17 Nov 2005 01:06:42 +0000",
"msg_from": "Rich Doughty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange query plan invloving a view"
},
{
"msg_contents": "Rich Doughty <[email protected]> writes:\n> However, the following query (which i believe should be equivalent)\n\n> SELECT *\n> FROM\n> \ttokens.ta_tokenhist h INNER JOIN\n> \ttokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN\n> \ttokens.ta_tokenhist i ON t.token_id = i.token_id AND\n> i.status = 'issued' LEFT JOIN\n> \ttokens.ta_tokenhist s ON t.token_id = s.token_id AND\n> s.status = 'sold' LEFT JOIN\n> \ttokens.ta_tokenhist r ON t.token_id = r.token_id AND\n> r.status = 'redeemed'\n> WHERE\n> \th.sarreport_id = 9\n> ;\n\nNo, that's not equivalent at all, because the implicit parenthesization\nis left-to-right; therefore you've injected the constraint to a few rows\nof ta_tokenhist (and therefore only a few rows of ta_tokens) into the\nbottom of the LEFT JOIN stack. In the other case the constraint is at\nthe wrong end of the join stack, and so the full view output gets formed\nbefore anything gets thrown away.\n\nSome day the Postgres planner will probably be smart enough to rearrange\nthe join order despite the presence of outer joins ... but today is not\nthat day.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Nov 2005 13:06:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange query plan invloving a view "
},
{
"msg_contents": "Tom Lane wrote:\n> Rich Doughty <[email protected]> writes:\n> \n>>However, the following query (which i believe should be equivalent)\n> \n> \n>>SELECT *\n>>FROM\n>>\ttokens.ta_tokenhist h INNER JOIN\n>>\ttokens.ta_tokens t ON h.token_id = t.token_id LEFT JOIN\n>>\ttokens.ta_tokenhist i ON t.token_id = i.token_id AND\n>> i.status = 'issued' LEFT JOIN\n>>\ttokens.ta_tokenhist s ON t.token_id = s.token_id AND\n>> s.status = 'sold' LEFT JOIN\n>>\ttokens.ta_tokenhist r ON t.token_id = r.token_id AND\n>> r.status = 'redeemed'\n>>WHERE\n>>\th.sarreport_id = 9\n>>;\n> \n> \n> No, that's not equivalent at all, because the implicit parenthesization\n> is left-to-right; therefore you've injected the constraint to a few rows\n> of ta_tokenhist (and therefore only a few rows of ta_tokens) into the\n> bottom of the LEFT JOIN stack. In the other case the constraint is at\n> the wrong end of the join stack, and so the full view output gets formed\n> before anything gets thrown away.\n> \n> Some day the Postgres planner will probably be smart enough to rearrange\n> the join order despite the presence of outer joins ... but today is not\n> that day.\n\nthanks for the reply.\n\nis there any way i can achieve what i need to by using views, or should i\njust use a normal query? i'd prefer to use a view but i just can't get round\nthe performance hit.\n\n-- \n\n - Rich Doughty\n",
"msg_date": "Tue, 22 Nov 2005 13:29:29 +0000",
"msg_from": "Rich Doughty <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange query plan invloving a view"
}
] |
[
{
"msg_contents": "> > Perhaps we should put a link on the home page underneath LATEST \n> > RELEASEs saying\n> > \t7.2: de-supported\n> > \n> > with a link to a scary note along the lines of the above.\n> > \n> > ISTM that there are still too many people on older releases.\n> > \n> > We probably need an explanation of why we support so many \n> releases (in \n> > comparison to licenced software) and a note that this does \n> not imply \n> > the latest releases are not yet production (in comparison \n> to MySQL or \n> > Sybase who have been in beta for a very long time).\n> \n> By the way, is anyone interested in creating some sort of \n> online repository on pgsql.org or pgfoundry where we can keep \n> statically compiled pg_dump/all for several platforms for 8.1?\n> \n> That way if someone wanted to upgrade from 7.2 to 8.1, they \n> can just grab the latest dumper from the website, dump their \n> old database, then upgrade easily.\n\nBut if they're upgrading to 8.1, don't they already have the new\npg_dump? How else are they going to dump their *new* database?\n\n> In my experience not many pgsql admins have test servers or \n> the skills to build up test machines with the latest pg_dump, \n\nI don't, but I still dump with the latest version - works fine both on\nlinux and windows for me... \n\n> etc. (Seriously.) In fact, few realise at all that they \n> should use the 8.1 dumper.\n\nThat most people don't know they should use the new one I understand\nthough. But I don't see how this will help against that :-)\n\n//Magnus\n",
"msg_date": "Thu, 17 Nov 2005 10:19:28 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": ">>That way if someone wanted to upgrade from 7.2 to 8.1, they \n>>can just grab the latest dumper from the website, dump their \n>>old database, then upgrade easily.\n> \n> But if they're upgrading to 8.1, don't they already have the new\n> pg_dump? How else are they going to dump their *new* database?\n\nErm. Usually when you install the new package/port for 8.1, you cannot \nhave both new and old installed at the same time man. Remember they \nboth store exactly the same binary files in exactly the same place.\n\n>>In my experience not many pgsql admins have test servers or \n>>the skills to build up test machines with the latest pg_dump, \n> \n> I don't, but I still dump with the latest version - works fine both on\n> linux and windows for me... \n\nSo you're saying you DO have the skills to do it then...\n\n>>etc. (Seriously.) In fact, few realise at all that they \n>>should use the 8.1 dumper.\n> \n> That most people don't know they should use the new one I understand\n> though. But I don't see how this will help against that :-)\n\nIt'll make it easy...\n\nChris\n",
"msg_date": "Thu, 17 Nov 2005 23:05:10 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n>> That most people don't know they should use the new one I understand\n>> though. But I don't see how this will help against that :-)\n> \n> It'll make it easy...\n\nAs the miscreant that caused this thread to get started, let me\n*wholeheartedly* agree with Chris. An easy way to get the pg_dump\nfor the upgrade target to run with the upgradable source\nwould work wonders. (Instructions included, of course.)\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Thu, 17 Nov 2005 08:13:30 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
}
] |
[
{
"msg_contents": "> >>That way if someone wanted to upgrade from 7.2 to 8.1, they \n> can just \n> >>grab the latest dumper from the website, dump their old \n> database, then \n> >>upgrade easily.\n> > \n> > But if they're upgrading to 8.1, don't they already have the new \n> > pg_dump? How else are they going to dump their *new* database?\n> \n> Erm. Usually when you install the new package/port for 8.1, \n> you cannot have both new and old installed at the same time \n> man. Remember they both store exactly the same binary files \n> in exactly the same place.\n\nUrrk. Didn't think of that. I always install from source on Unix, which\ndoesn't have the problem. And the Windows port doesn't have this problem\n- it will put the binaries in a version dependant directory.\n\nOne could claim the packages are broken ;-), but that's not gonig to\nhelp here, I know...\n\n(I always install in pgsql-<version>, and then symlink pgsql there..)\n\n\n> >>etc. (Seriously.) In fact, few realise at all that they should use \n> >>the 8.1 dumper.\n> > \n> > That most people don't know they should use the new one I \n> understand \n> > though. But I don't see how this will help against that :-)\n> \n> It'll make it easy...\n\nYou assume they know enough to download it. If they don't know to look\nfor it, they still won't find it.\n\nBut the bottom line: I can see how it would be helpful if you're on a\ndistro which packages postgresql in a way that prevents you from\ninstalling more than one version at the same time.\n\n//Magnus\n",
"msg_date": "Thu, 17 Nov 2005 16:39:06 +0100",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
},
{
"msg_contents": "\n>You assume they know enough to download it. If they don't know to look\n>for it, they still won't find it.\n> \n>\nI think there should be a big, fat warning in the release notes. \nSomething like:\n\nWARNING: Upgrading to version X.Y requires a full dump/restore cycle. \nPlease download the appropriate dump-utility from http://postgresql.org/dumputils/X.Y/ \nand make a copy of your database before installing the new version X.Y.\n\n\nAnd then link to a dir with the statically linked pg_dump (and -all) for \nthe most common platforms. I must admit, I did not know that one should \nuse the new tool in a cyclus (and I have used Pg almost exclusively \nsince 7.0). That could also be the place to add a line about version S.T \nis now considered obsolete and unsupported.\n\n/Svenne\n",
"msg_date": "Thu, 17 Nov 2005 17:26:51 +0100",
"msg_from": "Svenne Krap <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Help speeding up delete"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe are operating a 1.5GB postgresql database for a year and we have \nproblems for nearly a month. Usually everything is OK with the database, \nqueries are executed fast even if they are complicated but sometimes and \nfor half an hour, we have a general slow down.\n\nThe server is a dedicated quad xeon with 4GB and a RAID1 array for the\nsystem and a RAID10 one for postgresql data. We have very few\nupdates/inserts/deletes during the day.\n\nPostgresql version is 7.4.8.\n\n- the database is vacuumed, analyzed regularly (but we are not using\nautovacuum) and usually performs pretty well ;\n- IOs are OK, the database is entirely in RAM (see top.txt and\niostat.txt attached) ;\n- CPUs are usually 25% idle, load is never really growing and its\nmaximum is below 5 ;\n- I attached two plans for a simple query, the one is what we have when\nthe server is fast, the other when we have a slow down: it's exactly the\nsame plan but, as you can see it, the time to fetch the first row from\nindexes is quite high for the slow query ;\n- during this slow down, queries that usually take 500ms can take up to\n60 seconds (and sometimes even more) ;\n- we have up to 130 permanent connections from our apache servers during\nthis slow down as we have a lot of apache children waiting for a response.\n\nI attached a vmstat output. Context switches are quite high but I'm not\nsure it can be called a context switch storm and that this is the cause\nof our problem.\n\nThanks for any advice or idea to help us understanding this problem and \nhopefully solve it.\n\nRegards,\n\n--\nGuillaume",
"msg_date": "Thu, 17 Nov 2005 18:47:09 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "weird performances problem"
},
{
"msg_contents": "On Thu, Nov 17, 2005 at 06:47:09PM +0100, Guillaume Smet wrote:\n> queries are executed fast even if they are complicated but sometimes and \n> for half an hour, we have a general slow down.\n\nIs it exactly half an hour? What changes at the time that happens\n(i.e. what else happens on the machine?). Is this a time, for\nexample, when logrotate is killing your I/O with file moves?\n\nA\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n",
"msg_date": "Thu, 17 Nov 2005 17:13:14 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Andrew,\n\nAndrew Sullivan wrote:\n > Is it exactly half an hour? What changes at the time that happens\n > (i.e. what else happens on the machine?). Is this a time, for\n > example, when logrotate is killing your I/O with file moves?\n\nNo, it's not exactly half an hour. It's just that it slows down for some \ntime (10, 20, 30 minutes) and then it's OK again. It happens several \ntimes per day. I checked if there are other processes running when we \nhave this slow down but it's not the case.\nThere's not really a difference between when it's OK or not (apart from \nthe number of connections because the db is too slow): load is still at \n4 or 5, iowait is still at 0%, there's still cpu idle and we still have \nfree memory. I can't find what is the limit and why there is cpu idle.\n\nI forgot to give our non default postgresql.conf parameters:\nshared_buffers = 28800\nsort_mem = 32768\nvacuum_mem = 32768\nmax_fsm_pages = 350000\nmax_fsm_relations = 2000\ncheckpoint_segments = 16\neffective_cache_size = 270000\nrandom_page_cost = 2\n\nThanks for your help\n\n--\nGuillaume\n",
"msg_date": "Fri, 18 Nov 2005 00:35:06 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "> I forgot to give our non default postgresql.conf parameters:\n> shared_buffers = 28800\n> sort_mem = 32768\n> vacuum_mem = 32768\n> max_fsm_pages = 350000\n> max_fsm_relations = 2000\n> checkpoint_segments = 16\n> effective_cache_size = 270000\n> random_page_cost = 2\n\nIsn't sort_mem quite high? Remember that sort_mem size is allocated\nfor each sort, not for each connection. Mine is 4096 (4 MB). My\neffective_cache_size is set to 27462.\n\nWhat OS are you running?\n\nregards\nClaus\n",
"msg_date": "Fri, 18 Nov 2005 01:15:28 +0100",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "On Fri, Nov 18, 2005 at 12:35:06AM +0100, Guillaume Smet wrote:\n> sort_mem = 32768\n\nI would be very suspicious of that much memory for sort. Please see\nthe docs for what that does. That is the amount that _each sort_ can\nallocate before spilling to disk. If some set of your users are\ncausing complicated queries with, say, four sorts apiece, then each\nuser is potentially allocating 4x that much memory. That's going to\nwreak havoc on your disk buffers (which are tricky to monitor on most\nsystems, and impossible on some).\n\nThis'd be the first knob I'd twiddle, for sure.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIt is above all style through which power defers to reason.\n\t\t--J. Robert Oppenheimer\n",
"msg_date": "Fri, 18 Nov 2005 11:13:12 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "\n\"Guillaume Smet\" <[email protected]> wrote\n> [root@bd root]# iostat 10\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 7.20 0.00 92.00 0 920\n> sda1 0.00 0.00 0.00 0 0\n> sda2 6.40 0.00 78.40 0 784\n> sda3 0.00 0.00 0.00 0 0\n> sda4 0.00 0.00 0.00 0 0\n> sda5 0.00 0.00 0.00 0 0\n> sda6 0.80 0.00 13.60 0 136\n> sdb 5.00 0.00 165.60 0 1656\n> sdb1 5.00 0.00 165.60 0 1656\n>\n> Nested Loop (cost=0.00..13.52 rows=2 width=1119) (actual \n> time=155.286..155.305 rows=1 loops=1)\n> -> Index Scan using pk_newslang on newslang nl (cost=0.00..3.87 rows=1 \n> width=1004) (actual time=44.575..44.579 rows=1 loops=1)\n> Index Cond: (((codelang)::text = 'FRA'::text) AND (3498704 = \n> numnews))\n> -> Nested Loop Left Join (cost=0.00..9.61 rows=2 width=119) (actual \n> time=110.648..110.660 rows=1 loops=1)\n> -> Index Scan using pk_news on news n (cost=0.00..3.31 rows=2 \n> width=98) (actual time=0.169..0.174 rows=1 loops=1)\n> Index Cond: (numnews = 3498704)\n> -> Index Scan using pk_photo on photo p (cost=0.00..3.14 rows=1 \n> width=25) (actual time=110.451..110.454 rows=1 loops=1)\n> Index Cond: (p.numphoto = \"outer\".numphoto)\n> Total runtime: 155.514 ms\n>\n\nSomeone is doing a massive *write* at this time, which makes your query \n*read* quite slow. Can you find out which process is doing write?\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Tue, 22 Nov 2005 02:21:12 -0500",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Andrew,\n\n> I would be very suspicious of that much memory for sort. Please see\n> the docs for what that does. That is the amount that _each sort_ can\n> allocate before spilling to disk.\n> If some set of your users are\n> causing complicated queries with, say, four sorts apiece, then each\n> user is potentially allocating 4x that much memory. That's going to\n> wreak havoc on your disk buffers (which are tricky to monitor on most\n> systems, and impossible on some).\n\nYes, we have effectively complicated queries. That's why we put the \nsort_mem so high. I'll see if we can put it lower for the next few days \nto see if it improves our performances.\n\nThanks for your help.\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Nov 2005 10:49:22 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Qingqing Zhou wrote:\n> Someone is doing a massive *write* at this time, which makes your query \n> *read* quite slow. Can you find out which process is doing write?\n\nIndexes should be in memory so I don't expect a massive write to slow \ndown the select queries. sdb is the RAID10 array dedicated to our data \nso the postgresql process is the only one to write on it. I'll check \nwhich write queries are running because there should really be a few \nupdates/inserts on our db during the day.\n\nOn a four days log analysis, I have the following:\nSELECT 403,964\nINSERT \t574\nUPDATE \t393\nDELETE \t26\nSo it's not really normal to have a massive write during the day.\n\nThanks for your help\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Nov 2005 10:56:40 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "If I understand your HW config correctly, all of the pg stuff is on \nthe same RAID 10 set?\n\nIf so, give WAL its own dedicated RAID 10 set. This looks like the \nold problem of everything stalling while WAL is being committed to HD.\n\nThis concept works for other tables as well. If you have a tables \nthat both want services at the same time, disk arm contention will \ndrag performance into the floor when they are on the same HW set.\n\nProfile your HD access and put tables that want to be accessed at the \nsame time on different HD sets. Even if you have to buy more HW to do it.\n\nRon\n\n\nAt 04:56 AM 11/22/2005, Guillaume Smet wrote:\n>Qingqing Zhou wrote:\n>>Someone is doing a massive *write* at this time, which makes your \n>>query *read* quite slow. Can you find out which process is doing write?\n>\n>Indexes should be in memory so I don't expect a massive write to \n>slow down the select queries. sdb is the RAID10 array dedicated to \n>our data so the postgresql process is the only one to write on it. \n>I'll check which write queries are running because there should \n>really be a few updates/inserts on our db during the day.\n>\n>On a four days log analysis, I have the following:\n>SELECT 403,964\n>INSERT 574\n>UPDATE 393\n>DELETE 26\n>So it's not really normal to have a massive write during the day.\n>\n>Thanks for your help\n>\n>--\n>Guillaume\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n\n\n\n",
"msg_date": "Tue, 22 Nov 2005 07:53:46 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Ron wrote:\n> If I understand your HW config correctly, all of the pg stuff is on the \n> same RAID 10 set?\n\nNo, the system and the WAL are on a RAID 1 array and the data on their \nown RAID 10 array.\nAs I said earlier, there's only a few writes in the database so I'm not \nreally sure the WAL can be a limitation: IIRC, it's only used for writes \nisn't it?\nDon't you think we should have some io wait if the database was waiting \nfor the WAL? We _never_ have any io wait on this server but our CPUs are \nstill 30-40% idle.\n\nA typical top we have on this server is:\n 15:22:39 up 24 days, 13:30, 2 users, load average: 3.86, 3.96, 3.99\n156 processes: 153 sleeping, 3 running, 0 zombie, 0 stopped\nCPU states: cpu user nice system irq softirq iowait idle\n total 50.6% 0.0% 4.7% 0.0% 0.6% 0.0% 43.8%\n cpu00 47.4% 0.0% 3.1% 0.3% 1.5% 0.0% 47.4%\n cpu01 43.7% 0.0% 3.7% 0.0% 0.5% 0.0% 51.8%\n cpu02 58.9% 0.0% 7.7% 0.0% 0.1% 0.0% 33.0%\n cpu03 52.5% 0.0% 4.1% 0.0% 0.1% 0.0% 43.0%\nMem: 3857224k av, 3307416k used, 549808k free, 0k shrd, 80640k \nbuff\n 2224424k actv, 482552k in_d, 49416k in_c\nSwap: 4281272k av, 10032k used, 4271240k free 2602424k \ncached\n\nAs you can see, we don't swap, we have free memory, we have all our data \ncached (our database size is 1.5 GB).\n\nContext switch are between 10,000 and 20,000 per seconds.\n\n> This concept works for other tables as well. If you have a tables that \n> both want services at the same time, disk arm contention will drag \n> performance into the floor when they are on the same HW set.\n> Profile your HD access and put tables that want to be accessed at the \n> same time on different HD sets. Even if you have to buy more HW to do it.\n\nI use iostat and I can only see a little write activity and no read \nactivity on both raid arrays.\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Nov 2005 15:26:43 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Claus and Andrew,\n\nClaus Guttesen wrote:\n> Isn't sort_mem quite high? Remember that sort_mem size is allocated\n> for each sort, not for each connection. Mine is 4096 (4 MB). My\n> effective_cache_size is set to 27462.\n\nI tested sort mem from 4096 to 32768 (4096, 8192, 16384, 32768) this \nafternoon and 32768 is definitely the best value for us. We still have \nfree memory using it, we don't have any swap and queries are generally \nfaster (I log all queries taking more than 500ms and the log is growing \nfar faster with lower values of sort_mem).\n\n> What OS are you running?\n\n# cat /etc/redhat-release\nCentOS release 3.6 (Final)\nso it's a RHEL 3 upd 6.\n\n# uname -a\nLinux our.host 2.4.21-37.ELsmp #1 SMP Wed Sep 28 14:05:46 EDT 2005 i686 \ni686 i386 GNU/Linux\n\n# cat /proc/cpuinfo\n4x\nprocessor : 3\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 2\nmodel name : Intel(R) Xeon(TM) MP CPU 2.20GHz\nstepping : 6\ncpu MHz : 2192.976\ncache size : 512 KB\n\n# cat /proc/meminfo\n total: used: free: shared: buffers: cached:\nMem: 3949797376 3478376448 471420928 0 83410944 2679156736\nSwap: 4384022528 9797632 4374224896\nMemTotal: 3857224 kB\nMemFree: 460372 kB\nMemShared: 0 kB\nBuffers: 81456 kB\nCached: 2610764 kB\n\nHTH\n\nRegards,\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Nov 2005 15:37:52 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "At 09:26 AM 11/22/2005, Guillaume Smet wrote:\n>Ron wrote:\n>>If I understand your HW config correctly, all of the pg stuff is on \n>>the same RAID 10 set?\n>\n>No, the system and the WAL are on a RAID 1 array and the data on \n>their own RAID 10 array.\n\nAs has been noted many times around here, put the WAL on its own \ndedicated HD's. You don't want any head movement on those HD's.\n\n\n>As I said earlier, there's only a few writes in the database so I'm \n>not really sure the WAL can be a limitation: IIRC, it's only used \n>for writes isn't it?\n\nWhen you reach a WAL checkpoint, pg commits WAL data to HD... ...and \ndoes almost nothing else until said commit is done.\n\n\n>Don't you think we should have some io wait if the database was \n>waiting for the WAL? We _never_ have any io wait on this server but \n>our CPUs are still 30-40% idle.\n_Something_ is doing long bursts of write IO on sdb and sdb1 every 30 \nminutes or so according to your previous posts.\n\nProfile your DBMS and find out what.\n\n\n>A typical top we have on this server is:\n> 15:22:39 up 24 days, 13:30, 2 users, load average: 3.86, 3.96, 3.99\n>156 processes: 153 sleeping, 3 running, 0 zombie, 0 stopped\n>CPU states: cpu user nice system irq softirq iowait idle\n> total 50.6% 0.0% 4.7% 0.0% 0.6% 0.0% 43.8%\n> cpu00 47.4% 0.0% 3.1% 0.3% 1.5% 0.0% 47.4%\n> cpu01 43.7% 0.0% 3.7% 0.0% 0.5% 0.0% 51.8%\n> cpu02 58.9% 0.0% 7.7% 0.0% 0.1% 0.0% 33.0%\n> cpu03 52.5% 0.0% 4.1% 0.0% 0.1% 0.0% 43.0%\n>Mem: 3857224k av, 3307416k used, 549808k free, 0k shrd, 80640k buff\n> 2224424k actv, 482552k in_d, 49416k in_c\n>Swap: 4281272k av, 10032k used, 4271240k \n>free 2602424k cached\n>\n>As you can see, we don't swap, we have free memory, we have all our \n>data cached (our database size is 1.5 GB).\n>\n>Context switch are between 10,000 and 20,000 per seconds.\nThat's actually a reasonably high CS rate. Again, why?\n\n\n>>This concept works for other tables as well. If you have tables \n>>that both want services at the same time, disk arm contention will \n>>drag performance into the floor when they are on the same HW set.\n>>Profile your HD access and put tables that want to be accessed at \n>>the same time on different HD sets. Even if you have to buy more HW to do it.\n>\n>I use iostat and I can only see a little write activity and no read \n>activity on both raid arrays.\nRemember it's not just the overall amount, it's _when_and _where_ the \nwrite activity takes place. If you have almost no write activity, \nbut whenever it happens it all happens to the same place by multiple \nthings contending for the same HDs, your performance during that time \nwill be poor.\n\nSince the behavior you are describing fits that cause very well, I'd \nsee if you can verify that's what's going on.\n\nRon\n\n\n",
"msg_date": "Tue, 22 Nov 2005 09:49:17 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "Ron,\n\nFirst of all, thanks for your time.\n\n> As has been noted many times around here, put the WAL on its own \n> dedicated HD's. You don't want any head movement on those HD's.\n\nYep, I know that. That's just we supposed it was not so important if it \nwas nearly a readonly database which is wrong according to you.\n\n> _Something_ is doing long bursts of write IO on sdb and sdb1 every 30 \n> minutes or so according to your previous posts.\n\nIt's not every 30 minutes. It's a 20-30 minutes slow down 3-4 times a \nday when we have a high load.\nAnyway apart from this problem which is temporary, we have cpu idle all \nthe day when we don't have any io wait (and nearly no write) and the \nserver is enough loaded to use all the 4 cpus. I don't think it's normal.\nIt's not a very good idea but do you think we can put fsync to off \nduring a few minutes to check if the WAL is effectively our problem? A \nsimple reload of the configuration seems to take it into account. So can \nwe disable it temporarily even when the database is running?\nIf it is the case, I think we'll test it and if it solved our problem, \nwe'll ask our customer to buy two new disks to have a specific RAID 1 \narray for the pg_xlog.\n\n> That's actually a reasonably high CS rate. Again, why?\n\nI'm not so surprised considering what I read before about Xeon \nmultiprocessors, pg 7.4 and the famous context switch storm. We are \nplanning a migration to 8.1 to (hopefully) solve this problem. Perhaps \nour problem is due to that high CS rate.\n\n--\nGuillaume\n",
"msg_date": "Tue, 22 Nov 2005 16:26:17 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird performances problem"
},
{
"msg_contents": "At 10:26 AM 11/22/2005, Guillaume Smet wrote:\n>Ron,\n>\n>First of all, thanks for your time.\nHappy to help.\n\n\n>>As has been noted many times around here, put the WAL on its own \n>>dedicated HD's. You don't want any head movement on those HD's.\n>\n>Yep, I know that. That's just we supposed it was not so important if \n>it was nearly a readonly database which is wrong according to you.\nIt's just good practice with pg that pg-xlog should always get it's \nown dedicated HD set. OTOH, I'm not at all convinced given the scant \nevidence so far that it is the primary problem here; particularly \nsince if I understand you correctly, px-xlog is not on sdb or sdb1 \nwhere you are having the write storm.\n\n\n>>_Something_ is doing long bursts of write IO on sdb and sdb1 every \n>>30 minutes or so according to your previous posts.\n>\n>It's not every 30 minutes. It's a 20-30 minutes slow down 3-4 times \n>a day when we have a high load.\n\nThanks for the correction and I apologize for the misunderstanding.\nClearly the first step is to instrument sdb and sdb1 so that you \nunderstand exactly what is being accessed and written on them.\n\nPossibilities that come to mind:\na) Are some of your sorts requiring more than 32MB during high \nload? If sorts do not usually require HD temp files and suddenly do, \nyou could get behavior like yours.\n\nb) Are you doing too many 32MB sorts during high load? Same comment as above.\n\nc) Are you doing some sort of data updates or appends during high \nload that do not normally occur?\n\nd) Are you constantly doing \"a little\" write IO that turns into a \nwrite storm under high load because of queuing issues?\n\nPut the scaffolding in needed to trace _exactly_ what's happening on \nsdb and sdb1 throughout the day and then examine the traces over a \nday, a few days, and a week. I'll bet you will notice some patterns \nthat will be helpful in identifying what's going on.\n\nRon\n\n\n",
"msg_date": "Tue, 22 Nov 2005 12:19:57 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird performances problem"
}
] |
[
{
"msg_contents": "> Remember - large DB is going to be IO bound. Memory will get thrashed\n> for file block buffers, even if you have large amounts, it's all gonna\n> be cycled in and out again.\n\n'fraid I have to disagree here. I manage ERP systems for manufacturing\ncompanies of various sizes. My systems are all completely cpu\nbound...even though the larger database are well into two digit gigabyte\nsizes, the data turnover while huge is relatively constrained and well\nserved by the O/S cache. OTOH, query latency is a *huge* factor and we\ndo everything possible to lower it. Even if the cpu is not 100% loaded,\nfaster processors make the application 'feel' faster to the client.\n\nMerlin\n\n\n",
"msg_date": "Thu, 17 Nov 2005 15:22:50 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "\nHi,\n\nWe want to create a database for each one of our departments, but we only \nwant to have one instance of postgresql running. There are about 10-20 \ndepartments. I can easily use createdb to create these databases. However, \nwhat is the max number of database I can create before performance goes \ndown?\n\nAssuming each database is performing well alone, how would putting 10-20 of \nthem together in one instance affect postgres?\n\nIn terms of getting a new server for this project, how do I gauge how \npowerful of a server should I get?\n\nThanks.\n\n\n",
"msg_date": "Sat, 19 Nov 2005 03:24:08 +0000",
"msg_from": "\"anon permutation\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "What is the max number of database I can create in an instance of\n\tpgsql?"
},
{
"msg_contents": "\nOn Nov 19, 2005, at 12:24 , anon permutation wrote:\n\n> However, what is the max number of database I can create before \n> performance goes down?\n>\n> Assuming each database is performing well alone, how would putting \n> 10-20 of them together in one instance affect postgres?\n>\n> In terms of getting a new server for this project, how do I gauge \n> how powerful of a server should I get?\n\nI'm sure those wiser than me will chime in with specifics. I think \nyou should be think of usage not in terms of number of databases but \nin terms of connections rates, database size (numbers of tables and \ntuples) and the types of queries that will be run. While there may be \na little overhead in from having a number of databases in the \ncluster, I think this is probably going to be insignificant in \ncomparison to these other factors. A better idea of what the usage \nwill guide you in choosing your hardware.\n\n\nMichael Glaesemann\ngrzm myrealbox com\n\n\n\n",
"msg_date": "Sat, 19 Nov 2005 12:49:43 +0900",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the max number of database I can create in an instance of\n\tpgsql?"
},
{
"msg_contents": "On 11/18/05, anon permutation <[email protected]> wrote:\n>\n> Hi,\n>\n> We want to create a database for each one of our departments, but we only\n> want to have one instance of postgresql running. There are about 10-20\n> departments. I can easily use createdb to create these databases. However,\n>\n\nAfter of doing this, you have to think if you will want to make querys\nacross the info of some or all databases (and you will) if that is the\ncase the better you can do is create schemas instead of databases...\n\n> what is the max number of database I can create before performance goes\n> down?\n>\n\nthe problem isn't about number of databases but concurrent users...\nafter all you will have the same resources for 1 or 100 databases, the\nimportant thing is the number of users, the amount of data normal\nusers will process in a normal day, and complexity of your queries.\n\n> Assuming each database is performing well alone, how would putting 10-20 of\n>\n> them together in one instance affect postgres?\n>\n> In terms of getting a new server for this project, how do I gauge how\n> powerful of a server should I get?\n>\n> Thanks.\n>\n>\n\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n",
"msg_date": "Fri, 18 Nov 2005 23:55:26 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the max number of database I can create in an instance of\n\tpgsql?"
},
{
"msg_contents": "\n>> However, what is the max number of database I can create before \n>> performance goes down?\n>\nI know I'm not directly answering your question, but you might want to \nconsider why you're splitting things up into different logical \ndatabases. If security is a big concern, you can create different \ndatabase users that own the different departments' tables, and each of \nyour apps can login as the corresponding users. \n\nEveryone loves reports. Once you've got data in your database, people \nwill ask for a billion reports...Whether or not they know it now, most \nlikely they're going to want reports that cross the department \nboundaries (gross revenue, employee listings etc.) and that will be very \ndifficult if you have multiple databases.\n\n\n",
"msg_date": "Sat, 19 Nov 2005 07:45:10 -0600",
"msg_from": "John McCawley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: What is the max number of database I can create in"
}
] |
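A minimal sketch of the schema-per-department layout suggested above, with invented department, table and role names; the commented GRANTs follow John's point about per-department security.

    CREATE SCHEMA sales;
    CREATE SCHEMA hr;
    CREATE TABLE sales.staff (id serial PRIMARY KEY, name text);
    CREATE TABLE hr.staff    (id serial PRIMARY KEY, name text);

    -- cross-department reporting stays a single query, which separate databases would not allow:
    SELECT 'sales' AS dept, count(*) FROM sales.staff
    UNION ALL
    SELECT 'hr', count(*) FROM hr.staff;

    -- per-department access control (assumes a sales_user role already exists):
    -- GRANT USAGE ON SCHEMA sales TO sales_user;
    -- GRANT SELECT, INSERT, UPDATE, DELETE ON sales.staff TO sales_user;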
[
{
"msg_contents": "I am using PostgreSQL in an embedded system which has only 32 or 64 MB RAM \n(run on PPC 266 MHz or ARM 266MHz CPU). I have a table to keep downlaod \ntasks. There is a daemon keep looking up the table and fork a new process to \ndownload data from internet.\n\nDaemon:\n . Check the table every 5 seconds\n . Fork a download process to download if there is new task\nDownlaod process (there are 5 download process max):\n . Update the download rate and downloaded size every 3 seconds.\n\nAt begining, everything just fine. The speed is good. But after 24 hours, \nthe speed to access database become very very slow. Even I stop all \nprocesses, restart PostgreSQL and use psql to select data, this speed is \nstill very very slow (a SQL command takes more than 2 seconds). It is a \nsmall table. There are only 8 records in the table.\n\nThe only way to solve it is remove all database, run initdb, create new \ndatabase and insert new records. I tried to run vacummdb but still very \nslow.\n\nAny idea to make it faster?\n\nThanks,\nAlex\n\n--\nHere is the table schema:\ncreate table download_queue (\n task_id SERIAL,\n username varchar(128),\n pid int,\n url text,\n filename varchar(1024),\n status int,\n created_time int,\n started_time int,\n total_size int8,\n current_size int8,\n current_rate int,\n CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)\n);\nCREATE INDEX download_queue_user_index ON download_queue USING BTREE \n(username);\n\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Sat, 19 Nov 2005 15:46:06 +0800",
"msg_from": "\"Alex Wang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "VERY slow after many updates"
},
{
"msg_contents": "Alex,\n\nI suppose the table is a kind of 'queue' table, where you\ninsert/get/delete continuously, and the life of the records is short.\nConsidering that in postgres a delete will still leave you the record in\nthe table's file and in the indexes, just mark it as dead, your table's\nactual size can grow quite a lot even if the number of live records will\nstay small (you will have a lot of dead tuples, the more tasks\nprocessed, the more dead tuples). So I guess you should vacuum this\ntable very often, so that the dead tuples are reused. I'm not an expert\non this, but it might be good to vacuum after each n deletions, where n\nis ~ half the average size of the queue you expect to have. From time to\ntime you might want to do a vacuum full on it and a reindex.\n\nRight now I guess a vacuum full + reindex will help you. I think it's\nbest to do:\n\nvacuum download_queue;\nvacuum full download_queue;\nreindex download_queue;\n\nI think the non-full vacuum which is less obtrusive than the full one\nwill do at least some of the work and it will bring all needed things in\nFS cache, so the full vacuum to be as fast as possible (vacuum full\nlocks exclusively the table). At least I do it this way with good\nresults for small queue-like tables...\n\nBTW, I wonder if the download_queue_user_index index is helping you at\nall on that table ? Do you expect it to grow bigger than 1000 ?\nOtherwise it has no point to index it.\n\nHTH,\nCsaba.\n\nOn Sat, 2005-11-19 at 08:46, Alex Wang wrote:\n> I am using PostgreSQL in an embedded system which has only 32 or 64 MB RAM \n> (run on PPC 266 MHz or ARM 266MHz CPU). I have a table to keep downlaod \n> tasks. There is a daemon keep looking up the table and fork a new process to \n> download data from internet.\n> \n> Daemon:\n> . Check the table every 5 seconds\n> . Fork a download process to download if there is new task\n> Downlaod process (there are 5 download process max):\n> . Update the download rate and downloaded size every 3 seconds.\n> \n> At begining, everything just fine. The speed is good. But after 24 hours, \n> the speed to access database become very very slow. Even I stop all \n> processes, restart PostgreSQL and use psql to select data, this speed is \n> still very very slow (a SQL command takes more than 2 seconds). It is a \n> small table. There are only 8 records in the table.\n> \n> The only way to solve it is remove all database, run initdb, create new \n> database and insert new records. I tried to run vacummdb but still very \n> slow.\n> \n> Any idea to make it faster?\n> \n> Thanks,\n> Alex\n> \n> --\n> Here is the table schema:\n> create table download_queue (\n> task_id SERIAL,\n> username varchar(128),\n> pid int,\n> url text,\n> filename varchar(1024),\n> status int,\n> created_time int,\n> started_time int,\n> total_size int8,\n> current_size int8,\n> current_rate int,\n> CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)\n> );\n> CREATE INDEX download_queue_user_index ON download_queue USING BTREE \n> (username);\n> \n> \n\n",
"msg_date": "Sat, 19 Nov 2005 12:12:12 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VERY slow after many updates"
},
{
"msg_contents": "Hi Csaba,\n\nThanks for your reply.\n\nYes, it's a \"queue\" table. But I did not perform many insert/delete before \nit becomes slow. After insert 10 records, I just do get/update continuously. \nAfter 24 hour, the whole database become very slow (not only the \ndownload_queue table but other tables, too). But you are right. Full vacuum \nfixes the problem. Thank you very much!\n\nI expect there will be less than 1000 records in the table. The index does \nobvous improvement on \"SELECT task_id, username FROM download_queue WHERE \nusername > '%s'\" even there are only 100 records.\n\nThanks,\nAlex\n\n----- Original Message ----- \nFrom: \"Csaba Nagy\" <[email protected]>\nTo: \"Alex Wang\" <[email protected]>\nCc: \"postgres performance list\" <[email protected]>\nSent: Saturday, November 19, 2005 7:12 PM\nSubject: Re: [PERFORM] VERY slow after many updates\n\n\n> Alex,\n>\n> I suppose the table is a kind of 'queue' table, where you\n> insert/get/delete continuously, and the life of the records is short.\n> Considering that in postgres a delete will still leave you the record in\n> the table's file and in the indexes, just mark it as dead, your table's\n> actual size can grow quite a lot even if the number of live records will\n> stay small (you will have a lot of dead tuples, the more tasks\n> processed, the more dead tuples). So I guess you should vacuum this\n> table very often, so that the dead tuples are reused. I'm not an expert\n> on this, but it might be good to vacuum after each n deletions, where n\n> is ~ half the average size of the queue you expect to have. From time to\n> time you might want to do a vacuum full on it and a reindex.\n>\n> Right now I guess a vacuum full + reindex will help you. I think it's\n> best to do:\n>\n> vacuum download_queue;\n> vacuum full download_queue;\n> reindex download_queue;\n>\n> I think the non-full vacuum which is less obtrusive than the full one\n> will do at least some of the work and it will bring all needed things in\n> FS cache, so the full vacuum to be as fast as possible (vacuum full\n> locks exclusively the table). At least I do it this way with good\n> results for small queue-like tables...\n>\n> BTW, I wonder if the download_queue_user_index index is helping you at\n> all on that table ? Do you expect it to grow bigger than 1000 ?\n> Otherwise it has no point to index it.\n>\n> HTH,\n> Csaba.\n>\n> On Sat, 2005-11-19 at 08:46, Alex Wang wrote:\n>> I am using PostgreSQL in an embedded system which has only 32 or 64 MB \n>> RAM\n>> (run on PPC 266 MHz or ARM 266MHz CPU). I have a table to keep downlaod\n>> tasks. There is a daemon keep looking up the table and fork a new process \n>> to\n>> download data from internet.\n>>\n>> Daemon:\n>> . Check the table every 5 seconds\n>> . Fork a download process to download if there is new task\n>> Downlaod process (there are 5 download process max):\n>> . Update the download rate and downloaded size every 3 seconds.\n>>\n>> At begining, everything just fine. The speed is good. But after 24 hours,\n>> the speed to access database become very very slow. Even I stop all\n>> processes, restart PostgreSQL and use psql to select data, this speed is\n>> still very very slow (a SQL command takes more than 2 seconds). It is a\n>> small table. There are only 8 records in the table.\n>>\n>> The only way to solve it is remove all database, run initdb, create new\n>> database and insert new records. 
I tried to run vacummdb but still very\n>> slow.\n>>\n>> Any idea to make it faster?\n>>\n>> Thanks,\n>> Alex\n>>\n>> --\n>> Here is the table schema:\n>> create table download_queue (\n>> task_id SERIAL,\n>> username varchar(128),\n>> pid int,\n>> url text,\n>> filename varchar(1024),\n>> status int,\n>> created_time int,\n>> started_time int,\n>> total_size int8,\n>> current_size int8,\n>> current_rate int,\n>> CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)\n>> );\n>> CREATE INDEX download_queue_user_index ON download_queue USING BTREE\n>> (username);\n>>\n>>\n>\n>\n> -- \n> This message has been scanned for viruses and\n> dangerous content by MailScanner, and is\n> believed to be clean.\n> \n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Sat, 19 Nov 2005 20:05:00 +0800",
"msg_from": "\"Alex Wang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VERY slow after many updates"
},
{
"msg_contents": "Just for clarification, update is actually equal to delete+insert in\nPostgres. So if you update rows, it's the same as you would delete the\nrow and insert a new version. So the table is bloating also in this\nsituation.\nI think there is an added problem when you update, namely to get to a\nrow, postgres will traverse all dead rows matching the criteria... so\neven if you have an index, getting 1 row which was updated 10000 times\nwill access 10000 rows only to find 1 which is still alive. So in this\ncase vacuuming should happen even more often, to eliminate the dead\nrows.\nAnd the index was probably only helping because the table was really\nbloated, so if you vacuum it often enough you will be better off without\nthe index if the row count will stay low.\n\nCheers,\nCsaba.\n\n\nOn Sat, 2005-11-19 at 13:05, Alex Wang wrote:\n> Hi Csaba,\n> \n> Thanks for your reply.\n> \n> Yes, it's a \"queue\" table. But I did not perform many insert/delete before \n> it becomes slow. After insert 10 records, I just do get/update continuously. \n> After 24 hour, the whole database become very slow (not only the \n> download_queue table but other tables, too). But you are right. Full vacuum \n> fixes the problem. Thank you very much!\n> \n> I expect there will be less than 1000 records in the table. The index does \n> obvous improvement on \"SELECT task_id, username FROM download_queue WHERE \n> username > '%s'\" even there are only 100 records.\n> \n> Thanks,\n> Alex\n> \n> ----- Original Message ----- \n> From: \"Csaba Nagy\" <[email protected]>\n> To: \"Alex Wang\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Saturday, November 19, 2005 7:12 PM\n> Subject: Re: [PERFORM] VERY slow after many updates\n> \n> \n> > Alex,\n> >\n> > I suppose the table is a kind of 'queue' table, where you\n> > insert/get/delete continuously, and the life of the records is short.\n> > Considering that in postgres a delete will still leave you the record in\n> > the table's file and in the indexes, just mark it as dead, your table's\n> > actual size can grow quite a lot even if the number of live records will\n> > stay small (you will have a lot of dead tuples, the more tasks\n> > processed, the more dead tuples). So I guess you should vacuum this\n> > table very often, so that the dead tuples are reused. I'm not an expert\n> > on this, but it might be good to vacuum after each n deletions, where n\n> > is ~ half the average size of the queue you expect to have. From time to\n> > time you might want to do a vacuum full on it and a reindex.\n> >\n> > Right now I guess a vacuum full + reindex will help you. I think it's\n> > best to do:\n> >\n> > vacuum download_queue;\n> > vacuum full download_queue;\n> > reindex download_queue;\n> >\n> > I think the non-full vacuum which is less obtrusive than the full one\n> > will do at least some of the work and it will bring all needed things in\n> > FS cache, so the full vacuum to be as fast as possible (vacuum full\n> > locks exclusively the table). At least I do it this way with good\n> > results for small queue-like tables...\n> >\n> > BTW, I wonder if the download_queue_user_index index is helping you at\n> > all on that table ? 
Do you expect it to grow bigger than 1000 ?\n> > Otherwise it has no point to index it.\n> >\n> > HTH,\n> > Csaba.\n> >\n> > On Sat, 2005-11-19 at 08:46, Alex Wang wrote:\n> >> I am using PostgreSQL in an embedded system which has only 32 or 64 MB \n> >> RAM\n> >> (run on PPC 266 MHz or ARM 266MHz CPU). I have a table to keep downlaod\n> >> tasks. There is a daemon keep looking up the table and fork a new process \n> >> to\n> >> download data from internet.\n> >>\n> >> Daemon:\n> >> . Check the table every 5 seconds\n> >> . Fork a download process to download if there is new task\n> >> Downlaod process (there are 5 download process max):\n> >> . Update the download rate and downloaded size every 3 seconds.\n> >>\n> >> At begining, everything just fine. The speed is good. But after 24 hours,\n> >> the speed to access database become very very slow. Even I stop all\n> >> processes, restart PostgreSQL and use psql to select data, this speed is\n> >> still very very slow (a SQL command takes more than 2 seconds). It is a\n> >> small table. There are only 8 records in the table.\n> >>\n> >> The only way to solve it is remove all database, run initdb, create new\n> >> database and insert new records. I tried to run vacummdb but still very\n> >> slow.\n> >>\n> >> Any idea to make it faster?\n> >>\n> >> Thanks,\n> >> Alex\n> >>\n> >> --\n> >> Here is the table schema:\n> >> create table download_queue (\n> >> task_id SERIAL,\n> >> username varchar(128),\n> >> pid int,\n> >> url text,\n> >> filename varchar(1024),\n> >> status int,\n> >> created_time int,\n> >> started_time int,\n> >> total_size int8,\n> >> current_size int8,\n> >> current_rate int,\n> >> CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)\n> >> );\n> >> CREATE INDEX download_queue_user_index ON download_queue USING BTREE\n> >> (username);\n> >>\n> >>\n> >\n> >\n> > -- \n> > This message has been scanned for viruses and\n> > dangerous content by MailScanner, and is\n> > believed to be clean.\n> > \n> \n\n",
"msg_date": "Sat, 19 Nov 2005 13:12:52 +0100",
"msg_from": "Csaba Nagy <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VERY slow after many updates"
},
{
"msg_contents": "On 19.11.2005, at 13:05 Uhr, Alex Wang wrote:\n\n> Yes, it's a \"queue\" table. But I did not perform many insert/delete \n> before it becomes slow. After insert 10 records, I just do get/ \n> update continuously.\n\nWhen PostgreSQL updates a row, it creates a new row with the updated \nvalues. So you should be aware, that the DB gets bigger and bigger \nwhen you only update your rows. Vacuum full reclaims that used space.\n\nThe concepts are described in detail in the manual in chapter 12.\n\ncug\n\n-- \nPharmaLine Essen, GERMANY and\nBig Nerd Ranch Europe - PostgreSQL Training, Dec. 2005, Rome, Italy\nhttp://www.bignerdranch.com/classes/postgresql.shtml",
"msg_date": "Sat, 19 Nov 2005 13:18:19 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VERY slow after many updates"
},
{
"msg_contents": "Great infomation. I didn't know that update is equal to delete+insert in \nPostgres. I would be more careful on designing the database access method in \nthis case.\n\nThanks,\nAlex\n\n----- Original Message ----- \nFrom: \"Csaba Nagy\" <[email protected]>\nTo: \"Alex Wang\" <[email protected]>\nCc: \"postgres performance list\" <[email protected]>\nSent: Saturday, November 19, 2005 8:12 PM\nSubject: Re: [PERFORM] VERY slow after many updates\n\n\n> Just for clarification, update is actually equal to delete+insert in\n> Postgres. So if you update rows, it's the same as you would delete the\n> row and insert a new version. So the table is bloating also in this\n> situation.\n> I think there is an added problem when you update, namely to get to a\n> row, postgres will traverse all dead rows matching the criteria... so\n> even if you have an index, getting 1 row which was updated 10000 times\n> will access 10000 rows only to find 1 which is still alive. So in this\n> case vacuuming should happen even more often, to eliminate the dead\n> rows.\n> And the index was probably only helping because the table was really\n> bloated, so if you vacuum it often enough you will be better off without\n> the index if the row count will stay low.\n>\n> Cheers,\n> Csaba.\n>\n>\n> On Sat, 2005-11-19 at 13:05, Alex Wang wrote:\n>> Hi Csaba,\n>>\n>> Thanks for your reply.\n>>\n>> Yes, it's a \"queue\" table. But I did not perform many insert/delete \n>> before\n>> it becomes slow. After insert 10 records, I just do get/update \n>> continuously.\n>> After 24 hour, the whole database become very slow (not only the\n>> download_queue table but other tables, too). But you are right. Full \n>> vacuum\n>> fixes the problem. Thank you very much!\n>>\n>> I expect there will be less than 1000 records in the table. The index \n>> does\n>> obvous improvement on \"SELECT task_id, username FROM download_queue WHERE\n>> username > '%s'\" even there are only 100 records.\n>>\n>> Thanks,\n>> Alex\n>>\n>> ----- Original Message ----- \n>> From: \"Csaba Nagy\" <[email protected]>\n>> To: \"Alex Wang\" <[email protected]>\n>> Cc: \"postgres performance list\" <[email protected]>\n>> Sent: Saturday, November 19, 2005 7:12 PM\n>> Subject: Re: [PERFORM] VERY slow after many updates\n>>\n>>\n>> > Alex,\n>> >\n>> > I suppose the table is a kind of 'queue' table, where you\n>> > insert/get/delete continuously, and the life of the records is short.\n>> > Considering that in postgres a delete will still leave you the record \n>> > in\n>> > the table's file and in the indexes, just mark it as dead, your table's\n>> > actual size can grow quite a lot even if the number of live records \n>> > will\n>> > stay small (you will have a lot of dead tuples, the more tasks\n>> > processed, the more dead tuples). So I guess you should vacuum this\n>> > table very often, so that the dead tuples are reused. I'm not an expert\n>> > on this, but it might be good to vacuum after each n deletions, where n\n>> > is ~ half the average size of the queue you expect to have. From time \n>> > to\n>> > time you might want to do a vacuum full on it and a reindex.\n>> >\n>> > Right now I guess a vacuum full + reindex will help you. 
I think it's\n>> > best to do:\n>> >\n>> > vacuum download_queue;\n>> > vacuum full download_queue;\n>> > reindex download_queue;\n>> >\n>> > I think the non-full vacuum which is less obtrusive than the full one\n>> > will do at least some of the work and it will bring all needed things \n>> > in\n>> > FS cache, so the full vacuum to be as fast as possible (vacuum full\n>> > locks exclusively the table). At least I do it this way with good\n>> > results for small queue-like tables...\n>> >\n>> > BTW, I wonder if the download_queue_user_index index is helping you at\n>> > all on that table ? Do you expect it to grow bigger than 1000 ?\n>> > Otherwise it has no point to index it.\n>> >\n>> > HTH,\n>> > Csaba.\n>> >\n>> > On Sat, 2005-11-19 at 08:46, Alex Wang wrote:\n>> >> I am using PostgreSQL in an embedded system which has only 32 or 64 MB\n>> >> RAM\n>> >> (run on PPC 266 MHz or ARM 266MHz CPU). I have a table to keep \n>> >> downlaod\n>> >> tasks. There is a daemon keep looking up the table and fork a new \n>> >> process\n>> >> to\n>> >> download data from internet.\n>> >>\n>> >> Daemon:\n>> >> . Check the table every 5 seconds\n>> >> . Fork a download process to download if there is new task\n>> >> Downlaod process (there are 5 download process max):\n>> >> . Update the download rate and downloaded size every 3 seconds.\n>> >>\n>> >> At begining, everything just fine. The speed is good. But after 24 \n>> >> hours,\n>> >> the speed to access database become very very slow. Even I stop all\n>> >> processes, restart PostgreSQL and use psql to select data, this speed \n>> >> is\n>> >> still very very slow (a SQL command takes more than 2 seconds). It is \n>> >> a\n>> >> small table. There are only 8 records in the table.\n>> >>\n>> >> The only way to solve it is remove all database, run initdb, create \n>> >> new\n>> >> database and insert new records. I tried to run vacummdb but still \n>> >> very\n>> >> slow.\n>> >>\n>> >> Any idea to make it faster?\n>> >>\n>> >> Thanks,\n>> >> Alex\n>> >>\n>> >> --\n>> >> Here is the table schema:\n>> >> create table download_queue (\n>> >> task_id SERIAL,\n>> >> username varchar(128),\n>> >> pid int,\n>> >> url text,\n>> >> filename varchar(1024),\n>> >> status int,\n>> >> created_time int,\n>> >> started_time int,\n>> >> total_size int8,\n>> >> current_size int8,\n>> >> current_rate int,\n>> >> CONSTRAINT download_queue_pkey PRIMARY KEY(task_id)\n>> >> );\n>> >> CREATE INDEX download_queue_user_index ON download_queue USING BTREE\n>> >> (username);\n>> >>\n>> >>\n>> >\n>> >\n>> > -- \n>> > This message has been scanned for viruses and\n>> > dangerous content by MailScanner, and is\n>> > believed to be clean.\n>> >\n>>\n>\n>\n> -- \n> This message has been scanned for viruses and\n> dangerous content by MailScanner, and is\n> believed to be clean.\n> \n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Sat, 19 Nov 2005 20:29:47 +0800",
"msg_from": "\"Alex Wang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: VERY slow after many updates"
},
{
"msg_contents": "On Sat, 2005-11-19 at 06:29, Alex Wang wrote:\n> Great infomation. I didn't know that update is equal to delete+insert in \n> Postgres. I would be more careful on designing the database access method in \n> this case.\n\nJust make sure you have regular vacuums scheduled (or run them from\nwithin your app) and you're fine. \n",
"msg_date": "Mon, 21 Nov 2005 10:43:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VERY slow after many updates"
}
] |
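The maintenance advice from this thread, spelled out as runnable commands for the download_queue table in Alex's schema (note that REINDEX needs the TABLE keyword, which the thread's shorthand omitted):

    VACUUM ANALYZE download_queue;   -- run frequently, even every few minutes; does not lock out readers
    VACUUM FULL download_queue;      -- occasional; shrinks the file but takes an exclusive lock
    REINDEX TABLE download_queue;    -- rebuilds indexes bloated by the constant updates

Scheduling the plain VACUUM from cron or from the daemon itself after every few hundred updates keeps the table from bloating in the first place.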
[
{
"msg_contents": "Hello\n\nOn my serwer Linux Fedora, HP DL360G3 with 2x3.06 GHz 4GB RAM working \npostgresql 7.4.6. Cpu utilization is about 40-50% but system process \nqueue is long - about 6 task. Do you have nay sugestion/solution?\n\nRegards\nMarek\n",
"msg_date": "Tue, 22 Nov 2005 15:22:59 +0100",
"msg_from": "Marek Dabrowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "System queue "
},
{
"msg_contents": "On Tue, 22 Nov 2005 15:22:59 +0100\nMarek Dabrowski <[email protected]> wrote:\n\n> Hello\n> \n> On my serwer Linux Fedora, HP DL360G3 with 2x3.06 GHz 4GB RAM working \n> postgresql 7.4.6. Cpu utilization is about 40-50% but system process \n> queue is long - about 6 task. Do you have nay sugestion/solution?\n\n We're going to need a lot more information than that to diagnose what\n is going on. Do you have any functions or queries that will need to\n use a large amount of CPU? \n\n In general I would suggest upgrading to the latest Fedora and moving\n to PostgreSQL 8.x. Doing this will get you some extra performance,\n but will probably not entirely solve your problem. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Tue, 22 Nov 2005 10:51:00 -0600",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System queue"
},
{
"msg_contents": "\nOn Nov 22, 2005, at 9:22 AM, Marek Dabrowski wrote:\n\n> Hello\n>\n> On my serwer Linux Fedora, HP DL360G3 with 2x3.06 GHz 4GB RAM \n> working postgresql 7.4.6. Cpu utilization is about 40-50% but \n> system process queue is long - about 6 task. Do you have nay \n> sugestion/solution?\\\n\nHigh run queue (loadavg) with low cpu usage means you are IO bound.\nEither change some queries around to generate less IO, or add more \ndisks.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n",
"msg_date": "Tue, 22 Nov 2005 12:28:00 -0500",
"msg_from": "Jeff Trout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: System queue "
}
] |
[
{
"msg_contents": "Hi,\n\n \n\nOne of our PG server is experiencing extreme slowness and there are\nhundreds of SELECTS building up. I am not sure if heavy context\nswitching is the cause of this or something else is causing it.\n\n \n\nIs this pretty much the final word on this issue?\n\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\n\n \n\nprocs memory swap io system\ncpu\n\n r b swpd free buff cache si so bi bo in\ncs us sy id wa\n\n 2 0 20 2860544 124816 8042544 0 0 0 0 0 0 0\n0 0 0\n\n 2 0 20 2860376 124816 8042552 0 0 0 24 157 115322 13\n10 76 0\n\n 3 0 20 2860364 124840 8042540 0 0 0 228 172 120003 12\n10 77 0\n\n 2 0 20 2860364 124840 8042540 0 0 0 20 158 118816 15\n10 75 0\n\n 2 0 20 2860080 124840 8042540 0 0 0 10 152 117858 12\n11 77 0\n\n 1 0 20 2860080 124848 8042572 0 0 0 210 202 114724 14\n10 76 0\n\n 2 0 20 2860080 124848 8042572 0 0 0 20 169 114843 13\n10 77 0\n\n 3 0 20 2859908 124860 8042576 0 0 0 188 180 115134 14\n11 75 0\n\n 3 0 20 2859848 124860 8042576 0 0 0 20 173 113470 13\n10 77 0\n\n 2 0 20 2859836 124860 8042576 0 0 0 10 157 112839 14\n11 75 0\n\n \n\nThe system seems to be fine on iowait/memory side, except the CPU being\nbusy with the CS. Here's the top output:\n\n \n\n11:54:57 up 59 days, 14:11, 2 users, load average: 1.13, 1.66, 1.52\n\n282 processes: 281 sleeping, 1 running, 0 zombie, 0 stopped\n\nCPU states: cpu user nice system irq softirq iowait idle\n\n total 13.8% 0.0% 9.7% 0.0% 0.0% 0.0% 76.2%\n\n cpu00 12.3% 0.0% 10.5% 0.0% 0.0% 0.1% 76.8%\n\n cpu01 12.1% 0.0% 6.1% 0.0% 0.0% 0.1% 81.5%\n\n cpu02 10.9% 0.0% 9.1% 0.0% 0.0% 0.0% 79.9%\n\n cpu03 19.4% 0.0% 14.9% 0.0% 0.0% 0.0% 65.6%\n\n cpu04 13.9% 0.0% 11.1% 0.0% 0.0% 0.0% 74.9%\n\n cpu05 14.9% 0.0% 9.1% 0.0% 0.0% 0.0% 75.9%\n\n cpu06 12.9% 0.0% 8.9% 0.0% 0.0% 0.0% 78.1%\n\n cpu07 14.3% 0.0% 8.1% 0.0% 0.1% 0.0% 77.3%\n\nMem: 12081720k av, 9273304k used, 2808416k free, 0k shrd,\n126048k buff\n\n 4686808k actv, 3211872k in_d, 170240k in_c\n\nSwap: 4096532k av, 20k used, 4096512k free 8044072k\ncached\n\n \n\n \n\nPostgreSQL 7.4.7 on i686-redhat-linux-gnu\n\nRed Hat Enterprise Linux AS release 3 (Taroon Update 5)\n\nLinux vl-pe6650-004 2.4.21-32.0.1.ELsmp\n\n \n\nThis is a Dell Quad XEON. Hyperthreading is turned on, and I am planning\nto turn it off as soon as I get a chance to bring it down.\n\n \n\nWAL is on separate drives from the OS and database.\n\n \n\nAppreciate any inputs please....\n\n \n\nThanks,\nAnjan\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nOne of our PG server is experiencing extreme slowness and\nthere are hundreds of SELECTS building up. 
I am not sure if heavy context\nswitching is the cause of this or something else is causing it.\n \nIs this pretty much the final word on this issue?\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00249.php\n \nprocs \nmemory \nswap \nio \nsystem cpu\n r b swpd \nfree buff cache si \nso bi bo in cs us\nsy id wa\n 2 0 20 2860544 124816\n8042544 0 0 \n0 0 0 0 \n0 0 0 0\n 2 0 20 2860376 124816\n8042552 0 0 \n0 24 157 115322 13 10 76 0\n 3 0 20 2860364 124840\n8042540 0 0 \n0 228 172 120003 12 10 77 0\n 2 0 20 2860364 124840\n8042540 0 0 \n0 20 158 118816 15 10 75 0\n 2 0 20 2860080 124840\n8042540 0 0 \n0 10 152 117858 12 11 77 0\n 1 0 20 2860080 124848\n8042572 0 0 \n0 210 202 114724 14 10 76 0\n 2 0 20 2860080 124848\n8042572 0 0 \n0 20 169 114843 13 10 77 0\n 3 0 20 2859908 124860\n8042576 0 0 \n0 188 180 115134 14 11 75 0\n 3 0 20 2859848 124860\n8042576 0 0 \n0 20 173 113470 13 10 77 0\n 2 0 20 2859836 124860\n8042576 0 0 \n0 10 157 112839 14 11 75 0\n \nThe system seems to be fine on iowait/memory side, except\nthe CPU being busy with the CS. Here’s the top output:\n \n11:54:57 up 59 days, 14:11, 2 users, load\naverage: 1.13, 1.66, 1.52\n282 processes: 281 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: cpu \nuser nice system irq \nsoftirq iowait idle\n \ntotal 13.8% 0.0% 9.7% \n0.0% 0.0% 0.0% 76.2%\n \ncpu00 12.3% 0.0% 10.5% \n0.0% 0.0% 0.1% 76.8%\n \ncpu01 12.1% 0.0% \n6.1% 0.0% 0.0% \n0.1% 81.5%\n \ncpu02 10.9% 0.0% \n9.1% 0.0% 0.0% \n0.0% 79.9%\n cpu03 \n19.4% 0.0% 14.9% \n0.0% 0.0% 0.0% 65.6%\n \ncpu04 13.9% 0.0% 11.1% \n0.0% 0.0% 0.0% 74.9%\n \ncpu05 14.9% 0.0% \n9.1% 0.0% 0.0% \n0.0% 75.9%\n \ncpu06 12.9% 0.0% 8.9% 0.0% \n0.0% 0.0% 78.1%\n \ncpu07 14.3% 0.0% \n8.1% 0.0% 0.1% \n0.0% 77.3%\nMem: 12081720k av, 9273304k used, 2808416k\nfree, 0k shrd, 126048k buff\n \n4686808k actv, 3211872k in_d, 170240k in_c\nSwap: 4096532k av, 20k used,\n4096512k\nfree \n8044072k cached\n \n \nPostgreSQL 7.4.7 on i686-redhat-linux-gnu\nRed Hat Enterprise Linux AS release 3 (Taroon Update 5)\nLinux vl-pe6650-004 2.4.21-32.0.1.ELsmp\n \nThis is a Dell Quad XEON. Hyperthreading is turned on, and I\nam planning to turn it off as soon as I get a chance to bring it down.\n \nWAL is on separate drives from the OS and database.\n \nAppreciate any inputs please….\n \nThanks,\nAnjan",
"msg_date": "Tue, 22 Nov 2005 11:59:54 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High context switches occurring"
},
{
"msg_contents": "On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:\n> This is a Dell Quad XEON. Hyperthreading is turned on, and I am \n> planning to turn it off as soon as I get a chance to bring it down.\n>\n\nYou should probably also upgrade to Pg 8.0 or newer since it is a \nknown problem with XEON processors and older postgres versions. \nUpgrading Pg may solve your problem or it may not. It is just a \nfluke with XEON processors...\n\n\n\nOn Nov 22, 2005, at 11:59 AM, Anjan Dave wrote: This is a Dell Quad XEON. Hyperthreading is turned on, and I am planning to turn it off as soon as I get a chance to bring it down.You should probably also upgrade to Pg 8.0 or newer since it is a known problem with XEON processors and older postgres versions. Upgrading Pg may solve your problem or it may not. It is just a fluke with XEON processors...",
"msg_date": "Tue, 22 Nov 2005 12:14:54 -0500",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring"
},
{
"msg_contents": "Vivek Khera <[email protected]> writes:\n> On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:\n>> This is a Dell Quad XEON. Hyperthreading is turned on, and I am \n>> planning to turn it off as soon as I get a chance to bring it down.\n\n> You should probably also upgrade to Pg 8.0 or newer since it is a \n> known problem with XEON processors and older postgres versions. \n> Upgrading Pg may solve your problem or it may not.\n\nPG 8.1 is the first release that has a reasonable probability of\navoiding heavy contention for the buffer manager lock when there\nare multiple CPUs. If you're going to update to try to fix this,\nyou need to go straight to 8.1.\n\nI've recently been chasing a report from Rob Creager that seems to\nindicate contention on SubTransControlLock, so the slru code is\nlikely to be our next bottleneck to fix :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Nov 2005 12:35:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring "
}
] |
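One way to see exactly which statements are piling up during such a storm is pg_stat_activity. A sketch using the 7.4/8.0 column names found on this server; it assumes stats_command_string is enabled in postgresql.conf, otherwise current_query stays empty.

    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query <> '<IDLE>'
    ORDER BY query_start;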
[
{
"msg_contents": "Is there another way in PG to return a recordset from a function than \nto declare a type first ?\n\ncreate function fnTest () returns setof \nmyDefinedTypeIDontWantToDefineFirst ...\n\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Tue, 22 Nov 2005 19:29:37 +0100",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Stored Procedure"
},
{
"msg_contents": "create function abc() returns setof RECORD ...\n\nthen to call it you would do\nselect * from abc() as (a text,b int,...);\n\n\n\n\n---------- Original Message -----------\nFrom: Yves Vindevogel <[email protected]>\nTo: [email protected]\nSent: Tue, 22 Nov 2005 19:29:37 +0100\nSubject: [PERFORM] Stored Procedure\n\n> Is there another way in PG to return a recordset from a function than \n> to declare a type first ?\n> \n> create function fnTest () returns setof \n> myDefinedTypeIDontWantToDefineFirst ...\n> \n> Met vriendelijke groeten,\n> Bien � vous,\n> Kind regards,\n> \n> Yves Vindevogel\n> Implements\n------- End of Original Message -------\n\n",
"msg_date": "Tue, 22 Nov 2005 13:42:53 -0500",
"msg_from": "\"Jim Buttafuoco\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure"
},
{
"msg_contents": "On Tue, Nov 22, 2005 at 07:29:37PM +0100, Yves Vindevogel wrote:\n> Is there another way in PG to return a recordset from a function than \n> to declare a type first ? \n\nIn 8.1 some languages support OUT and INOUT parameters.\n\nCREATE FUNCTION foo(IN x integer, INOUT y integer, OUT z integer) AS $$\nBEGIN\n y := y * 10;\n z := x * 10;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE STRICT;\n\nSELECT * FROM foo(1, 2);\n y | z \n----+----\n 20 | 10\n(1 row)\n\nCREATE FUNCTION fooset(IN x integer, INOUT y integer, OUT z integer) \nRETURNS SETOF record AS $$\nBEGIN\n y := y * 10;\n z := x * 10;\n RETURN NEXT;\n y := y + 1;\n z := z + 1;\n RETURN NEXT;\nEND;\n$$ LANGUAGE plpgsql IMMUTABLE STRICT;\n\nSELECT * FROM fooset(1, 2);\n y | z \n----+----\n 20 | 10\n 21 | 11\n(2 rows)\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 22 Nov 2005 11:59:42 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure"
},
{
"msg_contents": "But this does not work without the second line, right ?\nBTW, the thing returned is not a record. It's a bunch of fields, not a \ncomplete record or fields of multiple records.\nI'm not so sure it works.\n\nOn 22 Nov 2005, at 19:42, Jim Buttafuoco wrote:\n\n> create function abc() returns setof RECORD ...\n>\n> then to call it you would do\n> select * from abc() as (a text,b int,...);\n>\n>\n>\n>\n> ---------- Original Message -----------\n> From: Yves Vindevogel <[email protected]>\n> To: [email protected]\n> Sent: Tue, 22 Nov 2005 19:29:37 +0100\n> Subject: [PERFORM] Stored Procedure\n>\n>> Is there another way in PG to return a recordset from a function than\n>> to declare a type first ?\n>>\n>> create function fnTest () returns setof\n>> myDefinedTypeIDontWantToDefineFirst ...\n>>\n>> Met vriendelijke groeten,\n>> Bien à vous,\n>> Kind regards,\n>>\n>> Yves Vindevogel\n>> Implements\n> ------- End of Original Message -------\n>\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Tue, 22 Nov 2005 23:17:41 +0100",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure"
},
{
"msg_contents": "8.1, hmm, that's brand new.\nBut, still, it's quite some coding for a complete recordset, not ?\n\nOn 22 Nov 2005, at 19:59, Michael Fuhr wrote:\n\n> On Tue, Nov 22, 2005 at 07:29:37PM +0100, Yves Vindevogel wrote:\n>> Is there another way in PG to return a recordset from a function than\n>> to declare a type first ?\n>\n> In 8.1 some languages support OUT and INOUT parameters.\n>\n> CREATE FUNCTION foo(IN x integer, INOUT y integer, OUT z integer) AS $$\n> BEGIN\n> y := y * 10;\n> z := x * 10;\n> END;\n> $$ LANGUAGE plpgsql IMMUTABLE STRICT;\n>\n> SELECT * FROM foo(1, 2);\n> y | z\n> ----+----\n> 20 | 10\n> (1 row)\n>\n> CREATE FUNCTION fooset(IN x integer, INOUT y integer, OUT z integer)\n> RETURNS SETOF record AS $$\n> BEGIN\n> y := y * 10;\n> z := x * 10;\n> RETURN NEXT;\n> y := y + 1;\n> z := z + 1;\n> RETURN NEXT;\n> END;\n> $$ LANGUAGE plpgsql IMMUTABLE STRICT;\n>\n> SELECT * FROM fooset(1, 2);\n> y | z\n> ----+----\n> 20 | 10\n> 21 | 11\n> (2 rows)\n>\n> -- \n> Michael Fuhr\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.",
"msg_date": "Tue, 22 Nov 2005 23:20:09 +0100",
"msg_from": "Yves Vindevogel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Stored Procedure"
},
{
"msg_contents": "On Tue, Nov 22, 2005 at 11:17:41PM +0100, Yves Vindevogel wrote:\n> But this does not work without the second line, right ? \n\nWhat second line? Instead of returning a specific composite type\na function can return RECORD or SETOF RECORD; in these cases the\nquery must provide a column definition list.\n\n> BTW, the thing returned is not a record. It's a bunch of fields, not a \n> complete record or fields of multiple records. \n\nWhat distinction are you making between a record and a bunch of\nfields? What exactly would you like the function to return?\n\n> I'm not so sure it works. \n\nDid you try it? If you did and it didn't work then please post\nexactly what you tried and explain what happened and how that\ndiffered from what you'd like.\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 22 Nov 2005 22:05:17 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure"
},
{
"msg_contents": "On Tue, Nov 22, 2005 at 11:20:09PM +0100, Yves Vindevogel wrote:\n> 8.1, hmm, that's brand new. \n\nYes, but give it a try, at least in a test environment. The more\npeople use it, the more we'll find out if it has any problems.\n\n> But, still, it's quite some coding for a complete recordset, not ? \n\nHow so? The examples I posted are almost identical to how you'd\nreturn a composite type created with CREATE TYPE or SETOF that type,\nexcept that you declare the return columns as INOUT or OUT parameters\nand you no longer have to create a separate type. If you're referring\nto how I wrote two sets of assignments and RETURN NEXT statements,\nyou don't have to do it that way: you can use a loop, just as you\nwould with any other set-returning function.\n\n-- \nMichael Fuhr\n",
"msg_date": "Tue, 22 Nov 2005 22:13:14 -0700",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Stored Procedure"
}
] |
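For servers older than 8.1, Jim's RETURNS SETOF RECORD suggestion from earlier in the thread written out as a runnable sketch (the query over pg_class is only there so the example works anywhere; dollar quoting of the body needs 8.0, so quote the body on 7.x). The caller must always supply the column definition list.

    CREATE FUNCTION fn_test() RETURNS SETOF RECORD AS $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT relname::text AS name, relpages AS pages
                 FROM pg_class ORDER BY relpages DESC LIMIT 5 LOOP
            RETURN NEXT r;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

    -- the caller supplies the column definition list:
    SELECT * FROM fn_test() AS (name text, pages integer);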
[
{
"msg_contents": "Thanks, guys, I'll start planning on upgrading to PG8.1\n\nWould this problem change it's nature in any way on the recent Dual-Core\nIntel XEON MP machines?\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, November 22, 2005 12:36 PM\nTo: Vivek Khera\nCc: Postgresql Performance; Anjan Dave\nSubject: Re: [PERFORM] High context switches occurring \n\nVivek Khera <[email protected]> writes:\n> On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:\n>> This is a Dell Quad XEON. Hyperthreading is turned on, and I am \n>> planning to turn it off as soon as I get a chance to bring it down.\n\n> You should probably also upgrade to Pg 8.0 or newer since it is a \n> known problem with XEON processors and older postgres versions. \n> Upgrading Pg may solve your problem or it may not.\n\nPG 8.1 is the first release that has a reasonable probability of\navoiding heavy contention for the buffer manager lock when there\nare multiple CPUs. If you're going to update to try to fix this,\nyou need to go straight to 8.1.\n\nI've recently been chasing a report from Rob Creager that seems to\nindicate contention on SubTransControlLock, so the slru code is\nlikely to be our next bottleneck to fix :-(\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 22 Nov 2005 14:23:34 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High context switches occurring "
},
{
"msg_contents": "\"Anjan Dave\" <[email protected]> writes:\n> Would this problem change it's nature in any way on the recent Dual-Core\n> Intel XEON MP machines?\n\nProbably not much.\n\nThere's some evidence that Opterons have less of a problem than Xeons\nin multi-chip configurations, but we've seen CS thrashing on Opterons\ntoo. I think the issue is probably there to some extent in any modern\nSMP architecture.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Nov 2005 14:41:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring "
}
] |
[
{
"msg_contents": "Is there any way to get a temporary relief from this Context Switching\nstorm? Does restarting postmaster help?\n\nIt seems that I can recreate the heavy CS with just one SELECT\nstatement...and then when multiple such SELECT queries are coming in,\nthings just get hosed up until we cancel a bunch of queries...\n\nThanks,\nAnjan\n\n\n-----Original Message-----\nFrom: Anjan Dave \nSent: Tuesday, November 22, 2005 2:24 PM\nTo: Tom Lane; Vivek Khera\nCc: Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring \n\nThanks, guys, I'll start planning on upgrading to PG8.1\n\nWould this problem change it's nature in any way on the recent Dual-Core\nIntel XEON MP machines?\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, November 22, 2005 12:36 PM\nTo: Vivek Khera\nCc: Postgresql Performance; Anjan Dave\nSubject: Re: [PERFORM] High context switches occurring \n\nVivek Khera <[email protected]> writes:\n> On Nov 22, 2005, at 11:59 AM, Anjan Dave wrote:\n>> This is a Dell Quad XEON. Hyperthreading is turned on, and I am \n>> planning to turn it off as soon as I get a chance to bring it down.\n\n> You should probably also upgrade to Pg 8.0 or newer since it is a \n> known problem with XEON processors and older postgres versions. \n> Upgrading Pg may solve your problem or it may not.\n\nPG 8.1 is the first release that has a reasonable probability of\navoiding heavy contention for the buffer manager lock when there\nare multiple CPUs. If you're going to update to try to fix this,\nyou need to go straight to 8.1.\n\nI've recently been chasing a report from Rob Creager that seems to\nindicate contention on SubTransControlLock, so the slru code is\nlikely to be our next bottleneck to fix :-(\n\n\t\t\tregards, tom lane\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Tue, 22 Nov 2005 15:33:26 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High context switches occurring "
},
{
"msg_contents": "On Tue, 2005-11-22 at 14:33, Anjan Dave wrote:\n> Is there any way to get a temporary relief from this Context Switching\n> storm? Does restarting postmaster help?\n> \n> It seems that I can recreate the heavy CS with just one SELECT\n> statement...and then when multiple such SELECT queries are coming in,\n> things just get hosed up until we cancel a bunch of queries...\n\nIs your machine a hyperthreaded one? Some folks have found that turning\noff hyper threading helps. I knew it made my servers better behaved in\nthe past.\n",
"msg_date": "Tue, 22 Nov 2005 14:37:47 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring"
},
{
"msg_contents": "P.s., followup to my last post, I don't know if turning of HT actually\nlowered the number of context switches, just that it made my server run\nfaster.\n",
"msg_date": "Tue, 22 Nov 2005 14:38:31 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring"
}
] |
[
{
"msg_contents": "Yes, it's turned on, unfortunately it got overlooked during the setup,\nand until now...!\n\nIt's mostly a 'read' application, I increased the vm.max-readahead to\n2048 from the default 256, after which I've not seen the CS storm,\nthough it could be incidental.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, November 22, 2005 3:38 PM\nTo: Anjan Dave\nCc: Tom Lane; Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring\n\nOn Tue, 2005-11-22 at 14:33, Anjan Dave wrote:\n> Is there any way to get a temporary relief from this Context Switching\n> storm? Does restarting postmaster help?\n> \n> It seems that I can recreate the heavy CS with just one SELECT\n> statement...and then when multiple such SELECT queries are coming in,\n> things just get hosed up until we cancel a bunch of queries...\n\nIs your machine a hyperthreaded one? Some folks have found that turning\noff hyper threading helps. I knew it made my servers better behaved in\nthe past.\n\n",
"msg_date": "Tue, 22 Nov 2005 18:17:27 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High context switches occurring"
},
{
"msg_contents": "On Tue, 2005-11-22 at 18:17 -0500, Anjan Dave wrote:\n\n> It's mostly a 'read' application, I increased the vm.max-readahead to\n> 2048 from the default 256, after which I've not seen the CS storm,\n> though it could be incidental.\n\nCan you verify this, please?\n\nTurn it back down again, try the test, then reset and try the test.\n\nIf that is a repeatable way of recreating one manifestation of the\nproblem then we will be further ahead than we are now.\n\nThanks,\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Wed, 23 Nov 2005 18:13:58 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring"
},
{
"msg_contents": "Hi Anjan,\n\nI can support Scott. You should turn on HT if you see high values for CS.\n\nI do have a few customers running a web-based 3-tier application with \nPostgreSQL. We had to turn off HT to have better overall performance.\nThe issue is the behavior under high load. I notice that HT on does \ncollapse faster.\n\nJust a question. Which version of XEON do you have? What is does the \nserver have as memory architecture.\n\nI think, Dual-Core XEON's are no issue. One of our customers does use a \n4-way Dual-Core Opteron 875 since a few months. We have Pg 8.0.3 and it \nruns perfect. I have to say that we use a special patch from Tom which \nfix an issue with the looking of shared buffers and the Opteron.\nI notice that this patch is also useful for XEON's with EMT64.\n\nBest regards\nSven.\n\nAnjan Dave schrieb:\n> Yes, it's turned on, unfortunately it got overlooked during the setup,\n> and until now...!\n> \n> It's mostly a 'read' application, I increased the vm.max-readahead to\n> 2048 from the default 256, after which I've not seen the CS storm,\n> though it could be incidental.\n> \n> Thanks,\n> Anjan\n> \n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]] \n> Sent: Tuesday, November 22, 2005 3:38 PM\n> To: Anjan Dave\n> Cc: Tom Lane; Vivek Khera; Postgresql Performance\n> Subject: Re: [PERFORM] High context switches occurring\n> \n> On Tue, 2005-11-22 at 14:33, Anjan Dave wrote:\n> \n>>Is there any way to get a temporary relief from this Context Switching\n>>storm? Does restarting postmaster help?\n>>\n>>It seems that I can recreate the heavy CS with just one SELECT\n>>statement...and then when multiple such SELECT queries are coming in,\n>>things just get hosed up until we cancel a bunch of queries...\n> \n> \n> Is your machine a hyperthreaded one? Some folks have found that turning\n> off hyper threading helps. I knew it made my servers better behaved in\n> the past.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany\n",
"msg_date": "Thu, 24 Nov 2005 11:38:47 +0100",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High context switches occurring"
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying to get better performance reading data from postgres, so I \nwould like to return the data as binary rather than text as parsing it \nis taking a considerable amount of processor.\n\nHowever I can't figure out how to do that! I have functions like.\n\nfunction my_func(ret refcursor) returns refcursor AS\n\n$$\n\nBEGIN\nOPEN $1 for select * from table;\nreturn $1\nEND;\n\n$$ language 'plpgsql'\n\nThere are queried using\n\nSELECT my_func( 'ret'::refcursor); FETCH ALL FROM ret;\n\nIs there any way I can say make ret a binary cursor?\n\nThanks\nRalph\n\n\n",
"msg_date": "Wed, 23 Nov 2005 16:39:05 +1300",
"msg_from": "Ralph Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Binary Refcursor possible?"
},
{
"msg_contents": "Ralph Mason <[email protected]> writes:\n> Is there any way I can say make ret a binary cursor?\n\nIt's possible to determine that at the protocol level, if you're using\nV3 protocol; but whether this is exposed to an application depends on\nwhat client-side software you are using. Which you didn't say.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Nov 2005 23:07:25 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Binary Refcursor possible? "
},
{
"msg_contents": "Tom Lane wrote:\n> Ralph Mason <[email protected]> writes:\n> \n>> Is there any way I can say make ret a binary cursor?\n>> \n>\n> It's possible to determine that at the protocol level, if you're using\n> V3 protocol; but whether this is exposed to an application depends on\n> what client-side software you are using. Which you didn't say.\n>\n> \t\t\tregards, tom lane\n> \nThis is probably in the documentation but I couldn't find it.\n\nAll I could see is that if you open a cursor for binary it would return \nwith a type of binary rather than text in the row data messages. The \nRowDescription format code is always text, and the cursor thing is the \nonly way I could see to change that.\n\nIs there some setting I can set that will make it return all data as \nbinary? The dream would also be that I could ask the server it's native \nbyte order and have it send me binary data in it's native byte order. \nNice and fast. :-0\n\nRalph\n\n\n\n\n",
"msg_date": "Wed, 23 Nov 2005 17:14:38 +1300",
"msg_from": "Ralph Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Binary Refcursor possible?"
}
] |
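At the SQL level the binary/text choice is part of the cursor declaration, so a plain binary cursor looks like the sketch below (table and column names are invented). Whether a refcursor opened inside a plpgsql function comes back in binary format is decided by the client at the protocol level, as Tom describes; plpgsql's OPEN statement itself has no BINARY form as far as the documentation goes, and the benefit only shows up with a client library that requests and parses binary results.

    BEGIN;
    DECLARE bin_cur BINARY CURSOR FOR
        SELECT id, val FROM some_table;   -- hypothetical table
    FETCH 10 FROM bin_cur;
    CLOSE bin_cur;
    COMMIT;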
[
{
"msg_contents": "The offending SELECT query that invoked the CS storm was optimized by\nfolks here last night, so it's hard to say if the VM setting made a\ndifference. I'll give it a try anyway.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Simon Riggs [mailto:[email protected]] \nSent: Wednesday, November 23, 2005 1:14 PM\nTo: Anjan Dave\nCc: Scott Marlowe; Tom Lane; Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring\n\nOn Tue, 2005-11-22 at 18:17 -0500, Anjan Dave wrote:\n\n> It's mostly a 'read' application, I increased the vm.max-readahead to\n> 2048 from the default 256, after which I've not seen the CS storm,\n> though it could be incidental.\n\nCan you verify this, please?\n\nTurn it back down again, try the test, then reset and try the test.\n\nIf that is a repeatable way of recreating one manifestation of the\nproblem then we will be further ahead than we are now.\n\nThanks,\n\nBest Regards, Simon Riggs\n\n\n",
"msg_date": "Wed, 23 Nov 2005 13:33:09 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High context switches occurring"
}
] |
[
{
"msg_contents": "Simon,\n\nI tested it by running two of those simultaneous queries (the\n'unoptimized' one), and it doesn't make any difference whether\nvm.max-readahead is 256 or 2048...the modified query runs in a snap.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Anjan Dave \nSent: Wednesday, November 23, 2005 1:33 PM\nTo: Simon Riggs\nCc: Scott Marlowe; Tom Lane; Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring\n\nThe offending SELECT query that invoked the CS storm was optimized by\nfolks here last night, so it's hard to say if the VM setting made a\ndifference. I'll give it a try anyway.\n\nThanks,\nAnjan\n\n-----Original Message-----\nFrom: Simon Riggs [mailto:[email protected]] \nSent: Wednesday, November 23, 2005 1:14 PM\nTo: Anjan Dave\nCc: Scott Marlowe; Tom Lane; Vivek Khera; Postgresql Performance\nSubject: Re: [PERFORM] High context switches occurring\n\nOn Tue, 2005-11-22 at 18:17 -0500, Anjan Dave wrote:\n\n> It's mostly a 'read' application, I increased the vm.max-readahead to\n> 2048 from the default 256, after which I've not seen the CS storm,\n> though it could be incidental.\n\nCan you verify this, please?\n\nTurn it back down again, try the test, then reset and try the test.\n\nIf that is a repeatable way of recreating one manifestation of the\nproblem then we will be further ahead than we are now.\n\nThanks,\n\nBest Regards, Simon Riggs\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n",
"msg_date": "Wed, 23 Nov 2005 16:33:24 -0500",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High context switches occurring"
}
] |
[
{
"msg_contents": "Hi,\n\nPostgreSQL 8.1 fresh install on a freshly installed OpenBSD 3.8 box.\n\npostgres=# CREATE DATABASE test;\nCREATE DATABASE\npostgres=# create table test (id serial, val integer);\nNOTICE: CREATE TABLE will create implicit sequence \"test_id_seq\" for \nserial column \"test.id\"\nCREATE TABLE\npostgres=# create unique index testid on test (id);\nCREATE INDEX\npostgres=# create index testval on test (val);\nCREATE INDEX\npostgres=# insert into test (val) values (round(random() \n*1024*1024*1024));\nINSERT 0 1\n\n[...] insert many random values\n\npostgres=# vaccum full verbose analyze;\npostgres=# select count(1) from test;\n count\n---------\n2097152\n(1 row)\n\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------\nAggregate (cost=66328.72..66328.73 rows=1 width=0)\n -> Unique (cost=0.00..40114.32 rows=2097152 width=8)\n -> Index Scan using testval on test (cost=0.00..34871.44 \nrows=2097152 width=8)\n(3 rows)\n\npostgres=# set enable_indexscan=off;\npostgres=# explain analyze select count(*) from (select distinct on \n(val) * from test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------\nAggregate (cost=280438.64..280438.65 rows=1 width=0) (actual \ntime=39604.107..39604.108 rows=1 loops=1)\n -> Unique (cost=243738.48..254224.24 rows=2097152 width=8) \n(actual time=30281.004..37746.488 rows=2095104 loops=1)\n -> Sort (cost=243738.48..248981.36 rows=2097152 width=8) \n(actual time=30280.999..33744.197 rows=2097152 loops=1)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..23537.52 \nrows=2097152 width=8) (actual time=11.550..3262.433 rows=2097152 \nloops=1)\nTotal runtime: 39624.094 ms\n(6 rows)\n\npostgres=# set enable_indexscan=on;\npostgres=# explain analyze select count(*) from (select distinct on \n(val) * from test where val<10000000) as foo;\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\nAggregate (cost=4739.58..4739.59 rows=1 width=0) (actual \ntime=4686.472..4686.473 rows=1 loops=1)\n -> Unique (cost=4380.56..4483.14 rows=20515 width=8) (actual \ntime=4609.046..4669.289 rows=19237 loops=1)\n -> Sort (cost=4380.56..4431.85 rows=20515 width=8) \n(actual time=4609.041..4627.976 rows=19255 loops=1)\n Sort Key: test.val\n -> Bitmap Heap Scan on test (cost=88.80..2911.24 \nrows=20515 width=8) (actual time=130.954..4559.244 rows=19255 loops=1)\n Recheck Cond: (val < 10000000)\n -> Bitmap Index Scan on testval \n(cost=0.00..88.80 rows=20515 width=0) (actual time=120.041..120.041 \nrows=19255 loops=1)\n Index Cond: (val < 10000000)\nTotal runtime: 4690.513 ms\n(9 rows)\n\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test where val<100000000) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n-----------------\nAggregate (cost=16350.20..16350.21 rows=1 width=0)\n -> Unique (cost=0.00..13748.23 rows=208158 width=8)\n -> Index Scan using testval on test (cost=0.00..13227.83 \nrows=208158 width=8)\n Index Cond: (val < 100000000)\n(4 rows)\n\npostgres=# set enable_indexscan=off;\npostgres=# explain analyze select count(*) from (select distinct on \n(val) * from test where val<100000000) as foo;\n \nQUERY 
PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n----\nAggregate (cost=28081.27..28081.28 rows=1 width=0) (actual \ntime=6444.650..6444.651 rows=1 loops=1)\n -> Unique (cost=24438.50..25479.29 rows=208158 width=8) (actual \ntime=5669.118..6277.206 rows=194142 loops=1)\n -> Sort (cost=24438.50..24958.89 rows=208158 width=8) \n(actual time=5669.112..5852.351 rows=194342 loops=1)\n Sort Key: test.val\n -> Bitmap Heap Scan on test (cost=882.55..6050.53 \nrows=208158 width=8) (actual time=1341.114..4989.840 rows=194342 \nloops=1)\n Recheck Cond: (val < 100000000)\n -> Bitmap Index Scan on testval \n(cost=0.00..882.55 rows=208158 width=0) (actual \ntime=1339.707..1339.707 rows=194342 loops=1)\n Index Cond: (val < 100000000)\nTotal runtime: 6487.114 ms\n(9 rows)\n\npostgres=# explain analyze select count(*) from (select distinct on \n(val) * from test where val<750000000) as foo;\n Q \nUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------\nAggregate (cost=204576.53..204576.54 rows=1 width=0) (actual \ntime=35718.935..35718.936 rows=1 loops=1)\n -> Unique (cost=178717.28..186105.64 rows=1477671 width=8) \n(actual time=29465.856..34459.640 rows=1462348 loops=1)\n -> Sort (cost=178717.28..182411.46 rows=1477671 width=8) \n(actual time=29465.853..31658.056 rows=1463793 loops=1)\n Sort Key: test.val\n -> Bitmap Heap Scan on test (cost=6256.85..27293.73 \nrows=1477671 width=8) (actual time=8316.676..11561.018 rows=1463793 \nloops=1)\n Recheck Cond: (val < 750000000)\n -> Bitmap Index Scan on testval \n(cost=0.00..6256.85 rows=1477671 width=0) (actual \ntime=8305.963..8305.963 rows=1463793 loops=1)\n Index Cond: (val < 750000000)\nTotal runtime: 35736.167 ms\n(9 rows)\n\npostgres=# explain analyze select count(*) from (select distinct on \n(val) * from test where val<800000000) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------\nAggregate (cost=217582.20..217582.21 rows=1 width=0) (actual \ntime=28718.331..28718.332 rows=1 loops=1)\n -> Unique (cost=190140.72..197981.14 rows=1568084 width=8) \n(actual time=22175.170..27380.343 rows=1559648 loops=1)\n -> Sort (cost=190140.72..194060.93 rows=1568084 width=8) \n(actual time=22175.165..24451.892 rows=1561181 loops=1)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..28780.40 \nrows=1568084 width=8) (actual time=13.130..3358.923 rows=1561181 \nloops=1)\n Filter: (val < 800000000)\nTotal runtime: 28735.264 ms\n(7 rows)\n\nI did not post any result for the indexscan plan, because it takes to \nmuch time.\nWhy the stupid indexscan plan on the whole table ?\n\nCordialement,\nJean-G�rard Pailloncy\n\n",
"msg_date": "Wed, 23 Nov 2005 23:14:47 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": true,
"msg_subject": "8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "Pailloncy Jean-Gerard <[email protected]> writes:\n> Why the stupid indexscan plan on the whole table ?\n\nPray tell, what are you using for the planner cost parameters?\nThe only way I can come close to duplicating your numbers is\nby setting random_page_cost to somewhere around 0.01 ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Nov 2005 22:14:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan "
},
{
"msg_contents": "> Pailloncy Jean-Gerard <[email protected]> writes:\n>> Why the stupid indexscan plan on the whole table ?\n>\n> Pray tell, what are you using for the planner cost parameters?\n> The only way I can come close to duplicating your numbers is\n> by setting random_page_cost to somewhere around 0.01 ...\n>\n\nI did not change the costs.\n\n > grep cost postgresql.conf\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# note: increasing max_prepared_transactions costs ~600 bytes of \nshared memory\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n#random_page_cost = 4 # units are one sequential \npage fetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # vacuum_cost_limit\n\n\nCordialement,\nJean-G�rard Pailloncy\n\n",
"msg_date": "Thu, 24 Nov 2005 12:54:50 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan "
},
{
"msg_contents": "I redo the test, with a freshly installed data directory. Same result.\n\nNote: This is the full log. I just suppress the mistake I do like \n\"sl\" for \"ls\".\n\nJean-G�rard Pailloncy\n\n\nLast login: Thu Nov 24 12:52:32 2005 from 192.168.0.1\nOpenBSD 3.8 (WDT) #2: Tue Nov 8 00:52:38 CET 2005\n\nWelcome to OpenBSD: The proactively secure Unix-like operating system.\n\nPlease use the sendbug(1) utility to report bugs in the system.\nBefore reporting a bug, please try to reproduce it with the latest\nversion of the code. With bug reports, please try to ensure that\nenough information to reproduce the problem is enclosed, and if a\nknown fix for it exists, include that as well.\n\nTerminal type? [xterm-color]\n# cd /mnt2/pg/install/bin/\n# mkdir /mnt2/pg/data\n# chown -R _pgsql:_pgsql /mnt2/pg/data\n# su _pgsql\n$ ls\nclusterdb droplang pg_config pg_resetxlog \nreindexdb\ncreatedb dropuser pg_controldata pg_restore \nvacuumdb\ncreatelang ecpg pg_ctl postgres\ncreateuser initdb pg_dump postmaster\ndropdb ipcclean pg_dumpall psql\n$ ./initdb -D /mnt2/pg/data\nThe files belonging to this database system will be owned by user \n\"_pgsql\".\nThis user must also own the server process.\n\nThe database cluster will be initialized with locale C.\n\nfixing permissions on existing directory /mnt2/pg/data ... ok\ncreating directory /mnt2/pg/data/global ... ok\ncreating directory /mnt2/pg/data/pg_xlog ... ok\ncreating directory /mnt2/pg/data/pg_xlog/archive_status ... ok\ncreating directory /mnt2/pg/data/pg_clog ... ok\ncreating directory /mnt2/pg/data/pg_subtrans ... ok\ncreating directory /mnt2/pg/data/pg_twophase ... ok\ncreating directory /mnt2/pg/data/pg_multixact/members ... ok\ncreating directory /mnt2/pg/data/pg_multixact/offsets ... ok\ncreating directory /mnt2/pg/data/base ... ok\ncreating directory /mnt2/pg/data/base/1 ... ok\ncreating directory /mnt2/pg/data/pg_tblspc ... ok\nselecting default max_connections ... 100\nselecting default shared_buffers ... 1000\ncreating configuration files ... ok\ncreating template1 database in /mnt2/pg/data/base/1 ... ok\ninitializing pg_authid ... ok\nenabling unlimited row size for system tables ... ok\ninitializing dependencies ... ok\ncreating system views ... ok\nloading pg_description ... ok\ncreating conversions ... ok\nsetting privileges on built-in objects ... ok\ncreating information schema ... ok\nvacuuming database template1 ... ok\ncopying template1 to template0 ... ok\ncopying template1 to postgres ... ok\n\nWARNING: enabling \"trust\" authentication for local connections\nYou can change this by editing pg_hba.conf or using the -A option the\nnext time you run initdb.\n\nSuccess. You can now start the database server using:\n\n ./postmaster -D /mnt2/pg/data\nor\n ./pg_ctl -D /mnt2/pg/data -l logfile start\n\n$ ./pg_ctl -D /mnt2/pg/data -l /mnt2/pg/data/logfile start\npostmaster starting\n$ ./psql postgres\nWelcome to psql 8.1.0, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? 
for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\npostgres=# create table test (id serial, val integer);\nNOTICE: CREATE TABLE will create implicit sequence \"test_id_seq\" for \nserial column \"test.id\"\nCREATE TABLE\npostgres=# create unique index testid on test (id);\nCREATE INDEX\npostgres=# create index testval on test (val);\nCREATE INDEX\npostgres=# insert into test (val) values (round(random() \n*1024*1024*1024));\nINSERT 0 1\npostgres=# vacuum full analyze;\nVACUUM\npostgres=# select count(1) from test;\ncount\n-------\n 1\n(1 row)\n\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n----------------------------------------------------------------------\nAggregate (cost=1.04..1.05 rows=1 width=0)\n -> Unique (cost=1.02..1.03 rows=1 width=8)\n -> Sort (cost=1.02..1.02 rows=1 width=8)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..1.01 rows=1 width=8)\n(5 rows)\n\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 1\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 2\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 4\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 8\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 16\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 32\npostgres=# vacuum full analyze;\nVACUUM\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n-----------------------------------------------------------------------\nAggregate (cost=4.68..4.69 rows=1 width=0)\n -> Unique (cost=3.56..3.88 rows=64 width=8)\n -> Sort (cost=3.56..3.72 rows=64 width=8)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..1.64 rows=64 width=8)\n(5 rows)\n\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 64\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 128\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 256\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 512\npostgres=# vacuum full analyze;\nVACUUM\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------\nAggregate (cost=55.63..55.64 rows=1 width=0)\n -> Unique (cost=0.00..42.82 rows=1024 width=8)\n -> Index Scan using testval on test (cost=0.00..40.26 \nrows=1024 width=8)\n(3 rows)\n\npostgres=# select count(1) from test;\ncount\n-------\n 1024\n(1 row)\n\npostgres=# set enable_indexscan=off;\nSET\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n--\nAggregate (cost=85.36..85.37 rows=1 width=0)\n -> Unique (cost=67.44..72.56 rows=1024 width=8)\n -> Sort (cost=67.44..70.00 rows=1024 width=8)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..16.24 rows=1024 \nwidth=8)\n(5 rows)\n\npostgres=# set enable_indexscan=on;\nSET\npostgres=# insert into test (val) select round(random() \n*1024*1024*1024) from test;\nINSERT 0 1024\npostgres=# vacuum full 
analyze;\nVACUUM\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------\nAggregate (cost=105.25..105.26 rows=1 width=0)\n -> Unique (cost=0.00..79.65 rows=2048 width=8)\n -> Index Scan using testval on test (cost=0.00..74.53 \nrows=2048 width=8)\n(3 rows)\n\npostgres=#\n\n\n\n",
"msg_date": "Fri, 25 Nov 2005 00:34:16 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "Pailloncy Jean-Gerard <[email protected]> writes:\n> I redo the test, with a freshly installed data directory. Same result.\n\nWhat \"same result\"? You only ran it up to 2K rows, not 2M. In any\ncase, EXPLAIN without ANALYZE is pretty poor ammunition for complaining\nthat the planner made the wrong choice. I ran the same test case,\nand AFAICS the indexscan is the right choice at 2K rows:\n\nregression=# explain analyze select count(*) from (select distinct on (val) * from test) as foo;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=105.24..105.25 rows=1 width=0) (actual time=41.561..41.565 rows=1 loops=1)\n -> Unique (cost=0.00..79.63 rows=2048 width=8) (actual time=0.059..32.459 rows=2048 loops=1)\n -> Index Scan using testval on test (cost=0.00..74.51 rows=2048 width=8) (actual time=0.049..13.197 rows=2048 loops=1)\n Total runtime: 41.683 ms\n(4 rows)\n\nregression=# set enable_indexscan TO 0;\nSET\nregression=# explain analyze select count(*) from (select distinct on (val) * from test) as foo;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=179.96..179.97 rows=1 width=0) (actual time=59.567..59.571 rows=1 loops=1)\n -> Unique (cost=144.12..154.36 rows=2048 width=8) (actual time=21.438..50.434 rows=2048 loops=1)\n -> Sort (cost=144.12..149.24 rows=2048 width=8) (actual time=21.425..30.589 rows=2048 loops=1)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..31.48 rows=2048 width=8) (actual time=0.014..9.902 rows=2048 loops=1)\n Total runtime: 60.265 ms\n(6 rows)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2005 21:37:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan "
},
{
"msg_contents": "Tom Lane wrote:\n\n>What \"same result\"? You only ran it up to 2K rows, not 2M. In any\n>case, EXPLAIN without ANALYZE is pretty poor ammunition for complaining\n>that the planner made the wrong choice. I ran the same \n>\n\nHello, sorry to jump in mid-stream, but this reminded me of something.\n\nI have hit cases where I have a query for which there is a somewhat \n\"obvious\" (to a human...) query plan that should make it possible to get \na query answer pretty quickly. Yet the query \"never\" finishes (or \nrather, after hours of waiting I finally kill it). I assume this is \nbecause of a sub-optimal query plan. But, it appears that an EXPLAIN \nANALYZE runs the actual query, so it takes as long as the actual query.\n\nIn such a case, how can I go about tracking down the issue, up to an \nincluding a complaint about the query planner? :-)\n\n(Overall, I'm pretty pleased with the PG query planner; it often gets \nbetter results than another, popular commercial DBMS we use here.... \nthat is just a general impression, not the result of setting up the same \nschema in each for a comparison.)\n\nKyle Cordes\nwww.kylecordes.com\n\n\n",
"msg_date": "Thu, 24 Nov 2005 21:15:44 -0600",
"msg_from": "Kyle Cordes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "On Thu, Nov 24, 2005 at 09:15:44PM -0600, Kyle Cordes wrote:\n> I have hit cases where I have a query for which there is a somewhat \n> \"obvious\" (to a human...) query plan that should make it possible to get \n> a query answer pretty quickly. Yet the query \"never\" finishes (or \n> rather, after hours of waiting I finally kill it). I assume this is \n> because of a sub-optimal query plan. But, it appears that an EXPLAIN \n> ANALYZE runs the actual query, so it takes as long as the actual query.\n\nIn this case, you probably can't do better than EXPLAIN. Look at the\nestimates, find out if the cost is way high somewhere. If a simple query\nestimates a billion disk page fetches, something is probably wrong, ie. the\nplanner did for some reason overlook the query plan you were thinking of. (A\ncommon problem here used to include data type mismatches leading to less\nefficient joins, lack of index scans and less efficient IN/NOT IN; most of\nthat is fixed, but a few cases still remain.)\n\nIf the query is estimated at a reasonable amount of disk page fetches but\nstill takes forever, look at the number of estimated rows returned. Do they\nmake sense? If you run subsets of your query, are they about right? If not,\nyou probably want to fiddle with the statistics targets.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Fri, 25 Nov 2005 12:40:17 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "Steinar H. Gunderson wrote:\n> On Thu, Nov 24, 2005 at 09:15:44PM -0600, Kyle Cordes wrote:\n> > I have hit cases where I have a query for which there is a somewhat \n> > \"obvious\" (to a human...) query plan that should make it possible to get \n> > a query answer pretty quickly. Yet the query \"never\" finishes (or \n> > rather, after hours of waiting I finally kill it). I assume this is \n> > because of a sub-optimal query plan. But, it appears that an EXPLAIN \n> > ANALYZE runs the actual query, so it takes as long as the actual query.\n> \n> In this case, you probably can't do better than EXPLAIN. Look at the\n> estimates, find out if the cost is way high somewhere.\n\nAlso you want to make absolutely sure all the involved tables have been\nANALYZEd recently.\n\nIf you have weird cases where there is an obvious query plan and the\noptimizer is not using it, by all means submit it so that developers can\ntake a look at how to improve the optimizer.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Fri, 25 Nov 2005 09:32:00 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "> What \"same result\"? You only ran it up to 2K rows, not 2M. In any\nSorry, I do this over and over until xxx.000 rows but I do not write \nin the mail.\n\nI do it again. initdb, create table, insert, vacuum full analyze, \nexplain analyze at each stage.\nAnd there was no problem.\n\nSo I make a copy of the offending data directory, and try again. And \nI got IndexScan only.\nI will get an headheak ;-)\n\nToo big to be send by mail: http://rilk.com/pg81.html\n\nCordialement,\nJean-G�rard Pailloncy\n\n",
"msg_date": "Fri, 25 Nov 2005 15:47:16 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan "
},
{
"msg_contents": "Hi,\n\nAfter few test, the difference is explained by the \neffective_cache_size parameter.\n\nwith effective_cache_size=1000 (default)\nthe planner chooses the following plan\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------\nAggregate (cost=421893.64..421893.65 rows=1 width=0)\n -> Unique (cost=385193.48..395679.24 rows=2097152 width=8)\n -> Sort (cost=385193.48..390436.36 rows=2097152 width=8)\n Sort Key: test.val\n -> Seq Scan on test (cost=0.00..31252.52 \nrows=2097152 width=8)\n(5 rows)\n\n\nwith effective_cache_size=15000\nthe planner chooses the following plan\npostgres=# explain select count(*) from (select distinct on (val) * \nfrom test) as foo;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------\nAggregate (cost=101720.39..101720.40 rows=1 width=0)\n -> Unique (cost=0.00..75505.99 rows=2097152 width=8)\n -> Index Scan using testval on test (cost=0.00..70263.11 \nrows=2097152 width=8)\n(3 rows)\n\nI test some other values for effective_cache_size.\nThe switch from seq to index scan happens between 9900 and 10000 for \neffective_cache_size.\n\nI have my sql server on a OpenBSD 3.8 box with 1 Gb of RAM with \nnothing else running on it.\nI setup the cachepercent to 25. I expect to have 25% of 1 Gb of RAM \n(256 Mb) as file cache.\neffective_cache_size=15000 means 15000 x 8K of OS cache = 120,000 Kb \nwhich is lower than my 256 MB of disk cache.\n\nI recall the result of my precedent test.\n#rows 2097152\nIndexScan 1363396,581s\nSeqScan 98758,445s\nRatio 13,805\nSo the planner when effective_cache_size=15000 chooses a plan that is \n13 times slower than the seqscan one.\n\nI did not understand where the problem comes from.\nAny help welcome.\n\nCordialement,\nJean-G�rard Pailloncy\n\n\n",
"msg_date": "Tue, 6 Dec 2005 17:19:34 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
}
] |
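The switch Jean-Gérard describes can be reproduced per session without editing postgresql.conf, which makes it easier to compare the two plans on the same data. A rough sketch, assuming the same test table as in the thread; in 8.1 effective_cache_size is counted in 8 kB pages, so 15000 is roughly 120 MB:

    SHOW effective_cache_size;
    SHOW random_page_cost;

    -- default setting: the planner picks the sort + seq scan
    SET effective_cache_size = 1000;
    EXPLAIN ANALYZE
    SELECT count(*) FROM (SELECT DISTINCT ON (val) * FROM test) AS foo;

    -- larger cache estimate: the planner switches to the full index scan
    SET effective_cache_size = 15000;
    EXPLAIN ANALYZE
    SELECT count(*) FROM (SELECT DISTINCT ON (val) * FROM test) AS foo;

If the index-scan plan really is about 13 times slower on this box, the estimates rather than the planner are the thing to adjust: a smaller effective_cache_size or a larger random_page_cost for this workload pushes the choice back toward the seq scan.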
[
{
"msg_contents": "Pailloncy Jean-Gerard <[email protected]> wrote ..\n[snip]\n\nTHIS MAY SEEM SILLY but vacuum is mispelled below and presumably there was never any ANALYZE done.\n\n> \n> postgres=# vaccum full verbose analyze;\n\n",
"msg_date": "Wed, 23 Nov 2005 21:20:19 -0800",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
},
{
"msg_contents": "> THIS MAY SEEM SILLY but vacuum is mispelled below and presumably \n> there was never any ANALYZE done.\n>\n>>\n>> postgres=# vaccum full verbose analyze;\nI do have done the \"vacUUm full verbose analyze;\".\nBut I copy/paste the wrong line.\n\nCordialement,\nJean-G�rard Pailloncy\n\n",
"msg_date": "Thu, 24 Nov 2005 12:54:25 +0100",
"msg_from": "Pailloncy Jean-Gerard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8.1 count(*) distinct: IndexScan/SeqScan"
}
] |
[
{
"msg_contents": "Hi ,\n\ni get the following error on doing anything with the database after \nstarting it.\nCan anyone suggest how do i fix this\n\n xlog flush request 7/7D02338C is not satisfied --- flushed only to \n3/2471E324\n\nVipul Gupta\n\nHi ,\n\ni get the following error on doing anything with the database after starting it.\nCan anyone suggest how do i fix this\n\n xlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324\n\nVipul Gupta",
"msg_date": "Thu, 24 Nov 2005 06:24:32 -0600",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "xlog flush request error"
},
{
"msg_contents": "[email protected] writes:\n> Can anyone suggest how do i fix this\n\n> xlog flush request 7/7D02338C is not satisfied --- flushed only to \n> 3/2471E324\n\nThis looks like corrupt data to me --- specifically, garbage in the LSN\nfield of a page header. Is that all you get? PG 7.4 and up should tell\nyou the problem page number in a CONTEXT: line.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2005 10:37:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xlog flush request error "
}
] |
[
{
"msg_contents": "Hi Folks,\n\nI'm new to Postgresql.\n\nI'm having great difficulties getting the performance I had hoped for\nfrom Postgresql 8.0. The typical query below takes ~20 minutes !!\n\nI hope an expert out there will tell me what I'm doing wrong - I hope\n*I* am doing something wrong.\n\nHardware\n--------\nSingle processor, Intel Xeon 3.06 GHz machine running Red Hat\nEnt. 4. with 1.5 GB of RAM.\n\nThe machine is dedicated to running Postgresql 8.0 and Apache/mod_perl\netc. The database is being accessed for report generation via a web\nform. The web server talks to Pg over TCP/IP (I know, that I don't\nneed to do this if they are all on the same machine, but I have good\nreasons for this and don't suspect that this is where my problems are\n- I have the same poor performance when running from psql on the\nserver.)\n\nDatabase\n--------\nVery simple, not fully normalized set of two tables. The first table,\nvery small (2000 lines of 4 cols with very few chars and integers in\nin col). The other quite a bit larger (500000 lines with 15\ncols. with the largest fields ~ 256 chars)\n\nTypical query\n------------\n\nSELECT n.name\nFROM node n\nWHERE n.name\nLIKE '56x%'\nAND n.type='H'\nAND n.usage='TEST'\nAND n.node_id\nNOT IN\n(select n.node_id\nFROM job_log j\nINNER JOIN node n\nON j.node_id = n.node_id\nWHERE n.name\nLIKE '56x%'\nAND n.type='H'\nAND n.usage='TEST'\nAND j.job_name = 'COPY FILES'\nAND j.job_start >= '2005-11-14 00:00:00'\nAND (j.job_stop <= '2005-11-22 09:31:10' OR j.job_stop IS NULL))\nORDER BY n.name\n\n\nThe node table is the small table and the job_log table is the large\ntable.\n\n\nI've tried all the basic things that I found in the documentation like\nVACUUM ANALYZE, EXPLAIN etc., but I suspect there is something\nterribly wrong with what I'm doing and these measures will not shave\noff 19 min and 50 seconds off the query time.\n\nAny help and comments would be very much appreciated.\n\n\nBealach\n\n\n",
"msg_date": "Thu, 24 Nov 2005 13:06:48 +0000",
"msg_from": "\"Bealach-na Bo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow queries - please help."
},
{
"msg_contents": "> Typical query\n> ------------\n>\n> SELECT n.name\n> FROM node n\n> WHERE n.name\n> LIKE '56x%'\n> AND n.type='H'\n> AND n.usage='TEST'\n> AND n.node_id\n> NOT IN\n> (select n.node_id\n> FROM job_log j\n> INNER JOIN node n\n> ON j.node_id = n.node_id\n> WHERE n.name\n> LIKE '56x%'\n> AND n.type='H'\n> AND n.usage='TEST'\n> AND j.job_name = 'COPY FILES'\n> AND j.job_start >= '2005-11-14 00:00:00'\n> AND (j.job_stop <= '2005-11-22 09:31:10' OR j.job_stop IS NULL))\n> ORDER BY n.name\n\nDo you have any indexes?\n\nregards\nClaus\n",
"msg_date": "Thu, 24 Nov 2005 14:23:38 +0100",
"msg_from": "Claus Guttesen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries - please help."
},
{
"msg_contents": "\nHi,\n\nThanks for your comments. I've explicitly made any indexes, but the\ndefault ones are:\n\n\n\[email protected]=> \\di\n List of relations\nSchema | Name | Type | Owner | Table\n---------+-----------------+-------+---------+---------\nuser | job_log_id_pkey | index | user | job_log\nuser | node_id_pkey | index | user | node\nuser | node_name_key | index | user | node\n(3 rows)\n\n\n\nI'm also sending the EXPLAIN outputs.\n\n\n\n\n\n explain SELECT n.name,n.type,\n n.usage, j.status,\n j.job_start,j.job_stop,\n j.nfiles_in_job,j.job_name\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '56x%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL)\n ORDER BY n.name;\n\n\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nNested Loop (cost=0.00..75753.31 rows=1 width=461)\n Join Filter: (\"inner\".node_id = \"outer\".node_id)\n -> Index Scan using node_name_key on node n (cost=0.00..307.75 rows=1 \nwidth=181)\n Filter: ((name ~~ '56x%'::text) AND (\"type\" = 'K'::bpchar) AND \n(\"usage\" = 'LIVE'::bpchar))\n -> Seq Scan on job_log j (cost=0.00..75445.54 rows=1 width=288)\n Filter: ((job_name = 'COPY FILES'::bpchar) AND (job_start >= \n'2005-11-14 00:00:00'::timestamp without time zone) AND ((job_stop <= \n'2005-11-14 05:00:00'::timestamp without time zone) OR (job_stop IS NULL)))\n(6 rows)\n\n\n explain SELECT n.name, n.type, n.usage\n FROM node n\n WHERE n.name\n LIKE '56x%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND n.node_id\n NOT IN\n (SELECT n.node_id\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '56x%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL))\n ORDER BY n.name;\n\n\n\n\n\n\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using node_name_key on node n (cost=75451.55..75764.94 rows=1 \nwidth=177)\n Filter: ((name ~~ '56x%'::text) AND (\"type\" = 'K'::bpchar) AND (\"usage\" = \n'LIVE'::bpchar) AND (NOT (hashed subplan)))\n SubPlan\n -> Nested Loop (cost=0.00..75451.54 rows=1 width=4)\n -> Seq Scan on job_log j (cost=0.00..75445.54 rows=1 width=4)\n Filter: ((job_name = 'COPY FILES'::bpchar) AND (job_start \n >= '2005-11-14 00:00:00'::timestamp without time zone) AND ((job_stop <= \n'2005-11-14 05:00:00'::timestamp without time zone) OR (job_stop IS NULL)))\n -> Index Scan using node_id_pkey on node n (cost=0.00..5.99 \nrows=1 width=4)\n Index Cond: (\"outer\".node_id = n.node_id)\n Filter: ((name ~~ '56x%'::text) AND (\"type\" = 'K'::bpchar) \nAND (\"usage\" = 'LIVE'::bpchar))\n\n\nYours,\n\nBealach\n\n\n>From: Claus Guttesen <[email protected]>\n>To: Bealach-na Bo <[email protected]>\n>CC: [email protected]\n>Subject: Re: [PERFORM] Very slow queries - please help.\n>Date: Thu, 24 Nov 2005 14:23:38 +0100\n>\n> > Typical query\n> > ------------\n> >\n> > SELECT n.name\n> > FROM node n\n> > WHERE n.name\n> > LIKE 
'56x%'\n> > AND n.type='H'\n> > AND n.usage='TEST'\n> > AND n.node_id\n> > NOT IN\n> > (select n.node_id\n> > FROM job_log j\n> > INNER JOIN node n\n> > ON j.node_id = n.node_id\n> > WHERE n.name\n> > LIKE '56x%'\n> > AND n.type='H'\n> > AND n.usage='TEST'\n> > AND j.job_name = 'COPY FILES'\n> > AND j.job_start >= '2005-11-14 00:00:00'\n> > AND (j.job_stop <= '2005-11-22 09:31:10' OR j.job_stop IS NULL))\n> > ORDER BY n.name\n>\n>Do you have any indexes?\n>\n>regards\n>Claus\n\n\n",
"msg_date": "Thu, 24 Nov 2005 14:36:06 +0000",
"msg_from": "\"Bealach-na Bo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries - please help."
},
{
"msg_contents": "Hi,\n\n> I'm also sending the EXPLAIN outputs.\n\nPlease provide EXPLAIN ANALYZE outputs instead of EXPLAIN. You will have \nmore information.\n\nIndexes on your tables are obviously missing. You should try to add:\n\nCREATE INDEX idx_node_filter ON node(name, type, usage);\nCREATE INDEX idx_job_log_filter ON job_log(job_name, job_start, job_stop);\n\nI'm not so sure it's a good idea to add job_stop in this index as you \nhave an IS NULL in your query so I'm not sure it can be used. You should \ntry it anyway and remove it if not needed.\n\nI added all your search fields in the indexes but it depends a lot on \nthe selectivity of your conditions. I don't know your data but I think \nyou understand the idea.\n\nHTH\n\n--\nGuillaume\n",
"msg_date": "Thu, 24 Nov 2005 16:03:47 +0100",
"msg_from": "Guillaume Smet <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries - please help."
},
{
"msg_contents": "\"Bealach-na Bo\" <[email protected]> writes:\n> I'm having great difficulties getting the performance I had hoped for\n> from Postgresql 8.0. The typical query below takes ~20 minutes !!\n\nYou need to show us the table definition (including indexes) and the\nEXPLAIN ANALYZE results for the query.\n\nIt seems likely that the NOT IN is the source of your problems,\nbut it's hard to be sure without EXPLAIN results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2005 10:15:35 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries - please help. "
}
] |
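Since Tom points at the NOT IN as the likely source of the problem, one standard rewrite is an anti-join written with NOT EXISTS. The sketch below should be equivalent for this schema as long as node_id is node's unique, non-null key (which the table definitions posted later suggest), but that assumption is worth checking:

    SELECT n.name
    FROM node n
    WHERE n.name LIKE '56x%'
      AND n.type = 'H'
      AND n.usage = 'TEST'
      AND NOT EXISTS (
            SELECT 1
            FROM job_log j
            WHERE j.node_id = n.node_id
              AND j.job_name = 'COPY FILES'
              AND j.job_start >= '2005-11-14 00:00:00'
              AND (j.job_stop <= '2005-11-22 09:31:10'
                   OR j.job_stop IS NULL))
    ORDER BY n.name;

The correlated subquery no longer needs its own join back to node, and with an index on job_log covering node_id (or job_name and job_start) each outer row only has to find the first matching job_log entry.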
[
{
"msg_contents": "OK.\n\nThe consensus seems to be that I need more indexes and I also need to\nlook into the NOT IN statement as a possible bottleneck. I've\nintroduced the indexes which has led to a DRAMATIC change in response\ntime. Now I have to experiment with INNER JOIN -> OUTER JOIN\nvariations, SET ENABLE_SEQSCAN=OFF.\n\nForgive me for not mentioning each person individually and by name.\nYou have all contributed to confirming what I had suspected (and\nhoped): that *I* have a lot to learn!\n\nI'm attaching table descriptions, the first few lines of top output\nwhile the queries were running, index lists, sample queries and\nEXPLAIN ANALYSE output BEFORE and AFTER the introduction of the\nindexes. As I said, DRAMATIC :) I notice that the CPU usage does not\nvary very much, it's nearly 100% anyway, but the memory usage drops\nmarkedly, which is another very nice result of the index introduction.\n\nAny more comments and tips would be very welcome.\n\nThank you all for your input.\n\nBealach.\n\n\n\n\[email protected]=> \\d job_log\n Table \"blouser.job_log\"\n Column | Type | Modifiers\n----------------+-----------------------------+--------------------------------------------------\njob_log_id | integer | not null default \nnextval('job_log_id_seq'::text)\nfirst_registry | timestamp without time zone |\nblogger_name | character(50) |\nnode_id | integer |\njob_type | character(50) |\njob_name | character(256) |\njob_start | timestamp without time zone |\njob_timeout | interval |\njob_stop | timestamp without time zone |\nnfiles_in_job | integer |\nstatus | integer |\nerror_code | smallint |\nIndexes:\n \"job_log_id_pkey\" PRIMARY KEY, btree (job_log_id)\nCheck constraints:\n \"job_log_status_check\" CHECK (status = 0 OR status = 1 OR status = 8 OR \nstatus = 9)\nForeign-key constraints:\n \"legal_node\" FOREIGN KEY (node_id) REFERENCES node(node_id)\n\n\n\n\n\[email protected]=> \\d node\n Table \"blouser.node\"\nColumn | Type | Modifiers\n---------+---------------+-----------------------------------------------\nnode_id | integer | not null default nextval('node_id_seq'::text)\nname | character(50) |\ntype | character(1) |\nusage | character(4) |\nIndexes:\n \"node_id_pkey\" PRIMARY KEY, btree (node_id)\n \"node_name_key\" UNIQUE, btree (name)\nCheck constraints:\n \"node_type_check\" CHECK (\"type\" = 'B'::bpchar OR \"type\" = 'K'::bpchar OR \n\"type\" = 'C'::bpchar OR \"type\" = 'T'::bpchar OR \"type\" = 'R'::bpchar)\n \"node_usage_check\" CHECK (\"usage\" = 'TEST'::bpchar OR \"usage\" = \n'LIVE'::bpchar)\n\n\n#========================before new indexes were created\n\n\nTasks: 114 total, 2 running, 112 sleeping, 0 stopped, 0 zombie\nCpu(s): 25.7% us, 24.5% sy, 0.0% ni, 49.4% id, 0.3% wa, 0.0% hi, 0.0% si\nMem: 1554788k total, 1513576k used, 41212k free, 31968k buffers\nSwap: 1020024k total, 27916k used, 992108k free, 708728k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n25883 postgres 25 0 20528 12m 11m R 99.7 0.8 4:54.91 postmaster\n\n\n\n\n\[email protected]=> \\di\n List of relations\nSchema | Name | Type | Owner | Table\n---------+-----------------+-------+---------+---------\nblouser | job_log_id_pkey | index | blouser | job_log\nblouser | node_id_pkey | index | blouser | node\nblouser | node_name_key | index | blouser | node\n(3 rows)\n\n\n EXPLAIN ANALYSE SELECT n.name,n.type,\n n.usage, j.status,\n j.job_start,j.job_stop,\n j.nfiles_in_job,j.job_name\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '711%'\n AND 
n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL)\n ORDER BY n.name;\n\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\nNested Loop (cost=0.00..75753.31 rows=1 width=461) (actual \ntime=270486.692..291662.350 rows=3 loops=1)\n Join Filter: (\"inner\".node_id = \"outer\".node_id)\n -> Index Scan using node_name_key on node n (cost=0.00..307.75 rows=1 \nwidth=181) (actual time=0.135..11.034 rows=208 loops=1)\n Filter: ((name ~~ '711%'::text) AND (\"type\" = 'K'::bpchar) AND \n(\"usage\" = 'LIVE'::bpchar))\n -> Seq Scan on job_log j (cost=0.00..75445.54 rows=1 width=288) (actual \ntime=273.374..1402.089 rows=22 loops=208)\n Filter: ((job_name = 'COPY FILES'::bpchar) AND (job_start >= \n'2005-11-14 00:00:00'::timestamp without time zone) AND ((job_stop <= \n'2005-11-14 05:00:00'::timestamp without time zone) OR (job_stop IS NULL)))\nTotal runtime: 291662.482 ms\n(7 rows)\n\n\n EXPLAIN ANALYSE SELECT n.name, n.type, n.usage\n FROM node n\n WHERE n.name\n LIKE '56x%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND n.node_id NOT IN\n (SELECT n.node_id\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '711%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL))\n ORDER BY n.name;\n\n\n \n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------\nIndex Scan using node_name_key on node n (cost=75451.55..75764.94 rows=1 \nwidth=177) (actual time=1394.617..1398.609 rows=205 loops=1)\n Filter: ((name ~~ '56x%'::text) AND (\"type\" = 'K'::bpchar) AND (\"usage\" = \n'LIVE'::bpchar) AND (NOT (hashed subplan)))\n SubPlan\n -> Nested Loop (cost=0.00..75451.54 rows=1 width=4) (actual \ntime=1206.622..1394.462 rows=3 loops=1)\n -> Seq Scan on job_log j (cost=0.00..75445.54 rows=1 width=4) \n(actual time=271.361..1393.363 rows=22 loops=1)\n Filter: ((job_name = 'COPY FILES'::bpchar) AND (job_start \n >= '2005-11-14 00:00:00'::timestamp without time zone) AND ((job_stop <= \n'2005-11-14 05:00:00'::timestamp without time zone) OR (job_stop IS NULL)))\n -> Index Scan using node_id_pkey on node n (cost=0.00..5.99 \nrows=1 width=4) (actual time=0.042..0.042 rows=0 loops=22)\n Index Cond: (\"outer\".node_id = n.node_id)\n Filter: ((name ~~ '711%'::text) AND (\"type\" = 'K'::bpchar) \nAND (\"usage\" = 'LIVE'::bpchar))\nTotal runtime: 1398.868 ms\n(10 rows)\n\n\n\n\n#===================================after the new indexes were created\n\nTasks: 114 total, 2 running, 112 sleeping, 0 stopped, 0 zombie\nCpu(s): 22.9% us, 27.2% sy, 0.0% ni, 49.7% id, 0.0% wa, 0.2% hi, 0.0% si\nMem: 1554788k total, 1414632k used, 140156k free, 14784k buffers\nSwap: 1020024k total, 28008k used, 992016k free, 623652k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n26409 postgres 25 0 21580 8684 7116 R 99.9 0.6 0:25.38 postmaster\n\n\n\nSchema | Name | Type | Owner | Table\n---------+--------------------+-------+---------+---------\nblouser | idx_job_log_filter | index | blouser | job_log\nblouser | idx_node_filter | index | blouser | node\nblouser | job_log_id_pkey | index | blouser | job_log\nblouser | node_id_pkey | index | blouser | node\nblouser 
| node_name_key | index | blouser | node\n(5 rows)\n\n\n EXPLAIN ANALYSE SELECT n.name,n.type,\n n.usage, j.status,\n j.job_start,j.job_stop,\n j.nfiles_in_job,j.job_name\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '711%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL)\n ORDER BY n.name;\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=18049.23..18049.23 rows=1 width=461) (actual \ntime=223.540..223.543 rows=3 loops=1)\n Sort Key: n.name\n -> Nested Loop (cost=0.00..18049.22 rows=1 width=461) (actual \ntime=201.575..223.470 rows=3 loops=1)\n -> Index Scan using idx_job_log_filter on job_log j \n(cost=0.00..18043.21 rows=1 width=288) (actual time=52.567..222.855 rows=22 \nloops=1)\n Index Cond: ((job_name = 'COPY FILES'::bpchar) AND (job_start \n >= '2005-11-14 00:00:00'::timestamp without time zone))\n Filter: ((job_stop <= '2005-11-14 05:00:00'::timestamp \nwithout time zone) OR (job_stop IS NULL))\n -> Index Scan using node_id_pkey on node n (cost=0.00..5.99 \nrows=1 width=181) (actual time=0.022..0.022 rows=0 loops=22)\n Index Cond: (\"outer\".node_id = n.node_id)\n Filter: ((name ~~ '711%'::text) AND (\"type\" = 'K'::bpchar) \nAND (\"usage\" = 'LIVE'::bpchar))\nTotal runtime: 223.677 ms\n(10 rows)\n\n\n\n EXPLAIN ANALYSE SELECT n.name, n.type, n.usage\n FROM node n\n WHERE n.name\n LIKE '56x%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND n.node_id NOT IN\n (SELECT n.node_id\n FROM job_log j\n INNER JOIN node n\n ON j.node_id = n.node_id\n WHERE n.name\n LIKE '711%'\n AND n.type = 'K'\n AND n.usage = 'LIVE'\n AND j.job_name = 'COPY FILES'\n AND j.job_start >= '2005-11-14 00:00:00'\n AND (j.job_stop <= '2005-11-14 05:00:00' OR j.job_stop IS NULL))\n ORDER BY n.name;\n\n\n \nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\nSort (cost=18141.89..18141.89 rows=1 width=177) (actual \ntime=223.495..223.627 rows=205 loops=1)\n Sort Key: name\n -> Seq Scan on node n (cost=18049.22..18141.88 rows=1 width=177) \n(actual time=220.293..222.526 rows=205 loops=1)\n Filter: ((name ~~ '56x%'::text) AND (\"type\" = 'K'::bpchar) AND \n(\"usage\" = 'LIVE'::bpchar) AND (NOT (hashed subplan)))\n SubPlan\n -> Nested Loop (cost=0.00..18049.22 rows=1 width=4) (actual \ntime=198.343..220.195 rows=3 loops=1)\n -> Index Scan using idx_job_log_filter on job_log j \n(cost=0.00..18043.21 rows=1 width=4) (actual time=50.748..219.741 rows=22 \nloops=1)\n Index Cond: ((job_name = 'COPY FILES'::bpchar) AND \n(job_start >= '2005-11-14 00:00:00'::timestamp without time zone))\n Filter: ((job_stop <= '2005-11-14 \n05:00:00'::timestamp without time zone) OR (job_stop IS NULL))\n -> Index Scan using node_id_pkey on node n \n(cost=0.00..5.99 rows=1 width=4) (actual time=0.015..0.016 rows=0 loops=22)\n Index Cond: (\"outer\".node_id = n.node_id)\n Filter: ((name ~~ '711%'::text) AND (\"type\" = \n'K'::bpchar) AND (\"usage\" = 'LIVE'::bpchar))\nTotal runtime: 223.860 ms\n(13 rows)\n\n\n",
"msg_date": "Thu, 24 Nov 2005 18:14:53 +0000",
"msg_from": "\"Bealach-na Bo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries - please help"
},
{
"msg_contents": "\nOn Nov 24, 2005, at 12:14 PM, Bealach-na Bo wrote:\n\n> The consensus seems to be that I need more indexes and I also need to\n> look into the NOT IN statement as a possible bottleneck. I've\n> introduced the indexes which has led to a DRAMATIC change in response\n> time. Now I have to experiment with INNER JOIN -> OUTER JOIN\n> variations, SET ENABLE_SEQSCAN=OFF.\n>\n> Forgive me for not mentioning each person individually and by name.\n> You have all contributed to confirming what I had suspected (and\n> hoped): that *I* have a lot to learn!\n>\n> I'm attaching table descriptions, the first few lines of top output\n> while the queries were running, index lists, sample queries and\n> EXPLAIN ANALYSE output BEFORE and AFTER the introduction of the\n> indexes. As I said, DRAMATIC :) I notice that the CPU usage does not\n> vary very much, it's nearly 100% anyway, but the memory usage drops\n> markedly, which is another very nice result of the index introduction.\n>\n> Any more comments and tips would be very welcome.\n\nYou might find the following resources from techdocs instructive:\n\nhttp://techdocs.postgresql.org/redir.php?link=/techdocs/ \npgsqladventuresep2.php\n\nhttp://techdocs.postgresql.org/redir.php?link=/techdocs/ \npgsqladventuresep3.php\n\nThese documents provide some guidance into the process of index \nselection. It seems like you could still stand to benefit from more \nindexes based on your queries, table definitions, and current indexes.\n\n--\nThomas F. O'Connell\nDatabase Architecture and Programming\nCo-Founder\nSitening, LLC\n\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005 (cell)\n615-469-5150 (office)\n615-469-5151 (fax)\n",
"msg_date": "Sun, 4 Dec 2005 00:40:01 -0600",
"msg_from": "\"Thomas F. O'Connell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow queries - please help"
},
{
"msg_contents": "Thanks very much - there are a lot of good articles there... Reading as \nfast as I can :)\n\nBest,\n\nBealach\n\n\n>From: \"Thomas F. O'Connell\" <[email protected]>\n>To: Bealach-na Bo <[email protected]>\n>CC: PgSQL - Performance <[email protected]>\n>Subject: Re: [PERFORM] Very slow queries - please help\n>Date: Sun, 4 Dec 2005 00:40:01 -0600\n>\n>\n>On Nov 24, 2005, at 12:14 PM, Bealach-na Bo wrote:\n>\n>>The consensus seems to be that I need more indexes and I also need to\n>>look into the NOT IN statement as a possible bottleneck. I've\n>>introduced the indexes which has led to a DRAMATIC change in response\n>>time. Now I have to experiment with INNER JOIN -> OUTER JOIN\n>>variations, SET ENABLE_SEQSCAN=OFF.\n>>\n>>Forgive me for not mentioning each person individually and by name.\n>>You have all contributed to confirming what I had suspected (and\n>>hoped): that *I* have a lot to learn!\n>>\n>>I'm attaching table descriptions, the first few lines of top output\n>>while the queries were running, index lists, sample queries and\n>>EXPLAIN ANALYSE output BEFORE and AFTER the introduction of the\n>>indexes. As I said, DRAMATIC :) I notice that the CPU usage does not\n>>vary very much, it's nearly 100% anyway, but the memory usage drops\n>>markedly, which is another very nice result of the index introduction.\n>>\n>>Any more comments and tips would be very welcome.\n>\n>You might find the following resources from techdocs instructive:\n>\n>http://techdocs.postgresql.org/redir.php?link=/techdocs/ \n>pgsqladventuresep2.php\n>\n>http://techdocs.postgresql.org/redir.php?link=/techdocs/ \n>pgsqladventuresep3.php\n>\n>These documents provide some guidance into the process of index selection. \n>It seems like you could still stand to benefit from more indexes based on \n>your queries, table definitions, and current indexes.\n>\n>--\n>Thomas F. O'Connell\n>Database Architecture and Programming\n>Co-Founder\n>Sitening, LLC\n>\n>http://www.sitening.com/\n>110 30th Avenue North, Suite 6\n>Nashville, TN 37203-6320\n>615-260-0005 (cell)\n>615-469-5150 (office)\n>615-469-5151 (fax)\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: Don't 'kill -9' the postmaster\n\n\n",
"msg_date": "Mon, 12 Dec 2005 16:24:40 +0000",
"msg_from": "\"Bealach-na Bo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow queries - please help"
}
] |
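If more indexes are still worth chasing, one candidate stands out from the table definitions posted earlier: job_log.node_id carries the foreign key used in every join, yet job_log only has its primary key and the new (job_name, job_start, job_stop) index. A hedged sketch, with idx_job_log_node_id as a purely illustrative name:

    CREATE INDEX idx_job_log_node_id ON job_log (node_id);

    -- afterwards, see which indexes the workload actually touches
    -- (row-level statistics collection must be enabled for the counters to move)
    SELECT relname, indexrelname, idx_scan
    FROM pg_stat_user_indexes
    WHERE relname IN ('job_log', 'node');

Indexes are not free to maintain on a table that takes steady inserts, so indexes whose idx_scan count stays at zero are candidates for dropping, which is as much a part of this tuning exercise as adding new ones.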
[
{
"msg_contents": "A quick note to say that I'm very grateful for Tom Lane's input also.\nTom, I did put you on the list of recipients for my last posting to\npgsql-performance, but got:\n\n\n--------------------cut here--------------------\nThis is an automatically generated Delivery Status Notification.\n\nDelivery to the following recipients failed.\n\n [email protected]\n\n\n\nMany regards,\n\nBealach\n\n\n",
"msg_date": "Thu, 24 Nov 2005 18:51:42 +0000",
"msg_from": "\"Bealach-na Bo\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow queries - please help"
}
] |
[
{
"msg_contents": "Hi tom,\n\nbasically when i run any query with database say,\n\nselect count(*) from table1;\n\nIt gives me the following error trace: \nWARNING: could not write block 297776 of 1663/2110743/2110807\nDETAIL: Multiple failures --- write error may be permanent.\nERROR: xlog flush request 7/7D02338C is not satisfied --- flushed only to \n3/2471E324\n writing block 297776 of relation 1663/2110743/2110807\nxlog flush request 7/7D02338C is not satisfied --- flushed only to \n3/2471E324\nxlog flush request 7/7D02338C is not satisfied --- flushed only to \n3/2471E324\\q\n\ni tried using pg_resetxlog but till date, have not been able to solve this \nproblem \n\nRegards,\nVipul Gupta\n\n\n\n\n\nTom Lane <[email protected]>\n11/24/2005 09:07 PM\n\n \n To: [email protected]\n cc: [email protected]\n Subject: Re: [PERFORM] xlog flush request error\n\n\[email protected] writes:\n> Can anyone suggest how do i fix this\n\n> xlog flush request 7/7D02338C is not satisfied --- flushed only to \n> 3/2471E324\n\nThis looks like corrupt data to me --- specifically, garbage in the LSN\nfield of a page header. Is that all you get? PG 7.4 and up should tell\nyou the problem page number in a CONTEXT: line.\n\n regards, tom lane\n\n\n\n\nHi tom,\n\nbasically when i run any query with database say,\n\nselect count(*) from table1;\n\nIt gives me the following error trace: \nWARNING: could not write block 297776 of 1663/2110743/2110807\nDETAIL: Multiple failures --- write error may be permanent.\nERROR: xlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324\n writing block 297776 of relation 1663/2110743/2110807\nxlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324\nxlog flush request 7/7D02338C is not satisfied --- flushed only to 3/2471E324\\q\n\ni tried using pg_resetxlog but till date, have not been able to solve this problem \n\nRegards,\nVipul Gupta\n\n\n\n\n\n\n\nTom Lane <[email protected]>\n11/24/2005 09:07 PM\n\n \n To: [email protected]\n cc: [email protected]\n Subject: Re: [PERFORM] xlog flush request error\n\n\[email protected] writes:\n> Can anyone suggest how do i fix this\n\n> xlog flush request 7/7D02338C is not satisfied --- flushed only to \n> 3/2471E324\n\nThis looks like corrupt data to me --- specifically, garbage in the LSN\nfield of a page header. Is that all you get? PG 7.4 and up should tell\nyou the problem page number in a CONTEXT: line.\n\n regards, tom lane",
"msg_date": "Thu, 24 Nov 2005 22:48:17 -0600",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: xlog flush request error"
},
{
"msg_contents": "[email protected] writes:\n> ERROR: xlog flush request 7/7D02338C is not satisfied --- flushed only to \n> 3/2471E324\n> writing block 297776 of relation 1663/2110743/2110807\n\nYou need to fix or zero out that data block ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Nov 2005 23:59:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: xlog flush request error "
}
] |
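For what it is worth, the numbers in that error identify the damaged file: 1663/2110743/2110807 is tablespace OID, database OID and relfilenode. A rough sketch of how one might locate the relation and, as a last resort, let reads zero out the bad page; the data on that page is lost either way, so a file-level backup first is prudent, and this assumes a release recent enough to have the zero_damaged_pages developer option:

    -- connected to the affected database, map the relfilenode from the
    -- error message back to a relation name
    SELECT relname, relkind FROM pg_class WHERE relfilenode = 2110807;

    -- last resort: pages whose headers are detected as damaged are
    -- replaced with zeroes when they are read
    SET zero_damaged_pages = on;
    SELECT count(*) FROM table1;
    SET zero_damaged_pages = off;

If the page header still looks valid enough that zero_damaged_pages never triggers, the remaining option is the one Tom implies: zero that 8 kB block directly in the relation's file with the server shut down, then restore whatever can be restored from backup.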
[
{
"msg_contents": "I have been reading all this technical talk about costs and such that\nI don't (_yet_) understand.\n\nNow I'm scared... what's the fastest way to do an equivalent of\ncount(*) on a table to know how many items it has?\n\nThanks,\nRodrigo\n",
"msg_date": "Fri, 25 Nov 2005 19:36:35 +0000",
"msg_from": "Rodrigo Madera <[email protected]>",
"msg_from_op": true,
"msg_subject": "Newbie question: ultra fast count(*)"
},
{
"msg_contents": "On 11/25/05, Rodrigo Madera <[email protected]> wrote:\n> I have been reading all this technical talk about costs and such that\n> I don't (_yet_) understand.\n>\n> Now I'm scared... what's the fastest way to do an equivalent of\n> count(*) on a table to know how many items it has?\n>\n> Thanks,\n> Rodrigo\n>\n\nyou really *need* this?\n\nyou can do\nSELECT reltuples FROM pg_class WHERE relname = 'your_table_name';\n\nbut this will give you an estimate... if you want real values you can\nmake a TRIGGER that maintain a counter in another table\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n",
"msg_date": "Fri, 25 Nov 2005 14:40:06 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Newbie question: ultra fast count(*)"
}
] |
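If an exact, instantly readable count really is needed, the trigger idea mentioned above can look roughly like the sketch below. The table name (items) and the counter/function names are made up for illustration, and the plpgsql language is assumed to be installed in the database.

    -- one-row counter table, seeded from the current count
    CREATE TABLE items_rowcount (n bigint NOT NULL);
    INSERT INTO items_rowcount SELECT count(*) FROM items;

    CREATE FUNCTION items_rowcount_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE items_rowcount SET n = n + 1;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE items_rowcount SET n = n - 1;
        END IF;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_rowcount_upd
        AFTER INSERT OR DELETE ON items
        FOR EACH ROW EXECUTE PROCEDURE items_rowcount_trig();

    -- then, instead of count(*):
    SELECT n FROM items_rowcount;

The trade-off is that every insert or delete now updates the single counter row, which serializes concurrent writers, so this only pays off when the count is read far more often than the table is modified.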
[
{
"msg_contents": "Ok, I've subscribed (hopefully list volume won't kill me :-)\n\nI'm covering several things in this message since I didn't receive the \nprior messages in the thread\n\nfirst off these benchamrks are not being sponsered by my employer, they \nneed the machines burned in and so I'm going to use them for the tests \nwhile burning them in. I can spend a little official time on this, \njustifying it as learning the proper config/tuneing settings for our \nproject, but not too much. and I'm deliberatly not useing my work e-mail \nand am not mentioning the company name, so please respect this and keep \nthe two seperate (some of you have dealt with us formally, others will \nover the next few months)\n\nthis means no remote access for people (but I am willing to run tests and \nsend configs around). in fact the machines will not have Internet access \nfor the duration of the tests.\n\nit also means I'm doing almost all the configuration work for this on my \nown time (nights and weekends). the machines will not be moved to \nproduction for a couple of months. this should mean that we can go back \nand forth with questions and answers (albeit somewhat slowly, with me \nchecking in every night) while whatever tests are done happen during the \nday. once we get past the system tuneing and start doing different tests \nit would probably be helpful if people can send me scripts to run that I \ncan just let loose.\n\nI don't have any money to pay for benchmark suites, so if things like the \nTPC benchmarks cost money to do I won't be able to do them\n\nto clarify the hardware\n\nI have 5 machines total to work with, this includes client machines to \nmake the queries (I may be able to get hold of 2-3 more, but they are \nsimilar configs)\n\nnone of these have dual-core processors on them, the CPU's are 246 or 252 \nOpterons (I'll have to double check which is in which machine, I think the \nlarge disk machine has 246's and the others 252's)\n\nI have access to a gig-E switch that's on a fairly idle network to use to \nconnect these machines\n\nthe large-disk machine has 3ware 9500 series 8-port SATA controllers in \nthem with battery backup. in our official dealings with Greenplum we \nattempted to do a set of benchmarks on that machine, but had horrible \ntiming with me being too busy when they worked with us on this and we \nnever did figure out the best setting to use for this machine.\n\nPart of the reason I posted this to /. rather then just contacting you and \nMySQL folks directly is that I would like to see a reasonable set of \nbenchmarks agreed to and have people with different hardware then I have \nrun the same sets of tests. I know the tuneing will be different for \ndifferent hardware, but if we can have a bunch of people run similar tests \nwe should learn a lot.\n\nDavid Lang\n",
"msg_date": "Sat, 26 Nov 2005 06:28:50 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Open request for benchmarking input"
}
] |
[
{
"msg_contents": ">These boxes don't look like being designed for a DB server. The first are \n>very CPU bound, and the third may be a good choice for very large amounts \n>of streamed data, but not optimal for TP random access.\n\nI don't know what you mean when you say that the first ones are CPU bound, \nthey have far more CPU then they do disk I/O\n\nhowever I will agree that they were not designed to be DB servers, they \nweren't. they happen to be the machines that I have available.\n\nthey only have a pair of disks each, which would not be reasonable for \nmost production DB uses, and they have far more CPU then is normally \nreccomended. So I'll have to run raid 0 instead of 0+1 (or not use raid) \nwhich would be unacceptable in a production environment, but can still \ngive some useful info.\n\nthe 5th box _was_ purchased to be a DB server, but one to store and \nanalyse large amounts of log data, so large amounts of data storage were \nmore important then raw DB performance (although we did max out the RAM at \n16G to try and make up for it). it was a deliberate price/performance \ntradeoff. this machine ran ~$20k, but a similar capacity with SCSI drives \nwould have been FAR more expensive (IIRC a multiple of 4x or more more \nexpensive).\n\n>Hopefully, when publicly visible benchmarks are performed, machines are \n>used that comply with common engineering knowledge, ignoring those guys \n>who still believe that sequential performance is the most important issue \n>on disk subsystems for DBMS.\n\nare you saying that I shouldn't do any benchmarks becouse the machines \naren't what you would consider good enough?\n\nif so I disagree with you and think that benchmarks should be done on even \nworse machines, but should also be done on better machines. (are you \nvolunteering to provide time on better machines for benchmarks?)\n\nnot everyone will buy a lot of high-end hardware before they start useing \na database. in fact most companies will start with a database on lower end \nhardware and then as their requirements grow they will move to better \nhardware. I'm willing to bet that what I have available is better then the \nstarting point for most places.\n\nPostgres needs to work on the low end stuff as well as the high end stuff \nor people will write their app to work with things that DO run on low end \nhardware and they spend much more money then is needed to scale the \nhardware up rather then re-writing their app.\n\nPart of the reason that I made the post on /. to start this was the hope \nthat a reasonable set of benchmarks could be hammered out and then more \npeople then just me could run them to get a wider range of results.\n\nDavid Lang\n",
"msg_date": "Sat, 26 Nov 2005 08:00:33 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open request for benchmarking input"
},
{
"msg_contents": "On Sun, 27 Nov 2005, Andreas Pflug wrote:\n\n> David Lang wrote:\n>> \n>> Postgres needs to work on the low end stuff as well as the high end stuff \n>> or people will write their app to work with things that DO run on low end \n>> hardware and they spend much more money then is needed to scale the \n>> hardware up rather then re-writing their app.\n>\n> I agree that pgsql runs on low end stuff, but a dual Opteron with 2x15kSCSI \n> isn't low end, is it? The CPU/IO performance isn't balanced for the total \n> cost, you probably could get a single CPU/6x15kRPM machine for the same price \n> delivering better TP performance in most scenarios.\n>\n> Benchmarks should deliver results that are somewhat comparable. If performed \n> on machines that don't deliver a good CPU/IO power balance for the type of DB \n> load being tested, they're misleading and hardly usable for comparision \n> purposes, and even less for learning how to configure a decent server since \n> you might have to tweak some parameters in an unusual way.\n\na couple things to note,\n\nfirst, when running benchmarks there is a need for client machines to \nstress the database, these machines are what are available to be clients \nas well as servers.\n\nsecond, the smaller machines are actually about what I would spec out for \na high performance database that's reasonably small, a couple of the boxes \nhave 144G drives, if they are setup as raid1 then the boxes would be \nreasonable to use for a database up to 50G or larger (assuming you need \nspace on the DB server to dump the database, up to 100G or so if you \ndon't)\n\nDavid Lang\n",
"msg_date": "Sun, 27 Nov 2005 03:08:17 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Open request for benchmarking input"
},
{
"msg_contents": "David Lang wrote:\n>> These boxes don't look like being designed for a DB server. The first \n>> are very CPU bound, and the third may be a good choice for very large \n>> amounts of streamed data, but not optimal for TP random access.\n> \n> \n> I don't know what you mean when you say that the first ones are CPU \n> bound, they have far more CPU then they do disk I/O\n> \n> however I will agree that they were not designed to be DB servers, they \n> weren't. they happen to be the machines that I have available.\n\nThat was what I understood from the specs.\n> \n> they only have a pair of disks each, which would not be reasonable for \n> most production DB uses, and they have far more CPU then is normally \n> reccomended. So I'll have to run raid 0 instead of 0+1 (or not use raid) \n> which would be unacceptable in a production environment, but can still \n> give some useful info.\n >\n> the 5th box _was_ purchased to be a DB server, but one to store and \n> analyse large amounts of log data, so large amounts of data storage were \n> more important then raw DB performance (although we did max out the RAM \n> at 16G to try and make up for it). it was a deliberate price/performance \n> tradeoff. this machine ran ~$20k, but a similar capacity with SCSI \n> drives would have been FAR more expensive (IIRC a multiple of 4x or more \n> more expensive).\n\nThat was my understanding too. For this specific requirement, I'd \nprobably design the server the same way, and running OLAP benchmarks \nagainst it sounds very reasonable.\n\n> \n>> Hopefully, when publicly visible benchmarks are performed, machines \n>> are used that comply with common engineering knowledge, ignoring those \n>> guys who still believe that sequential performance is the most \n>> important issue on disk subsystems for DBMS.\n> \n> \n> are you saying that I shouldn't do any benchmarks becouse the machines \n> aren't what you would consider good enough?\n> \n> if so I disagree with you and think that benchmarks should be done on \n> even worse machines, but should also be done on better machines. (are \n> you volunteering to provide time on better machines for benchmarks?)\n> \n> not everyone will buy a lot of high-end hardware before they start \n> useing a database. in fact most companies will start with a database on \n> lower end hardware and then as their requirements grow they will move to \n> better hardware. I'm willing to bet that what I have available is better \n> then the starting point for most places.\n> \n> Postgres needs to work on the low end stuff as well as the high end \n> stuff or people will write their app to work with things that DO run on \n> low end hardware and they spend much more money then is needed to scale \n> the hardware up rather then re-writing their app.\n\nI agree that pgsql runs on low end stuff, but a dual Opteron with \n2x15kSCSI isn't low end, is it? The CPU/IO performance isn't balanced \nfor the total cost, you probably could get a single CPU/6x15kRPM machine \nfor the same price delivering better TP performance in most scenarios.\n\nBenchmarks should deliver results that are somewhat comparable. If \nperformed on machines that don't deliver a good CPU/IO power balance for \nthe type of DB load being tested, they're misleading and hardly usable \nfor comparision purposes, and even less for learning how to configure a \ndecent server since you might have to tweak some parameters in an \nunusual way.\n\nRegards,\nAndreas\n",
"msg_date": "Sun, 27 Nov 2005 13:21:40 +0100",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
}
] |
[
{
"msg_contents": "by the way, this is the discussion that promped me to start this project\nhttp://lwn.net/Articles/161323/\n\nDavid Lang\n",
"msg_date": "Sat, 26 Nov 2005 08:13:40 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Open request for benchmarking input "
}
] |
[
{
"msg_contents": "Did you folks see this article on Slashdot with a fellow requesting input on \nwhat sort of benchmarks to run to get a good Postgresql vs Mysql dataset? \nPerhaps this would be a good opportunity for us to get some good benchmarking \ndone. Here's the article link and top text:\n\nhttp://ask.slashdot.org/article.pl?sid=05/11/26/0317213\n\n David Lang asks: \"With the release of MySQL 5.0, PostgreSQL 8.1, and the flap \nover Oracle purchasing InnoDB, the age old question of performance is coming \nup again. I've got some boxes that were purchased for a data warehouse project \nthat isn't going to be installed for a month or two, and could probably \nsqueeze some time in to do some benchmarks on the machines. However, the \nquestion is: what should be done that's reasonably fair to both MySQL and \nPostgreSQL? We all know that careful selection of the benchmark can seriously \nskew the results, and I want to avoid that (in fact I would consider it close \nto ideal if the results came out that each database won in some tests). I \nwould also not like to spend time generating the benchmarks only to have the \nlosing side accuse me of being unfair. So, for both MySQL and PostgreSQL \nadvocates, what would you like to see in a series of benchmarks?\"\n\n \"The hardware I have available is as follows:\n\n * 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI\n * 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI\n * 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA\n\nI would prefer to use Debian Sarge as the base install of the systems (with \ncustom built kernels), but would compile the databases from source rather then \nusing binary packages.\n\nFor my own interests, I would like to at least cover the following bases: 32 \nbit vs 64 bit vs 64 bit kernel + 32 bit user-space; data warehouse type tests \n(data >> memory); and web prefs test (active data RAM)\n\nWhat specific benchmarks should be run, and what other things should be \ntested? Where should I go for assistance on tuning each database, evaluating \nthe benchmark results, and re-tuning them?\"\n\n---\nJeff Frost, Owner \n<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n",
"msg_date": "Sat, 26 Nov 2005 10:28:36 -0800 (PST)",
"msg_from": "Jeff Frost <[email protected]>",
"msg_from_op": true,
"msg_subject": "Open request for benchmarking input"
},
{
"msg_contents": "\n\"Jeff Frost\" <[email protected]> wrote\n>\n> Did you folks see this article on Slashdot with a fellow requesting input \n> on what sort of benchmarks to run to get a good Postgresql vs Mysql \n> dataset? Perhaps this would be a good opportunity for us to get some good \n> benchmarking done.\n> \"The hardware I have available is as follows:\n>\n> * 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI\n> * 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI\n> * 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA\n>\n\nI see this as a good chance to evaluate and boost PostgreSQL performance in \ngeneral.\n\nMy two concerns:\n(1) How long will David Lang spend on the benchmarking? We need *continous* \nfeedback after each tuning. This is important and Mark Wong has done great \njob on this.\n(2) The hardware configuration may not reflect all potentials of PostgreSQL. \nFor example, so far, PostgreSQL does not pay much attention in reducing I/O \ncost, so a stronger RAID definitely will benefit PostgreSQL performance.\n\n>\n> For my own interests, I would like to at least cover the following bases: \n> 32 bit vs 64 bit vs 64 bit kernel + 32 bit user-space; data warehouse type \n> tests (data >> memory); and web prefs test (active data RAM)\n>\n\nDon't forget TPCC (data > memory, with intensive updates). So the benchmarks \nin my mind include TPCC, TPCH and TPCW.\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Sat, 26 Nov 2005 13:57:47 -0500",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
},
{
"msg_contents": "Jeff, Qingqing,\n\nOn 11/26/05 10:57 AM, \"Qingqing Zhou\" <[email protected]> wrote:\n\n> \n> \"Jeff Frost\" <[email protected]> wrote\n>> \n>> Did you folks see this article on Slashdot with a fellow requesting input\n>> on what sort of benchmarks to run to get a good Postgresql vs Mysql\n>> dataset? Perhaps this would be a good opportunity for us to get some good\n>> benchmarking done.\n>> \"The hardware I have available is as follows:\n>> \n>> * 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI\n>> * 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI\n>> * 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA\n>> \n\nI suggest specifying a set of basic system / HW benchmarks to baseline the\nhardware before each benchmark is run. This has proven to be a major issue\nwith most performance tests. My pick for I/O is bonnie++.\n\nYour equipment allows you the opportunity to benchmark all 5 machines\nrunning together as a cluster - this is important to measure maturity of\nsolutions for high performance warehousing. Greenplum can provide you a\nlicense for Bizgres MPP for this purpose.\n\n> (2) The hardware configuration may not reflect all potentials of PostgreSQL.\n> For example, so far, PostgreSQL does not pay much attention in reducing I/O\n> cost, so a stronger RAID definitely will benefit PostgreSQL performance.\n\nThe 16x SATA drives should be great, provided you have a high performance\nRAID adapter configured properly. You should be able to get 800MB/s of\nsequential scan performance by using a card like the 3Ware 9550SX. I've\nalso heard that the Areca cards are good (how good?). Configuration of the\nI/O must be validated though - I've seen as low as 25MB/s from a\nmisconfigured system.\n\n>> For my own interests, I would like to at least cover the following bases:\n>> 32 bit vs 64 bit vs 64 bit kernel + 32 bit user-space; data warehouse type\n>> tests (data >> memory); and web prefs test (active data RAM)\n>> \n> \n> Don't forget TPCC (data > memory, with intensive updates). So the benchmarks\n> in my mind include TPCC, TPCH and TPCW.\n\nI agree with Qingqing, though I think the OSTG DBT-3 (very similar to TPC-H)\nis sufficient for data warehousing.\n\nThis is a fairly ambitious project - one problem I see is that MySQL may not\nrun all of these benchmarks, particularly the DBT-3. Also - would the rules\nallow for mixing / matching pluggable features of the DBMS? Innodb versus\nMyISAM?\n\n- Luke \n\n\n",
"msg_date": "Sat, 26 Nov 2005 12:15:28 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
},
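Alongside an OS-level baseline like bonnie++, a crude in-database check of sequential scan rate needs nothing but psql. A sketch, assuming an 8.1 server (for pg_relation_size) and some existing large table, here called lineitem:

    \timing
    SELECT count(*) FROM lineitem;          -- forces a full sequential scan
    SELECT pg_relation_size('lineitem');    -- bytes on disk, excluding indexes and TOAST

Dividing the relation size by the elapsed time gives MB/s through the whole Postgres stack; running it twice separates the cold-cache and cached cases, and comparing against the raw bonnie++ figure is a quick way to spot a misconfigured array.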
{
"msg_contents": "At 03:15 PM 11/26/2005, Luke Lonergan wrote:\n\n>I suggest specifying a set of basic system / HW benchmarks to baseline the\n>hardware before each benchmark is run. This has proven to be a major issue\n>with most performance tests. My pick for I/O is bonnie++.\n>\n>Your equipment allows you the opportunity to benchmark all 5 machines\n>running together as a cluster - this is important to measure maturity of\n>solutions for high performance warehousing. Greenplum can provide you a\n>license for Bizgres MPP for this purpose.\n...and detailed config / tuning specs as well for it or everyone is \nprobably wasting their time. For instance, it seems fairly clear \nthat the default 8KB table size and default read ahead size are both \npessimal, at least for non OLTP-like apps. In addition, there's been \na reasonable amount of evidence that xfs should be the file system of \nchoice for pg.\n\nThings like optimal RAID strip size, how to allocate tables to \nvarious IO HW, and what levels of RAID to use for each RAID set also \nhave to be defined.\n\n\n>The 16x SATA drives should be great, provided you have a high performance\n>RAID adapter configured properly. You should be able to get 800MB/s of\n>sequential scan performance by using a card like the 3Ware 9550SX. I've\n>also heard that the Areca cards are good (how good?). Configuration of the\n>I/O must be validated though - I've seen as low as 25MB/s from a\n>misconfigured system.\nThe Areca cards, particularly with 1-2GB of buffer cache, are the \ncurrent commodity RAID controller performance leader. Better \nperformance can be gotten out of HW from vendors like Xyratex, but it \nwill cost much more.\n\n\nRon\n\n\n",
"msg_date": "Sat, 26 Nov 2005 15:31:33 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
},
{
"msg_contents": "Qingqing Zhou wrote:\n> \"Jeff Frost\" <[email protected]> wrote\n\n>> \"The hardware I have available is as follows:\n>>\n>> * 2x dual Opteron 8G ram, 2x144G 15Krpm SCSI\n>> * 2x dual Opteron 8G ram, 2x72G 15Krpm SCSI\n>> * 1x dual Opteron 16G ram, 2x36G 15Krpm SCSI 16x400G 7200rpm SATA\n>>\n> (2) The hardware configuration may not reflect all potentials of PostgreSQL. \n\nThese boxes don't look like being designed for a DB server. The first \nare very CPU bound, and the third may be a good choice for very large \namounts of streamed data, but not optimal for TP random access.\n\nHopefully, when publicly visible benchmarks are performed, machines are \nused that comply with common engineering knowledge, ignoring those guys \nwho still believe that sequential performance is the most important \nissue on disk subsystems for DBMS.\n\nRegards,\nAndreas\n",
"msg_date": "Sun, 27 Nov 2005 00:12:01 +0000",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
},
{
"msg_contents": "On Sat, Nov 26, 2005 at 01:57:47PM -0500, Qingqing Zhou wrote:\n> Don't forget TPCC (data > memory, with intensive updates). So the benchmarks \n> in my mind include TPCC, TPCH and TPCW.\n\nI'm lost in all those acronyms, but am I right in assuming that none of these\nactually push the planner very hard? We keep on pushing that \"PostgreSQL is a\nlot better than MySQL when it comes to many joins and complex queries\"\n(mostly because the planner is a lot more mature -- does MySQL even keep\nstatistics yet?), but I'm not sure if there are any widely used benchmarks\navailable that actaully excercise that.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Sun, 27 Nov 2005 12:51:55 +0100",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input"
}
] |
[
{
"msg_contents": ">Another thought - I priced out a maxed out machine with 16 cores and\n>128GB of RAM and 1.5TB of usable disk - $71,000.\n>\n>You could instead buy 8 machines that total 16 cores, 128GB RAM and 28TB\n>of disk for $48,000, and it would be 16 times faster in scan rate, which\n>is the most important factor for large databases. The size would be 16\n>rack units instead of 5, and you'd have to add a GigE switch for $1500.\n>\n>Scan rate for above SMP: 200MB/s\n>\n>Scan rate for above cluster: 3,200Mb/s\n>\n>You could even go dual core and double the memory on the cluster and\n>you'd about match the price of the \"god box\".\n>\n>- Luke\n\nLuke, I assume you are talking about useing the Greenplum MPP for this \n(otherwise I don't know how you are combining all the different systems).\n\nIf you are, then you are overlooking one very significant factor, the cost \nof the MPP software, at $10/cpu the cluster has an extra $160K in software \ncosts, which is double the hardware costs.\n\nif money is no object then go for it, but if it is then you comparison \nwould be (ignoring software maintinance costs) the 16 core 128G ram system \nvs ~3xsmall systems totaling 6 cores and 48G ram.\n\nyes if scan speed is the bottleneck you still win with the small systems, \nbut for most other uses the large system would win easily. and in any case \nit's not the open and shut case that you keep presenting it as.\n\nDavid Lang\n",
"msg_date": "Sat, 26 Nov 2005 10:51:18 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "On Sun, 27 Nov 2005, Luke Lonergan wrote:\n\n> For data warehousing its pretty well open and shut. To use all cpus and \n> io channels on each query you will need mpp.\n>\n> Has anyone done the math.on the original post? 5TB takes how long to \n> scan once? If you want to wait less than a couple of days just for a \n> seq scan, you'd better be in the multi-gb per second range.\n\nif you truely need to scan the entire database then you are right, however \nindexes should be able to cut the amount you need to scan drasticly.\n\nDavid Lang\n",
"msg_date": "Sat, 26 Nov 2005 11:34:14 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "For data warehousing its pretty well open and shut. To use all cpus and io channels on each query you will need mpp.\r\n\r\nHas anyone done the math.on the original post? 5TB takes how long to scan once? If you want to wait less than a couple of days just for a seq scan, you'd better be in the multi-gb per second range.\r\n\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Sat Nov 26 13:51:18 2005\r\nSubject: Re: [PERFORM] Hardware/OS recommendations for large databases (\r\n\r\n>Another thought - I priced out a maxed out machine with 16 cores and\r\n>128GB of RAM and 1.5TB of usable disk - $71,000.\r\n>\r\n>You could instead buy 8 machines that total 16 cores, 128GB RAM and 28TB\r\n>of disk for $48,000, and it would be 16 times faster in scan rate, which\r\n>is the most important factor for large databases. The size would be 16\r\n>rack units instead of 5, and you'd have to add a GigE switch for $1500.\r\n>\r\n>Scan rate for above SMP: 200MB/s\r\n>\r\n>Scan rate for above cluster: 3,200Mb/s\r\n>\r\n>You could even go dual core and double the memory on the cluster and\r\n>you'd about match the price of the \"god box\".\r\n>\r\n>- Luke\r\n\r\nLuke, I assume you are talking about useing the Greenplum MPP for this \r\n(otherwise I don't know how you are combining all the different systems).\r\n\r\nIf you are, then you are overlooking one very significant factor, the cost \r\nof the MPP software, at $10/cpu the cluster has an extra $160K in software \r\ncosts, which is double the hardware costs.\r\n\r\nif money is no object then go for it, but if it is then you comparison \r\nwould be (ignoring software maintinance costs) the 16 core 128G ram system \r\nvs ~3xsmall systems totaling 6 cores and 48G ram.\r\n\r\nyes if scan speed is the bottleneck you still win with the small systems, \r\nbut for most other uses the large system would win easily. and in any case \r\nit's not the open and shut case that you keep presenting it as.\r\n\r\nDavid Lang\r\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 6: explain analyze is your friend\r\n\r\n",
"msg_date": "Sun, 27 Nov 2005 01:18:54 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Sun, 27 Nov 2005, Luke Lonergan wrote:\n\n> Has anyone done the math.on the original post? 5TB takes how long to\n> scan once? If you want to wait less than a couple of days just for a\n> seq scan, you'd better be in the multi-gb per second range.\n\nErr, I get about 31 megabytes/second to do 5TB in 170,000 seconds. I think\nperhaps you were exaggerating a bit or adding additional overhead not\nobvious from the above. ;)\n\n---\n\nAt 1 gigabyte per second, 1 terrabyte should take about 1000 seconds\n(between 16 and 17 minutes). The impressive 3.2 gigabytes per second\nlisted before (if it actually scans consistently at that rate), puts it at\na little over 5 minutes I believe for 1, so about 26 for 5 terrabytes.\nThe 200 megabyte per second number puts it about 7 hours for 5\nterrabytes AFAICS.\n",
"msg_date": "Sun, 27 Nov 2005 07:48:01 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
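The arithmetic in this sub-thread is easy to double-check from psql; a throwaway query along these lines reproduces the figures being argued about (rates in MB/s, 5TB taken as 5 * 1024 * 1024 MB):

    SELECT rate_mb_s,
           round((5 * 1024 * 1024.0 / rate_mb_s / 3600.0)::numeric, 1) AS hours_for_5tb
    FROM (SELECT 31 AS rate_mb_s UNION ALL
          SELECT 200 UNION ALL
          SELECT 800 UNION ALL
          SELECT 1600 UNION ALL
          SELECT 3200) AS rates;

which gives roughly 47 hours at 31 MB/s, about 7.3 hours at 200 MB/s, and under half an hour at 3.2 GB/s; in other words, "days" only applies if the scan rate is stuck in the tens of MB/s.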
{
"msg_contents": "At 01:18 AM 11/27/2005, Luke Lonergan wrote:\n>For data warehousing its pretty well open and shut. To use all cpus \n>and io channels on each query you will need mpp.\n>\n>Has anyone done the math.on the original post? 5TB takes how long \n>to scan once? If you want to wait less than a couple of days just \n>for a seq scan, you'd better be in the multi-gb per second range.\nMore than a bit of hyperbole there Luke.\n\nSome common RW scenarios:\nDual 1GbE NICs => 200MBps => 5TB in 5x10^12/2x10^8= 25000secs= \n~6hrs57mins. Network stuff like re-transmits of dropped packets can \nincrease this, so network SLA's are critical.\n\nDual 10GbE NICs => ~1.6GBps (10GbE NICs can't yet do over ~800MBps \napiece) => 5x10^12/1.6x10^9= 3125secs= ~52mins. SLA's are even \nmoire critical here.\n\nIf you are pushing 5TB around on a regular basis, you are not wasting \nyour time & money on commodity <= 300MBps RAID HW. You'll be using \n800MBps and 1600MBps high end stuff, which means you'll need ~1-2hrs \nto sequentially scan 5TB on physical media.\n\nClever use of RAM can get a 5TB sequential scan down to ~17mins.\n\nYes, it's a lot of data. But sequential scan times should be in the \nmins or low single digit hours, not days. Particularly if you use \nRAM to maximum advantage.\n\nRon\n\n\n",
"msg_date": "Sun, 27 Nov 2005 12:10:44 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "Ron,\n\nOn 11/27/05 9:10 AM, \"Ron\" <[email protected]> wrote:\n\n> Clever use of RAM can get a 5TB sequential scan down to ~17mins.\n> \n> Yes, it's a lot of data. But sequential scan times should be in the\n> mins or low single digit hours, not days. Particularly if you use\n> RAM to maximum advantage.\n\nUnfortunately, RAM doesn't help with scanning from disk at all.\n\nWRT using network interfaces to help - it's interesting, but I think what\nyou'd want to connect to is other machines with storage on them.\n\n- Luke \n\n\n",
"msg_date": "Sun, 27 Nov 2005 11:11:53 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "Stephan,\n\nOn 11/27/05 7:48 AM, \"Stephan Szabo\" <[email protected]> wrote:\n\n> On Sun, 27 Nov 2005, Luke Lonergan wrote:\n> \n>> Has anyone done the math.on the original post? 5TB takes how long to\n>> scan once? If you want to wait less than a couple of days just for a\n>> seq scan, you'd better be in the multi-gb per second range.\n> \n> Err, I get about 31 megabytes/second to do 5TB in 170,000 seconds. I think\n> perhaps you were exaggerating a bit or adding additional overhead not\n> obvious from the above. ;)\n\nThanks - the calculator on my blackberry was broken ;-)\n \n> At 1 gigabyte per second, 1 terrabyte should take about 1000 seconds\n> (between 16 and 17 minutes). The impressive 3.2 gigabytes per second\n> listed before (if it actually scans consistently at that rate), puts it at\n> a little over 5 minutes I believe for 1, so about 26 for 5 terrabytes.\n> The 200 megabyte per second number puts it about 7 hours for 5\n> terrabytes AFAICS.\n\n7 hours, days, same thing ;-)\n\nOn the reality of sustained scan rates like that:\n\nWe're getting 2.5GB/s sustained on a 2 year old machine with 16 hosts and 96\ndisks. We run them in RAID0, which is only OK because MPP has built-in host\nto host mirroring for fault management.\n\nWe just purchased a 4-way cluster with 8 drives each using the 3Ware 9550SX.\nOur thought was to try the simplest approach first, which is a single RAID5,\nwhich gets us 7 drives worth of capacity and performance. As I posted\nearlier, we get about 400MB/s seq scan rate on the RAID, but the Postgres\n8.0 current scan rate limit is 64% of 400MB/s or 256MB/s per host. The 8.1\nmods (thanks Qingqing and Tom!) may increase that significantly toward the\n400 max - we've already merged the 8.1 codebase into MPP so we'll also\nfeature the same enhancements.\n\nOur next approach is to run these machines in a split RAID0 configuration,\nor RAID0 on 4 and 4 drives. We then run an MPP \"segment instance\" bound to\neach CPU and I/O channel. At that point, we'll have all 8 drives of\nperformance and capacity per host and we should get 333MB/s with current MPP\nand perhaps over 400MB/s with MPP/8.1. That would get us up to the 3.2GB/s\nfor 8 hosts.\n\nEven better, all operators are executed on all CPUs for each query, so\nsorting, hashing, agg, etc etc are run on all CPUs in the cluster.\n\n- Luke\n\n\n",
"msg_date": "Sun, 27 Nov 2005 11:31:24 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "At 02:11 PM 11/27/2005, Luke Lonergan wrote:\n>Ron,\n>\n>On 11/27/05 9:10 AM, \"Ron\" <[email protected]> wrote:\n>\n> > Clever use of RAM can get a 5TB sequential scan down to ~17mins.\n> >\n> > Yes, it's a lot of data. But sequential scan times should be in the\n> > mins or low single digit hours, not days. Particularly if you use\n> > RAM to maximum advantage.\n>\n>Unfortunately, RAM doesn't help with scanning from disk at all.\nI agree with you if you are scanning a table \"cold\", having never \nloaded it before, or if the system is not (or can't be) set up \nproperly with appropriate buffers.\n\nHowever, outside of those 2 cases there are often tricks you can use \nwith enough RAM (and no, you don't need RAM equal to the size of the \nitem(s) being scanned) to substantially speed things up. Best case, \nyou can get performance approximately equal to that of a RAM resident scan.\n\n\n>WRT using network interfaces to help - it's interesting, but I think what\n>you'd want to connect to is other machines with storage on them.\nMaybe. Or maybe you want to concentrate your storage in a farm that \nis connected by network or Fiber Channel to the rest of your \nHW. That's what a NAS or SAN is after all.\n\n\"The rest of your HW\" nowadays is often a cluster of RAM rich \nhosts. Assuming 64GB per host, 5TB can be split across ~79 hosts if \nyou want to make it all RAM resident.\n\nMost don't have that kind of budget, but thankfully it is not usually \nnecessary to make all of the data RAM resident in order to obtain if \nnot all of the performance benefits you'd get if all of the data was.\n\nRon\n\n\n",
"msg_date": "Sun, 27 Nov 2005 15:25:36 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases"
},
{
"msg_contents": "\nOn Sun, 27 Nov 2005, Luke Lonergan wrote:\n\n> Stephan,\n>\n> On 11/27/05 7:48 AM, \"Stephan Szabo\" <[email protected]> wrote:\n>\n> > On Sun, 27 Nov 2005, Luke Lonergan wrote:\n> >\n> >> Has anyone done the math.on the original post? 5TB takes how long to\n> >> scan once? If you want to wait less than a couple of days just for a\n> >> seq scan, you'd better be in the multi-gb per second range.\n> >\n> > Err, I get about 31 megabytes/second to do 5TB in 170,000 seconds. I think\n> > perhaps you were exaggerating a bit or adding additional overhead not\n> > obvious from the above. ;)\n>\n> Thanks - the calculator on my blackberry was broken ;-)\n\nWell, it was suspiciously close to a factor of 60 off, which when working\nin time could have just been a simple math error.\n\n> > At 1 gigabyte per second, 1 terrabyte should take about 1000 seconds\n> > (between 16 and 17 minutes). The impressive 3.2 gigabytes per second\n> > listed before (if it actually scans consistently at that rate), puts it at\n> > a little over 5 minutes I believe for 1, so about 26 for 5 terrabytes.\n> > The 200 megabyte per second number puts it about 7 hours for 5\n> > terrabytes AFAICS.\n>\n> 7 hours, days, same thing ;-)\n>\n> On the reality of sustained scan rates like that:\n\nWell, the reason I asked was that IIRC the 3.2 used earlier in the\ndiscussion was exactly multiplying scanners and base rate (ie, no\nadditional overhead). I couldn't tell if that was back of the envelope or\nif the overhead was in fact negligible. (Or I could be misremembering the\nconversation). I don't doubt that it's possible to get the rate, just\nwasn't sure if the rate was actually applicable to the ongoing discussion\nof the comparison.\n",
"msg_date": "Sun, 27 Nov 2005 16:07:32 -0800 (PST)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "Have you factored in how long it takes to build an index on 5TB? And the index size?\r\n\r\nReally, it's a whole different world at multi-TB, everything has to scale.\r\n\r\nBtw we don't just scan in parallel, we do all in parallel, check the sort number on this thread. Mpp is for the god box too.\r\n\r\nAnd your price is wrong - but if you want free then you'll have to find another way to get your work done.\r\n\r\n- Luke\r\n- Luke\r\n--------------------------\r\nSent from my BlackBerry Wireless Device\r\n\r\n\r\n-----Original Message-----\r\nFrom: David Lang <[email protected]>\r\nTo: Luke Lonergan <[email protected]>\r\nCC: [email protected] <[email protected]>\r\nSent: Sat Nov 26 14:34:14 2005\r\nSubject: Re: [PERFORM] Hardware/OS recommendations for large databases (\r\n\r\nOn Sun, 27 Nov 2005, Luke Lonergan wrote:\r\n\r\n> For data warehousing its pretty well open and shut. To use all cpus and \r\n> io channels on each query you will need mpp.\r\n>\r\n> Has anyone done the math.on the original post? 5TB takes how long to \r\n> scan once? If you want to wait less than a couple of days just for a \r\n> seq scan, you'd better be in the multi-gb per second range.\r\n\r\nif you truely need to scan the entire database then you are right, however \r\nindexes should be able to cut the amount you need to scan drasticly.\r\n\r\nDavid Lang\r\n\r\n",
"msg_date": "Sun, 27 Nov 2005 03:02:48 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "On Mon, 28 Nov 2005, Brendan Duddridge wrote:\n\n> Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is it \n> possible to upgrade from Postgres 8.1 to Bizgres?\n\nMPP is the Greenplum propriatary extention to postgres that spreads the \ndata over multiple machines, (raid, but with entire machines not just \ndrives, complete with data replication within the cluster to survive a \nmachine failing)\n\nfor some types of queries they can definantly scale lineraly with the \nnumber of machines (other queries are far more difficult and the overhead \nof coordinating the machines shows more. this is one of the key things \nthat the new version they recently announced the beta for is supposed to \nbe drasticly improving)\n\nearly in the year when I first looked at them their prices were \nexorbadent, but Luke says I'm wildly mistake on their current prices so \ncall them for details\n\nit uses the same interfaces as postgres so it should be a drop in \nreplacement to replace a single server with a cluster.\n\nit's facinating technology to read about.\n\nI seem to remember reading that one of the other postgres companies is \nalso producing a clustered version of postgres, but I don't remember who \nand know nothing about them.\n\nDavid Lang\n",
"msg_date": "Sun, 27 Nov 2005 19:09:04 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "On Mon, 28 Nov 2005, Brendan Duddridge wrote:\n\n> Hi David,\n>\n> Thanks for your reply. So how is that different than something like Slony2 or \n> pgcluster with multi-master replication? Is it similar technology? We're \n> currently looking for a good clustering solution that will work on our Apple \n> Xserves and Xserve RAIDs.\n\nMPP doesn't just split up the data, it splits up the processing as well, \nso if you have a 5 machine cluster, each machine holds 1/5 of your data \n(plus a backup for one of the other machines) and when you do a query MPP \nslices and dices the query to send a subset of the query to each machine, \nit then gets the responses from all the machines and combines them\n\nif you ahve to do a full table scan for example, wach machine would only \nhave to go through 20% of the data\n\na Slony of pgcluster setup has each machine with a full copy of all the \ndata, only one machine can work on a given query at a time, and if you \nhave to do a full table scan one machine needs to read 100% of the data.\n\nin many ways this is the holy grail of databases. almost all other areas \nof computing can now be scaled by throwing more machines at the problem in \na cluster, with each machine just working on it's piece of the problem, \nbut databases have had serious trouble doing the same and so have been \nruled by the 'big monster machine'. Oracle has been selling Oracle Rac for \na few years, and reports from people who have used it range drasticly \n(from it works great, to it's a total disaster), in part depending on the \ntypes of queries that have been made.\n\nGreenplum thinks that they have licked the problems for the more general \ncase (and that commodity networks are now fast enough to match disk speeds \nin processing the data) if they are right then when they hit full release \nwith the new version they should be cracking a lot of the \nprice/performance records on the big database benchmarks (TPC and \nsimilar), and if their pricing is reasonable, they may be breaking them by \nan order of magnatude or more (it's not unusual for the top machines to \nspend more then $1,000,000 on just their disk arrays for those \nsystems, MPP could conceivably put togeather a cluster of $5K machines \nthat runs rings around them (and probably will for at least some of the \nsubtests, the big question is if they can sweep the board and take the top \nspots outright)\n\nthey have more details (and marketing stuff) on their site at \nhttp://www.greenplum.com/prod_deepgreen_cluster.html\n\ndon't get me wrong, I am very impressed with their stuff, but (haveing \nranted a little here on the list about them) I think MPP and it's \nperformace is a bit off topic for the postgres performance list (at least \nuntil the postgres project itself starts implementing similar features :-)\n\nDavid Lang\n\n> Thanks,\n>\n> ____________________________________________________________________\n> Brendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n>\n> ClickSpace Interactive Inc.\n> Suite L100, 239 - 10th Ave. SE\n> Calgary, AB T2G 0V9\n>\n> http://www.clickspace.com\n>\n> On Nov 27, 2005, at 8:09 PM, David Lang wrote:\n>\n>> On Mon, 28 Nov 2005, Brendan Duddridge wrote:\n>> \n>>> Forgive my ignorance, but what is MPP? Is that part of Bizgres? 
Is it \n>>> possible to upgrade from Postgres 8.1 to Bizgres?\n>> \n>> MPP is the Greenplum propriatary extention to postgres that spreads the \n>> data over multiple machines, (raid, but with entire machines not just \n>> drives, complete with data replication within the cluster to survive a \n>> machine failing)\n>> \n>> for some types of queries they can definantly scale lineraly with the \n>> number of machines (other queries are far more difficult and the overhead \n>> of coordinating the machines shows more. this is one of the key things that \n>> the new version they recently announced the beta for is supposed to be \n>> drasticly improving)\n>> \n>> early in the year when I first looked at them their prices were exorbadent, \n>> but Luke says I'm wildly mistake on their current prices so call them for \n>> details\n>> \n>> it uses the same interfaces as postgres so it should be a drop in \n>> replacement to replace a single server with a cluster.\n>> \n>> it's facinating technology to read about.\n>> \n>> I seem to remember reading that one of the other postgres companies is also \n>> producing a clustered version of postgres, but I don't remember who and \n>> know nothing about them.\n>> \n>> David Lang\n>> \n>\n>\n",
"msg_date": "Sun, 27 Nov 2005 20:01:05 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "> \n> It certainly makes quite a difference as I measure it:\n> \n> doing select(1) from a 181000 page table (completely uncached) on my\nPIII:\n> \n> 8.0 : 32 s\n> 8.1 : 25 s\n> \n> Note that the 'fastcount()' function takes 21 s in both cases - so all\n> the improvement seems to be from the count overhead reduction.\n\nAre you running windows? There is a big performance improvement in\ncount(*) on pg 8.0->8.1 on win32 that is not relevant to this debate...\n\nMerlin\n",
"msg_date": "Mon, 28 Nov 2005 13:10:07 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Merlin Moncure wrote:\n>>It certainly makes quite a difference as I measure it:\n>>\n>>doing select(1) from a 181000 page table (completely uncached) on my\n> \n> PIII:\n> \n>>8.0 : 32 s\n>>8.1 : 25 s\n>>\n>>Note that the 'fastcount()' function takes 21 s in both cases - so all\n>>the improvement seems to be from the count overhead reduction.\n> \n> \n> Are you running windows? There is a big performance improvement in\n> count(*) on pg 8.0->8.1 on win32 that is not relevant to this debate...\n> \n\nNo - FreeBSD 6.0 on a dual PIII 1 Ghz. The slow cpu means that the 8.1 \nimprovements are very noticeable!\n\nA point of interest - applying Niels palloc - avoiding changes to \nNodeAgg.c and int8.c in 8.0 changes those results to:\n\n8.0 + palloc avoiding patch : 27 s\n\n(I am guessing the remaining 2 s could be shaved off if I backported \n8.1's virtual tuples - however that looked like a lot of work)\n\nCheers\n\nMark\n",
"msg_date": "Tue, 29 Nov 2005 10:45:46 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
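For anyone wanting to repeat this comparison on their own hardware, a quick way to build a throwaway table and time the seq scan + aggregate is sketched below; the table name and row count here are made up, so adjust the row count to make the table comfortably larger or smaller than RAM as desired.

    CREATE TABLE counttest AS
        SELECT g AS id, repeat('x', 100) AS filler
        FROM generate_series(1, 5000000) AS g;

    \timing
    SELECT count(*) FROM counttest;   -- run once cold, then again to see the cached case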
{
"msg_contents": "Mark,\n\nOn 11/28/05 1:45 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n>>> 8.0 : 32 s\n>>> 8.1 : 25 s\n\nA 22% reduction.\n\nselect count(1) on 12,900MB = 1617125 pages fully cached:\n\nMPP based on 8.0 : 6.06s\nMPP based on 8.1 : 4.45s\n\nA 26% reduction.\n\nI'll take it!\n\nI am looking to back-port Tom's pre-8.2 changes and test again, maybe\ntonight.\n\n- Luke\n\n\n",
"msg_date": "Mon, 28 Nov 2005 14:05:15 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is it \npossible to upgrade from Postgres 8.1 to Bizgres?\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Nov 28, 2005, at 3:05 PM, Luke Lonergan wrote:\n\n> Mark,\n>\n> On 11/28/05 1:45 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n>\n>>>> 8.0 : 32 s\n>>>> 8.1 : 25 s\n>\n> A 22% reduction.\n>\n> select count(1) on 12,900MB = 1617125 pages fully cached:\n>\n> MPP based on 8.0 : 6.06s\n> MPP based on 8.1 : 4.45s\n>\n> A 26% reduction.\n>\n> I'll take it!\n>\n> I am looking to back-port Tom's pre-8.2 changes and test again, maybe\n> tonight.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>",
"msg_date": "Mon, 28 Nov 2005 16:19:34 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Hi David,\n\nThanks for your reply. So how is that different than something like \nSlony2 or pgcluster with multi-master replication? Is it similar \ntechnology? We're currently looking for a good clustering solution \nthat will work on our Apple Xserves and Xserve RAIDs.\n\nThanks,\n\n____________________________________________________________________\nBrendan Duddridge | CTO | 403-277-5591 x24 | [email protected]\n\nClickSpace Interactive Inc.\nSuite L100, 239 - 10th Ave. SE\nCalgary, AB T2G 0V9\n\nhttp://www.clickspace.com\n\nOn Nov 27, 2005, at 8:09 PM, David Lang wrote:\n\n> On Mon, 28 Nov 2005, Brendan Duddridge wrote:\n>\n>> Forgive my ignorance, but what is MPP? Is that part of Bizgres? Is \n>> it possible to upgrade from Postgres 8.1 to Bizgres?\n>\n> MPP is the Greenplum propriatary extention to postgres that spreads \n> the data over multiple machines, (raid, but with entire machines \n> not just drives, complete with data replication within the cluster \n> to survive a machine failing)\n>\n> for some types of queries they can definantly scale lineraly with \n> the number of machines (other queries are far more difficult and \n> the overhead of coordinating the machines shows more. this is one \n> of the key things that the new version they recently announced the \n> beta for is supposed to be drasticly improving)\n>\n> early in the year when I first looked at them their prices were \n> exorbadent, but Luke says I'm wildly mistake on their current \n> prices so call them for details\n>\n> it uses the same interfaces as postgres so it should be a drop in \n> replacement to replace a single server with a cluster.\n>\n> it's facinating technology to read about.\n>\n> I seem to remember reading that one of the other postgres companies \n> is also producing a clustered version of postgres, but I don't \n> remember who and know nothing about them.\n>\n> David Lang\n>",
"msg_date": "Mon, 28 Nov 2005 20:19:41 -0700",
"msg_from": "Brendan Duddridge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
},
{
"msg_contents": "Brendan Duddridge wrote:\n\n> Thanks for your reply. So how is that different than something like \n> Slony2 or pgcluster with multi-master replication? Is it similar \n> technology? We're currently looking for a good clustering solution \n> that will work on our Apple Xserves and Xserve RAIDs.\n\nI think you need to be more specific about what you're trying to do.\n'clustering' encompasses so many things that it means almost nothing by \nitself.\n\nslony provides facilities for replicating data. Its primary purpose is\nto improve reliability. MPP distributes both data and queries. Its\nprimary purpose is to improve performance for a subset of all query types.\n\n\n",
"msg_date": "Mon, 28 Nov 2005 20:30:53 -0700",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
[
{
"msg_contents": "> I have been reading all this technical talk about costs and such that\n> I don't (_yet_) understand.\n> \n> Now I'm scared... what's the fastest way to do an equivalent of\n> count(*) on a table to know how many items it has?\n\nMake sure to analyze the database frequently and check pg_class for\nreltuples field. This gives 0 time approximations of # row in table at\nthe time of the last analyze.\n\nMany other approaches...check archives. Also your requirements are\nprobably not as high as you think they are ;)\n\nMerlin\n\n",
"msg_date": "Mon, 28 Nov 2005 08:31:06 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Newbie question: ultra fast count(*)"
}
] |
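Spelled out, the pg_class suggestion is just the following (the table name is a placeholder, and the estimate is only as fresh as the last ANALYZE or VACUUM):

    ANALYZE your_table;
    SELECT reltuples::bigint AS estimated_rows
    FROM pg_class
    WHERE relname = 'your_table'
      AND relkind = 'r';

For an exact figure that is still cheap to read, the trigger-maintained counter discussed in the earlier count(*) thread is the usual alternative.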
[
{
"msg_contents": "The MPP test I ran was with the release version 2.0 of MPP which is based on\nPostgres 8.0, the upcoming 2.1 release is based on 8.1, and 8.1 is far\nfaster at seq scan + agg. 12,937MB were counted in 4.5 seconds, or 2890MB/s\nfrom I/O cache. That's 722MB/s per host, and 360MB/s per Postgres instance,\nup from 267 previously with 8.0.3.\n\nI'm going to apply Tom's pre-8.2 seq scan locking optimization and see how\nmuch better we can get!\n\n- Luke\n\n ==========================================================\n Bizgres MPP CVS tip (2.1 pre), 8 data segments, 1 per CPU\n ==========================================================\n\nllonergan=# \\timing\nTiming is on.\nllonergan=# explain select count(1) from lineitem;\n QUERY PLAN\n--------------------------------------------------------------------------\n Aggregate (cost=0.01..0.01 rows=1 width=0)\n -> Gather Motion (cost=0.01..0.01 rows=1 width=0)\n -> Aggregate (cost=0.01..0.01 rows=1 width=0)\n -> Seq Scan on lineitem (cost=0.00..0.00 rows=1 width=0)\n(4 rows)\n\nTime: 1.464 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 4478.563 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 4550.917 ms\nllonergan=# select count(1) from lineitem;\n count \n----------\n 59986052\n(1 row)\n\nTime: 4482.261 ms\n\n\nOn 11/24/05 9:16 AM, \"Luke Lonergan\" <[email protected]> wrote:\n\n> The same 12.9GB distributed across 4 machines using Bizgres MPP fits into\n> I/O cache. The interesting result is that the query \"select count(1)\" is\n> limited in speed to 280 MB/s per CPU when run on the lineitem table. So\n> when I run it spread over 4 machines, one CPU per machine I get this:\n> \n> ======================================================\n> Bizgres MPP, 4 data segments, 1 per 2 CPUs\n> ======================================================\n> llonergan=# explain select count(1) from lineitem;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ----------\n> Aggregate (cost=582452.00..582452.00 rows=1 width=0)\n> -> Gather Motion (cost=582452.00..582452.00 rows=1 width=0)\n> -> Aggregate (cost=582452.00..582452.00 rows=1 width=0)\n> -> Seq Scan on lineitem (cost=0.00..544945.00 rows=15002800\n> width=0)\n> (4 rows)\n> \n> llonergan=# \\timing\n> Timing is on.\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 12191.435 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 11986.109 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 11448.941 ms\n> ======================================================\n> \n> That's 12,937 MB in 11.45 seconds, or 1,130 MB/s. 
When you divide out the\n> number of Postgres instances (4), that's 283MB/s per Postgres instance.\n> \n> To verify that this has nothing to do with MPP, I ran it in a special\n> internal mode on one instance and got the same result.\n> \n> So - we should be able to double this rate by running one segment per CPU,\n> or two per host:\n> \n> ======================================================\n> Bizgres MPP, 8 data segments, 1 per CPU\n> ======================================================\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 6484.594 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 6156.729 ms\n> llonergan=# select count(1) from lineitem;\n> count \n> ----------\n> 59986052\n> (1 row)\n> \n> Time: 6063.416 ms\n> ======================================================\n> That's 12,937 MB in 11.45 seconds, or 2,134 MB/s. When you divide out the\n> number of Postgres instances (8), that's 267MB/s per Postgres instance.\n> \n> So, if you want to \"select count(1)\", using more CPUs is a good idea! For\n> most complex queries, having lots of CPUs + MPP is a good combo.\n> \n> Here is an example of a sorting plan - this should probably be done with a\n> hash aggregation, but using 8 CPUs makes it go 8x faster:\n> \n> ======================================================\n> Bizgres MPP, 8 data segments, 1 per CPU\n> ======================================================\n> llonergan=# \\timing\n> Timing is on.\n> llonergan=# explain select l_orderkey from lineitem order by l_shipdate,\n> l_extendedprice limit 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> -----\n> Limit (cost=0.01..0.02 rows=1 width=24)\n> -> Gather Motion (cost=0.01..0.02 rows=1 width=24)\n> Merge Key: l_shipdate, l_extendedprice\n> -> Limit (cost=0.01..0.02 rows=1 width=24)\n> -> Sort (cost=0.01..0.02 rows=1 width=24)\n> Sort Key: l_shipdate, l_extendedprice\n> -> Seq Scan on lineitem (cost=0.00..0.00 rows=1\n> width=24)\n> (7 rows)\n> \n> Time: 0.592 ms\n> llonergan=# select l_orderkey from lineitem order by l_shipdate,\n> l_extendedprice limit 10;\n> l_orderkey\n> ------------\n> 51829667\n> 26601603\n> 16579717\n> 40046023\n> 41707078\n> 22880928\n> 35584422\n> 31272229\n> 49914018\n> 42309990\n> (10 rows)\n> \n> Time: 93469.443 ms\n> \n> ======================================================\n> \n> So that's 60M rows and 12.9GB sorted in 93.5 seconds.\n> \n> - Luke\n> \n> \n\n\n\n",
"msg_date": "Mon, 28 Nov 2005 08:26:21 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware/OS recommendations for large databases ("
}
] |
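For reference, the throughput figures quoted above follow directly from the timings (this is just the implied arithmetic, not a new measurement): 12,937 MB / 4.48 s is roughly 2,890 MB/s in aggregate, which works out to about 722 MB/s per host across 4 hosts and about 361 MB/s per Postgres instance across 8 instances, matching the 360 MB/s per-instance figure reported for the 8.1-based build.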
[
{
"msg_contents": "Hi all,\n\n I don't understand why this request take so long. Maybe I read the \nanalyse correctly but It seem that the first line(Nested Loop Left Join \n...) take all the time. But I don't understand where the performance \nproblem is ??? All the time is passed in the first line ...\n\nThanks for your help!\n\n/David\n\n\nexplain analyse SELECT *\n\n FROM CR INNER JOIN CS ON CR.CRNUM = CS.CSCRNUM AND \nCR.CRYPNUM = CS.CSYPNUM\n INNER JOIN GL ON CS.CSGLNUM = GL.GLNUM AND \nGL.GLSOCTRL = 1\n INNER JOIN RR ON CR.CRRRNUM = RR.RRNUM\n LEFT OUTER JOIN YR ON YR.YRYOTYPE = 'Client' AND \nYR.YRYONUM = 'Comptabilite.Recevable.Regroupement' AND YR.YRREF = RR.RRNUM\n WHERE CRYPNUM = 'M'\n AND CRDATE + INTERVAL '0 days' <= '2005-01-28'\n\n\n\"Nested Loop Left Join (cost=0.00..42.12 rows=1 width=8143) (actual \ntime=15.254..200198.524 rows=8335 loops=1)\"\n\" Join Filter: ((\"inner\".yrref)::text = (\"outer\".rrnum)::text)\"\n\" -> Nested Loop (cost=0.00..36.12 rows=1 width=7217) (actual \ntime=0.441..2719.821 rows=8335 loops=1)\"\n\" -> Nested Loop (cost=0.00..30.12 rows=1 width=1580) (actual \ntime=0.242..1837.413 rows=8335 loops=1)\"\n\" -> Nested Loop (cost=0.00..18.07 rows=2 width=752) \n(actual time=0.145..548.607 rows=13587 loops=1)\"\n\" -> Seq Scan on gl (cost=0.00..5.21 rows=1 \nwidth=608) (actual time=0.036..0.617 rows=1 loops=1)\"\n\" Filter: (glsoctrl = 1)\"\n\" -> Index Scan using cs_pk on cs (cost=0.00..12.83 \nrows=2 width=144) (actual time=0.087..444.999 rows=13587 loops=1)\"\n\" Index Cond: (('M'::text = (cs.csypnum)::text) \nAND ((cs.csglnum)::text = (\"outer\".glnum)::text))\"\n\" -> Index Scan using cr_pk on cr (cost=0.00..6.02 rows=1 \nwidth=828) (actual time=0.073..0.077 rows=1 loops=13587)\"\n\" Index Cond: (((cr.crypnum)::text = 'M'::text) AND \n(cr.crnum = \"outer\".cscrnum))\"\n\" Filter: ((crdate + '00:00:00'::interval) <= \n'2005-01-28 00:00:00'::timestamp without time zone)\"\n\" -> Index Scan using rr_pk on rr (cost=0.00..5.99 rows=1 \nwidth=5637) (actual time=0.062..0.069 rows=1 loops=8335)\"\n\" Index Cond: ((\"outer\".crrrnum)::text = (rr.rrnum)::text)\"\n\" -> Index Scan using yr_idx1 on yr (cost=0.00..5.99 rows=1 \nwidth=926) (actual time=0.127..17.379 rows=1154 loops=8335)\"\n\" Index Cond: (((yryotype)::text = 'Client'::text) AND \n((yryonum)::text = 'Comptabilite.Recevable.Regroupement'::text))\"\n\"Total runtime: 200235.732 ms\"\n\n",
"msg_date": "Mon, 28 Nov 2005 18:40:59 -0500",
"msg_from": "David Gagnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please help with this explain analyse..."
},
{
"msg_contents": "David Gagnon wrote:\n\n> \" -> Index Scan using cr_pk on cr (cost=0.00..6.02 rows=1\n> width=828) (actual time=0.073..0.077 rows=1 loops=13587)\"\n> \" Index Cond: (((cr.crypnum)::text = 'M'::text) AND\n> (cr.crnum = \"outer\".cscrnum))\"\n> \" Filter: ((crdate + '00:00:00'::interval) <=\n> '2005-01-28 00:00:00'::timestamp without time zone)\"\n> \" -> Index Scan using rr_pk on rr (cost=0.00..5.99 rows=1\n> width=5637) (actual time=0.062..0.069 rows=1 loops=8335)\"\n> \" Index Cond: ((\"outer\".crrrnum)::text = (rr.rrnum)::text)\"\n> \" -> Index Scan using yr_idx1 on yr (cost=0.00..5.99 rows=1\n> width=926) (actual time=0.127..17.379 rows=1154 loops=8335)\"\n\nYour loops are what is causing the time spent.\neg. \"actual time=0.127..17.379 rows=1154 loops=8335)\" ==\n8335*(17.379-0.127)/1000=>143 secs (if my math is correct).\n\n\n\n-- \n_______________________________\n\nThis e-mail may be privileged and/or confidential, and the sender does\nnot waive any related rights and obligations. Any distribution, use or\ncopying of this e-mail or the information it contains by other than an\nintended recipient is unauthorized. If you received this e-mail in\nerror, please advise me (by return e-mail or otherwise) immediately.\n_______________________________\n",
"msg_date": "Mon, 28 Nov 2005 15:46:14 -0800",
"msg_from": "Bricklen Anderson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please help with this explain analyse..."
},
{
"msg_contents": "Bricklen Anderson <[email protected]> writes:\n> Your loops are what is causing the time spent.\n> eg. \"actual time=0.127..17.379 rows=1154 loops=8335)\" ==\n> 8335*(17.379-0.127)/1000=>143 secs (if my math is correct).\n\nAs for where the problem is, I think it's the horrid misestimate of the\nnumber of matching rows in cs_pk:\n\n>> \" -> Index Scan using cs_pk on cs (cost=0.00..12.83 \n>> rows=2 width=144) (actual time=0.087..444.999 rows=13587 loops=1)\"\n>> \" Index Cond: (('M'::text = (cs.csypnum)::text) \n>> AND ((cs.csglnum)::text = (\"outer\".glnum)::text))\"\n\nHas that table been ANALYZEd recently? How about the gl table?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Nov 2005 19:00:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please help with this explain analyse... "
},
{
"msg_contents": "I restored my db but haven't run the analyse... That was the problem.\n\nThanks\n/David\n\n\"Merge Left Join (cost=2273.54..2290.19 rows=228 width=816) (actual \ntime=2098.257..2444.472 rows=8335 loops=1)\"\n\" Merge Cond: ((\"outer\".rrnum)::text = \"inner\".\"?column8?\")\"\n\" -> Merge Join (cost=2131.25..2141.31 rows=228 width=744) (actual \ntime=2037.953..2251.289 rows=8335 loops=1)\"\n\" Merge Cond: (\"outer\".\"?column31?\" = \"inner\".\"?column77?\")\"\n\" -> Sort (cost=1975.03..1975.60 rows=228 width=235) (actual \ntime=1798.556..1811.828 rows=8335 loops=1)\"\n\" Sort Key: (cr.crrrnum)::text\"\n\" -> Hash Join (cost=1451.41..1966.10 rows=228 width=235) \n(actual time=267.751..515.396 rows=8335 loops=1)\"\n\" Hash Cond: (\"outer\".crnum = \"inner\".cscrnum)\"\n\" -> Seq Scan on cr (cost=0.00..489.77 rows=4529 \nwidth=101) (actual time=0.077..97.615 rows=8335 loops=1)\"\n\" Filter: (((crypnum)::text = 'M'::text) AND \n((crdate + '00:00:00'::interval) <= '2005-01-28 00:00:00'::timestamp \nwithout time zone))\"\n\" -> Hash (cost=1449.70..1449.70 rows=684 \nwidth=134) (actual time=267.568..267.568 rows=13587 loops=1)\"\n\" -> Nested Loop (cost=20.59..1449.70 \nrows=684 width=134) (actual time=33.099..178.524 rows=13587 loops=1)\"\n\" -> Seq Scan on gl (cost=0.00..5.21 \nrows=2 width=91) (actual time=0.021..0.357 rows=1 loops=1)\"\n\" Filter: (glsoctrl = 1)\"\n\" -> Bitmap Heap Scan on cs \n(cost=20.59..684.42 rows=3026 width=43) (actual time=33.047..115.151 \nrows=13587 loops=1)\"\n\" Recheck Cond: ((cs.csglnum)::text \n= (\"outer\".glnum)::text)\"\n\" Filter: ('M'::text = \n(csypnum)::text)\"\n\" -> Bitmap Index Scan on \ncs_gl_fk (cost=0.00..20.59 rows=3026 width=0) (actual \ntime=32.475..32.475 rows=13587 loops=1)\"\n\" Index Cond: \n((cs.csglnum)::text = (\"outer\".glnum)::text)\"\n\" -> Sort (cost=156.22..159.65 rows=1372 width=509) (actual \ntime=239.315..254.024 rows=8974 loops=1)\"\n\" Sort Key: (rr.rrnum)::text\"\n\" -> Seq Scan on rr (cost=0.00..84.72 rows=1372 \nwidth=509) (actual time=0.055..33.564 rows=1372 loops=1)\"\n\" -> Sort (cost=142.29..144.55 rows=903 width=72) (actual \ntime=60.246..74.813 rows=8932 loops=1)\"\n\" Sort Key: (yr.yrref)::text\"\n\" -> Bitmap Heap Scan on yr (cost=16.42..97.96 rows=903 \nwidth=72) (actual time=8.513..13.587 rows=1154 loops=1)\"\n\" Recheck Cond: (((yryotype)::text = 'Client'::text) AND \n((yryonum)::text = 'Comptabilite.Recevable.Regroupement'::text))\"\n\" -> Bitmap Index Scan on yr_idx1 (cost=0.00..16.42 \nrows=903 width=0) (actual time=8.471..8.471 rows=1154 loops=1)\"\n\" Index Cond: (((yryotype)::text = 'Client'::text) \nAND ((yryonum)::text = 'Comptabilite.Recevable.Regroupement'::text))\"\n\"Total runtime: 2466.197 ms\"\n\n>Bricklen Anderson <[email protected]> writes:\n> \n>\n>>Your loops are what is causing the time spent.\n>>eg. \"actual time=0.127..17.379 rows=1154 loops=8335)\" ==\n>>8335*(17.379-0.127)/1000=>143 secs (if my math is correct).\n>> \n>>\n>\n>As for where the problem is, I think it's the horrid misestimate of the\n>number of matching rows in cs_pk:\n>\n> \n>\n>>>\" -> Index Scan using cs_pk on cs (cost=0.00..12.83 \n>>>rows=2 width=144) (actual time=0.087..444.999 rows=13587 loops=1)\"\n>>>\" Index Cond: (('M'::text = (cs.csypnum)::text) \n>>>AND ((cs.csglnum)::text = (\"outer\".glnum)::text))\"\n>>> \n>>>\n>\n>Has that table been ANALYZEd recently? How about the gl table?\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n\n",
"msg_date": "Mon, 28 Nov 2005 21:16:52 -0500",
"msg_from": "David Gagnon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Please help with this explain analyse..."
}
] |
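The resolution in this thread was simply running ANALYZE after the restore. A minimal sketch of that step, assuming the table names used in the query above:

    ANALYZE cr;
    ANALYZE cs;
    ANALYZE gl;
    ANALYZE rr;
    ANALYZE yr;
    -- or, to refresh statistics for every table in the database:
    ANALYZE;

Re-running EXPLAIN ANALYZE afterwards should show row estimates much closer to the actual row counts, as in the second plan above, which is what lets the planner choose the hash and merge joins instead of the nested loops.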
[
{
"msg_contents": "I know in mysql, index will auto change after copying data\nOf course, index will change after inserting a line in postgresql, but what about copying data?\n\n",
"msg_date": "Tue, 29 Nov 2005 15:00:22 +0800",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "index auto changes after copying data ?"
},
{
"msg_contents": "[email protected] wrote:\n> I know in mysql, index will auto change after copying data\n> Of course, index will change after inserting a line in postgresql, but what about copying data?\n\nThe index will (of course) know about the new data.\nYou might want to ANALYZE the table again after a large copy in case the\nstatistics about how many different values are present changes.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 29 Nov 2005 12:16:20 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index auto changes after copying data ?"
},
{
"msg_contents": "[email protected] (\"[email protected]\") writes:\n> I know in mysql, index will auto change after copying data Of\n> course, index will change after inserting a line in postgresql, but\n> what about copying data?\n\nDo you mean, by this, something like...\n\n\"Are indexes affected by loading data using the COPY command just as\nthey are if data is loaded using INSERT?\"\n\nIf so, then the answer is \"Yes, certainly.\" Indexes are updated\nwhichever statement you use to load in data.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/finances.html\nRules of the Evil Overlord #160. \"Before being accepted into my\nLegions of Terror, potential recruits will have to pass peripheral\nvision and hearing tests, and be able to recognize the sound of a\npebble thrown to distract them.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 29 Nov 2005 13:48:49 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index auto changes after copying data ?"
}
] |
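A minimal sketch of the pattern described above; the table and file names here are made up for illustration. The index is maintained automatically during the COPY, and the ANALYZE afterwards refreshes the planner statistics:

    COPY mytable (id, val) FROM '/path/to/data.txt';
    ANALYZE mytable;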
[
{
"msg_contents": "Hi \n\ni´m using PostgreSQL on windows 2000, the pg_dump take around 50 minutes\nto do backup of 200Mb data ( with no compression, and 15Mb with\ncompression), but in windows XP does not pass of 40 seconds... :(\n\nThis happens with 8.1 and version 8.0, somebody passed for the same\nsituation? \n\nIt will be that a configuration in the priorities of the exists\nprocesses ? in Windows XP the processing of schemes goes 70% and\nconstant accesses to the HardDisk, while that in windows 2000 it does\nnot pass of 3%.\n\nthanks\n\nFranklin\n\n",
"msg_date": "Wed, 30 Nov 2005 10:35:05 -0300",
"msg_from": "\"Franklin Haut\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_dump slow"
},
{
"msg_contents": "At 08:35 AM 11/30/2005, Franklin Haut wrote:\n>Hi\n>\n>i´m using PostgreSQL on windows 2000, the pg_dump take around 50 minutes\n>to do backup of 200Mb data ( with no compression, and 15Mb with\n>compression),\n\nCompression is reducing the data to 15/200= 3/40= 7.5% of original size?\n\n>but in windows XP does not pass of 40 seconds... :(\n\nYou mean that 40 secs in pg_dump under Win XP \ncrashes, and therefore you have a WinXP problem?\n\nOr do you mean that pg_dump takes 40 secs to \ncomplete under WinXP and 50 minutes under W2K and \ntherefore you have a W2K problem?\n\nIn fact, either 15MB/40secs= 375KBps or \n200MB/40secs= 5MBps is _slow_, so there's a problem under either platform!\n\n>This happens with 8.1 and version 8.0, somebody \n>passed for the same situation?\n>\n>It will be that a configuration in the priorities of the exists\n>processes ? in Windows XP the processing of schemes goes 70% and\n>constant accesses to the HardDisk, while that in windows 2000 it does\n>not pass of 3%.\nAssuming Win XP completes the dump, the first thing to do is\n*don't use W2K*\nM$ has stopped supporting it in anything but absolutely minimum fashion anyway.\n _If_ you are going to use an M$ OS you should be using WinXP.\n(You want to pay licensing fees for your OS, but \nyou are using free DB SW? Huh? If you are \ntrying to save $$$, use Open Source SW like Linux \nor *BSD. pg will perform better under it, and it's cheaper!)\n\n\nAssuming that for some reason you can't/won't \nmigrate to a non-M$ OS, the next problem is the \nslow HD IO you are getting under WinXP.\n\nWhat is the HW involved here? Particularly the \nHD subsystem and the IO bus(es) it is plugged into?\n\nFor some perspective, Raw HD average IO rates for \neven reasonably modern 7200rpm HD's is in the \n~50MBps per HD range. Top of the line 15Krpm \nSCSI and FC HD's have raw average IO rates of \njust under 80MBps per HD as of this post.\n\nGiven that most DB's are not on 1 HD (if you DB \n_is_ on only 1 HD, change that ASAP before you \nlose data...), for anything other than a 2 HD \nRAID 1 set I'd expect raw HD average IO rates to be at least 100MBps.\n\nIf you are getting >= 100MBps of average HD IO, \nyou should be getting > 5MBps during pg_dump, and certainly > 375MBps!\n\nRon\n\n\n",
"msg_date": "Wed, 30 Nov 2005 08:56:41 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump slow"
},
{
"msg_contents": "Hi,\n\nYes, my problem is that the pg_dump takes 40 secs to complete under\nWinXP and 50 minutes under W2K! The same database, the same hardware!,\nonly diferrent Operational Systems.\n\nThe hardware is: \n Pentium4 HT 3.2 GHz\n 1024 Mb Memory\n HD 120Gb SATA\n\nIm has make again the test, and then real size of database is 174Mb\n(avaliable on pg_admin, properties) and the file size of pg_dump is 18Mb\n( with command line pg_dump -i -F c -b -v -f \"C:\\temp\\BackupTest.bkp\"\nNameOfDatabase ). The time was equal in 40 seconds on XP and 50 minutes\non W2K, using PG 8.1\n\nUnhappyly for some reasons I cannot use other platforms, I need use PG\non Windows, and must be W2K.\n\nIs strange to have a so great difference in the time of execution of\ndump, therefore the data are the same ones and the archive is being\ncorrectly generated in both OS.\n\nFranklin\n\n-----Mensagem original-----\nDe: Ron [mailto:[email protected]] \nEnviada em: quarta-feira, 30 de novembro de 2005 10:57\nPara: Franklin Haut; [email protected]\nAssunto: Re: [PERFORM] pg_dump slow\n\n\nAt 08:35 AM 11/30/2005, Franklin Haut wrote:\n>Hi\n>\n>i´m using PostgreSQL on windows 2000, the pg_dump take around 50 \n>minutes to do backup of 200Mb data ( with no compression, and 15Mb with\n\n>compression),\n\nCompression is reducing the data to 15/200= 3/40= 7.5% of original size?\n\n>but in windows XP does not pass of 40 seconds... :(\n\nYou mean that 40 secs in pg_dump under Win XP \ncrashes, and therefore you have a WinXP problem?\n\nOr do you mean that pg_dump takes 40 secs to \ncomplete under WinXP and 50 minutes under W2K and \ntherefore you have a W2K problem?\n\nIn fact, either 15MB/40secs= 375KBps or \n200MB/40secs= 5MBps is _slow_, so there's a problem under either\nplatform!\n\n>This happens with 8.1 and version 8.0, somebody\n>passed for the same situation?\n>\n>It will be that a configuration in the priorities of the exists \n>processes ? in Windows XP the processing of schemes goes 70% and \n>constant accesses to the HardDisk, while that in windows 2000 it does \n>not pass of 3%.\nAssuming Win XP completes the dump, the first thing to do is *don't use\nW2K* M$ has stopped supporting it in anything but absolutely minimum\nfashion anyway.\n _If_ you are going to use an M$ OS you should be using WinXP. (You\nwant to pay licensing fees for your OS, but \nyou are using free DB SW? Huh? If you are \ntrying to save $$$, use Open Source SW like Linux \nor *BSD. pg will perform better under it, and it's cheaper!)\n\n\nAssuming that for some reason you can't/won't \nmigrate to a non-M$ OS, the next problem is the \nslow HD IO you are getting under WinXP.\n\nWhat is the HW involved here? Particularly the \nHD subsystem and the IO bus(es) it is plugged into?\n\nFor some perspective, Raw HD average IO rates for \neven reasonably modern 7200rpm HD's is in the \n~50MBps per HD range. Top of the line 15Krpm \nSCSI and FC HD's have raw average IO rates of \njust under 80MBps per HD as of this post.\n\nGiven that most DB's are not on 1 HD (if you DB \n_is_ on only 1 HD, change that ASAP before you \nlose data...), for anything other than a 2 HD \nRAID 1 set I'd expect raw HD average IO rates to be at least 100MBps.\n\nIf you are getting >= 100MBps of average HD IO, \nyou should be getting > 5MBps during pg_dump, and certainly > 375MBps!\n\nRon\n\n\n",
"msg_date": "Wed, 30 Nov 2005 13:20:51 -0300",
"msg_from": "\"Franklin Haut\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RES: pg_dump slow"
},
{
"msg_contents": "Franklin Haut wrote:\n> Hi,\n> \n> Yes, my problem is that the pg_dump takes 40 secs to complete under\n> WinXP and 50 minutes under W2K! The same database, the same hardware!,\n> only diferrent Operational Systems.\n> \n> The hardware is: \n> Pentium4 HT 3.2 GHz\n> 1024 Mb Memory\n> HD 120Gb SATA\n\nThere have been reports of very slow network performance on Win2k \nsystems with the default configuration. You'll have to check the \narchives for details I'm afraid. This might apply to you.\n\nIf you're happy that doesn't affect you then I'd look at the disk system \n- perhaps XP has newer drivers than Win2k.\n\nWhat do the MS performance-charts show is happening? Specifically, CPU \nand disk I/O.\n\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Wed, 30 Nov 2005 17:27:35 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: pg_dump slow"
},
{
"msg_contents": "At 12:27 PM 11/30/2005, Richard Huxton wrote:\n>Franklin Haut wrote:\n>>Hi,\n>>Yes, my problem is that the pg_dump takes 40 secs to complete under\n>>WinXP and 50 minutes under W2K! The same database, the same hardware!,\n>>only diferrent Operational Systems.\n>>The hardware is: Pentium4 HT 3.2 GHz\n>> 1024 MB Memory\n\nGet the RAM up to at least 4096MB= 4GB for a DB server. 4 1GB DIMMs \nor 2 2GB DIMMS are ~ the same $$ as a HD (~$250-$300 US) and well \nworth the expense.\n\n>> HD 120GB SATA\n\"b\" is \"bit\". \"B\" is \"Byte\". I made the correction.\n\nYou have =1= HD? and you are using it for everything: OS, pq, swap, etc?\nVery Bad Idea.\n\nAt the very least, a DB server should have the OS on separate \nspindles from pg, and pg tables should be on something like a 4 HD \nRAID 10. At the very least.\n\nDB servers are about HDs. Lots and lots of HDs compared to anything \noutside the DB realm. Start thinking in terms of at least 6+ HD's \nattached to the system in question (I've worked on system with \nliterally 100's). Usually only a few of these are directly attached \nto the DB server and most are attached by LAN or FC. But the point \nremains: DBs and DB servers eat HDs in prodigious quantities.\n\n\n>There have been reports of very slow network performance on Win2k \n>systems with the default configuration. You'll have to check the \n>archives for details I'm afraid. This might apply to you.\nUnless you are doing IO across a network, this issue will not apply to you.\n\nBy default W2K systems often had a default TCP/IP packet size of 576B \nand a tiny RWIN. Optimal for analog modems talking over noisy POTS \nlines, but horrible for everything else\n\nPacket size needs to be boosted to 1500B, the maximum. RWIN should \nbe boosted to _at least_ the largest number <= 2^16 that you can use \nwithout TCP scaling. Benchmark network IO rates. Then TCP scaling \nshould be turned on and RWIN doubled and network IO benched \nagain. Repeat until there is no performance benefit to doubling RWIN \nor you run out of RAM that you can afford to toss at the problem or \nyou hit the max for RWIN (very doubtful).\n\n\n\n>If you're happy that doesn't affect you then I'd look at the disk \n>system - perhaps XP has newer drivers than Win2k.\nI'll reiterate: Do _not_ run a production DB server on W2K. M$ has \nobsoleted the platform and that it is not supported _nor_ any of \nreliable, secure, etc. etc.\n\nA W2K based DB server, particularly one with a connection to the \nInternet, is a ticking time bomb at this point.\nGet off W2K as a production platform ASAP. Take to your \nCEO/Dean/whatever you call your Fearless Leader if you have to.\n\nEconomically and probably performance wise, it's best to use an Open \nSource OS like Linux or *BSD. However, if you must use M$, at least \nuse OS's that M$ is actively supporting.\n\nDespite M$ marketing propaganda and a post in this thread to the \ncontrary, you =CAN= often run a production DB server under WinXP and \nnot pay M$ their usurious licensing fees for W2003 Server or any of \ntheir other products with \"server\" in the title. How much RAM and \nhow many CPUs you want in your DB server is the main issue. For a \n1P, <= 4GB RAM vanilla box, WinXp will work just fine.\n\n\n>What do the MS performance-charts show is happening? Specifically, \n>CPU and disk I/O.\nHis original post said ~3% CPU under W2K and ~70% CPU under WinXP\n\nRon\n\n\n",
"msg_date": "Wed, 30 Nov 2005 16:05:38 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: pg_dump slow"
}
] |
[
{
"msg_contents": "> At 08:35 AM 11/30/2005, Franklin Haut wrote:\n> >Hi\n> >\n> >i´m using PostgreSQL on windows 2000, the pg_dump take around 50 minutes\n> >to do backup of 200Mb data ( with no compression, and 15Mb with\n> >compression),\n> \n> Compression is reducing the data to 15/200= 3/40= 7.5% of original size?\n> \n> >but in windows XP does not pass of 40 seconds... :(\n> \n> You mean that 40 secs in pg_dump under Win XP\n> crashes, and therefore you have a WinXP problem?\n> \n> Or do you mean that pg_dump takes 40 secs to\n> complete under WinXP and 50 minutes under W2K and\n> therefore you have a W2K problem?\n\nI think he is saying the time to dump does not take more than 40 seconds, but I'm not sure.\n \n> In fact, either 15MB/40secs= 375KBps or\n> 200MB/40secs= 5MBps is _slow_, so there's a problem under either platform!\n\n5 mb/sec dump output from psql is not terrible or even bad, depending on hardware.\n\n> >not pass of 3%.\n> Assuming Win XP completes the dump, the first thing to do is\n> *don't use W2K*\n\nXP is not a server platform. Next level up is 2003 server. Many organizations still have 2k deployed. About half of my servers still run it. Anyways, the 2k/xp issue does not explain why there is a performance problem.\n\n> M$ has stopped supporting it in anything but absolutely minimum fashion\n> anyway.\n> _If_ you are going to use an M$ OS you should be using WinXP.\n> (You want to pay licensing fees for your OS, but\n> you are using free DB SW? Huh? If you are\n> trying to save $$$, use Open Source SW like Linux\n> or *BSD. pg will perform better under it, and it's cheaper!)\n\nI would like to see some benchmarks supporting those claims. No comment on licensing issue, but there are many other factors in considering server platform than licensing costs. That said, there were several win32 specific pg performance issues that were rolled up into the 8.1 release. So for win32 you definitely want to be running 8.1.\n \n> Assuming that for some reason you can't/won't\n> migrate to a non-M$ OS, the next problem is the\n> slow HD IO you are getting under WinXP.\n\nProblem is almost certainly not related to disk unless there is a imminent disk failure. Could be TCP/IP issue (are you running pg_dump from remote box?), or possibly a network driver issue or some other weird software issue. Can you determine if disk is running normally with respect to other applications? Is this a fresh win2k install? A LSP, virus scanner, backup software, or some other garbage can really ruin your day.\n\nMerlin\n\n",
"msg_date": "Wed, 30 Nov 2005 11:56:50 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump slow"
},
{
"msg_contents": "Complementing...\n\n\nThe test was maked at the same machine ( localhost ) at Command-Prompt,\nno client´s connected, no concurrent processes only PostgreSQL running.\n\nIn windows XP, exists much access to the processor (+- 70%) and HD (I\nsee HD Led allways on), while in the W2K almost without activity of\nprocessor (3%)and little access to the HardDisk (most time of the led HD\nis off).\n\nLook, the database has 81 Tables, one of these, has 2 fields ( one\ninteger and another ByteA ), these table as 5.150 Records. \nI´m Dumpped only this table and the file size is 7Mb (41% of total\n(17MB is the total)) was very slow.... Then I Maked Backup of the others\ntables was fast!\n\nSo i´m conclused that pg_dump and pg_restore is very slow when\nmanipulates ByteA type on W2K!, is this possible ?\n\n\nFranklin\n\n\n\n-----Mensagem original-----\nDe: Merlin Moncure [mailto:[email protected]] \nEnviada em: quarta-feira, 30 de novembro de 2005 13:57\nPara: Ron\nCc: [email protected]; Franklin Haut\nAssunto: RE: [PERFORM] pg_dump slow\n\n\n> At 08:35 AM 11/30/2005, Franklin Haut wrote:\n> >Hi\n> >\n> >i´m using PostgreSQL on windows 2000, the pg_dump take around 50 \n> >minutes to do backup of 200Mb data ( with no compression, and 15Mb \n> >with compression),\n> \n> Compression is reducing the data to 15/200= 3/40= 7.5% of original \n> size?\n> \n> >but in windows XP does not pass of 40 seconds... :(\n> \n> You mean that 40 secs in pg_dump under Win XP\n> crashes, and therefore you have a WinXP problem?\n> \n> Or do you mean that pg_dump takes 40 secs to\n> complete under WinXP and 50 minutes under W2K and\n> therefore you have a W2K problem?\n\nI think he is saying the time to dump does not take more than 40\nseconds, but I'm not sure.\n \n> In fact, either 15MB/40secs= 375KBps or\n> 200MB/40secs= 5MBps is _slow_, so there's a problem under either \n> platform!\n\n5 mb/sec dump output from psql is not terrible or even bad, depending on\nhardware.\n\n> >not pass of 3%.\n> Assuming Win XP completes the dump, the first thing to do is *don't \n> use W2K*\n\nXP is not a server platform. Next level up is 2003 server. Many\norganizations still have 2k deployed. About half of my servers still\nrun it. Anyways, the 2k/xp issue does not explain why there is a\nperformance problem.\n\n> M$ has stopped supporting it in anything but absolutely minimum \n> fashion anyway.\n> _If_ you are going to use an M$ OS you should be using WinXP. (You\n> want to pay licensing fees for your OS, but you are using free DB SW?\n\n> Huh? If you are trying to save $$$, use Open Source SW like Linux\n> or *BSD. pg will perform better under it, and it's cheaper!)\n\nI would like to see some benchmarks supporting those claims. No comment\non licensing issue, but there are many other factors in considering\nserver platform than licensing costs. That said, there were several\nwin32 specific pg performance issues that were rolled up into the 8.1\nrelease. So for win32 you definitely want to be running 8.1.\n \n> Assuming that for some reason you can't/won't\n> migrate to a non-M$ OS, the next problem is the\n> slow HD IO you are getting under WinXP.\n\nProblem is almost certainly not related to disk unless there is a\nimminent disk failure. Could be TCP/IP issue (are you running pg_dump\nfrom remote box?), or possibly a network driver issue or some other\nweird software issue. Can you determine if disk is running normally\nwith respect to other applications? Is this a fresh win2k install? 
A\nLSP, virus scanner, backup software, or some other garbage can really\nruin your day.\n\nMerlin\n\n",
"msg_date": "Wed, 30 Nov 2005 18:36:42 -0300",
"msg_from": "\"Franklin Haut\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "RES: pg_dump slow"
}
] |
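Since the slow part has been narrowed down to the one table with the bytea column, a further check (only a suggestion, with a placeholder table name) is to dump just that table with and without compression and compare the timings on both systems, for example pg_dump -t the_bytea_table -F c -Z 0 -f nocomp.bkp NameOfDatabase versus the same command with -Z 9. If the uncompressed dump is fast on W2K and only the compressed one is slow, the time is going into compression rather than into reading the bytea data itself.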
[
{
"msg_contents": " \n\nThis seems to me to be an expensive plan and I'm wondering if there's a\nway to improve it or a better way to do what I'm trying to do here (get\na count of distinct values for each record_id and map that value to the\nentity type) entity_type_id_mapping is 56 rows\nvolume_node_entity_data_values is approx 500,000,000 rows vq_record_id\nhas approx 11,000,000 different values vq_entity_type is a value in\nentity_type_id_mapping.entity_type\n\nI thought that the idx_vq_entities_1 index would allow an ordered scan\nof the table. I created it based pon the sort key given in the explain\nstatement. \n\nThanks in advance.\n\n Table \"data_schema.volume_queue_entities\"\n Column | Type | Modifiers\n\n-----------------+-------------------+----------------------------------\n-----------------+-------------------+-------------\n vq_record_id | bigint | default\ncurrval('seq_vq_fsmd_auto'::regclass)\n vq_entity_type | character varying |\n vq_entity_value | character varying |\nIndexes:\n \"idx_vq_entities_1\" btree (vq_record_id, vq_entity_type,\nvq_entity_value)\n\n\n\n Table \"volume_8.entity_type_id_mapping\"\n Column | Type | Modifiers\n\n-------------+-------------------+--------------------------------------\n-------------+-------------------+--------------------\n entity_id | integer | default\nnextval('volume_8.entity_id_sequence'::regclass)\n entity_type | character varying | \n\n\n\nexplain insert into volume_8.volume_node_entity_data_values\n(vs_volume_id, vs_latest_node_synthetic_id, vs_base_entity_id, vs_value,\nvs_value_count, vs_base_entity_revision_id)\n \tselect 8, vq_record_id, entity_id , vq_entity_value,\ncount(vq_entity_value),1 from data_schema.volume_queue_entities qe,\nvolume_8.entity_type_id_mapping emap\n\twhere qe.vq_entity_type = emap.entity_type group by\nvq_record_id, vq_entity_type, vq_entity_value, entity_id ;\n\n\n------------------------------------------------------------------------\n----------------------------------------\n Subquery Scan \"*SELECT*\" (cost=184879640.90..210689876.26\nrows=543373376 width=60)\n -> GroupAggregate (cost=184879640.90..199822408.74 rows=543373376\nwidth=37)\n -> Sort (cost=184879640.90..186238074.34 rows=543373376\nwidth=37)\n Sort Key: qe.vq_record_id, qe.vq_entity_type,\nqe.vq_entity_value, emap.entity_id\n -> Hash Join (cost=1.70..18234833.10 rows=543373376\nwidth=37)\n Hash Cond: ((\"outer\".vq_entity_type)::text =\n(\"inner\".entity_type)::text)\n -> Seq Scan on volume_queue_entities qe\n(cost=0.00..10084230.76 rows=543373376 width=33)\n -> Hash (cost=1.56..1.56 rows=56 width=16)\n -> Seq Scan on entity_type_id_mapping emap\n(cost=0.00..1.56 rows=56 width=16)\n(9 rows)\n\n",
"msg_date": "Wed, 30 Nov 2005 13:21:07 -0600",
"msg_from": "\"Brad Might\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Select with grouping plan question"
},
{
"msg_contents": "\"Brad Might\" <[email protected]> writes:\n> This seems to me to be an expensive plan and I'm wondering if there's a\n> way to improve it or a better way to do what I'm trying to do here (get\n> a count of distinct values for each record_id and map that value to the\n> entity type) entity_type_id_mapping is 56 rows\n> volume_node_entity_data_values is approx 500,000,000 rows vq_record_id\n> has approx 11,000,000 different values vq_entity_type is a value in\n> entity_type_id_mapping.entity_type\n\nHmm, what Postgres version is that? And have you ANALYZEd\nentity_type_id_mapping lately? I'd expect the planner to realize that\nthere cannot be more than 56 output groups, which ought to lead it to\nprefer a hashed aggregate over the sort+group method. That's what I\nget in a test case with a similar query structure, anyway.\n\nIf you're stuck on an old PG version, it might help to do the\naggregation first and then join, ie\n\n\tselect ... from\n\t(select count(vq_entity_value) as vcount, vq_entity_type\n\t from data_schema.volume_queue_entities group by vq_entity_type) qe,\n\tvolume_8.entity_type_id_mapping emap\n\twhere qe.vq_entity_type = emap.entity_type;\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Nov 2005 15:41:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Select with grouping plan question "
}
] |
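A variant of the aggregate-before-join idea that keeps the original grouping keys; this is only a sketch built from the column names above (and it assumes entity_type is unique in entity_type_id_mapping), not a tested statement:

    SELECT 8, qe.vq_record_id, emap.entity_id, qe.vq_entity_value, qe.vcount, 1
    FROM (SELECT vq_record_id, vq_entity_type, vq_entity_value,
                 count(vq_entity_value) AS vcount
          FROM data_schema.volume_queue_entities
          GROUP BY vq_record_id, vq_entity_type, vq_entity_value) qe
    JOIN volume_8.entity_type_id_mapping emap
      ON qe.vq_entity_type = emap.entity_type;

This SELECT would slot into the same INSERT as before. Aggregating first means the sort or hash only has to handle columns of the big table, and the 56-row mapping is joined onto the already-aggregated result.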
[
{
"msg_contents": "> By default W2K systems often had a default TCP/IP packet size of 576B\n> and a tiny RWIN. Optimal for analog modems talking over noisy POTS\n> lines, but horrible for everything else\n\nwrong. default MTU for windows 2000 server is 1500, as was NT4.\nhttp://support.microsoft.com/?id=140375\n\nHowever tweaking rwin is certainly something to look at.\n\n> >If you're happy that doesn't affect you then I'd look at the disk\n> >system - perhaps XP has newer drivers than Win2k.\n> I'll reiterate: Do _not_ run a production DB server on W2K. M$ has\n> obsoleted the platform and that it is not supported _nor_ any of\n> reliable, secure, etc. etc.\n\nwrong again. WIN2k gets free security hotfixes and paid support until\n2010.\nhttp://www.microsoft.com/windows2000/support/lifecycle/\n \n> A W2K based DB server, particularly one with a connection to the\n> Internet, is a ticking time bomb at this point.\n> Get off W2K as a production platform ASAP. Take to your\n> CEO/Dean/whatever you call your Fearless Leader if you have to.\n\nwrong again!! There is every reason to believe win2k is *more* secure\nthan win2003 sever because it is a more stable platform. This of course\ndepends on what other services are running, firewall issues, etc etc.\n\n>> Economically and probably performance wise, it's best to use an Open\n> Source OS like Linux or *BSD. However, if you must use M$, at least\n> use OS's that M$ is actively supporting.\n\nI encourage use of open source software. However encouraging other\npeople to spontaneously switch hardware/software platforms (especially\nwhen they just stated when win2k is a requirement) is just or at least\nnot helpful.\n \n> Despite M$ marketing propaganda and a post in this thread to the\n> contrary, you =CAN= often run a production DB server under WinXP and\n> not pay M$ their usurious licensing fees for W2003 Server or any of\n> their other products with \"server\" in the title. How much RAM and\n\nyou are on a roll here. You must not be aware of 10 connection limit\nfor win2k pro and winxp pro.\n\nhttp://winhlp.com/WxConnectionLimit.htm\n\nThere are hackerish ways of getting around this which are illegal.\nCheating to get around this by pooling connections via tcp proxy for\nexample is also against EULA (and, in my opinion, unethical).\n\n> how many CPUs you want in your DB server is the main issue. For a\n> 1P, <= 4GB RAM vanilla box, WinXp will work just fine.\n\nNow, who is guilty of propaganda here? Also, your comments regarding\nhard disks while correct in the general sense are not helpful. This is\nclearly not a disk bandwidth problem.\n\n> >What do the MS performance-charts show is happening? Specifically,\n> >CPU and disk I/O.\n> His original post said ~3% CPU under W2K and ~70% CPU under WinXP\n\nSlow performance in extraction of bytea column strongly suggests tcp/ip.\nissue. I bet if you blanked out bytea column pg_dump will be fast. \n\nFranlin: are you making pg_dump from local or remote box and is this a\nclean install? Try fresh patched win2k install and see what happens.\n\nMerlin\n",
"msg_date": "Wed, 30 Nov 2005 17:13:08 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RES: pg_dump slow"
},
{
"msg_contents": "At 05:13 PM 11/30/2005, Merlin Moncure wrote:\n> > By default W2K systems often had a default TCP/IP packet size of 576B\n> > and a tiny RWIN. Optimal for analog modems talking over noisy POTS\n> > lines, but horrible for everything else\n>\n>wrong. default MTU for windows 2000 server is 1500, as was NT4.\n>http://support.microsoft.com/?id=140375\n>\nLOL. Good to see it is now. I got bit by the problem when it wasn't.\n\n\n>However tweaking rwin is certainly something to look at.\n>\n> > >If you're happy that doesn't affect you then I'd look at the disk\n> > >system - perhaps XP has newer drivers than Win2k.\n> > I'll reiterate: Do _not_ run a production DB server on W2K. M$ has\n> > obsoleted the platform and that it is not supported _nor_ any of\n> > reliable, secure, etc. etc.\n>\n>wrong again. WIN2k gets free security hotfixes and paid support until\n>2010.\n>http://www.microsoft.com/windows2000/support/lifecycle/\n>\n\nI've _lived_ what I'm talking about. I've built some of the largest \nM$ installations in existence at the time of their deployment.\n\nType of Support Availability\nMainstream\n * Paid-per-incident support\n * Free hotfix support\nJune 30, 2005\nExtended\n * Hourly support\n * Paid hotfix support\nJune 30, 2010\nSecurity hotfixes Free to all customers through March 31, 2010\n\n From your own source. And good luck getting M$ to give you free \nsupport for anything except what _They_ consider to be Their Fault \nwithout paying them a boatload of $$$$. The standard M$ party line \nat this point is \"Upgrade, Upgrade, Upgrade. ...Or pay us so much \n$$$$ to do it that we feel it makes economic sense for M$ to Play Ball\".\n\n\n> > A W2K based DB server, particularly one with a connection to the\n> > Internet, is a ticking time bomb at this point.\n> > Get off W2K as a production platform ASAP. Take to your\n> > CEO/Dean/whatever you call your Fearless Leader if you have to.\n>\n>wrong again!! There is every reason to believe win2k is *more* secure\n>than win2003 sever because it is a more stable platform. This of course\n>depends on what other services are running, firewall issues, etc etc.\nYou evidently do not have a very serious background in network or \nsystems security or professional experience would tell you that your \nabove sentence is dangerously misguided.\n\nReality is that platforms stay marginally secure _only_ by constant \npatching of newly discovered exploits and never ceasing vigilance \nlooking for new exploits to patch. Regardless of platform.\n\nObsoleted platforms are at greater risk because the White Hats are no \nlonger paying as much attention to them and the Black Hats are \nbasically opportunistic parasites. They play with anything and \neverything they can get their hands on in the hopes of finding \nexploitable security flaws.\n\n\n> >> Economically and probably performance wise, it's best to use an Open\n> > Source OS like Linux or *BSD. However, if you must use M$, at least\n> > use OS's that M$ is actively supporting.\n>\n>I encourage use of open source software. However encouraging other\n>people to spontaneously switch hardware/software platforms (especially\n>when they just stated when win2k is a requirement) is just or at least\n>not helpful.\n>\n> > Despite M$ marketing propaganda and a post in this thread to the\n> > contrary, you =CAN= often run a production DB server under WinXP and\n> > not pay M$ their usurious licensing fees for W2003 Server or any of\n> > their other products with \"server\" in the title. 
How much RAM and\n>\n>you are on a roll here. You must not be aware of 10 connection limit\n>for win2k pro and winxp pro.\n>\n>http://winhlp.com/WxConnectionLimit.htm\nI'm excruciatingly aware of the 10 connection limit AND how stupid it \nis. It's one of the reasons, along with what M$ thought they could \nget away with charging to increase it, M$ got thrown out of my server rooms.\n\nAlso, we are talking about a _DB_ server. Not a web server or some \nother function that deals with relatively light load per \nconnection. Just how many open active DB connections do want active \nconcurrently? Not Many. Once all the HW is being utilized to full \ncapacity, DBs get best performance by being asked to do as little as \npossible concurrently beyond that.\n\nLong before you will want more than 10 active open DB connections \nbanging on modest HW you are going to want a queuing system in front \nof the DB in order to smooth behavior. By the time you _really_ \nneed to support lots of open active DB connections, you will in a \nposition to spend more money (and probably will have to on both \nbetter HW and better SW).\n\n\n>There are hackerish ways of getting around this which are illegal.\n>Cheating to get around this by pooling connections via tcp proxy for\n>example is also against EULA (and, in my opinion, unethical).\nI'm sorry you evidently feel your income stream is threatened, but \nthere is no way that is either immoral or illegal for anyone to use \nthe industry standard layered architecture of having a DB connection \nlayer separate from a Queuing system. M$MQ is provided \n_specifically_ for that use.\n\nCertainly \"twiddling the bits\" inside a M$ OS violates the EULA, and \nI'm not advocating anything of the kind.\n\nOTOH, that Draconian EULA is yet _another_ reason to get rid of M$ \nOS's in one's organization. When I buy something, it is _mine_. You \ncan tell me you won't support it if I modify it, but it's the height \nof Hubris to tell me that I'm not allowed to modify SW I paid for and \nown. Tell your managers/employers at M$ that Customer Service and \nRespecting Customers =keeps= customers. The reverse loses them. Period.\n\n\n> > how many CPUs you want in your DB server is the main issue. For a\n> > 1P, <= 4GB RAM vanilla box, WinXp will work just fine.\n>\n>Now, who is guilty of propaganda here?\n\nThere is no propaganda here. The statement is accurate in terms of \nthe information given. The biggest differentiations among M$ \nlicenses is the CPU and RAM limit.\n\n\n>Also, your comments regarding hard disks while correct in the \n>general sense are not helpful. This is clearly not a disk bandwidth problem.\nAs Evidenced By? His IO numbers are p*ss poor for any reasonable \nRAID setup, and 375KBps is bad even for a single HD. He's claiming \nthis is local IO, not network, so that possibility is out. If you \nfeel this is \"clearly not a disk bandwidth problem\", I fail to see \nyour evidence or your alternative hypothesis.\n\n\n> > >What do the MS performance-charts show is happening? Specifically,\n> > >CPU and disk I/O.\n> > His original post said ~3% CPU under W2K and ~70% CPU under WinXP\n>\n>Slow performance in extraction of bytea column strongly suggests tcp/ip.\n>issue. I bet if you blanked out bytea column pg_dump will be fast.\n>\n>Franlin: are you making pg_dump from local or remote box and is this a\n>clean install? Try fresh patched win2k install and see what happens.\nHe claimed this was local, not network. 
It is certainly an \nintriguing possibility that W2K and WinXP handle bytea \ndifferently. I'm not competent to comment on that however.\n\nRon\n\n\n",
"msg_date": "Wed, 30 Nov 2005 20:14:05 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RES: pg_dump slow"
},
{
"msg_contents": "I Maked a new install on machine this night, and the same results, on\nconsole localhost\n\nWindows 2000 Server \nVersion 5.00.2195\n\nPG Version 8.1\n\n\nFranklin\n\n\n\n>Franlin: are you making pg_dump from local or remote box and is this a \n>clean install? Try fresh patched win2k install and see what happens.\nHe claimed this was local, not network. It is certainly an \nintriguing possibility that W2K and WinXP handle bytea \ndifferently. I'm not competent to comment on that however.\n\n",
"msg_date": "Thu, 1 Dec 2005 09:29:35 -0300",
"msg_from": "\"Franklin Haut\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "pg_dump slow"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a simple query that is running inside a plpgsql function.\n\nSELECT INTO _point_id id FROM ot2.point WHERE unit_id = _unit_id AND \ntime > _last_status ORDER BY time LIMIT 1;\n \nBoth _unit_id and _last_status variables in the function. the table has \nan index on unit_id,point\n\nWhen this runs inside a function it is taking about 800ms. When I run \nit stand alone it takes about .8 ms, which is a big difference.\n\nI can find no reason for this. I have checked that time and _last_status \ntime are both timestamps and unit_id and _unit_id are both oids.\n\nThe explain looks perfect\n\n explain select id from point where unit_id = 95656 and time > \n'2005-11-30 23:11:00' order by time limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Limit (cost=9.94..9.95 rows=1 width=12)\n -> Sort (cost=9.94..9.95 rows=2 width=12)\n Sort Key: \"time\"\n -> Index Scan using unit_point on point (cost=0.00..9.93 \nrows=2 width=12)\n Index Cond: ((unit_id = 95656::oid) AND (\"time\" > \n'2005-11-30 23:11:00'::timestamp without time zone))\n(5 rows)\n\nTime: 0.731 ms\n\nA query inside the same function that runs right before this one runs at \nthe expected speed (about 1 ms)\n\nSELECT INTO _last_status time FROM ot2.point WHERE unit_id = _unit_id \nAND flags & 64 = 64 ORDER BY unit_id desc, time DESC LIMIT 1;\n\nIt uses the same table and indexes.\n\nTo time individual queries inside the function I am using:\n\ntt := (timeofday()::timestamp)-startt; RAISE INFO 'Location A %' , tt; \nstartt := timeofday()::timestamp;\n\ntt is an interval and startt is a timestamp. \n\n\nI am out of things to try. Can anyone help?\n\nThanks\nRalph\n\n",
"msg_date": "Thu, 01 Dec 2005 14:17:06 +1300",
"msg_from": "Ralph Mason <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query is 800 times slower when running in function! "
},
{
"msg_contents": "Ralph Mason <[email protected]> writes:\n> I have a simple query that is running inside a plpgsql function.\n\n> SELECT INTO _point_id id FROM ot2.point WHERE unit_id = _unit_id AND \n> time > _last_status ORDER BY time LIMIT 1;\n\nIt would probably help significantly to make that be\n\"ORDER BY unit_id, time\". This'd eliminate the need for the separate\nsort step and encourage the planner to use the index, even when it does\nnot know whether the \"time > x\" condition is selective or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Nov 2005 23:58:00 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query is 800 times slower when running in function! "
}
] |
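A minimal sketch of the rewritten statement inside the function, following the suggestion above and assuming the existing index really is on (unit_id, time):

    SELECT INTO _point_id id
      FROM ot2.point
     WHERE unit_id = _unit_id
       AND time > _last_status
     ORDER BY unit_id, time
     LIMIT 1;

With the ORDER BY matching the leading index columns, the planner can walk the index and stop at the first qualifying row instead of fetching and sorting, regardless of how selective the time > _last_status condition is estimated to be at plan time.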
[
{
"msg_contents": "Question is about the relation between fragmentation of file and VACUUM\nperformance.\n\n<Environment>\nOS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)\n Kernel 2.4.21-37.ELsmp on an i686\n Filesystem Type ext3\n Filesystem features: has_journal filetype needs_recovery sparse_super large_file\nCPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01\nMemory:2.0GB\nHDD:80GB(S-ATA)\n SATA max UDMA/133\nPostgreSQL:7.3.8\n\n<DB Environment>\n1. Approx. there are 3500 tables in the DB\n2. Index is attached to table.\n3. Every two minutes interval approx. 10,000 records are inserted into 1000tables.\n So total 7,200,000 records are inserted in 1000 tables per days\n4. Tables data are saved for 7 days.Older than 7 days data are deleted.\n So maximum total records 50,400,000 can be exist in DB.\n5. VACCUME is executed against DB as below\n Six times a day i.e. every 4 hours the VACCUME ANALYZE is started once.\n And out of the six times once VACCUME FULL ANALYZE is processed.\n\nAt the beginning, volume of data increases linearly because\nthe records are added for seven days. After seven days older than\nseven days data are deleted. So volume of data will not increase\nafter seventh days.\n\nWhen the performance of inserting data was measured in the above-\nmentioned environment, it takes six minutes to write 10000 lines\nafter 4/5 days the measurement had begun. While searching the reason\nof bottleneck by executing iostat command it is understood that DISK I/O\nwas problem for the neck as %iowait was almost 100% at that time.\n\nOn the very first day processing time of VACUUM is not a problem but\nwhen the day progress its process time is increasing.Then I examined the\nfragmentation of database area(pgsql/data/base) by using the following tools.\n\nDisk Allocation Viewer\nhttp://sourceforge.net/projects/davtools/\n\nFragmentation rate is 28% before defrag.\n\nThe processing time of VACUUM became 20 minutes, and also inserting data\ntook short time, when data base area (pgsql/data/base) was copied, deleted,\ncopied again, and the fragmentation was canceled (defrag).\n\nMoreover, After the fragmentation cancelled the processing time for VACCUM\nwas 20 minutes, but after 7 days it took 40 minutes for processing.When again\nchecked the fragmentation rate with the tool it was 11%.Therefore, it is\nunderstood that the fragmentation progresses again.\n\nHowever, In my current environment I can't stop PostgreSQL and cancel\nfragmentation.\n\nCould anyone advise some solutions for this fragmentation problem\nwithout stopping PostgreSQL ? For example, using the followings or anything\nelse..\n\n-Tuning of postgresql.conf\n-PostgreSQL(8.1.0) of latest version and VACUUM\n\nThanks in advance.\nAbe\n",
"msg_date": "Thu, 1 Dec 2005 14:50:56 +0900",
"msg_from": "\"Tatsumi Abe\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "About the relation between fragmentation of file and VACUUM"
},
{
"msg_contents": "Tatsumi Abe wrote:\n> Question is about the relation between fragmentation of file and VACUUM\n> performance.\n> \n> <Environment>\n> OS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)\n> Kernel 2.4.21-37.ELsmp on an i686\n> Filesystem Type ext3\n> Filesystem features: has_journal filetype needs_recovery sparse_super large_file\n> CPU:Intel(R) Xeon(TM) CPU 2.80GHz stepping 01\n> Memory:2.0GB\n> HDD:80GB(S-ATA)\n> SATA max UDMA/133\n> PostgreSQL:7.3.8\n> \n> <DB Environment>\n> 1. Approx. there are 3500 tables in the DB\n\n> When the performance of inserting data was measured in the above-\n> mentioned environment, it takes six minutes to write 10000 lines\n> after 4/5 days the measurement had begun. While searching the reason\n> of bottleneck by executing iostat command it is understood that DISK I/O\n> was problem for the neck as %iowait was almost 100% at that time.\n> \n> On the very first day processing time of VACUUM is not a problem but\n> when the day progress its process time is increasing.Then I examined the\n> fragmentation of database area(pgsql/data/base) by using the following tools.\n> \n> Disk Allocation Viewer\n> http://sourceforge.net/projects/davtools/\n> \n> Fragmentation rate is 28% before defrag.\n\nI'd guess the root of your problem is the number of tables (3500), which\nif each has one index represents at least 7000 files. That means a lot\nof your I/O time will probably be spent moving the disk heads between\nthe different files.\n\nYou say you can't stop the server, so there's no point in thinking about\na quick hardware upgrade to help you. Also a version-upgrade is not\ndo-able for you.\n\nI can only think of two other options:\n1. Change the database schema to reduce the number of tables involved.\nI'm assuming that of the 3500 tables most hold the same data but for\ndifferent clients (or something similar). This might not be practical\neither.\n\n2. Re-order how you access the database. ANALYSE the updated tables\nregularly, but only VACUUM them after deletions. Group your inserts so\nthat all the inserts for table1 go together, then all the inserts for\ntable2 go together and so on. This should help with the fragmentation by\nmaking sure the files get extended in larger chunks.\n\nAre you sure it's not possible to spend 15 mins offline to solve this?\n--\n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 01 Dec 2005 09:59:54 +0000",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the relation between fragmentation of file and"
},
{
"msg_contents": "On Thu, Dec 01, 2005 at 02:50:56PM +0900, Tatsumi Abe wrote:\n>Could anyone advise some solutions for this fragmentation problem\n>without stopping PostgreSQL ?\n\nStop doing VACUUM FULL so often. If your table size is constant anyway\nyou're just wasting time by compacting the table and shrinking it, and\nencouraging fragmentation as each table file grows then shrinks a little\nbit each day.\n\nMike Stone\n",
"msg_date": "Thu, 01 Dec 2005 07:00:28 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the relation between fragmentation of file and"
},
{
"msg_contents": "On Thu, 1 Dec 2005, Richard Huxton wrote:\n\n> Tatsumi Abe wrote:\n>> Question is about the relation between fragmentation of file and VACUUM\n>> performance.\n>>\n>> <Environment>\n>> OS������RedHat Enterprise Linux AS Release 3(Taroon Update 6)\n>> Kernel 2.4.21-37.ELsmp on an i686\n>> Filesystem Type ext3\n>> Filesystem features: has_journal filetype needs_recovery sparse_super large_file\n\ntry different filesystems, ext2/3 do a very poor job when you have lots of \nfiles in a directory (and 7000+ files is a lot). you can also try mounting \nthe filesystem with noatime, nodiratime to reduce the seeks when reading, \nand try mounting it with oldalloc (which changes how the files are \narranged on disk when writing and extending them), I've seen drastic \nspeed differences between ext2 and ext3 based on this option (ext2 \ndefaults to oldalloc, ext3 defaults to orlov, which is faster in many \ncases)\n\n>> CPU������Intel(R) Xeon(TM) CPU 2.80GHz stepping 01\n>> Memory������2.0GB\n>> HDD������80GB������S-ATA������\n>> SATA max UDMA/133\n>> PostgreSQL������7.3.8\n>>\n>> <DB Environment>\n>> 1. Approx. there are 3500 tables in the DB\n>\n>> When the performance of inserting data was measured in the above-\n>> mentioned environment, it takes six minutes to write 10000 lines\n>> after 4/5 days the measurement had begun. While searching the reason\n>> of bottleneck by executing iostat command it is understood that DISK I/O\n>> was problem for the neck as %iowait was almost 100% at that time.\n>>\n>> On the very first day processing time of VACUUM is not a problem but\n>> when the day progress its process time is increasing.Then I examined the\n>> fragmentation of database area(pgsql/data/base) by using the following tools.\n>>\n>> Disk Allocation Viewer\n>> http://sourceforge.net/projects/davtools/\n>>\n>> Fragmentation rate is 28% before defrag.\n>\n> I'd guess the root of your problem is the number of tables (3500), which\n> if each has one index represents at least 7000 files. That means a lot\n> of your I/O time will probably be spent moving the disk heads between\n> the different files.\n\ndepending on the size of the tables it can actually be a lot worse then \nthis (remember Postgres splits the tables into fixed size chunks)\n\nwhen postgres adds data it will eventually spill over into additional \nfiles, when you do a vaccum does it re-write the tables into a smaller \nnumber of files or just rewrite the individual files (makeing each of them \nsmaller, but keeping the same number of files)\n\nspeaking of this, the selection of the size of these chunks is a \ncomprimize between the time needed to seek in an individual file and the \nnumber of files that are created, is there an easy way to tinker with this \n(I am sure the default is not correct for all filesystems, the filesystem \nhandling of large and/or many files differ drasticly)\n\n> You say you can't stop the server, so there's no point in thinking about\n> a quick hardware upgrade to help you. Also a version-upgrade is not\n> do-able for you.\n\nthere's a difference between stopping the server once for an upgrade \n(hardware or software) and having to stop it every few days to defrag \nthings forever after.\n\nDavid Lang\n\n> I can only think of two other options:\n> 1. Change the database schema to reduce the number of tables involved.\n> I'm assuming that of the 3500 tables most hold the same data but for\n> different clients (or something similar). This might not be practical\n> either.\n>\n> 2. 
Re-order how you access the database. ANALYSE the updated tables\n> regularly, but only VACUUM them after deletions. Group your inserts so\n> that all the inserts for table1 go together, then all the inserts for\n> table2 go together and so on. This should help with the fragmentation by\n> making sure the files get extended in larger chunks.\n>\n> Are you sure it's not possible to spend 15 mins offline to solve this?\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>From [email protected] Thu Dec 1 11:42:40 2005\nX-Original-To: [email protected]\nReceived: from localhost (av.hub.org [200.46.204.144])\n\tby postgresql.org (Postfix) with ESMTP id 8D5629DD693\n\tfor <[email protected]>; Thu, 1 Dec 2005 11:42:39 -0400 (AST)\nReceived: from postgresql.org ([200.46.204.71])\n by localhost (av.hub.org [200.46.204.144]) (amavisd-new, port 10024)\n with ESMTP id 47445-02\n for <[email protected]>;\n Thu, 1 Dec 2005 11:42:33 -0400 (AST)\nX-Greylist: domain auto-whitelisted by SQLgrey-\nReceived: from wproxy.gmail.com (wproxy.gmail.com [64.233.184.196])\n\tby postgresql.org (Postfix) with ESMTP id 33AAC9DD684\n\tfor <[email protected]>; Thu, 1 Dec 2005 11:42:35 -0400 (AST)\nReceived: by wproxy.gmail.com with SMTP id i23so251782wra\n for <[email protected]>; Thu, 01 Dec 2005 07:42:41 -0800 (PST)\nDomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;\n s=beta; d=gmail.com;\n h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references;\n b=E4tCkbGCIfoLMxAWvaMP+9C0TYVeKZ+/7VvYQHsgMWOPJ2NOGn0g076GCQQc5Ze4OD+x1V+8JkkLLB+NcPcD+da+77XwoGaSPgZfo5V0rnsu0nL+5PbEpl628bG8Vvhu8jGGoc7qLq4+4NAdxnZ1kd6+9Jm+BVGX5iUBIVZsFCs=\nReceived: by 10.65.38.14 with SMTP id q14mr871367qbj;\n Thu, 01 Dec 2005 07:42:39 -0800 (PST)\nReceived: by 10.65.180.14 with HTTP; Thu, 1 Dec 2005 07:42:39 -0800 (PST)\nMessage-ID: <[email protected]>\nDate: Thu, 1 Dec 2005 10:42:39 -0500\nFrom: Jaime Casanova <[email protected]>\nTo: Michael Riess <[email protected]>\nSubject: Re: 15,000 tables\nCc: [email protected]\nIn-Reply-To: <[email protected]>\nMIME-Version: 1.0\nContent-Type: text/plain; charset=ISO-8859-1\nContent-Transfer-Encoding: quoted-printable\nContent-Disposition: inline\nReferences: <[email protected]>\nX-Virus-Scanned: by amavisd-new at hub.org\nX-Spam-Status: No, score=0 required=5 tests=[none]\nX-Spam-Score: 0\nX-Spam-Level: \nX-Archive-Number: 200512/15\nX-Sequence-Number: 15836\n\nOn 12/1/05, Michael Riess <[email protected]> wrote:\n> Hi,\n>\n> we are currently running a postgres server (upgraded to 8.1) which has\n> one large database with approx. 15,000 tables. Unfortunately performance\n> suffers from that, because the internal tables (especially that which\n> holds the attribute info) get too large.\n>\n> (We NEED that many tables, please don't recommend to reduce them)\n>\n\nHave you ANALYZEd your database? VACUUMing?\n\nBTW, are you using some kind of weird ERP? I have one that treat\ninformix as a fool and don't let me get all of informix potential...\nmaybe the same is in your case...\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n",
"msg_date": "Thu, 1 Dec 2005 04:11:09 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the relation between fragmentation of file and"
},
{
"msg_contents": "On Dec 1, 2005, at 00:50, Tatsumi Abe wrote:\n\n> However, In my current environment I can't stop PostgreSQL and cancel\n> fragmentation.\n>\n> Could anyone advise some solutions for this fragmentation problem\n> without stopping PostgreSQL ?\n\nThis is somewhat of an aside and intended just as a helpful suggestion \nsince I've been in this spot before: if you have this kind of uptime \nrequirement the first project to work on is getting the environment to \nthe point where you can take out at least one database server at a time \nfor maintenance. You're going to be forced to do this sooner or later \n- whether by disk failure, software error (Pg or OS), user error \n(restore from backup) or security issues (must patch fixes).\n\nSo disk fragmentation is a great thing to worry about at some point, \nbut IMHO you've got your neck under the guillotine and worrying about \nyour cuticles.\n\nI've heard the arguments before, usually around budget, and if the \ncompany can't spend any money but needs blood-from-stone performance \ntweaks, somebody isn't doing the math right (I'm assuming this isn't \nrunning on a satellite). Plus, your blood pressure will go down when \nthings are more resilient. I've tried the superhero thing before and \nit's just not worth it.\n\n-Bill\n-----\nBill McGonigle, Owner Work: 603.448.4440\nBFC Computing, LLC Home: 603.448.1668\[email protected] Mobile: 603.252.2606\nhttp://www.bfccomputing.com/ Pager: 603.442.1833\nJabber: [email protected] Text: [email protected]\nBlog: http://blog.bfccomputing.com/\n\n",
"msg_date": "Thu, 1 Dec 2005 07:44:22 -0500",
"msg_from": "Bill McGonigle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: About the relation between fragmentation of file and VACUUM"
}
] |
[
{
"msg_contents": "Hi there,\n\nI need a simple but large table with several million records. I do batch \ninserts with JDBC. After the first million or so records,\nthe inserts degrade to become VERY slow (like 8 minutes vs initially 20 \nsecondes).\n\nThe table has no indices except PK while I do the inserts.\n\nThis is with PostgreSQL 8.0 final for WindowsXP on a Pentium 1.86 GHz, \n1GB Memory. HD is fast IDE.\n\nI already have shared buffers already set to 25000.\n\nI wonder what else I can do. Any ideas?\n\nKindest regards,\n\nWolfgang Gehner\n\n-- \nInfonoia SA\n7 rue de Berne\n1211 Geneva 1\nTel: +41 22 9000 009\nFax: +41 22 9000 018\nhttp://www.infonoia.com\n\n\n",
"msg_date": "Thu, 01 Dec 2005 13:33:21 +0100",
"msg_from": "Wolfgang Gehner <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow insert into very large table"
},
{
"msg_contents": "Wolfgang Gehner wrote:\n> Hi there,\n> \n> I need a simple but large table with several million records. I do batch \n> inserts with JDBC. After the first million or so records,\n> the inserts degrade to become VERY slow (like 8 minutes vs initially 20 \n> secondes).\n> \n> The table has no indices except PK while I do the inserts.\n> \n> This is with PostgreSQL 8.0 final for WindowsXP on a Pentium 1.86 GHz, \n> 1GB Memory. HD is fast IDE.\n> \n> I already have shared buffers already set to 25000.\n> \n> I wonder what else I can do. Any ideas?\n\nRun VACUUM ANALYZE to have statistics reflect the growth of the table. \nThe planner probably still assumes your table to be small, and thus \ntakes wrong plans to check PK indexes or so.\n\nRegards,\nAndreas\n",
"msg_date": "Thu, 01 Dec 2005 14:00:07 +0000",
"msg_from": "Andreas Pflug <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow insert into very large table"
},
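Following up on the VACUUM ANALYZE advice above, a minimal sketch (the table name and columns are invented for illustration): the idea is simply to refresh the planner's statistics part-way through a large batch load rather than only at the end, so the planner stops treating the table as tiny.

    -- hypothetical target table for the JDBC batch load
    CREATE TABLE big_table (
        id      serial PRIMARY KEY,
        payload text
    );

    -- after the first few hundred thousand rows have been committed,
    -- refresh statistics so index-vs-seqscan decisions reflect the real size
    ANALYZE big_table;

    -- periodic maintenance once updates/deletes start happening as well
    VACUUM ANALYZE big_table;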
{
"msg_contents": "Wolfgang Gehner <[email protected]> writes:\n> This is with PostgreSQL 8.0 final for WindowsXP on a Pentium 1.86 GHz, \n> 1GB Memory. HD is fast IDE.\n\nTry something more recent, like 8.0.3 or 8.0.4. IIRC we had some\nperformance issues in 8.0.0 with tables that grew from zero to large\nsize during a single session.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 10:50:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow insert into very large table "
}
] |
[
{
"msg_contents": "Hi,\n\nwe are currently running a postgres server (upgraded to 8.1) which has \none large database with approx. 15,000 tables. Unfortunately performance \nsuffers from that, because the internal tables (especially that which \nholds the attribute info) get too large.\n\n(We NEED that many tables, please don't recommend to reduce them)\n\nLogically these tables could be grouped into 500 databases. My question is:\n\nWould performance be better if I had 500 databases (on one postgres \nserver instance) which each contain 30 tables, or is it better to have \none large database with 15,000 tables? In the old days of postgres 6.5 \nwe tried that, but performance was horrible with many databases ...\n\nBTW: I searched the mailing list, but found nothing on the subject - and \nthere also isn't any information in the documentation about the effects \nof the number of databases, tables or attributes on the performance.\n\nNow, what do you say? Thanks in advance for any comment!\n\nMike\n",
"msg_date": "Thu, 01 Dec 2005 14:42:14 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "15,000 tables"
},
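A quick way to check whether the system catalogs really are what has grown, as the post above suspects — a rough diagnostic only, since relpages and reltuples are estimates refreshed by VACUUM/ANALYZE:

    -- approximate on-disk size (in 8 kB pages) of the catalogs that grow
    -- with the number of tables and columns
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('pg_class', 'pg_attribute', 'pg_attrdef', 'pg_index')
    ORDER BY relpages DESC;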
{
"msg_contents": "On Thu, 1 Dec 2005, Michael Riess wrote:\n\n> Hi,\n>\n> we are currently running a postgres server (upgraded to 8.1) which has one \n> large database with approx. 15,000 tables. Unfortunately performance suffers \n> from that, because the internal tables (especially that which holds the \n> attribute info) get too large.\n\nis it becouse the internal tables get large, or is it a problem with disk \nI/O?\n\nwith 15,000 tables you are talking about a LOT of files to hold these \n(30,000 files with one index each and each database being small enough to \nnot need more then one file to hold it), on linux ext2/3 this many files \nin one directory will slow you down horribly. try different filesystems \n(from my testing and from other posts it looks like XFS is a leading \ncontender), and also play around with the tablespaces feature in 8.1 to \nmove things out of the main data directory into multiple directories. if \nyou do a ls -l on the parent directory you will see that the size of the \ndirectory is large if it's ever had lots of files in it, the only way to \nshrink it is to mv the old directory to a new name, create a new directory \nand move the files from the old directory to the new one.\n\nDavid Lang\n\n",
"msg_date": "Thu, 1 Dec 2005 06:16:17 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
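A sketch of the tablespace suggestion above, with made-up directory and table names; CREATE TABLESPACE and ALTER TABLE ... SET TABLESPACE exist from 8.0 on, and each LOCATION directory must already exist, be empty, and be owned by the postgres user:

    -- spread the cluster across several directories (ideally separate spindles)
    CREATE TABLESPACE app_a LOCATION '/srv/pgdata/app_a';
    CREATE TABLESPACE app_b LOCATION '/srv/pgdata/app_b';

    -- move an existing table and its index out of the default data directory
    ALTER TABLE customer_orders SET TABLESPACE app_a;
    ALTER INDEX customer_orders_pkey SET TABLESPACE app_a;

    -- new tables can be placed directly in a tablespace
    CREATE TABLE news_items (id serial PRIMARY KEY, title text) TABLESPACE app_b;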
{
"msg_contents": "Hi David,\n\n> \n> with 15,000 tables you are talking about a LOT of files to hold these \n> (30,000 files with one index each and each database being small enough \n> to not need more then one file to hold it), on linux ext2/3 this many \n> files in one directory will slow you down horribly. \n\nWe use ReiserFS, and I don't think that this is causing the problem ... \nalthough it would probably help to split the directory up using tablespaces.\n\nBut thanks for the suggestion!\n",
"msg_date": "Thu, 01 Dec 2005 15:51:37 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Hi David,\n\nincidentally: The directory which holds our datbase currently contains \n73883 files ... do I get a prize or something? ;-)\n\nRegards,\n\nMike\n",
"msg_date": "Thu, 01 Dec 2005 15:56:02 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "2005/12/1, Michael Riess <[email protected]>:\n> Hi,\n>\n> we are currently running a postgres server (upgraded to 8.1) which has\n> one large database with approx. 15,000 tables. Unfortunately performance\n> suffers from that, because the internal tables (especially that which\n> holds the attribute info) get too large.\n>\n> (We NEED that many tables, please don't recommend to reduce them)\n\nit's amazing!!!!! 15000 tables!!!! WOW\n\nwhat kind of information you managment in your db?\n\nImagine some querys :(\n\nonly for curiosity!!!!!\n\n\nJhon Carrillo\nDBA / Software Engineer\nCaracas-Venezuela\n\n\n>\n> Logically these tables could be grouped into 500 databases. My question is:\n\n\n>\n> Would performance be better if I had 500 databases (on one postgres\n> server instance) which each contain 30 tables, or is it better to have\n> one large database with 15,000 tables? In the old days of postgres 6.5\n> we tried that, but performance was horrible with many databases ...\n>\n> BTW: I searched the mailing list, but found nothing on the subject - and\n> there also isn't any information in the documentation about the effects\n> of the number of databases, tables or attributes on the performance.\n>\n> Now, what do you say? Thanks in advance for any comment!\n>\n> Mike\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n--\n",
"msg_date": "Thu, 1 Dec 2005 11:45:25 -0400",
"msg_from": "\"Ing. Jhon Carrillo // Caracas,\n\tVenezuela\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 15,000 tables"
},
{
"msg_contents": "Michael Riess <[email protected]> writes:\n> (We NEED that many tables, please don't recommend to reduce them)\n\nNo, you don't. Add an additional key column to fold together different\ntables of the same structure. This will be much more efficient than\nmanaging that key at the filesystem level, which is what you're\neffectively doing now.\n\n(If you really have 15000 distinct rowtypes, I'd like to know what\nyour database design is...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 11:03:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables "
},
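A sketch of the folding Tom Lane describes, using invented names: the per-application tables of identical shape collapse into one table keyed by an application id, and the composite primary key keeps per-application lookups cheap. (As the follow-up further down notes, this only works if the per-application schemas really do stay identical.)

    -- instead of app1_news, app2_news, ... app500_news:
    CREATE TABLE news (
        app_id  integer NOT NULL,
        news_id integer NOT NULL,
        title   text,
        body    text,
        created timestamp DEFAULT now(),
        PRIMARY KEY (app_id, news_id)
    );

    -- a query that used to hit app42_news becomes:
    SELECT title, created
    FROM news
    WHERE app_id = 42
    ORDER BY created DESC;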
{
"msg_contents": "Hi,\n\n\n> On 12/1/05, Michael Riess <[email protected]> wrote:\n>> Hi,\n>>\n>> we are currently running a postgres server (upgraded to 8.1) which has\n>> one large database with approx. 15,000 tables. Unfortunately performance\n>> suffers from that, because the internal tables (especially that which\n>> holds the attribute info) get too large.\n>>\n>> (We NEED that many tables, please don't recommend to reduce them)\n>>\n> \n> Have you ANALYZEd your database? VACUUMing?\n\nOf course ... before 8.1 we routinely did a vacuum full analyze each \nnight. As of 8.1 we use autovacuum.\n\n> \n> BTW, are you using some kind of weird ERP? I have one that treat\n> informix as a fool and don't let me get all of informix potential...\n> maybe the same is in your case...\n\nNo. Our database contains tables for we content management systems. The \nserver hosts approx. 500 cms applications, and each of them has approx. \n30 tables.\n\nThat's why I'm asking if it was better to have 500 databases with 30 \ntables each. In previous Postgres versions this led to even worse \nperformance ...\n\nMike\n",
"msg_date": "Thu, 01 Dec 2005 17:04:06 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "On 12/1/05, Tom Lane <[email protected]> wrote:\n> Michael Riess <[email protected]> writes:\n> > (We NEED that many tables, please don't recommend to reduce them)\n>\n> No, you don't. Add an additional key column to fold together different\n> tables of the same structure. This will be much more efficient than\n> managing that key at the filesystem level, which is what you're\n> effectively doing now.\n>\n> (If you really have 15000 distinct rowtypes, I'd like to know what\n> your database design is...)\n>\n> regards, tom lane\n>\n\nMaybe he is using some kind of weird ERP... take the case of BaaN\n(sadly i use it in my work): BaaN creates about 1200 tables per\ncompany and i have no control of it... we have about 12000 tables\nright now...\n\n--\nregards,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n",
"msg_date": "Thu, 1 Dec 2005 11:15:00 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Hi Tom,\n\n> Michael Riess <[email protected]> writes:\n>> (We NEED that many tables, please don't recommend to reduce them)\n> \n> No, you don't. Add an additional key column to fold together different\n> tables of the same structure. This will be much more efficient than\n> managing that key at the filesystem level, which is what you're\n> effectively doing now.\n\nBeen there, done that. (see below)\n\n> \n> (If you really have 15000 distinct rowtypes, I'd like to know what\n> your database design is...)\n\nSorry, I should have included that info in the initial post. You're \nright in that most of these tables have a similar structure. But they \nare independent and can be customized by the users.\n\nThink of it this way: On the server there are 500 applications, and each \nhas 30 tables. One of these might be a table which contains the products \nof a webshop, another contains news items which are displayed on the \nwebsite etc. etc..\n\nThe problem is that the customers can freely change the tables ... add \ncolumns, remove columns, change column types etc.. So I cannot use \nsystem wide tables with a key column.\n\n\nMike\n",
"msg_date": "Thu, 01 Dec 2005 17:28:44 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "On 01.12.2005, at 17:04 Uhr, Michael Riess wrote:\n\n> No. Our database contains tables for we content management systems. \n> The server hosts approx. 500 cms applications, and each of them has \n> approx. 30 tables.\n\nJust for my curiosity: Are the \"about 30 tables\" with similar schemas \nor do they differ much?\n\nWe have a small CMS system running here, where I have all information \nfor all clients in tables with relationships to a client table.\n\nBut I assume you are running a pre-build CMS which is not designed \nfor \"multi-client ability\", right?\n\ncug\n\n\n-- \nPharmaLine, Essen, GERMANY\nSoftware and Database Development",
"msg_date": "Thu, 1 Dec 2005 17:30:48 +0100",
"msg_from": "Guido Neitzer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "hi michael\n\n>> Have you ANALYZEd your database? VACUUMing?\n>\n> Of course ... before 8.1 we routinely did a vacuum full analyze each \n> night. As of 8.1 we use autovacuum.\n\n\nwhat i noticed is autovacuum not working properly as it should. i had 8.1 \nrunning with autovacuum for just 2 days or so and got warnings in pgadmin \nthat my tables would need an vacuum. i've posted this behaviour some weeks \nago to the novice list requesting more infos on how to \"tweak\" autovacuum \nproperly - unfortunately without any respones. thats when i switched the \nnightly analyze job back on - everything runs smooth since then.\n\nmaybe it helps in your case as well?\n\ncheers,\nthomas\n\n\n\n\n",
"msg_date": "Thu, 1 Dec 2005 18:07:37 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Michael Riess <[email protected]> writes:\n>> On 12/1/05, Michael Riess <[email protected]> wrote:\n>>> we are currently running a postgres server (upgraded to 8.1) which\n>>> has one large database with approx. 15,000 tables. Unfortunately\n>>> performance suffers from that, because the internal tables\n>>> (especially that which holds the attribute info) get too large.\n>>>\n>>> (We NEED that many tables, please don't recommend to reduce them)\n>>>\n>> Have you ANALYZEd your database? VACUUMing?\n>\n> Of course ... before 8.1 we routinely did a vacuum full analyze each\n> night. As of 8.1 we use autovacuum.\n\nVACUUM FULL was probably always overkill, unless \"always\" includes\nversions prior to 7.3...\n\n>> BTW, are you using some kind of weird ERP? I have one that treat\n>> informix as a fool and don't let me get all of informix potential...\n>> maybe the same is in your case...\n>\n> No. Our database contains tables for we content management\n> systems. The server hosts approx. 500 cms applications, and each of\n> them has approx. 30 tables.\n>\n> That's why I'm asking if it was better to have 500 databases with 30\n> tables each. In previous Postgres versions this led to even worse\n> performance ...\n\nThis has the feeling of fitting with Alan Perlis' dictum below...\n\nSupposing you have 500 databases, each with 30 tables, each with 4\nindices, then you'll find you have, on disk...\n\n# of files = 500 x 30 x 5 = 75000 files\n\nIf each is regularly being accessed, that's bits of 75000 files\ngetting shoved through OS and shared memory caches. Oh, yes, and\nyou'll also have regular participation of some of the pg_catalog\nfiles, with ~500 instances of THOSE, multiplied some number of ways...\n\nAn application with 15000 frequently accessed tables doesn't strike me\nas being something that can possibly turn out well. You have, in\neffect, more tables than (arguably) bloated ERP systems like SAP R/3;\nit only has a few thousand tables, and since many are module-specific,\nand nobody ever implements *all* the modules, it is likely only a few\nhundred that are \"hot spots.\" No 15000 there...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://www3.sympatico.ca/cbbrowne/languages.html\nIt is better to have 100 functions operate on one data structure than\n10 functions on 10 data structures. -- Alan J. Perlis\n",
"msg_date": "Thu, 01 Dec 2005 12:09:41 -0500",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Hi Michael,\n\nI'm a fan of ReiserFS, and I can be wrong, but I believe using a \njournaling filesystem for the PgSQL database could be slowing things \ndown.\n\nGavin\n\nOn Dec 1, 2005, at 6:51 AM, Michael Riess wrote:\n\n> Hi David,\n>\n>> with 15,000 tables you are talking about a LOT of files to hold \n>> these (30,000 files with one index each and each database being \n>> small enough to not need more then one file to hold it), on linux \n>> ext2/3 this many files in one directory will slow you down horribly.\n>\n> We use ReiserFS, and I don't think that this is causing the \n> problem ... although it would probably help to split the directory \n> up using tablespaces.\n>\n> But thanks for the suggestion!\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\nGavin M. Roy\n800 Pound Gorilla\[email protected]\n\n\n",
"msg_date": "Thu, 1 Dec 2005 10:07:44 -0800",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Am Donnerstag, den 01.12.2005, 10:07 -0800 schrieb Gavin M. Roy:\n> Hi Michael,\n> \n> I'm a fan of ReiserFS, and I can be wrong, but I believe using a \n> journaling filesystem for the PgSQL database could be slowing things \n> down.\n\nHave a 200G+ database, someone pulling the power plug\nor a regular reboot after a year or so.\n\nWait for the fsck to finish.\n\nNow think again :-)\n\n++Tino\n\n",
"msg_date": "Thu, 01 Dec 2005 19:40:07 +0100",
"msg_from": "Tino Wildenhain <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Agreed. Also the odds of fs corruption or data loss are higher in a \nnon journaling fs. Best practice seems to be to use a journaling fs \nbut to put the fs log on dedicated spindles separate from the actual \nfs or pg_xlog.\n\nRon\n\nAt 01:40 PM 12/1/2005, Tino Wildenhain wrote:\n>Am Donnerstag, den 01.12.2005, 10:07 -0800 schrieb Gavin M. Roy:\n> > Hi Michael,\n> >\n> > I'm a fan of ReiserFS, and I can be wrong, but I believe using a\n> > journaling filesystem for the PgSQL database could be slowing things\n> > down.\n>\n>Have a 200G+ database, someone pulling the power plug\n>or a regular reboot after a year or so.\n>\n>Wait for the fsck to finish.\n>\n>Now think again :-)\n>\n>++Tino\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\n\n",
"msg_date": "Thu, 01 Dec 2005 13:48:11 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Here's a fairly recent post on reiserfs (and performance):\n\nhttp://archives.postgresql.org/pgsql-novice/2005-09/msg00007.php\n\nI'm still digging on performance of ext2 vrs journaled filesystems, \nas I know I've seen it before.\n\nGavin\n\n\nMy point was not in doing an fsck, but rather in\nOn Dec 1, 2005, at 10:40 AM, Tino Wildenhain wrote:\n\n> Am Donnerstag, den 01.12.2005, 10:07 -0800 schrieb Gavin M. Roy:\n>> Hi Michael,\n>>\n>> I'm a fan of ReiserFS, and I can be wrong, but I believe using a\n>> journaling filesystem for the PgSQL database could be slowing things\n>> down.\n>\n> Have a 200G+ database, someone pulling the power plug\n> or a regular reboot after a year or so.\n>\n> Wait for the fsck to finish.\n>\n> Now think again :-)\n>\n> ++Tino\n>\n\nGavin M. Roy\n800 Pound Gorilla\[email protected]\n\n\n",
"msg_date": "Thu, 1 Dec 2005 10:49:43 -0800",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Ron <[email protected]> writes:\n> Agreed. Also the odds of fs corruption or data loss are higher in a \n> non journaling fs. Best practice seems to be to use a journaling fs \n> but to put the fs log on dedicated spindles separate from the actual \n> fs or pg_xlog.\n\nI think we've determined that best practice is to journal metadata only\n(not file contents) on PG data filesystems. PG does expect the filesystem\nto remember where the files are, so you need metadata protection, but\njournalling file content updates is redundant with PG's own WAL logging.\n\nOn a filesystem dedicated to WAL, you probably do not need any\nfilesystem journalling at all --- we manage the WAL files in a way\nthat avoids changing metadata for a WAL file that's in active use.\nA conservative approach would be to journal metadata here too, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 13:57:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables "
},
{
"msg_contents": "Heh looks like I left a trailing thought...\n\nMy post wasn't saying don't use journaled filesystems, but rather \nthat it can be slower than non-journaled filesystems, and I don't \nconsider recovery time from a crash to be a factor in determining the \nspeed of reads and writes on the data. That being said, I think \nTom's reply on what to journal and not to journal should really put \nan end to this side of the conversation.\n\nGavin\n\nOn Dec 1, 2005, at 10:49 AM, Gavin M. Roy wrote:\n\n> Here's a fairly recent post on reiserfs (and performance):\n>\n> http://archives.postgresql.org/pgsql-novice/2005-09/msg00007.php\n>\n> I'm still digging on performance of ext2 vrs journaled filesystems, \n> as I know I've seen it before.\n>\n> Gavin\n>\n>\n> My point was not in doing an fsck, but rather in\n> On Dec 1, 2005, at 10:40 AM, Tino Wildenhain wrote:\n>\n>> Am Donnerstag, den 01.12.2005, 10:07 -0800 schrieb Gavin M. Roy:\n>>> Hi Michael,\n>>>\n>>> I'm a fan of ReiserFS, and I can be wrong, but I believe using a\n>>> journaling filesystem for the PgSQL database could be slowing things\n>>> down.\n>>\n>> Have a 200G+ database, someone pulling the power plug\n>> or a regular reboot after a year or so.\n>>\n>> Wait for the fsck to finish.\n>>\n>> Now think again :-)\n>>\n>> ++Tino\n>>\n>\n> Gavin M. Roy\n> 800 Pound Gorilla\n> [email protected]\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\nGavin M. Roy\n800 Pound Gorilla\[email protected]\n\n\n",
"msg_date": "Thu, 1 Dec 2005 11:08:59 -0800",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "> Michael Riess <[email protected]> writes:\n>>> On 12/1/05, Michael Riess <[email protected]> wrote:\n>>>> we are currently running a postgres server (upgraded to 8.1) which\n>>>> has one large database with approx. 15,000 tables. Unfortunately\n>>>> performance suffers from that, because the internal tables\n>>>> (especially that which holds the attribute info) get too large.\n>>>>\n>>>> (We NEED that many tables, please don't recommend to reduce them)\n>>>>\n>>> Have you ANALYZEd your database? VACUUMing?\n>> Of course ... before 8.1 we routinely did a vacuum full analyze each\n>> night. As of 8.1 we use autovacuum.\n> \n> VACUUM FULL was probably always overkill, unless \"always\" includes\n> versions prior to 7.3...\n\nWell, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, \nbut the database got considerably slower near the end of the week.\n\n> \n>>> BTW, are you using some kind of weird ERP? I have one that treat\n>>> informix as a fool and don't let me get all of informix potential...\n>>> maybe the same is in your case...\n>> No. Our database contains tables for we content management\n>> systems. The server hosts approx. 500 cms applications, and each of\n>> them has approx. 30 tables.\n>>\n>> That's why I'm asking if it was better to have 500 databases with 30\n>> tables each. In previous Postgres versions this led to even worse\n>> performance ...\n> \n> This has the feeling of fitting with Alan Perlis' dictum below...\n> \n> Supposing you have 500 databases, each with 30 tables, each with 4\n> indices, then you'll find you have, on disk...\n> \n> # of files = 500 x 30 x 5 = 75000 files\n> \n> If each is regularly being accessed, that's bits of 75000 files\n> getting shoved through OS and shared memory caches. Oh, yes, and\n> you'll also have regular participation of some of the pg_catalog\n> files, with ~500 instances of THOSE, multiplied some number of ways...\n> \n\nNot all of the tables are frequently accessed. In fact I would estimate \nthat only 20% are actually used ... but there is no way to determine if \nor when a table will be used. I thought about a way to \"swap out\" tables \nwhich have not been used for a couple of days ... maybe I'll do just \nthat. But it would be cumbersome ... I had hoped that an unused table \ndoes not hurt performance. But of course the internal tables which \ncontain the meta info get too large.\n\n> An application with 15000 frequently accessed tables doesn't strike me\n> as being something that can possibly turn out well. You have, in\n> effect, more tables than (arguably) bloated ERP systems like SAP R/3;\n> it only has a few thousand tables, and since many are module-specific,\n> and nobody ever implements *all* the modules, it is likely only a few\n> hundred that are \"hot spots.\" No 15000 there..\n\nI think that my systems confirms with the 80/20 rule ...\n.\n",
"msg_date": "Thu, 01 Dec 2005 20:34:43 +0100",
"msg_from": "Michael Riess <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "On 12/1/05, Michael Riess <[email protected]> wrote:\n> > Michael Riess <[email protected]> writes:\n> >>> On 12/1/05, Michael Riess <[email protected]> wrote:\n> >>>> we are currently running a postgres server (upgraded to 8.1) which\n> >>>> has one large database with approx. 15,000 tables. Unfortunately\n> >>>> performance suffers from that, because the internal tables\n> >>>> (especially that which holds the attribute info) get too large.\n> >>>>\n> >>>> (We NEED that many tables, please don't recommend to reduce them)\n> >>>>\n> >>> Have you ANALYZEd your database? VACUUMing?\n> >> Of course ... before 8.1 we routinely did a vacuum full analyze each\n> >> night. As of 8.1 we use autovacuum.\n> >\n> > VACUUM FULL was probably always overkill, unless \"always\" includes\n> > versions prior to 7.3...\n>\n> Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL,\n> but the database got considerably slower near the end of the week.\n>\n> >\n> >>> BTW, are you using some kind of weird ERP? I have one that treat\n> >>> informix as a fool and don't let me get all of informix potential...\n> >>> maybe the same is in your case...\n> >> No. Our database contains tables for we content management\n> >> systems. The server hosts approx. 500 cms applications, and each of\n> >> them has approx. 30 tables.\n> >>\n> >> That's why I'm asking if it was better to have 500 databases with 30\n> >> tables each. In previous Postgres versions this led to even worse\n> >> performance ...\n> >\n> > This has the feeling of fitting with Alan Perlis' dictum below...\n> >\n> > Supposing you have 500 databases, each with 30 tables, each with 4\n> > indices, then you'll find you have, on disk...\n> >\n> > # of files = 500 x 30 x 5 = 75000 files\n> >\n> > If each is regularly being accessed, that's bits of 75000 files\n> > getting shoved through OS and shared memory caches. Oh, yes, and\n> > you'll also have regular participation of some of the pg_catalog\n> > files, with ~500 instances of THOSE, multiplied some number of ways...\n> >\n>\n> Not all of the tables are frequently accessed. In fact I would estimate\n> that only 20% are actually used ... but there is no way to determine if\n> or when a table will be used. I thought about a way to \"swap out\" tables\n> which have not been used for a couple of days ... maybe I'll do just\n> that. But it would be cumbersome ... I had hoped that an unused table\n> does not hurt performance. But of course the internal tables which\n> contain the meta info get too large.\n>\n> > An application with 15000 frequently accessed tables doesn't strike me\n> > as being something that can possibly turn out well. You have, in\n> > effect, more tables than (arguably) bloated ERP systems like SAP R/3;\n> > it only has a few thousand tables, and since many are module-specific,\n> > and nobody ever implements *all* the modules, it is likely only a few\n> > hundred that are \"hot spots.\" No 15000 there..\n>\n> I think that my systems confirms with the 80/20 rule ...\n> .\n>\n\nHow many disks do you have i imagine you can put tables forming one\nlogical database in a tablespace and have tables spread on various\ndisks...\n\n\n--\nAtentamente,\nJaime Casanova\n(DBA: DataBase Aniquilator ;)\n",
"msg_date": "Thu, 1 Dec 2005 14:40:59 -0500",
"msg_from": "Jaime Casanova <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "On Thu, 2005-12-01 at 13:34, Michael Riess wrote:\n> > Michael Riess <[email protected]> writes:\n> >>> On 12/1/05, Michael Riess <[email protected]> wrote:\n> >>>> we are currently running a postgres server (upgraded to 8.1) which\n> >>>> has one large database with approx. 15,000 tables. Unfortunately\n> >>>> performance suffers from that, because the internal tables\n> >>>> (especially that which holds the attribute info) get too large.\n> >>>>\n> >>>> (We NEED that many tables, please don't recommend to reduce them)\n> >>>>\n> >>> Have you ANALYZEd your database? VACUUMing?\n> >> Of course ... before 8.1 we routinely did a vacuum full analyze each\n> >> night. As of 8.1 we use autovacuum.\n> > \n> > VACUUM FULL was probably always overkill, unless \"always\" includes\n> > versions prior to 7.3...\n> \n> Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, \n> but the database got considerably slower near the end of the week.\n\nGenerally, this means either your vacuums are too infrequent, or your\nfsm settings are too small.\n\nNote that vacuum and analyze aren't \"married\" any more, like in the old\ndays. You can issue either separately, depending on your usage\nconditions.\n\nNote that with the newest versions of PostgreSQL you can change the\nsettings for vacuum priority so that while it takes longer to vacuum, it\ndoesn't stomp on the other processes toes so much anymore, so more\nfrequent plain vacuums may be the answer.\n",
"msg_date": "Thu, 01 Dec 2005 15:15:59 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
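A sketch of the cost-based vacuum delay mentioned above (available since 8.0); vacuum_cost_delay can be set per session before a manual VACUUM so it yields I/O to foreground queries, at the price of the vacuum itself taking longer. The table name is hypothetical:

    -- throttle this session's vacuum so it competes less with normal queries
    SET vacuum_cost_delay = 10;   -- sleep 10 ms whenever the cost budget is used up
    SET vacuum_cost_limit = 200;  -- cost budget between sleeps (this is the default)

    -- a plain (non-FULL) vacuum can then be run much more frequently
    VACUUM ANALYZE busy_table;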
{
"msg_contents": "[email protected] wrote:\n\n> what i noticed is autovacuum not working properly as it should. i had 8.1 \n> running with autovacuum for just 2 days or so and got warnings in pgadmin \n> that my tables would need an vacuum.\n\nHum, so how is autovacuum's documentation lacking? Please read it\ncritically and let us know so we can improve it.\n\nhttp://www.postgresql.org/docs/8.1/static/maintenance.html#AUTOVACUUM\n\nMaybe what you need is to lower the \"vacuum base threshold\" for tables\nthat are small.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 1 Dec 2005 18:16:04 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
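For reference, the global knobs in 8.1 are autovacuum_vacuum_threshold and autovacuum_vacuum_scale_factor in postgresql.conf; per-table overrides in 8.1 are made by inserting a row into the pg_autovacuum system catalog. The sketch below is from memory of the 8.1 catalog layout and the table name is invented — check the pg_autovacuum column list in your own installation before relying on it:

    -- make autovacuum consider a small, busy table sooner:
    -- vacuum once roughly 50 + 0.1 * reltuples rows are dead
    INSERT INTO pg_autovacuum
        (vacrelid, enabled,
         vac_base_thresh, vac_scale_factor,
         anl_base_thresh, anl_scale_factor,
         vac_cost_delay, vac_cost_limit)
    VALUES
        ('public.hot_small_table'::regclass, true,
         50, 0.1,
         25, 0.05,
         -1, -1);   -- -1 here means "fall back to the global cost settings"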
{
"msg_contents": "Agreed, and I apologize for the imprecision of my post below.\n\nI should have written:\n\"Best practice seems to be to use a journaling fs and log metadata \nonly and put it on separate dedicated spindles.\"\n\nI've seen enough HD failures that I tend to be paranoid and log the \nmetadata of fs dedicated to WAL as well, but that may very well be overkill.\n\nRon\n\nAt 01:57 PM 12/1/2005, Tom Lane wrote:\n>Ron <[email protected]> writes:\n> > Agreed. Also the odds of fs corruption or data loss are higher in a\n> > non journaling fs. Best practice seems to be to use a journaling fs\n> > but to put the fs log on dedicated spindles separate from the actual\n> > fs or pg_xlog.\n>\n>I think we've determined that best practice is to journal metadata only\n>(not file contents) on PG data filesystems. PG does expect the filesystem\n>to remember where the files are, so you need metadata protection, but\n>journalling file content updates is redundant with PG's own WAL logging.\n>\n>On a filesystem dedicated to WAL, you probably do not need any\n>filesystem journalling at all --- we manage the WAL files in a way\n>that avoids changing metadata for a WAL file that's in active use.\n>A conservative approach would be to journal metadata here too, though.\n>\n> regards, tom lane\n\n\n\n",
"msg_date": "Fri, 02 Dec 2005 03:15:00 -0500",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables "
},
{
"msg_contents": "On Fri, Dec 02, 2005 at 03:15:00AM -0500, Ron wrote:\n>I've seen enough HD failures that I tend to be paranoid and log the \n>metadata of fs dedicated to WAL as well, but that may very well be overkill.\n\nEspecially since it wouldn't gain anything. Journalling doesn't give you\nany advantage whatsoever in the face of a HD failure.\n\nMike Stone\n",
"msg_date": "Fri, 02 Dec 2005 08:02:16 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "\nOn 1 Dec 2005, at 16:03, Tom Lane wrote:\n\n> Michael Riess <[email protected]> writes:\n>> (We NEED that many tables, please don't recommend to reduce them)\n>\n> No, you don't. Add an additional key column to fold together \n> different\n> tables of the same structure. This will be much more efficient than\n> managing that key at the filesystem level, which is what you're\n> effectively doing now.\n>\n> (If you really have 15000 distinct rowtypes, I'd like to know what\n> your database design is...)\n>\n\nWon't you end up with awful seek times if you just want data which \npreviously been stored in a single table? E.g. whilst before you \nwanted 1000 contiguous rows from the table, now you want 1000 rows \nwhich now have 1000 rows you don't care about in between each one you \ndo want.\n",
"msg_date": "Fri, 2 Dec 2005 14:16:24 +0000",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables "
},
{
"msg_contents": "\nOn 2 Dec 2005, at 14:16, Alex Stapleton wrote:\n\n>\n> On 1 Dec 2005, at 16:03, Tom Lane wrote:\n>\n>> Michael Riess <[email protected]> writes:\n>>> (We NEED that many tables, please don't recommend to reduce them)\n>>\n>> No, you don't. Add an additional key column to fold together \n>> different\n>> tables of the same structure. This will be much more efficient than\n>> managing that key at the filesystem level, which is what you're\n>> effectively doing now.\n>>\n>> (If you really have 15000 distinct rowtypes, I'd like to know what\n>> your database design is...)\n>>\n>\n> Won't you end up with awful seek times if you just want data which \n> previously been stored in a single table? E.g. whilst before you \n> wanted 1000 contiguous rows from the table, now you want 1000 rows \n> which now have 1000 rows you don't care about in between each one \n> you do want.\n>\n\nI must of had a total and utter failure of intellect for a moment \nthere. Please ignore that :P\n",
"msg_date": "Fri, 2 Dec 2005 14:20:41 +0000",
"msg_from": "Alex Stapleton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables "
},
{
"msg_contents": "On 12/1/2005 2:34 PM, Michael Riess wrote:\n>> VACUUM FULL was probably always overkill, unless \"always\" includes\n>> versions prior to 7.3...\n> \n> Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, \n> but the database got considerably slower near the end of the week.\n\nThis indicates that you have FSM settings that are inadequate for that \nmany tables and eventually the overall size of your database. Try \nsetting those to\n\n max_fsm_relations = 80000\n max_fsm_pages = (select sum(relpages) / 2 from pg_class)\n\nAnother thing you might be suffering from (depending on the rest of your \narchitecture) is file descriptor limits. Especially if you use some sort \nof connection pooling or persistent connections like PHP, you will have \nall the backends serving multiple of your logical applications (sets of \n30 tables). If on average one backend is called for 50 different apps, \nthen we are talking 50*30*4=6000 files accessed by that backend. 80/20 \nrule leaves 1200 files in access per backend, thus 100 active backends \nlead to 120,000 open (virtual) file descriptors. Now add to that any \nfiles that a backend would have to open in order to evict an arbitrary \ndirty block.\n\nWith a large shared buffer pool and little more aggressive background \nwriter settings, you can avoid mostly that regular backends would have \nto evict dirty blocks.\n\nIf the kernel settings allow Postgres to keep that many file descriptors \nopen, you avoid directory lookups.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n",
"msg_date": "Fri, 02 Dec 2005 10:16:28 -0500",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
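A sketch of how to turn Jan's sizing rule into numbers; relpages is only as fresh as the last VACUUM/ANALYZE, and the results go into postgresql.conf (max_fsm_relations, max_fsm_pages), which requires a server restart to take effect:

    -- rough inputs for the free space map settings suggested above
    SELECT count(*)          AS relations,            -- compare with max_fsm_relations
           sum(relpages) / 2 AS suggested_fsm_pages   -- compare with max_fsm_pages
    FROM pg_class;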
{
"msg_contents": "On Thu, Dec 01, 2005 at 08:34:43PM +0100, Michael Riess wrote:\n> Well, we tried switching to daily VACUUM ANALYZE and weekly VACUUM FULL, \n> but the database got considerably slower near the end of the week.\n\nIf you have your FSM configured correctly and you are vacuuming\ntables often enough for your turnover, than in regular operation you\nshould _never_ need VACUUM FULL. So it sounds like your first\nproblem is that. With the 15000 tables you were talking about,\nthough, that doesn't surprise me.\n\nAre you sure more back ends wouldn't be a better answer, if you're\nreally wedded to this design? (I have a feeling that something along\nthe lines of what Tom Lane said would be a better answer -- I think\nyou need to be more clever, because I don't think this will ever work\nwell, on any system.)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n",
"msg_date": "Fri, 2 Dec 2005 16:08:56 -0500",
"msg_from": "Andrew Sullivan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
},
{
"msg_contents": "Michael Riess writes:\n\n> Sorry, I should have included that info in the initial post. You're \n> right in that most of these tables have a similar structure. But they \n> are independent and can be customized by the users.\n> \n\nHow about creating 50 databases and give each it's own tablespace?\nIt's not only whether PostgreSQL can be optimized, but also how well your \nfilesystem is handling the directory with large number of files. by \nsplitting the directories you will likely help the OS and will be able to \nperhaps better determine if the OS or the DB is at fault for the slowness.\n",
"msg_date": "Fri, 02 Dec 2005 18:00:53 -0500",
"msg_from": "Francisco Reyes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 15,000 tables"
}
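A sketch of the split suggested above, with invented names and paths; CREATE DATABASE accepts a TABLESPACE clause from 8.0 on, so each group of applications gets its own database and its own on-disk directory:

    -- one directory (ideally one spindle) per group of applications
    CREATE TABLESPACE group01 LOCATION '/srv/pgdata/group01';
    CREATE DATABASE cms_group01 TABLESPACE = group01;

    CREATE TABLESPACE group02 LOCATION '/srv/pgdata/group02';
    CREATE DATABASE cms_group02 TABLESPACE = group02;
    -- ... and so on for the remaining groups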
] |
[
{
"msg_contents": "this subject has come up a couple times just today (and it looks like one \nthat keeps popping up).\n\nunder linux ext2/3 have two known weaknesses (or rather one weakness with \ntwo manifestations). searching through large objects on disk is slow, this \napplies to both directories (creating, opening, deleting files if there \nare (or have been) lots of files in a directory), and files (seeking to \nthe right place in a file).\n\nthe rule of thumb that I have used for years is that if files get over a \nfew tens of megs or directories get over a couple thousand entries you \nwill start slowing down.\n\ncommon places you can see this (outside of postgres)\n\n1. directories, mail or news storage.\n if you let your /var/spool/mqueue directory get large (for example a \nserver that can't send mail for a while or mail gets misconfigured on). \nthere may only be a few files in there after it gets fixed, but if the \ndirectory was once large just doing a ls on the directory will be slow.\n\n news servers that store each message as a seperate file suffer from this \nas well, they work around it by useing multiple layers of nested \ndirectories so that no directory has too many files in it (navigating the \nlayers of directories costs as well, it's all about the tradeoffs). Mail \nservers that use maildir (and Cyrus which uses a similar scheme) have the \nsame problem.\n\n to fix this you have to create a new directory and move the files to \nthat directory (and then rename the new to the old)\n\n ext3 has an option to make searching directories faster (htree), but \nenabling it kills performance when you create files. And this doesn't help \nwith large files.\n\n2. files, mbox formatted mail files and log files\n as these files get large, the process of appending to them takes more \ntime. syslog makes this very easy to test. On a box that does syncronous \nsyslog writing (default for most systems useing standard syslog, on linux \nmake sure there is not a - in front of the logfile name) time how long it \ntakes to write a bunch of syslog messages, then make the log file large \nand time it again.\n\na few weeks ago I did a series of tests to compare different filesystems. \nthe test was for a different purpose so the particulars are not what I \nwoud do for testing aimed at postgres, but I think the data is relavent) \nand I saw major differences between different filesystems, I'll see aobut \nre-running the tests to get a complete set of benchmarks in the next few \ndays. My tests had their times vary from 4 min to 80 min depending on the \nfilesystem in use (ext3 with hash_dir posted the worst case). what testing \nhave other people done with different filesystems?\n\nDavid Lang\n",
"msg_date": "Thu, 1 Dec 2005 07:09:02 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "filesystem performance with lots of files"
},
{
"msg_contents": "\n\"David Lang\" <[email protected]> wrote\n>\n> a few weeks ago I did a series of tests to compare different filesystems. \n> the test was for a different purpose so the particulars are not what I \n> woud do for testing aimed at postgres, but I think the data is relavent) \n> and I saw major differences between different filesystems, I'll see aobut \n> re-running the tests to get a complete set of benchmarks in the next few \n> days. My tests had their times vary from 4 min to 80 min depending on the \n> filesystem in use (ext3 with hash_dir posted the worst case). what testing \n> have other people done with different filesystems?\n>\n\nThat's good ... what benchmarks did you used?\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Thu, 1 Dec 2005 13:03:20 -0500",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem performance with lots of files"
},
{
"msg_contents": "On Thu, 1 Dec 2005, Qingqing Zhou wrote:\n\n> \"David Lang\" <[email protected]> wrote\n>>\n>> a few weeks ago I did a series of tests to compare different filesystems.\n>> the test was for a different purpose so the particulars are not what I\n>> woud do for testing aimed at postgres, but I think the data is relavent)\n>> and I saw major differences between different filesystems, I'll see aobut\n>> re-running the tests to get a complete set of benchmarks in the next few\n>> days. My tests had their times vary from 4 min to 80 min depending on the\n>> filesystem in use (ext3 with hash_dir posted the worst case). what testing\n>> have other people done with different filesystems?\n>>\n>\n> That's good ... what benchmarks did you used?\n\nI was doing testing in the context of a requirement to sync over a million \nsmall files from one machine to another (rsync would take >10 hours to do \nthis over a 100Mb network so I started with the question 'how long would \nit take to do a tar-ftp-untar cycle with no smarts) so I created 1m x 1K \nfiles in a three deep directory tree (10d/10d/10d/1000files) and was doing \nsimple 'time to copy tree', 'time to create tar', 'time to extract from \ntar', 'time to copy tarfile (1.6G file). I flushed the memory between each \ntest with cat largefile >/dev/null (I know now that I should have \nunmounted and remounted between each test), source and destination on \ndifferent IDE controllers\n\nI don't have all the numbers readily available (and I didn't do all the \ntests on every filesystem), but I found that even with only 1000 \nfiles/directory ext3 had some problems, and if you enabled dir_hash some \nfunctions would speed up, but writing lots of files would just collapse \n(that was the 80 min run)\n\nI'll have to script it and re-do the tests (and when I do this I'll also \nset it to do a test with far fewer, far larger files as well)\n\nDavid Lang\n",
"msg_date": "Thu, 1 Dec 2005 23:07:56 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: filesystem performance with lots of files"
},
{
"msg_contents": "\n\nOn Fri, 2 Dec 2005, David Lang wrote:\n>\n> I don't have all the numbers readily available (and I didn't do all the\n> tests on every filesystem), but I found that even with only 1000\n> files/directory ext3 had some problems, and if you enabled dir_hash some\n> functions would speed up, but writing lots of files would just collapse\n> (that was the 80 min run)\n>\n\nInteresting. I would suggest test small number but bigger file would be\nbetter if the target is for database performance comparison. By small\nnumber, I mean 10^2 - 10^3; By bigger, I mean file size from 8k to 1G\n(PostgreSQL data file is at most this size under normal installation).\n\nLet's take TPCC as an example, if we get a TPCC database of 500 files,\neach one is at most 1G (PostgreSQL has this feature/limit in ordinary\ninstallation), then this will give us a 500G database, which is big enough\nfor your current configuration.\n\nRegards,\nQingqing\n",
"msg_date": "Fri, 2 Dec 2005 02:49:53 -0500 (EST)",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem performance with lots of files"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Qingqing Zhou wrote:\n\n>>\n>> I don't have all the numbers readily available (and I didn't do all the\n>> tests on every filesystem), but I found that even with only 1000\n>> files/directory ext3 had some problems, and if you enabled dir_hash some\n>> functions would speed up, but writing lots of files would just collapse\n>> (that was the 80 min run)\n>>\n>\n> Interesting. I would suggest test small number but bigger file would be\n> better if the target is for database performance comparison. By small\n> number, I mean 10^2 - 10^3; By bigger, I mean file size from 8k to 1G\n> (PostgreSQL data file is at most this size under normal installation).\n\nI agree, that round of tests was done on my system at home, and was in \nresponse to a friend who had rsync over a local lan take > 10 hours for \n<10G of data. but even so it generated some interesting info. I need to \nmake a more controlled run at it though.\n\n> Let's take TPCC as an example, if we get a TPCC database of 500 files,\n> each one is at most 1G (PostgreSQL has this feature/limit in ordinary\n> installation), then this will give us a 500G database, which is big enough\n> for your current configuration.\n>\n> Regards,\n> Qingqing\n>\n",
"msg_date": "Fri, 2 Dec 2005 00:04:36 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: filesystem performance with lots of files"
},
{
"msg_contents": "David Lang wrote:\n\n>\n> ext3 has an option to make searching directories faster (htree), but \n> enabling it kills performance when you create files. And this doesn't \n> help with large files.\n>\nThe ReiserFS white paper talks about the data structure he uses to store \ndirectories (some kind of tree), and he says it's quick to both read and \nwrite. Don't forget if you find ls slow, that could just be ls, since \nit's ls, not the fs, that sorts this files into alphabetical order.\n\n > how long would it take to do a tar-ftp-untar cycle with no smarts\n\nNote that you can do the taring, zipping, copying and untaring \nconcurrentlt. I can't remember the exactl netcat command line options, \nbut it goes something like this\n\nBox1:\ntar czvf - myfiles/* | netcat myserver:12345\n\nBox2:\nnetcat -listen 12345 | tar xzvf -\n\nNot only do you gain from doing it all concurrently, but not writing a \ntemp file means that disk seeks a reduced too if you have a one spindle \nmachine.\n\nAlso condsider just copying files onto a network mount. May not be as \nfast as the above, but will be faster than rsync, which has high CPU \nusage and thus not a good choice on a LAN.\n\nHmm, sorry this is not directly postgres anymore...\n\nDavid\n\n\n\n\n\n\nDavid Lang wrote:\n\n ext3 has an option to make searching directories faster (htree), but\nenabling it kills performance when you create files. And this doesn't\nhelp with large files.\n \n\n\nThe ReiserFS white paper talks about the data structure he uses to\nstore directories (some kind of tree), and he says it's quick to both\nread and write. Don't forget if you find ls slow, that could just be\nls, since it's ls, not the fs, that sorts this files into alphabetical\norder.\n\n> how long would it take to do a tar-ftp-untar cycle with no smarts\n\nNote that you can do the taring, zipping, copying and untaring\nconcurrentlt. I can't remember the exactl netcat command line options,\nbut it goes something like this\n\nBox1:\ntar czvf - myfiles/* | netcat myserver:12345\n\nBox2:\nnetcat -listen 12345 | tar xzvf -\n\nNot only do you gain from doing it all concurrently, but not writing a\ntemp file means that disk seeks a reduced too if you have a one spindle\nmachine.\n\nAlso condsider just copying files onto a network mount. May not be as\nfast as the above, but will be faster than rsync, which has high CPU\nusage and thus not a good choice on a LAN.\n\nHmm, sorry this is not directly postgres anymore...\n\nDavid",
"msg_date": "Tue, 20 Dec 2005 13:26:00 +0000",
"msg_from": "David Roussel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem performance with lots of files"
},
{
"msg_contents": "On Tue, Dec 20, 2005 at 01:26:00PM +0000, David Roussel wrote:\n> Note that you can do the taring, zipping, copying and untaring \n> concurrentlt. I can't remember the exactl netcat command line options, \n> but it goes something like this\n> \n> Box1:\n> tar czvf - myfiles/* | netcat myserver:12345\n> \n> Box2:\n> netcat -listen 12345 | tar xzvf -\n\nYou can also use ssh... something like\n\ntar -cf - blah/* | ssh machine tar -xf -\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 20 Dec 2005 13:59:51 -0600",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem performance with lots of files"
}
] |
[
{
"msg_contents": "Hi!\n\nI've got an urgent problem with an application which is evaluating a\nmonthly survey; it's running quite a lot of queries like this:\n\nselect SOURCE.NAME as TYPE,\n count(PARTICIPANT.SESSION_ID) as TOTAL\nfrom (\n select PARTICIPANT.SESSION_ID\n from survey.PARTICIPANT,\n survey.ANSWER\n where PARTICIPANT.STATUS = 1\n and date_trunc('month', PARTICIPANT.CREATED) = date_trunc('month',\nnow()-'1 month'::interval)\n and PARTICIPANT.SESSION_ID = ANSWER.SESSION_ID\n and ANSWER.QUESTION_ID = 6\n and ANSWER.VALUE = 1\n )\n as PARTICIPANT,\n survey.ANSWER,\n survey.HANDY_JAVA SOURCE\nwhere PARTICIPANT.SESSION_ID = ANSWER.SESSION_ID\nand ANSWER.QUESTION_ID = 16\nand ANSWER.VALUE = SOURCE.ID\ngroup by SOURCE.NAME,\n SOURCE.POSITION\norder by SOURCE.POSITION asc;\n\nMy current PostgreSQL-version is \"PostgreSQL 8.1.0 on i686-pc-linux-gnu,\ncompiled by GCC gcc (GCC) 3.2\". Up to 8.0, a query like this took a\ncouple of seconds, maybe even up to a minute. In 8.1 a query like this\nwill run from 30 minutes up to two hours to complete, depending on ist\ncomplexity. I've got autovaccum enabled and run a nightly vacuum analyze\nover all of my databases. Here's some information about the relevant\ntables: Table answer has got ~ 8.9M rows (estimated 8,872,130, counted\n8,876,648), participant has got ~178K rows (estimated 178,165, counted\n178,248), HANDY_JAVA has got three rows. This is the\nexplain-analyze-output for the above:\n\n\"Sort (cost=11383.09..11383.10 rows=3 width=16) (actual\ntime=1952676.858..1952676.863 rows=3 loops=1)\"\n\" Sort Key: source.\"position\"\"\n\" -> HashAggregate (cost=11383.03..11383.07 rows=3 width=16) (actual\ntime=1952676.626..1952676.635 rows=3 loops=1)\"\n\" -> Nested Loop (cost=189.32..11383.00 rows=5 width=16)\n(actual time=6975.812..1952371.782 rows=9806 loops=1)\"\n\" -> Nested Loop (cost=3.48..3517.47 rows=42 width=20)\n(actual time=6819.716..15419.930 rows=9806 loops=1)\"\n\" -> Nested Loop (cost=3.48..1042.38 rows=738\nwidth=16) (actual time=258.434..6233.039 rows=162723 loops=1)\"\n\" -> Seq Scan on handy_java source\n(cost=0.00..1.03 rows=3 width=14) (actual time=0.093..0.118 rows=3\nloops=1)\"\n\" -> Bitmap Heap Scan on answer\n(cost=3.48..344.04 rows=246 width=8) (actual time=172.381..1820.499\nrows=54241 loops=3)\"\n\" Recheck Cond: ((answer.question_id =\n16) AND (answer.value = \"outer\".id))\"\n\" -> Bitmap Index Scan on\nidx02_performance (cost=0.00..3.48 rows=246 width=0) (actual\ntime=98.321..98.321 rows=54245 loops=3)\"\n\" Index Cond: ((answer.question_id\n= 16) AND (answer.value = \"outer\".id))\"\n\" -> Index Scan using idx01_perf_0006 on participant\n(cost=0.00..3.34 rows=1 width=4) (actual time=0.049..0.050 rows=0\nloops=162723)\"\n\" Index Cond: (participant.session_id =\n\"outer\".session_id)\"\n\" Filter: ((status = 1) AND\n(date_trunc('month'::text, created) = date_trunc('month'::text, (now() -\n'1 mon'::interval))))\"\n\" -> Bitmap Heap Scan on answer (cost=185.85..187.26\nrows=1 width=4) (actual time=197.490..197.494 rows=1 loops=9806)\"\n\" Recheck Cond: ((\"outer\".session_id =\nanswer.session_id) AND (answer.question_id = 6) AND (answer.value = 1))\"\n\" -> BitmapAnd (cost=185.85..185.85 rows=1 width=0)\n(actual time=197.421..197.421 rows=0 loops=9806)\"\n\" -> Bitmap Index Scan on\nidx_answer_session_id (cost=0.00..2.83 rows=236 width=0) (actual\ntime=0.109..0.109 rows=49 loops=9806)\"\n\" Index Cond: (\"outer\".session_id =\nanswer.session_id)\"\n\" -> Bitmap Index Scan on idx02_performance\n(cost=0.00..182.77 
rows=20629 width=0) (actual time=195.742..195.742\nrows=165697 loops=9806)\"\n\" Index Cond: ((question_id = 6) AND\n(value = 1))\"\n\"Total runtime: 1952678.393 ms\"\n\nI am really sorry, but currently I haven't got any 8.0-installation\nleft, so I cannot provide the explain (analyze) output for 8.0. \n\nI fiddled a little with the statement and managed to speed things up\nquite a lot:\n \nselect SOURCE.NAME as TYPE,\n count(ANSWER.SESSION_ID) as TOTAL\nfrom survey.ANSWER,\n survey.HANDY_JAVA SOURCE\nwhere ANSWER.QUESTION_ID = 16\nand ANSWER.VALUE = SOURCE.ID\nand ANSWER.SESSION_ID in (\n select PARTICIPANT.SESSION_ID\n from survey.PARTICIPANT,\n survey.ANSWER\n where PARTICIPANT.STATUS = 1\n and date_trunc('month', PARTICIPANT.CREATED) = date_trunc('month',\nnow()-'1 month'::interval)\n and PARTICIPANT.SESSION_ID = ANSWER.SESSION_ID\n and ANSWER.QUESTION_ID = 6\n and ANSWER.VALUE = 1\n )\ngroup by SOURCE.NAME,\n SOURCE.POSITION\norder by SOURCE.POSITION asc;\n\nHere's the explain analyze output:\n\"Sort (cost=27835.39..27835.39 rows=3 width=16) (actual\ntime=9609.207..9609.212 rows=3 loops=1)\"\n\" Sort Key: source.\"position\"\"\n\" -> HashAggregate (cost=27835.33..27835.36 rows=3 width=16) (actual\ntime=9609.058..9609.067 rows=3 loops=1)\"\n\" -> Hash IN Join (cost=26645.78..27835.29 rows=5 width=16)\n(actual time=6374.436..9548.945 rows=9806 loops=1)\"\n\" Hash Cond: (\"outer\".session_id = \"inner\".session_id)\"\n\" -> Nested Loop (cost=3.48..1042.38 rows=738 width=16)\n(actual time=190.419..4817.977 rows=162704 loops=1)\"\n\" -> Seq Scan on handy_java source (cost=0.00..1.03\nrows=3 width=14) (actual time=0.036..0.058 rows=3 loops=1)\"\n\" -> Bitmap Heap Scan on answer (cost=3.48..344.04\nrows=246 width=8) (actual time=116.719..1390.931 rows=54235 loops=3)\"\n\" Recheck Cond: ((answer.question_id = 16) AND\n(answer.value = \"outer\".id))\"\n\" -> Bitmap Index Scan on idx02_performance\n(cost=0.00..3.48 rows=246 width=0) (actual time=63.195..63.195\nrows=54235 loops=3)\"\n\" Index Cond: ((answer.question_id = 16)\nAND (answer.value = \"outer\".id))\"\n\" -> Hash (cost=26639.37..26639.37 rows=1174 width=8)\n(actual time=3906.831..3906.831 rows=9806 loops=1)\"\n\" -> Hash Join (cost=4829.24..26639.37 rows=1174\nwidth=8) (actual time=464.011..3877.539 rows=9806 loops=1)\"\n\" Hash Cond: (\"outer\".session_id =\n\"inner\".session_id)\"\n\" -> Bitmap Heap Scan on answer\n(cost=182.76..21413.93 rows=20626 width=4) (actual\ntime=273.839..2860.984 rows=165655 loops=1)\"\n\" Recheck Cond: ((question_id = 6) AND\n(value = 1))\"\n\" -> Bitmap Index Scan on\nidx02_performance (cost=0.00..182.76 rows=20626 width=0) (actual\ntime=171.933..171.933 rows=165659 loops=1)\"\n\" Index Cond: ((question_id = 6)\nAND (value = 1))\"\n\" -> Hash (cost=4621.13..4621.13 rows=10141\nwidth=4) (actual time=123.351..123.351 rows=11134 loops=1)\"\n\" -> Index Scan using idx01_perf_0005 on\nparticipant (cost=0.01..4621.13 rows=10141 width=4) (actual\ntime=0.545..93.200 rows=11134 loops=1)\"\n\" Index Cond:\n(date_trunc('month'::text, created) = date_trunc('month'::text, (now() -\n'1 mon'::interval)))\"\n\" Filter: (status = 1)\"\n\"Total runtime: 9612.249 ms\"\n\nRegarding the total runtime, this is roughly in the same dimension as\nfor the original query in 8.0, as far as I remember. 
I haven't written\nthese queries myself in the first place, but they were done in the dark\nages when IN was a no-no and haven't been touched ever since - there\nhadn't been any need to do so.\n\nMy current problem is that rewriting hundreds of queries, some of them\nquite a bit more complex than this one, but all of them using the same\ngeneral scheme, would take quite a lot of time - and I'm expected to\nhand over the survey results ASAP. So I will obviously have to do a\nrewrite if there's just no other way, but I wondered if there might be\nsome other option that would allow me to point the planner in the right\ndirection so it would behave the same as in the previous versions,\nnamely 8.0?\n\nAny suggestions?\n\nKind regards\n\n Markus\n",
"msg_date": "Thu, 1 Dec 2005 16:49:14 +0100",
"msg_from": "\"Markus Wollny\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Queries taking ages in PG 8.1, have been much faster in PG<=8.0"
},
{
"msg_contents": "\"Markus Wollny\" <[email protected]> writes:\n> My current problem is that rewriting hundreds of queries, some of them\n> quite a bit more complex than this one, but all of them using the same\n> general scheme, would take quite a lot of time - and I'm expected to\n> hand over the survey results ASAP. So I will obviously have to do a\n> rewrite if there's just no other way, but I wondered if there might be\n> some other option that would allow me to point the planner in the right\n> direction so it would behave the same as in the previous versions,\n> namely 8.0?\n\nIt looks like \"set enable_nestloop = 0\" might be a workable hack for\nthe immediate need. Once you're not under deadline, I'd like to\ninvestigate more closely to find out why 8.1 does worse than 8.0 here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 11:26:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 "
}
] |
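A minimal sketch of the session-level hack Tom suggests in the thread above, for anyone wanting to try it without touching the queries themselves; the survey SELECT is stood in for by a comment, and SET LOCAL confines the change to a single transaction:

-- turn nested-loop plans off for the current session only
SET enable_nestloop = off;
-- ... run the unmodified survey queries ...
SET enable_nestloop = on;

-- or confine it to one transaction; SET LOCAL reverts at COMMIT or ROLLBACK
BEGIN;
SET LOCAL enable_nestloop = off;
-- SELECT ... ;
COMMIT;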
[
{
"msg_contents": "> >Franlin: are you making pg_dump from local or remote box and is this\na\n> >clean install? Try fresh patched win2k install and see what happens.\n> He claimed this was local, not network. It is certainly an\n> intriguing possibility that W2K and WinXP handle bytea\n> differently. I'm not competent to comment on that however.\n\ncan you make small extraction of this file (~ 100 rows), zip to file and\nsend to me off list? I'll test it vs. a 2000 and xp server and try to\nreproduce your results.\n\nMerlin\n",
"msg_date": "Thu, 1 Dec 2005 11:15:05 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump slow"
}
] |
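A sketch of one way to produce the small extraction asked for in the thread above, so it can be zipped and sent off list; the table name here is made up:

-- stash roughly 100 rows of the bytea table in a throwaway table
CREATE TABLE bytea_sample AS
    SELECT * FROM the_bytea_table LIMIT 100;
-- dump just that table, e.g. pg_dump -t bytea_sample dbname > sample.sql,
-- zip the file, then clean up
DROP TABLE bytea_sample;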
[
{
"msg_contents": "\n> -----Ursprüngliche Nachricht-----\n> Von: Tom Lane [mailto:[email protected]] \n> Gesendet: Donnerstag, 1. Dezember 2005 17:26\n> An: Markus Wollny\n> Cc: [email protected]\n> Betreff: Re: [PERFORM] Queries taking ages in PG 8.1, have \n> been much faster in PG<=8.0 \n \n> It looks like \"set enable_nestloop = 0\" might be a workable \n> hack for the immediate need. \n\nWhow - that works miracles :)\n\n\"Sort (cost=81813.13..81813.14 rows=3 width=16) (actual time=7526.745..7526.751 rows=3 loops=1)\"\n\" Sort Key: source.\"position\"\"\n\" -> HashAggregate (cost=81813.07..81813.11 rows=3 width=16) (actual time=7526.590..7526.601 rows=3 loops=1)\"\n\" -> Merge Join (cost=81811.40..81813.03 rows=5 width=16) (actual time=7423.289..7479.175 rows=9806 loops=1)\"\n\" Merge Cond: (\"outer\".id = \"inner\".value)\"\n\" -> Sort (cost=1.05..1.06 rows=3 width=14) (actual time=0.085..0.091 rows=3 loops=1)\"\n\" Sort Key: source.id\"\n\" -> Seq Scan on handy_java source (cost=0.00..1.03 rows=3 width=14) (actual time=0.039..0.049 rows=3 loops=1)\"\n\" -> Sort (cost=81810.35..81811.81 rows=583 width=8) (actual time=7423.179..7440.062 rows=9806 loops=1)\"\n\" Sort Key: mafo.answer.value\"\n\" -> Hash Join (cost=27164.31..81783.57 rows=583 width=8) (actual time=6757.521..7360.822 rows=9806 loops=1)\"\n\" Hash Cond: (\"outer\".session_id = \"inner\".session_id)\"\n\" -> Bitmap Heap Scan on answer (cost=506.17..54677.92 rows=88334 width=8) (actual time=379.245..2660.344 rows=162809 loops=1)\"\n\" Recheck Cond: (question_id = 16)\"\n\" -> Bitmap Index Scan on idx_answer_question_id (cost=0.00..506.17 rows=88334 width=0) (actual time=274.632..274.632 rows=162814 loops=1)\"\n\" Index Cond: (question_id = 16)\"\n\" -> Hash (cost=26655.21..26655.21 rows=1175 width=8) (actual time=3831.362..3831.362 rows=9806 loops=1)\"\n\" -> Hash Join (cost=4829.33..26655.21 rows=1175 width=8) (actual time=542.227..3800.985 rows=9806 loops=1)\"\n\" Hash Cond: (\"outer\".session_id = \"inner\".session_id)\"\n\" -> Bitmap Heap Scan on answer (cost=182.84..21429.34 rows=20641 width=4) (actual time=292.067..2750.376 rows=165762 loops=1)\"\n\" Recheck Cond: ((question_id = 6) AND (value = 1))\"\n\" -> Bitmap Index Scan on idx02_performance (cost=0.00..182.84 rows=20641 width=0) (actual time=167.306..167.306 rows=165769 loops=1)\"\n\" Index Cond: ((question_id = 6) AND (value = 1))\"\n\" -> Hash (cost=4621.13..4621.13 rows=10141 width=4) (actual time=182.842..182.842 rows=11134 loops=1)\"\n\" -> Index Scan using idx01_perf_0005 on participant (cost=0.01..4621.13 rows=10141 width=4) (actual time=0.632..136.126 rows=11134 loops=1)\"\n\" Index Cond: (date_trunc('month'::text, created) = date_trunc('month'::text, (now() - '1 mon'::interval)))\"\n\" Filter: (status = 1)\"\n\"Total runtime: 7535.398 ms\"\n\n> Once you're not under deadline, \n> I'd like to investigate more closely to find out why 8.1 does \n> worse than 8.0 here.\n\nPlease tell me what I can do to help in clearing up this issue, I'd be very happy to help! Heck, I am happy anyway that there's such a quick fix, even if it's not a beautiful one :)\n\nKind regards\n\n Markus\n",
"msg_date": "Thu, 1 Dec 2005 17:30:42 +0100",
"msg_from": "\"Markus Wollny\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 "
},
{
"msg_contents": "\"Markus Wollny\" <[email protected]> writes:\n>> Once you're not under deadline, \n>> I'd like to investigate more closely to find out why 8.1 does \n>> worse than 8.0 here.\n\n> Please tell me what I can do to help in clearing up this issue, I'd be\n> very happy to help!\n\nThe first thing to do is get 8.0's EXPLAIN ANALYZE for the same query.\nAfter we see how that differs from 8.1, we'll know what the next\nquestion should be ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 11:34:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Queries taking ages in PG 8.1, have been much faster in PG<=8.0 "
}
] |
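Since the workaround above has to cover hundreds of existing queries, a hedged sketch of making it stick without editing any application SQL; the database and role names are placeholders:

-- apply to every new connection to one database
ALTER DATABASE survey_db SET enable_nestloop = off;

-- or only to the role that runs the reporting queries (8.1 ROLE syntax)
ALTER ROLE survey_reporter SET enable_nestloop = off;

-- undo once the planner regression has been tracked down
ALTER DATABASE survey_db RESET enable_nestloop;
ALTER ROLE survey_reporter RESET enable_nestloop;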
[
{
"msg_contents": "\nNot having found anything so far, does anyone know of, and can point me \nto, either tools, or articles, that talk about doing tuning based on the \ninformation that this sort of information can help with?\n\n----\nMarc G. Fournier Hub.Org Networking Services (http://www.hub.org)\nEmail: [email protected] Yahoo!: yscrappy ICQ: 7615664\n",
"msg_date": "Thu, 1 Dec 2005 13:31:31 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat* values ..."
}
] |
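Lacking a pointer to a good article, a sketch of two queries people commonly run against those views, assuming the relevant collector options (stats_block_level and stats_row_level in 8.1) are switched on:

-- rough buffer cache hit ratio per table
SELECT relname, heap_blks_read, heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
  FROM pg_statio_user_tables
 ORDER BY heap_blks_read DESC;

-- tables read mostly by sequential scan (possible missing indexes)
SELECT relname, seq_scan, seq_tup_read, idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC;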
[
{
"msg_contents": "I am importing roughly 15 million rows in one batch transaction. I am\ncurrently doing this through batch inserts of around 500 at a time,\nalthough I am looking at ways to do this via multiple (one-per-table)\ncopy commands for performance reasons. \n\nI am currently running: PostgreSQL 8.0.4, Redhat Enterprise Linux 4,\next3, all-on-one partition. I am aware of methods of improving\nperformance by changing ext3 mounting options, splitting WAL, data, and\nindexes to separate physical disks, etc. I have also adjusted my\nshared_buffers, work_mem, maintenance_work_mem, and checkpoint_segments\nand can post their values if anyone thinks it is relevant to my question\n(See questions at the bottom) \n\nWhat confuses me is that at the beginning of the import, I am inserting\nroughly 25,000 rows every 7 seconds..and by the time I get towards the\nend of the import, it is taking 145 seconds for the same number of rows.\n The inserts are spread across 4 tables and I have dropped all indexes\nand constraints on these tables, including foreign keys, unique keys,\nand even primary keys (even though I think primary key doesn't improve\nperformance) The entire bulk import is done in a single transaction.\n\nThe result is a table with 4.8 million rows, two tables with 4.8*2\nmillion rows, and another table with several thousand rows.\n\nSo, my questions are:\n1) Why does the performance degrade as the table sizes grow? Shouldn't\nthe insert performance remain fairly constant if there are no indexes or\nconstraints? \n\n2) Is there anything I can do to figure out where the time is being\nspent? Will postgres log any statistics or information to help me\ndiagnose the problem? I have pasted a fairly representative sample of\nvmstat below my e-mail in case it helps, although I'm not quite how to\ninterpret it in this case. \n\n3) Any other advice, other than the things I listed above (I am aware of\nusing copy, ext3 tuning, multiple disks, tuning postgresql.conf\nsettings)? \n\nThanks in advance,\nJeremy Haile\n\n\n#vmstat 2 20\nprocs -----------memory---------- ---swap-- -----io---- --system--\n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy\n id wa\n 1 0 9368 4416 2536 1778784 0 0 124 51 3 2 2 \n 0 96 2\n 1 0 9368 4416 2536 1778784 0 0 0 0 1005 53 25 \n 0 75 0\n 1 1 9368 3904 2544 1779320 0 0 12164 6 1103 262 24 \n 1 59 16\n 1 0 9368 3704 2552 1779380 0 0 16256 24 1140 344 23 \n 1 53 23\n 1 1 9368 2936 2560 1780120 0 0 16832 6 1143 359 23 \n 1 52 24\n 1 1 9368 3328 2560 1779712 0 0 13120 0 1111 285 24 \n 1 58 18\n 1 0 9368 4544 2560 1778556 0 0 5184 0 1046 141 25 \n 0 67 8\n 1 1 9368 3776 2568 1779296 0 0 7296 6 1064 195 24 \n 0 67 9\n 1 0 9368 4480 2568 1778548 0 0 4096 0 1036 133 24 \n 0 69 6\n 1 0 9368 4480 2576 1778608 0 0 7504 0 1070 213 23 \n 0 67 10\n 1 0 9368 3136 2576 1779900 0 0 9536 0 1084 235 23 \n 0 66 10\n 1 1 9368 3072 2584 1779960 0 0 13632 6 1118 313 24 \n 1 60 16\n 1 0 9368 4480 2592 1778592 0 0 8576 24 1075 204 24 \n 0 63 12\n 1 0 9368 4480 2592 1778592 0 0 0 6 1004 52 25 \n 0 75 0\n 1 0 9368 4544 2600 1778652 0 0 0 6 1005 55 25 \n 0 75 0\n 1 1 9368 3840 2600 1779332 0 0 11264 4 1098 260 24 \n 0 63 13\n 1 1 9368 3072 2592 1780156 0 0 17088 14 1145 346 24 \n 1 51 24\n 1 1 9368 4096 2600 1779128 0 0 16768 6 1140 360 23 \n 1 54 21\n 1 1 9368 3840 2600 1779332 0 0 16960 0 1142 343 24 \n 1 54 22\n 1 0 9368 3436 2596 1779676 0 0 16960 0 1142 352 24 \n 1 53 23\n",
"msg_date": "Thu, 01 Dec 2005 12:49:11 -0500",
"msg_from": "\"Jeremy Haile\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert performance slows down in large batch"
},
{
"msg_contents": "\"Jeremy Haile\" <[email protected]> writes:\n> 1) Why does the performance degrade as the table sizes grow? Shouldn't\n> the insert performance remain fairly constant if there are no indexes or\n> constraints? \n\nYeah, insert really should be a constant-time operation if there's no\nadd-on operations like index updates or FK checks. Can you get more\ninformation about where the time is going with gprof or oprofile?\n(I'm not sure if oprofile is available for RHEL4, but it is in Fedora 4\nso maybe RHEL4 has it too.)\n\nIf you're not comfortable with performance measurement tools, perhaps\nyou could crank up a test case program that just generates dummy data\nand inserts it in the same way as your real application does. If you\ncan confirm a slowdown in a test case that other people can look at,\nwe'd be happy to look into the reason for it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 14:19:14 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert performance slows down in large batch "
}
] |
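A sketch of the kind of self-contained test case Tom asks for above; the table layout is invented, generate_series supplies the dummy data, and the 500-row batches mirror the application's inserts (run it under \timing in psql to see whether later batches slow down):

-- throwaway target with no indexes, constraints or foreign keys
CREATE TABLE insert_test (id integer, payload text);

BEGIN;
-- repeat this batch with shifted ranges until a few million rows are in;
-- if insert really is constant-time, per-batch timings should stay flat
INSERT INTO insert_test
    SELECT g, 'dummy row ' || g::text
      FROM generate_series(1, 500) AS g;
COMMIT;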
[
{
"msg_contents": "I'm running postgresql 8.1.0 with postgis 1.0.4 on a FC3 system, 3Ghz, 1 GB\nmemory.\n\n \n\nI am using COPY to fill a table that contains one postgis geometry column.\n\n \n\nWith no geometry index, it takes about 45 seconds to COPY one file.\n\n \n\nIf I add a geometry index, this time degrades. It keeps getting worse as\nmore records are\n\nadded to the table. It was up to over three minutes per file on my most\nrecent test.\n\n \n\nThe problem is that each file contains about 5 - 10 minutes of data.\nEventually, I want to\n\nadd the data to the table in \"real time\". So the COPY needs to take less\ntime than \n\nactually generating the data.\n\n \n\nHere is the relevant section of my postgresql.conf.\n\n \n\n# - Memory -\n\n \n\nshared_buffers = 5000 # min 16 or max_connections*2, 8KB each\n\n#temp_buffers = 1000 # min 100, 8KB each\n\n#max_prepared_transactions = 5 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 20000 # min 64, size in KB\n\nmaintenance_work_mem = 20000 # min 1024, size in KB\n\n#max_stack_depth = 2048 # min 100, size in KB\n\n \n\nAny suggestions for improvement?\n\n\n\n\n\n\n\n\n\n\nI’m running postgresql 8.1.0 with postgis 1.0.4 on a\nFC3 system, 3Ghz, 1 GB memory.\n \nI am using COPY to fill a table that contains one postgis\ngeometry column.\n \nWith no geometry index, it takes about 45 seconds to COPY\none file.\n \nIf I add a geometry index, this time degrades. It\nkeeps getting worse as more records are\nadded to the table. It was up to over three minutes\nper file on my most recent test.\n \nThe problem is that each file contains about 5 – 10 minutes\nof data. Eventually, I want to\nadd the data to the table in “real time”. \nSo the COPY needs to take less time than \nactually generating the data.\n \nHere is the relevant section of my postgresql.conf.\n \n# - Memory -\n \nshared_buffers = 5000 #\nmin 16 or max_connections*2, 8KB each\n#temp_buffers = 1000 #\nmin 100, 8KB each\n#max_prepared_transactions =\n5 # can be 0\nor more\n# note: increasing max_prepared_transactions\ncosts ~600 bytes of shared memory\n# per transaction slot, plus\nlock space (see max_locks_per_transaction).\nwork_mem = 20000 #\nmin 64, size in KB\nmaintenance_work_mem = 20000 #\nmin 1024, size in KB\n#max_stack_depth = 2048 #\nmin 100, size in KB\n \nAny suggestions for improvement?",
"msg_date": "Thu, 1 Dec 2005 12:58:12 -0500",
"msg_from": "\"Rick Schumeyer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "COPY into table too slow with index"
},
{
"msg_contents": "As a follow up to my own question:\n\n \n\nI reran the COPY both ways (with the index and without) while running\niostat. The following values\n\nare averages:\n\n %user %nice %sys %iowait %idle\n\nno index 39 0 2.8 11 47\n\nindex 16 1.5 2.1 34 46\n\n \n\nI'm no performance guru, so please indulge a couple of silly questions:\n\n \n\n1) Why is there so much idle time? I would think the CPU would either\nbe busy or waiting for IO.\n\n2) It seems that I need to improve my disk situation. Would it help\nto add another drive to my PC and\n\nkeep the input data on a separate drive from my pg tables? If so, some\npointers on the best way to set that up\n\nwould be appreciated.\n\n \n\nPlease let me know if anyone has additional ideas.\n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rick Schumeyer\nSent: Thursday, December 01, 2005 12:58 PM\nTo: [email protected]\nSubject: [PERFORM] COPY into table too slow with index\n\n \n\nI'm running postgresql 8.1.0 with postgis 1.0.4 on a FC3 system, 3Ghz, 1 GB\nmemory.\n\n \n\nI am using COPY to fill a table that contains one postgis geometry column.\n\n \n\nWith no geometry index, it takes about 45 seconds to COPY one file.\n\n \n\nIf I add a geometry index, this time degrades. It keeps getting worse as\nmore records are\n\nadded to the table. It was up to over three minutes per file on my most\nrecent test.\n\n \n\nThe problem is that each file contains about 5 - 10 minutes of data.\nEventually, I want to\n\nadd the data to the table in \"real time\". So the COPY needs to take less\ntime than \n\nactually generating the data.\n\n \n\nHere is the relevant section of my postgresql.conf.\n\n \n\n# - Memory -\n\n \n\nshared_buffers = 5000 # min 16 or max_connections*2, 8KB each\n\n#temp_buffers = 1000 # min 100, 8KB each\n\n#max_prepared_transactions = 5 # can be 0 or more\n\n# note: increasing max_prepared_transactions costs ~600 bytes of shared\nmemory\n\n# per transaction slot, plus lock space (see max_locks_per_transaction).\n\nwork_mem = 20000 # min 64, size in KB\n\nmaintenance_work_mem = 20000 # min 1024, size in KB\n\n#max_stack_depth = 2048 # min 100, size in KB\n\n \n\nAny suggestions for improvement?\n\n\n\n\n\n\n\n\n\n\nAs a follow up to my own question:\n \nI reran the COPY both ways (with the index\nand without) while running iostat. The following values\nare averages:\n \n%user %nice %sys %iowait %idle\nno index \n39 0 \n2.8 11 47\nindex \n16 1.5 2.1 \n34 46\n \nI’m no performance guru, so please\nindulge a couple of silly questions:\n \n1) \nWhy is there so much idle\ntime? I would think the CPU would either be busy or waiting for IO.\n2) \nIt seems that I need to\nimprove my disk situation. Would it help to add another drive to my PC\nand\nkeep the input data on a separate drive\nfrom my pg tables? If so, some pointers on the best way to set that up\nwould be appreciated.\n \nPlease let me know if anyone has\nadditional ideas.\n \n\n-----Original Message-----\nFrom:\[email protected] [mailto:[email protected]]\nOn Behalf Of Rick Schumeyer\nSent: Thursday,\n December 01, 2005 12:58 PM\nTo:\[email protected]\nSubject: [PERFORM] COPY into table\ntoo slow with index\n \nI’m running postgresql 8.1.0 with postgis 1.0.4 on a\nFC3 system, 3Ghz, 1 GB memory.\n \nI am using COPY to fill a table that contains one postgis\ngeometry column.\n \nWith no geometry index, it takes about 45 seconds to COPY\none file.\n \nIf I add a geometry index, this time degrades. 
It\nkeeps getting worse as more records are\nadded to the table. It was up to over three minutes\nper file on my most recent test.\n \nThe problem is that each file contains about 5 – 10\nminutes of data. Eventually, I want to\nadd the data to the table in “real time”. \nSo the COPY needs to take less time than \nactually generating the data.\n \nHere is the relevant section of my postgresql.conf.\n \n# - Memory -\n \nshared_buffers =\n5000 \n# min 16 or max_connections*2, 8KB each\n#temp_buffers =\n1000 \n# min 100, 8KB each\n#max_prepared_transactions =\n5 # can be 0\nor more\n# note: increasing\nmax_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus\nlock space (see max_locks_per_transaction).\nwork_mem =\n20000 \n# min 64, size in KB\nmaintenance_work_mem =\n20000 # min 1024, size in KB\n#max_stack_depth =\n2048 #\nmin 100, size in KB\n \nAny suggestions for improvement?",
"msg_date": "Thu, 1 Dec 2005 17:18:38 -0500",
"msg_from": "\"Rick Schumeyer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY into table too slow with index: now an I/O question"
},
{
"msg_contents": "Rick,\n\nOn 12/1/05 2:18 PM, \"Rick Schumeyer\" <[email protected]> wrote:\n\n> As a follow up to my own question:\n> \n> I reran the COPY both ways (with the index and without) while running iostat.\n> The following values\n> are averages:\n> %user %nice %sys %iowait %idle\n> no index 39 0 2.8 11 47\n> index 16 1.5 2.1 34 46\n> \n> I¹m no performance guru, so please indulge a couple of silly questions:\n> \n> 1) Why is there so much idle time? I would think the CPU would either be\n> busy or waiting for IO.\n\nThe 100% represents 2 CPUs. When one CPU is fully busy you should see 50%\nidle time.\n\n> 2) It seems that I need to improve my disk situation. Would it help to\n> add another drive to my PC and\n> keep the input data on a separate drive from my pg tables? If so, some\n> pointers on the best way to set that up\n> would be appreciated.\n\nPutting the index and the table on separate disks will fix this IMO. I\nthink you can do that using the \"TABLESPACE\" concept for each.\n\nThe problem I see is nicely shown by the increase in IOWAIT between the two\npatterns (with and without index). It seems likely that the pattern is:\nA - insert a tuple into the table\nB - insert an entry into the index\nC - fsync the WAL\n- repeat\n\nThis can be as bad as having a disk seek to access the table data every time\nthe 8KB page boundary is crossed, then again for the index, then again for\nthe WAL, and random disk seeks happen only as fast as about 10ms, so you can\nonly do those at a rate of 100/s.\n\n> Please let me know if anyone has additional ideas.\n\nThis is a fairly common problem, some people drop the index, load the data,\nthen recreate the index to get around it.\n\n- Luke\n\n\n",
"msg_date": "Thu, 01 Dec 2005 18:26:42 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY into table too slow with index: now an I/O"
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> The problem I see is nicely shown by the increase in IOWAIT between the two\n> patterns (with and without index). It seems likely that the pattern is:\n> A - insert a tuple into the table\n> B - insert an entry into the index\n> C - fsync the WAL\n> - repeat\n\n> This can be as bad as having a disk seek to access the table data every time\n> the 8KB page boundary is crossed, then again for the index, then again for\n> the WAL, and random disk seeks happen only as fast as about 10ms, so you can\n> only do those at a rate of 100/s.\n\nThat analysis is far too simplistic, because only the WAL write has to\nhappen before the transaction can commit. The table and index writes\nwill normally happen at some later point in the bgwriter, and with any\nluck there will only need to be one write per page, not per tuple.\n\nIt is true that having WAL and data on the same spindle is bad news,\nbecause the disk head has to divide its time between synchronous WAL\nwrites and asynchronous writes of the rest of the files.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Dec 2005 22:10:06 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY into table too slow with index: now an I/O "
},
{
"msg_contents": "I only have one CPU. Is my copy of iostat confused, or does this have\nsomething to do with hyperthreading or dual core? (AFAIK, I don't have a\ndual core!)\n\nThe problem (for me) with dropping the index during a copy is that it takes\ntens of minutes (or more) to recreate the geometry index once the table has,\nsay, 50 million rows.\n\n> -----Original Message-----\n> From: Luke Lonergan [mailto:[email protected]]\n> Sent: Thursday, December 01, 2005 9:27 PM\n> To: Rick Schumeyer; [email protected]\n> Subject: Re: [PERFORM] COPY into table too slow with index: now an I/O\n> question\n> \n> Rick,\n> \n> On 12/1/05 2:18 PM, \"Rick Schumeyer\" <[email protected]> wrote:\n> \n> > As a follow up to my own question:\n> >\n> > I reran the COPY both ways (with the index and without) while running\n> iostat.\n> > The following values\n> > are averages:\n> > %user %nice %sys %iowait %idle\n> > no index 39 0 2.8 11 47\n> > index 16 1.5 2.1 34 46\n> >\n> > I¹m no performance guru, so please indulge a couple of silly questions:\n> >\n> > 1) Why is there so much idle time? I would think the CPU would\n> either be\n> > busy or waiting for IO.\n> \n> The 100% represents 2 CPUs. When one CPU is fully busy you should see 50%\n> idle time.\n> \n> > 2) It seems that I need to improve my disk situation. Would it\n> help to\n> > add another drive to my PC and\n> > keep the input data on a separate drive from my pg tables? If so, some\n> > pointers on the best way to set that up\n> > would be appreciated.\n> \n> Putting the index and the table on separate disks will fix this IMO. I\n> think you can do that using the \"TABLESPACE\" concept for each.\n> \n> The problem I see is nicely shown by the increase in IOWAIT between the\n> two\n> patterns (with and without index). It seems likely that the pattern is:\n> A - insert a tuple into the table\n> B - insert an entry into the index\n> C - fsync the WAL\n> - repeat\n> \n> This can be as bad as having a disk seek to access the table data every\n> time\n> the 8KB page boundary is crossed, then again for the index, then again for\n> the WAL, and random disk seeks happen only as fast as about 10ms, so you\n> can\n> only do those at a rate of 100/s.\n> \n> > Please let me know if anyone has additional ideas.\n> \n> This is a fairly common problem, some people drop the index, load the\n> data,\n> then recreate the index to get around it.\n> \n> - Luke\n\n\n",
"msg_date": "Thu, 1 Dec 2005 22:26:56 -0500",
"msg_from": "\"Rick Schumeyer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY into table too slow with index: now an I/O question"
}
] |
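A sketch of the tablespace split Luke suggests above, for when a second drive is available; the path, table and index names are placeholders (the directory must already exist and be owned by the postgres user), and the drop/rebuild alternative is included even though rebuilding the GiST index is slow at this table size:

-- one-time setup: a tablespace on the second spindle
CREATE TABLESPACE fastdisk LOCATION '/mnt/disk2/pgdata';

-- move only the geometry index there, leaving the table where it is
ALTER INDEX points_geom_idx SET TABLESPACE fastdisk;

-- the other common pattern: drop, bulk load, rebuild
-- DROP INDEX points_geom_idx;
-- COPY points FROM '/data/batch_0001.copy';
-- CREATE INDEX points_geom_idx ON points USING gist (geom);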
[
{
"msg_contents": "> we are currently running a postgres server (upgraded to 8.1) which has\n> one large database with approx. 15,000 tables. Unfortunately\nperformance\n> suffers from that, because the internal tables (especially that which\n> holds the attribute info) get too large.\n> \n> (We NEED that many tables, please don't recommend to reduce them)\n> \n> Logically these tables could be grouped into 500 databases. My\nquestion\n> is:\n> \n> Would performance be better if I had 500 databases (on one postgres\n> server instance) which each contain 30 tables, or is it better to have\n> one large database with 15,000 tables? In the old days of postgres 6.5\n> we tried that, but performance was horrible with many databases ...\n> \n> BTW: I searched the mailing list, but found nothing on the subject -\nand\n> there also isn't any information in the documentation about the\neffects\n> of the number of databases, tables or attributes on the performance.\n> \n> Now, what do you say? Thanks in advance for any comment!\n\nI've never run near that many databases on one box so I can't comment on\nthe performance. But let's assume for the moment pg runs fine with 500\ndatabases. The most important advantage of multi-schema approach is\ncross schema querying. I think as you are defining your problem this is\na better way to do things.\n\nMerlin\n",
"msg_date": "Thu, 1 Dec 2005 15:28:58 -0500",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 15,000 tables"
}
] |
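A small sketch of the multi-schema layout Merlin describes above, with invented names: each group of ~30 tables becomes a schema rather than a database, which keeps cross-group queries possible:

CREATE SCHEMA group_0001;
CREATE TABLE group_0001.orders (order_id integer, created timestamp);

CREATE SCHEMA group_0002;
CREATE TABLE group_0002.orders (order_id integer, created timestamp);

-- cross-schema query, something separate databases cannot do
SELECT 'group_0001' AS src, count(*) FROM group_0001.orders
UNION ALL
SELECT 'group_0002', count(*) FROM group_0002.orders;

-- per-session default so existing unqualified SQL keeps working
SET search_path TO group_0001, public;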
[
{
"msg_contents": "Our application tries to insert data into the database as fast as it can.\nCurrently the work is being split into a number of 1MB copy operations.\n\nWhen we restore the postmaster process tries to use 100% of the CPU.\n\nThe questions we have are:\n\n1) What is postmaster doing that it needs so much CPU?\n\n2) How can we get our system to go faster?\n\n\nNote: We've tried adjusting the checkpoint_segements parameter to no effect.\nAny suggestions welcome.\n\n\n\n\n\nDatabase restore speed\n\n\n\nOur application tries to insert data into the database as fast as it can.\nCurrently the work is being split into a number of 1MB copy operations.\n\nWhen we restore the postmaster process tries to use 100% of the CPU.\n\nThe questions we have are:\n\n1) What is postmaster doing that it needs so much CPU?\n\n2) How can we get our system to go faster?\n\n\nNote: We've tried adjusting the checkpoint_segements parameter to no effect.\nAny suggestions welcome.",
"msg_date": "Thu, 1 Dec 2005 16:27:06 -0800",
"msg_from": "\"Steve Oualline\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Database restore speed"
}
] |
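One load pattern worth trying for the restore described above, sketched with invented names: run the CREATE TABLE and all of the 1MB COPY chunks inside a single transaction and add indexes only afterwards (the same shape Simon Riggs mentions further down in connection with a possible WAL-bypass optimization for COPY):

BEGIN;
CREATE TABLE load_target (id integer, payload text);  -- layout is made up
COPY load_target FROM '/data/chunk_0001.copy';
COPY load_target FROM '/data/chunk_0002.copy';
-- ... remaining chunks ...
COMMIT;

-- build indexes and constraints once, after the data is in
CREATE INDEX load_target_id_idx ON load_target (id);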
[
{
"msg_contents": "Tom, \n\n> That analysis is far too simplistic, because only the WAL \n> write has to happen before the transaction can commit. The \n> table and index writes will normally happen at some later \n> point in the bgwriter, and with any luck there will only need \n> to be one write per page, not per tuple.\n\nThat's good to know - makes sense. I suppose we might still thrash over\na 1GB range in seeks if the BG writer starts running at full rate in the\nbackground, right? Or is there some write combining in the BG writer?\n\n> It is true that having WAL and data on the same spindle is \n> bad news, because the disk head has to divide its time \n> between synchronous WAL writes and asynchronous writes of the \n> rest of the files.\n\nThat sounds right - could be tested by him turning fsync off, or by\nmoving the WAL to a different spindle (note I'm not advocating running\nin production with fsync off).\n\n- Luke\n\n",
"msg_date": "Fri, 2 Dec 2005 00:15:57 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: COPY into table too slow with index: now an I/O"
},
{
"msg_contents": "On Fri, Dec 02, 2005 at 12:15:57AM -0500, Luke Lonergan wrote:\n>That's good to know - makes sense. I suppose we might still thrash over\n>a 1GB range in seeks if the BG writer starts running at full rate in the\n>background, right? Or is there some write combining in the BG writer?\n\nThat part your OS should be able to handle. Those writes aren't synced,\nso the OS has plenty of opportunity to buffer & aggregate them.\n\nMike Stone\n",
"msg_date": "Fri, 02 Dec 2005 08:04:19 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: COPY into table too slow with index: now an I/O"
}
] |
[
{
"msg_contents": "Steve, \n\n> When we restore the postmaster process tries to use 100% of the CPU. \n> \n> The questions we have are: \n> \n> 1) What is postmaster doing that it needs so much CPU? \n\nParsing mostly, and attribute conversion from text to DBMS native\nformats.\n \n> 2) How can we get our system to go faster? \n\nUse Postgres 8.1 or Bizgres. Get a faster CPU. \n\nThese two points are based on our work to improve COPY speed, which led\nto a near doubling in Bizgres, and in the 8.1 version it's about 60-70%\nfaster than in Postgres 8.0.\n\nThere are currently two main bottlenecks in COPY, one is parsing +\nattribute conversion (if the postgres CPU is nailed at 100% that's what\nyour limit is) and the other is the write speed through the WAL. You\ncan roughly divide the write speed of your disk by 3 to get that limit,\ne.g. if your disk can write 8k blocks at 100MB/s, then your COPY speed\nmight be limited to 33MB/s. You can tell which of these limits you've\nhit using \"vmstat 1\" on Linux or iostat on Solaris and watch the blocks\ninput/output on your disk while you watch your CPU.\n\n> Note: We've tried adjusting the checkpoint_segements \n> parameter to no effect. \n\nNo surprise.\n\n- Luke\n\n",
"msg_date": "Fri, 2 Dec 2005 01:11:53 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Luke Lonergan wrote:\n\n> Steve,\n>\n>> When we restore the postmaster process tries to use 100% of the CPU.\n>>\n>> The questions we have are:\n>>\n>> 1) What is postmaster doing that it needs so much CPU?\n>\n> Parsing mostly, and attribute conversion from text to DBMS native\n> formats.\n>\n>> 2) How can we get our system to go faster?\n>\n> Use Postgres 8.1 or Bizgres. Get a faster CPU.\n>\n> These two points are based on our work to improve COPY speed, which led\n> to a near doubling in Bizgres, and in the 8.1 version it's about 60-70%\n> faster than in Postgres 8.0.\n>\n> There are currently two main bottlenecks in COPY, one is parsing +\n> attribute conversion (if the postgres CPU is nailed at 100% that's what\n> your limit is) and the other is the write speed through the WAL. You\n> can roughly divide the write speed of your disk by 3 to get that limit,\n> e.g. if your disk can write 8k blocks at 100MB/s, then your COPY speed\n> might be limited to 33MB/s. You can tell which of these limits you've\n> hit using \"vmstat 1\" on Linux or iostat on Solaris and watch the blocks\n> input/output on your disk while you watch your CPU.\n\nLuke, would it help to have one machine read the file and have it connect \nto postgres on a different machine when doing the copy? (I'm thinking that \nthe first machine may be able to do a lot of the parseing and conversion, \nleaving the second machine to just worry about doing the writes)\n\nDavid Lang\n",
"msg_date": "Thu, 1 Dec 2005 23:49:48 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
}
] |
[
{
"msg_contents": "here are the suggestions from the MySQL folks, what additional tests \nshould I do.\n\nI'd like to see some tests submitted that map out when not to use a \nparticular database engine, so if you have a test that you know a \nparticular database chokes on let me know (bonus credibility if you \ninclude tests that your own database has trouble with :)\n\nDavid Lang\n\n---------- Forwarded message ---------- Date: Thu, 01 Dec 2005 16:14:25\n\nDavid,\n\nThe choice of benchmark depends on what kind of application would you\nlike to see performance for.\n\nThan someone speaks about one or other database to be faster than other\nin general, it makes me smile. That would be the same as tell one car\nwould be able to win all competitions starting from Formula-1 and ending\nwith off-road racing.\n\nThere are certain well known cases when MySQL will be faster - for\nexample in memory storage engine is hard to beat in point selects, or\nbulk inserts in MyISAM (no transactional overhead).\n\nThere are certain known cases when MySQL would not perform well - it is\neasy to build the query using subqueries which would be horribly slow on\nMySQL but decent on postgresql... but well writing application for\nMySQL you would not write such query.\n\n\nI think most database agnostic way would be to select the \"workload\"\nfrom user point of view and have it implemented the most efficient way\nfor database in question - for example you may find TPC-C\nimplementations by different vendors are a lot different.\n\n\n>\n> For my own interests, I would like to at least cover the following bases:\n> 32 bit vs 64 bit vs 64 bit kernel + 32 bit user-space; data warehouse type\n> tests (data >> memory); and web prefs test (active data RAM)\n\nYou may grab Dell DVD store:\n\nhttp://linux.dell.com/dvdstore/\n\nfor Web benchmark. It does not have PostgreSQL build in but there some\nimplementations available in the Internet\n\nDBT2 by OSDL is other good candidate - it does support postgreSQL and\nMySQL natively.\n\n\nIf you want some raw performance number such as number selects/sec you\nmay use SysBench - http://sysbench.sourceforge.net\n\n\nFor DataWarehouse workloads you could grab TPC-H or DBT3\nimplementation by OSDL - We run this successfully with MySQL\n\nYou also could take a look at http://benchw.sourceforge.net/\n\n\n",
"msg_date": "Thu, 1 Dec 2005 23:35:17 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Open request for benchmarking input (fwd)"
},
{
"msg_contents": "\n\"David Lang\" <[email protected]> wrote\n> here are the suggestions from the MySQL folks, what additional tests \n> should I do.\n>\n\nI think the tests you list are enough in this stage,\n\nRegards,\nQingqing \n\n\n",
"msg_date": "Fri, 2 Dec 2005 12:19:25 -0500",
"msg_from": "\"Qingqing Zhou\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Open request for benchmarking input (fwd)"
}
] |
[
{
"msg_contents": "David, \n\n> Luke, would it help to have one machine read the file and \n> have it connect to postgres on a different machine when doing \n> the copy? (I'm thinking that the first machine may be able to \n> do a lot of the parseing and conversion, leaving the second \n> machine to just worry about doing the writes)\n\nUnfortunately not - the parsing / conversion core is in the backend,\nwhere it should be IMO because of the need to do the attribute\nconversion there in the machine-native representation of the attributes\n(int4, float, etc) in addition to having the backend convert from client\nencoding (like LATIN1) to the backend encoding (like UNICODE aka UTF8).\n\nThere are a few areas of discussion about continued performance\nincreases in the codebase for COPY FROM, here are my picks:\n- More micro-optimization of the parsing and att conversion core - maybe\n100% speedup in the parse/convert stage is possible\n- A user selectable option to bypass transaction logging, similar to\nOracle's\n- A well-defined binary input format, like Oracle's SQL*Loader - this\nwould bypass most parsing / att conversion\n- A direct-to-table storage loader facility - this would probably be the\nfastest possible load rate\n\n- Luke\n\n",
"msg_date": "Fri, 2 Dec 2005 03:06:43 -0500",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "* Luke Lonergan ([email protected]) wrote:\n> > Luke, would it help to have one machine read the file and \n> > have it connect to postgres on a different machine when doing \n> > the copy? (I'm thinking that the first machine may be able to \n> > do a lot of the parseing and conversion, leaving the second \n> > machine to just worry about doing the writes)\n> \n> Unfortunately not - the parsing / conversion core is in the backend,\n> where it should be IMO because of the need to do the attribute\n> conversion there in the machine-native representation of the attributes\n> (int4, float, etc) in addition to having the backend convert from client\n> encoding (like LATIN1) to the backend encoding (like UNICODE aka UTF8).\n\nJust a thought, but couldn't psql be made to use the binary mode of\nlibpq and do at least some of the conversion on the client side? Or\ndoes binary mode not work with copy (that wouldn't suprise me, but\nperhaps copy could be made to support it)?\n\nThe other thought, of course, is that you could use PITR for your\nbackups instead of pgdump...\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 2 Dec 2005 15:18:47 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "Stephen,\n\nOn 12/2/05 12:18 PM, \"Stephen Frost\" <[email protected]> wrote:\n\n> Just a thought, but couldn't psql be made to use the binary mode of\n> libpq and do at least some of the conversion on the client side? Or\n> does binary mode not work with copy (that wouldn't suprise me, but\n> perhaps copy could be made to support it)?\n\nYes - I think this idea is implicit in what David suggested, and my response\nas well. The problem is that the way the client does conversions can\npotentially differ from the way the backend does. Some of the types in\nPostgres are machine intrinsic and the encoding conversions use on-machine\nlibraries, each of which preclude the use of client conversion methods\n(without a lot of restructuring). We'd tackled this problem in the past and\nconcluded that the parse / convert stage really belongs in the backend.\n \n> The other thought, of course, is that you could use PITR for your\n> backups instead of pgdump...\n\nTotally - great idea, if this is actually a backup / restore then PITR plus\nfilesystem copy (tarball) is hugely faster than dump / restore.\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Dec 2005 12:48:08 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "* Luke Lonergan ([email protected]) wrote:\n> On 12/2/05 12:18 PM, \"Stephen Frost\" <[email protected]> wrote:\n> > Just a thought, but couldn't psql be made to use the binary mode of\n> > libpq and do at least some of the conversion on the client side? Or\n> > does binary mode not work with copy (that wouldn't suprise me, but\n> > perhaps copy could be made to support it)?\n> \n> Yes - I think this idea is implicit in what David suggested, and my response\n> as well. The problem is that the way the client does conversions can\n> potentially differ from the way the backend does. Some of the types in\n> Postgres are machine intrinsic and the encoding conversions use on-machine\n> libraries, each of which preclude the use of client conversion methods\n> (without a lot of restructuring). We'd tackled this problem in the past and\n> concluded that the parse / convert stage really belongs in the backend.\n\nI've used the binary mode stuff before, sure, Postgres may have to\nconvert some things but I have a hard time believing it'd be more\nexpensive to do a network_encoding -> host_encoding (or toasting, or\nwhatever) than to do the ascii -> binary change.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Fri, 2 Dec 2005 16:19:25 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "Stephen,\n\nOn 12/2/05 1:19 PM, \"Stephen Frost\" <[email protected]> wrote:\n\n> I've used the binary mode stuff before, sure, Postgres may have to\n> convert some things but I have a hard time believing it'd be more\n> expensive to do a network_encoding -> host_encoding (or toasting, or\n> whatever) than to do the ascii -> binary change.\n\n From a performance standpoint no argument, although you're betting that you\ncan do parsing / conversion faster than the COPY core in the backend can (I\nknow *we* can :-). It's a matter of safety and generality - in general you\ncan't be sure that client machines / OS'es will render the same conversions\nthat the backend does in all cases IMO.\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Dec 2005 13:24:31 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "Stephen,\n\nOn 12/2/05 1:19 PM, \"Stephen Frost\" <[email protected]> wrote:\n> \n>> I've used the binary mode stuff before, sure, Postgres may have to\n>> convert some things but I have a hard time believing it'd be more\n>> expensive to do a network_encoding -> host_encoding (or toasting, or\n>> whatever) than to do the ascii -> binary change.\n> \n> From a performance standpoint no argument, although you're betting that you\n> can do parsing / conversion faster than the COPY core in the backend can (I\n> know *we* can :-). It's a matter of safety and generality - in general you\n> can't be sure that client machines / OS'es will render the same conversions\n> that the backend does in all cases IMO.\n\nOne more thing - this is really about the lack of a cross-platform binary\ninput standard for Postgres IMO. If there were such a thing, it *would* be\nsafe to do this. The current Binary spec is not cross-platform AFAICS, it\nembeds native representations of the DATUMs, and does not specify a\nuniversal binary representation of same.\n\nFor instance - when representing a float, is it an IEEE 32-bit floating\npoint number in little endian byte ordering? Or is it IEEE 64-bit? With\nlibpq, we could do something like an XDR implementation, but the machinery\nisn't there AFAICS.\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Dec 2005 13:29:47 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:\n>From a performance standpoint no argument, although you're betting that you\n>can do parsing / conversion faster than the COPY core in the backend can \n\nNot necessarily; you may be betting that it's more *efficient* to do the\nparsing on a bunch of lightly loaded clients than your server. Even if\nyou're using the same code this may be a big win. \n\nMike Stone\n",
"msg_date": "Fri, 02 Dec 2005 16:46:02 -0500",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> One more thing - this is really about the lack of a cross-platform binary\n> input standard for Postgres IMO. If there were such a thing, it *would* be\n> safe to do this. The current Binary spec is not cross-platform AFAICS, it\n> embeds native representations of the DATUMs, and does not specify a\n> universal binary representation of same.\n\nSure it does ... at least as long as you are willing to assume everybody\nuses IEEE floats, and if they don't you have semantic problems\ntranslating float datums anyhow.\n\nWhat we lack is documentation, more than functionality.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Dec 2005 18:00:59 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed "
},
{
"msg_contents": "Micahel,\n\nOn 12/2/05 1:46 PM, \"Michael Stone\" <[email protected]> wrote:\n\n> Not necessarily; you may be betting that it's more *efficient* to do the\n> parsing on a bunch of lightly loaded clients than your server. Even if\n> you're using the same code this may be a big win.\n\nIf it were possible in light of the issues on client parse / convert, then\nwe should analyze whether it's a performance win.\n\nIn the restore case, where we've got a dedicated server with a dedicated\nclient machine, I don't see why there would be a speed benefit from running\nthe same parse / convert code on the client versus running it on the server.\nImagine a pipeline where there is a bottleneck, moving the bottleneck to a\ndifferent machine doesn't make it less of a bottleneck.\n\n- Luke\n\n\n",
"msg_date": "Fri, 02 Dec 2005 15:02:11 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2005-12-02 at 13:24 -0800, Luke Lonergan wrote:\n> It's a matter of safety and generality - in general you\n> can't be sure that client machines / OS'es will render the same conversions\n> that the backend does in all cases IMO.\n\nCan't binary values can safely be sent cross-platform in DataRow\nmessages? At least from my ignorant, cursory look at printtup.c,\nthere's a binary format code path. float4send in utils/adt/float.c uses\npq_sendfloat4. I obviously haven't followed the entire rabbit trail,\nbut it seems like it happens.\n\nIOW, why isn't there a cross-platform issue when sending binary data\nfrom the backend to the client in query results? And if there isn't a\nproblem there, why can't binary data be sent from the client to the\nbackend?\n\nMitch\n",
"msg_date": "Fri, 02 Dec 2005 19:26:06 -0800",
"msg_from": "Mitch Skinner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Luke Lonergan wrote:\n\n> Stephen,\n>\n> On 12/2/05 12:18 PM, \"Stephen Frost\" <[email protected]> wrote:\n>\n>> Just a thought, but couldn't psql be made to use the binary mode of\n>> libpq and do at least some of the conversion on the client side? Or\n>> does binary mode not work with copy (that wouldn't suprise me, but\n>> perhaps copy could be made to support it)?\n>\n> Yes - I think this idea is implicit in what David suggested, and my response\n> as well. The problem is that the way the client does conversions can\n> potentially differ from the way the backend does. Some of the types in\n> Postgres are machine intrinsic and the encoding conversions use on-machine\n> libraries, each of which preclude the use of client conversion methods\n> (without a lot of restructuring). We'd tackled this problem in the past and\n> concluded that the parse / convert stage really belongs in the backend.\n\nI'll bet this parsing cost varys greatly with the data types used, I'm \nalso willing to bet that for the data types that hae different encoding on \ndifferent systems there could be a intermediate encoding that is far \nfaster to parse then ASCII text is.\n\nfor example, (and I know nothing about the data storage itself so this is \njust an example), if the issue was storing numeric values on big endian \nand little endian systems (and 32 bit vs 64 bit systems to end up with 4 \nways of holding the data) you have a substantial cost in parseing the \nASCII and converting it to a binary value, but the client can't (and \nshouldn't) know which endian type and word size the server is. but it \ncould create a big endian multi-precision encoding that would then be very \ncheap for the server to split and flip as nessasary. yes this means more \nwork is done overall, but it's split between different machines, and the \nbinary representation of the data will reduce probably your network \ntraffic as a side effect.\n\nand for things like date which get parsed in multiple ways until one is \nfound that seems sane, there's a significant amount of work that the \nserver could avoid.\n\nDavid Lang\n\n>> The other thought, of course, is that you could use PITR for your\n>> backups instead of pgdump...\n>\n> Totally - great idea, if this is actually a backup / restore then PITR plus\n> filesystem copy (tarball) is hugely faster than dump / restore.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n",
"msg_date": "Sat, 3 Dec 2005 01:07:15 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Michael Stone wrote:\n\n> On Fri, Dec 02, 2005 at 01:24:31PM -0800, Luke Lonergan wrote:\n>> From a performance standpoint no argument, although you're betting that you\n>> can do parsing / conversion faster than the COPY core in the backend can \n>\n> Not necessarily; you may be betting that it's more *efficient* to do the\n> parsing on a bunch of lightly loaded clients than your server. Even if\n> you're using the same code this may be a big win.\n\nit's a lot easier to throw hardware at the problem by spliting your \nincomeing data between multiple machines and have them all working in \nparallel throwing the data at one database then it is to throw more \nhardware at the database server to speed it up (and yes, assuming that MPP \nsplits the parseing costs as well, it can be an answer for some types of \nsystems)\n\nDavid Lang\n",
"msg_date": "Sat, 3 Dec 2005 01:12:07 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Luke Lonergan wrote:\n\n> Stephen,\n>\n> On 12/2/05 1:19 PM, \"Stephen Frost\" <[email protected]> wrote:\n>>\n>>> I've used the binary mode stuff before, sure, Postgres may have to\n>>> convert some things but I have a hard time believing it'd be more\n>>> expensive to do a network_encoding -> host_encoding (or toasting, or\n>>> whatever) than to do the ascii -> binary change.\n>>\n>> From a performance standpoint no argument, although you're betting that you\n>> can do parsing / conversion faster than the COPY core in the backend can (I\n>> know *we* can :-). It's a matter of safety and generality - in general you\n>> can't be sure that client machines / OS'es will render the same conversions\n>> that the backend does in all cases IMO.\n>\n> One more thing - this is really about the lack of a cross-platform binary\n> input standard for Postgres IMO. If there were such a thing, it *would* be\n> safe to do this. The current Binary spec is not cross-platform AFAICS, it\n> embeds native representations of the DATUMs, and does not specify a\n> universal binary representation of same.\n>\n> For instance - when representing a float, is it an IEEE 32-bit floating\n> point number in little endian byte ordering? Or is it IEEE 64-bit? With\n> libpq, we could do something like an XDR implementation, but the machinery\n> isn't there AFAICS.\n\nThis makes sense, however it then raises the question of how much effort \nit would take to define such a standard and implement the shim layer \nneeded to accept the connections vs how much of a speed up it would result \nin (the gain could probaly be approximated with just a little hacking to \nuse the existing binary format between two machines of the same type)\n\nas for the standards, standard network byte order is big endian, so that \nshould be the standard used (in spite of the quantity of x86 machines out \nthere). for the size of the data elements, useing the largest size of each \nwill probably still be a win in size compared to ASCII. converting between \nbinary formats is useally a matter of a few and and shift opcodes (and \nwith the core so much faster then it's memory you can afford to do quite a \nfew of these on each chunk of data without it being measurable in your \noverall time)\n\nan alturnative would be to add a 1-byte data type before each data element \nto specify it's type, but then the server side code would have to be \nsmarter to deal with the additional possibilities.\n\nDavid Lang\n",
"msg_date": "Sat, 3 Dec 2005 01:22:04 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2 Dec 2005, Luke Lonergan wrote:\n\n> Micahel,\n>\n> On 12/2/05 1:46 PM, \"Michael Stone\" <[email protected]> wrote:\n>\n>> Not necessarily; you may be betting that it's more *efficient* to do the\n>> parsing on a bunch of lightly loaded clients than your server. Even if\n>> you're using the same code this may be a big win.\n>\n> If it were possible in light of the issues on client parse / convert, then\n> we should analyze whether it's a performance win.\n>\n> In the restore case, where we've got a dedicated server with a dedicated\n> client machine, I don't see why there would be a speed benefit from running\n> the same parse / convert code on the client versus running it on the server.\n> Imagine a pipeline where there is a bottleneck, moving the bottleneck to a\n> different machine doesn't make it less of a bottleneck.\n\nyour database server needs to use it's CPU for \nother things besides the parseing. you could buy a bigger machine, but \nit's useally far cheaper to buy two dual-proc machines then it is one \nquad proc machine (and if you load is such that you already have a \n8-proc machine as the database, swallow hard when you ask for the price \nof a 16 proc machine), and in addition there is a substantial efficiancy \nloss in multi-proc machines (some software, some hardware) that may give \nyou more available work cycles on the multiple small machines.\n\nif you can remove almost all the parsing load (CPU cycles, memory \nfootprint, and cache thrashing effects) then that box can do the rest of \nit's stuff more efficiantly. meanwhile the client can use what would \notherwise be idle CPU to do the parseing.\n\nif you only have a 1-1 relationship it's a good question as to if it's a \nwin (it depends on how much other stuff each box is having to do to \nsupport this), but if you allow for multiple clients it easily becomes a \nwin.\n\nDavid Lang\n",
"msg_date": "Sat, 3 Dec 2005 01:32:47 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Fri, 2005-12-02 at 15:18 -0500, Stephen Frost wrote:\n\n> The other thought, of course, is that you could use PITR for your\n> backups instead of pgdump...\n\nYes, it is much faster that way.\n\nOver on -hackers a few optimizations of COPY have been discussed; one of\nthose is to optimize COPY when it is loading into a table created within\nthe same transaction as the COPY. This would allow pg_dumps to be\nrestored much faster, since no WAL need be written in this case.\nI hope to work on this fairly soon.\n\nDumping/restoring data with pg_dump has wider uses than data protecting\nbackup.\n\nBest Regards, Simon Riggs\n\n",
"msg_date": "Sat, 03 Dec 2005 12:38:48 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
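For context, the client-side pattern that would benefit from the optimization Simon describes is simply issuing CREATE TABLE and COPY inside the same transaction. The libpq sketch below is added for illustration only: the connection string, table, and data are hypothetical, and whether WAL is actually skipped depends on the backend-side work described above.

    #include <libpq-fe.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");   /* hypothetical DSN */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Create the table and load it in the same transaction. */
        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "CREATE TABLE load_target (id int, val text)"));

        PGresult *res = PQexec(conn, "COPY load_target FROM STDIN");
        if (PQresultStatus(res) == PGRES_COPY_IN) {
            const char *row = "1\thello\n";          /* hypothetical data */
            PQputCopyData(conn, row, (int) strlen(row));
            PQputCopyEnd(conn, NULL);                /* NULL => finish normally */
            PQclear(PQgetResult(conn));              /* collect COPY's result */
        }
        PQclear(res);

        PQclear(PQexec(conn, "COMMIT"));
        PQfinish(conn);
        return 0;
    }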
{
"msg_contents": "Tom,\n\nOn 12/2/05 3:00 PM, \"Tom Lane\" <[email protected]> wrote:\n> \n> Sure it does ... at least as long as you are willing to assume everybody\n> uses IEEE floats, and if they don't you have semantic problems\n> translating float datums anyhow.\n> \n> What we lack is documentation, more than functionality.\n\nCool - sounds like the transport part might be there - the thing we desire\nis a file format that allows for efficient representation of portable binary\ndatums.\n\nLast I looked at the Postgres binary dump format, it was not portable or\nefficient enough to suit the need. The efficiency problem with it was that\nthere was descriptive information attached to each individual data item, as\ncompared to the approach where that information is specified once for the\ndata group as a template for input.\n\nOracle's format allows for the expression of fixed width fields within the\ninput file, and specifies the data type of the fields in the metadata. We\ncould choose to support exactly the specification of the SQL*Loader format,\nwhich would certainly be general enough, and would have the advantage of\nproviding a compatibility option with Oracle SQL*Loader input.\n\nNote that Oracle does not provide a similar functionality for the expression\nof *output* files, those that can be dumped from an Oracle database. Their\nmechanism for database dump is the exp/imp utility pair, and it is a\nproprietary \"shifting sands\" specification AFAIK. This limits the benefit\nof implementing the Oracle SQL*Loader compatibility to those customers who\nhave designed utilities to emit that format, which may still be valuable.\n\nThe alternative is to design a Postgres portable binary input file format.\nI'd like to see a record oriented format like that of FORTRAN unformatted,\nwhich uses bookends around each record to identify the length of each\nrecord. This allows for fast record oriented positioning within the file,\nand provides some self-description for integrity checking, etc.\n\n- Luke \n\n\n",
"msg_date": "Sat, 03 Dec 2005 11:42:02 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
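A small C sketch (illustration only, not from the thread) of the "bookend" record framing Luke mentions: a length marker before and after each record, as in Fortran unformatted sequential files. The 4-byte, big-endian marker is an assumption here; Fortran runtimes have used other widths.

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <stdio.h>

    /* Write one framed record: length, payload, length.  The trailing
     * bookend lets a reader skip backward and sanity-check each record. */
    static int write_record(FILE *f, const void *payload, uint32_t len)
    {
        uint32_t marker = htonl(len);            /* big-endian, per the thread */
        return fwrite(&marker, 4, 1, f) == 1 &&
               fwrite(payload, 1, len, f) == len &&
               fwrite(&marker, 4, 1, f) == 1;
    }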
{
"msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> Last I looked at the Postgres binary dump format, it was not portable or\n> efficient enough to suit the need. The efficiency problem with it was that\n> there was descriptive information attached to each individual data item, as\n> compared to the approach where that information is specified once for the\n> data group as a template for input.\n\nAre you complaining about the length words? Get real...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Dec 2005 15:32:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed "
},
{
"msg_contents": "Tom,\n\nOn 12/3/05 12:32 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n>> Last I looked at the Postgres binary dump format, it was not portable or\n>> efficient enough to suit the need. The efficiency problem with it was that\n>> there was descriptive information attached to each individual data item, as\n>> compared to the approach where that information is specified once for the\n>> data group as a template for input.\n> \n> Are you complaining about the length words? Get real...\n\nHmm - \"<sizeof int><int>\" repeat, efficiency is 1/2 of \"<int>\" repeat. I\nthink that's worth complaining about.\n\n- Luke\n\n\n",
"msg_date": "Sat, 03 Dec 2005 13:29:01 -0800",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
},
{
"msg_contents": "On Sat, 3 Dec 2005, Luke Lonergan wrote:\n\n> Tom,\n>\n> On 12/3/05 12:32 PM, \"Tom Lane\" <[email protected]> wrote:\n>\n>> \"Luke Lonergan\" <[email protected]> writes:\n>>> Last I looked at the Postgres binary dump format, it was not portable or\n>>> efficient enough to suit the need. The efficiency problem with it was that\n>>> there was descriptive information attached to each individual data item, as\n>>> compared to the approach where that information is specified once for the\n>>> data group as a template for input.\n>>\n>> Are you complaining about the length words? Get real...\n>\n> Hmm - \"<sizeof int><int>\" repeat, efficiency is 1/2 of \"<int>\" repeat. I\n> think that's worth complaining about.\n\nbut how does it compare to the ASCII representation of that int? (remember \nto include your seperator characters as well)\n\nyes it seems less efficiant, and it may be better to do something like \nsend a record description header that gives the sizes of each item and \nthen send the records following that without the size items, but either \nway should still be an advantage over the existing ASCII messages.\n\nalso, how large is the <sizeof int> in the message?\n\nthere are other optimizations that can be done as well, but if there's \nstill a question about if it's worth it to do the parseing on the client \nthen a first implmentation should be done without makeing to many changes \nto test things.\n\nalso some of the optimizations need to have measurements done to see if \nthey are worth it (even something that seems as obvious as seperating the \nsizeof from the data itself as you suggest above has a penalty, namely it \nspreads the data that needs to be accessed to process a line between \ndifferent cache lines, so in some cases it won't be worth it)\n\nDavid Lang\n",
"msg_date": "Sat, 3 Dec 2005 17:17:03 -0800 (PST)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database restore speed"
}
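For readers following the "<sizeof int><int>" exchange: in the documented COPY BINARY tuple layout, each tuple carries a 16-bit field count and each field a 32-bit length word before its value, so an int4 costs 8 bytes on the wire -- the 2x overhead Luke objects to and David weighs against ASCII plus separators. A C sketch of that layout, added for illustration with buffer management simplified:

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <string.h>

    /* Lay out one tuple of int4 fields as COPY BINARY does: a field count,
     * then a length word followed by the value for each field.
     * Returns 2 + 8 * nfields bytes. */
    static size_t emit_int4_tuple(uint8_t *buf, const int32_t *vals,
                                  uint16_t nfields)
    {
        size_t off = 0;
        uint16_t n_be = htons(nfields);
        memcpy(buf + off, &n_be, 2);  off += 2;

        for (uint16_t i = 0; i < nfields; i++) {
            uint32_t len_be = htonl(4);              /* per-field length word */
            uint32_t val_be = htonl((uint32_t) vals[i]);
            memcpy(buf + off, &len_be, 4);  off += 4;
            memcpy(buf + off, &val_be, 4);  off += 4;
        }
        return off;
    }

The record-description-header idea David floats would pay the size/type information once per load instead of once per field, at the cost of a less self-describing stream.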
] |
[
{
"msg_contents": "Hello,\n\nWe used Postgresql 7.1 under Linux and recently we have changed it to \nPostgresql 8.1 under Windows XP. Our application uses ODBC and when we \ntry to get some information from the server throw a TCP connection, it's \nvery slow. We have also tried it using psql and pgAdmin III, and we get \nthe same results. If we try it locally, it runs much faster.\n\nWe have been searching the mailing lists, we have found many people with \nthe same problem, but we haven't found any final solution.\n\nHow can we solve this? Any help will be appreciated.\n\nThanks in advance.\n\nJordi.\n",
"msg_date": "Fri, 02 Dec 2005 14:05:26 +0100",
"msg_from": "Teracat <[email protected]>",
"msg_from_op": true,
"msg_subject": "Network permormance under windows"
}
] |