[
{
"msg_contents": "Hi All,\n\nWe have a Postgres 7.4.1 server running on FreeBSD 5.2. Hardware is a Dual\nXeon 2.6 (HT enabled), 2 GB Memory, 3Ware SATA RAID-5 w/ 4 7200 RPM Seagate\ndisks and gigabit Intel Server Ethernet. The server is dedicated to serving\ndata to our web-based CMS.\n\nWe have a few web servers load balanced, and we do around 1M page\nimpressions per day. Our website is highly personalized, and we've\noptimized it to limit the number of queries, but we still see between 2 and\n3 SELECT's (with JOIN's) and 1 UPDATE per page load, selectively more - a\nfair volume.\n\nThe single UPDATE per page load is updating a timestamp in a small table\n(about 150,000 rows) with only 1 index (on the 1 field that needs to be\nmatched).\n\nWe're seeing some intermittent spikes in query time as actual connection\ntime. I.e., during these seemingly random spikes, our debug output looks\nlike this (times from start of HTTP request):\n\nSQL CONNECTION CREATING 'gf'\n0.0015 - ESTABLISHING CONNECTION\n1.7113 - CONNECTION OK\nSQL QUERY ID 1 COST 0.8155 ROWS 1\nSQL QUERY ID 2 COST 0.5607 ROWS 14\n.. etc.. (all queries taking more time than normal, see below)\n\nRefresh the page 2 seconds later, and we'll get:\n\nSQL CONNECTION CREATING 'gf'\n0.0017 - ESTABLISHING CONNECTION\n0.0086 - CONNECTION OK\nSQL QUERY ID 1 COST 0.0128 ROWS 1\nSQL QUERY ID 2 COST 0.0033 ROWS 14\n.. etc.. (with same queries)\n\nIndeed, during these types, it takes a moment for \"psql\" to connect on the\ncommand line (from the same machine using a local file socket), so it's not\na network issue or a web-server issue. During these spurts, there's nothing\ntoo out of the ordinary in vmstat, systat or top.\n\nThese programs show that we're not using much CPU (usually 60-80% idle), and\ndisks usage is virtually nil. I've attached 60 seconds of \"vmstat 5\".\nMemory usage looks like this (constantly):\n\nMem: 110M Active, 1470M Inact, 206M Wired, 61M Cache, 112M Buf, 26M Free\n\nI've cleaned up and tested query after query, and nothing is a \"hog\". On an\nidle server, every query will execute in < 0.05 sec. Perhaps some of you\nveterans have ideas?\n\nThanks,\n\nJason Coene\nGotfrag eSports\n585-598-6621 Phone\n585-598-6633 Fax\[email protected]\nhttp://www.gotfrag.com",
"msg_date": "Tue, 11 May 2004 18:10:30 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Intermittent slowdowns, connection delays"
}
]
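[Editor's note: the per-page-load UPDATE described above is a classic source of dead-row churn on PostgreSQL 7.x, where every UPDATE leaves a dead row version behind. The sketch below shows the likely shape of that statement and the routine maintenance it implies; the table and column names (userstats, lastclick, userid) are assumptions borrowed from the follow-up message, not Jason's actual schema.]

    -- One single-row UPDATE per page view against a ~150,000-row table,
    -- matched through the table's single index:
    UPDATE userstats
       SET lastclick = now()
     WHERE userid = 12345;

    -- A table written on every page view accumulates dead rows quickly on 7.4,
    -- so frequent (e.g. hourly) vacuuming keeps it and its index compact:
    VACUUM ANALYZE userstats;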
[
{
"msg_contents": "Hi Paul,\n\nThanks for the valuable feedback. I suspect you're correct about the\nserialization in some capacity, but the actual cause is eluding me.\n\nBasically, every time a registered user checks a page, the site has to\nauthenticate them (with a query against a table with > 200,000 records). It\ndoesn't update this table, however - it updates another table with \"user\nstats\" information (last click, last ip, etc).\n\n From what I've seen, there doesn't seem to be any serious locking issues.\nIt does make sense when a number of users whose information isn't in cache,\nit could take a bit longer - but AFAIK this shouldn't prevent other\nsimultaneous queries. What else could cause such serialization?\n\nIf I look at open locks (this is a view, info from pg tables):\n\n relname | mode | numlocks\n----------------------+------------------+----------\n users | AccessShareLock | 4\n userstats | AccessShareLock | 4\n pg_statistic | AccessShareLock | 2\n users_ix_id | AccessShareLock | 2\n countries | AccessShareLock | 2\n comments | AccessShareLock | 2\n countries_ix_id | AccessShareLock | 2\n userstats_ix_id | AccessShareLock | 2\n comments_ix_parentid | AccessShareLock | 2\n users | RowExclusiveLock | 1\n filequeue_ix_id | AccessShareLock | 1\n pg_class | AccessShareLock | 1\n vopenlocks | AccessShareLock | 1\n pg_locks | AccessShareLock | 1\n userstats | RowExclusiveLock | 1\n filequeue | AccessShareLock | 1\n pg_class_oid_index | AccessShareLock | 1\n\nAlso of note, executing a random \"in the blue\" query on our \"users\" table\nreturns results very fast. While there's no doubt that caching may help,\nreturning a row that is definitely not cached is very fast: < 0.05 sec.\n\nTop tells me that the system isn't using much memory - almost always under\n100MB (of the 2GB we have). Is there a way to increase the amount of\nphysical RAM that PG uses? It seems there's a lot of room there.\n\nPostgresql.conf has:\n\nshared_buffers = 16384\nsort_mem = 8192\nvacuum_mem = 8192\n\nAlso, would queries becoming serialized effect connection delays? I think\nthere's still something else at large here...\n\nI've attached a vmstat output, while running dd. The RAID array is tw0. It\ndoes show the tw0 device getting significantly more work, numbers not seen\nduring normal operation.\n\nThanks,\n\nJason Coene\nGotfrag eSports\n585-598-6621 Phone\n585-598-6633 Fax\[email protected]\nhttp://www.gotfrag.com\n\n\n-----Original Message-----\nFrom: Paul Tuckfield [mailto:[email protected]] \nSent: Tuesday, May 11, 2004 7:50 PM\nTo: Jason Coene\nSubject: Re: [PERFORM] Intermittent slowdowns, connection delays\n\nThe things you point out suggest a heavy dependence on good cache \nperformance\n(typical of OLTP mind you) Do not be fooled if a query runs in 2 \nseconds then the second\nrun takes < .01 secons: the first run put it in cache the second got \nall cache hits :)\n\nBut beyond that, in an OLTP system, and typical website backing \ndatabase, \"cache is king\".\nAnd serialization is the devil\n\nSo look for reasons why your cache performance might deteriorate during \npeak, (like large historical tables\nthat users pull up dozens of scattered rows from, flooding cache) or \nwhy you may be\nserializing somewhere inside postgres (ex. 
if every page hit re-logs \nin, then theres probably serialization\ntrying to spawn what must be 40 processes/sec assuming your 11hit/sec \navg peaks at about 40/sec)\n\nAlso:\nI am really surprised you see zero IO in the vmstat you sent, but I'm \nunfamiliar with BSD version of vmstat.\nAFAIR, Solaris shows cached filesystem reads as \"page faults\" which is \nrather confusing. Since you have 1500 page\nfaults per second, yet no paging (bi bo) does thins mean the 1500 page \nfaults are filesystem IO that pg is doing?\ndo an objective test on an idle system by dd'ing a large file in and \nwatching what vmstat does.\n\n\n\n\n\nOn May 11, 2004, at 3:10 PM, Jason Coene wrote:\n\n> Hi All,\n>\n> We have a Postgres 7.4.1 server running on FreeBSD 5.2. Hardware is a \n> Dual\n> Xeon 2.6 (HT enabled), 2 GB Memory, 3Ware SATA RAID-5 w/ 4 7200 RPM \n> Seagate\n> disks and gigabit Intel Server Ethernet. The server is dedicated to \n> serving\n> data to our web-based CMS.\n>\n> We have a few web servers load balanced, and we do around 1M page\n> impressions per day. Our website is highly personalized, and we've\n> optimized it to limit the number of queries, but we still see between \n> 2 and\n> 3 SELECT's (with JOIN's) and 1 UPDATE per page load, selectively more \n> - a\n> fair volume.\n>\n> The single UPDATE per page load is updating a timestamp in a small \n> table\n> (about 150,000 rows) with only 1 index (on the 1 field that needs to be\n> matched).\n>\n> We're seeing some intermittent spikes in query time as actual \n> connection\n> time. I.e., during these seemingly random spikes, our debug output \n> looks\n> like this (times from start of HTTP request):\n>\n> SQL CONNECTION CREATING 'gf'\n> 0.0015 - ESTABLISHING CONNECTION\n> 1.7113 - CONNECTION OK\n> SQL QUERY ID 1 COST 0.8155 ROWS 1\n> SQL QUERY ID 2 COST 0.5607 ROWS 14\n> .. etc.. (all queries taking more time than normal, see below)\n>\n> Refresh the page 2 seconds later, and we'll get:\n>\n> SQL CONNECTION CREATING 'gf'\n> 0.0017 - ESTABLISHING CONNECTION\n> 0.0086 - CONNECTION OK\n> SQL QUERY ID 1 COST 0.0128 ROWS 1\n> SQL QUERY ID 2 COST 0.0033 ROWS 14\n> .. etc.. (with same queries)\n>\n> Indeed, during these types, it takes a moment for \"psql\" to connect on \n> the\n> command line (from the same machine using a local file socket), so \n> it's not\n> a network issue or a web-server issue. During these spurts, there's \n> nothing\n> too out of the ordinary in vmstat, systat or top.\n>\n> These programs show that we're not using much CPU (usually 60-80% \n> idle), and\n> disks usage is virtually nil. I've attached 60 seconds of \"vmstat 5\".\n> Memory usage looks like this (constantly):\n>\n> Mem: 110M Active, 1470M Inact, 206M Wired, 61M Cache, 112M Buf, 26M \n> Free\n>\n> I've cleaned up and tested query after query, and nothing is a \"hog\". \n> On an\n> idle server, every query will execute in < 0.05 sec. Perhaps some of \n> you\n> veterans have ideas?\n>\n> Thanks,\n>\n> Jason Coene\n> Gotfrag eSports\n> 585-598-6621 Phone\n> 585-598-6633 Fax\n> [email protected]\n> http://www.gotfrag.com\n>\n>\n> <vmstat51min.txt>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org",
"msg_date": "Tue, 11 May 2004 21:04:16 -0400",
"msg_from": "\"Jason Coene\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Intermittent slowdowns, connection delays"
}
]
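[Editor's note: the lock summary above comes from a custom view ("vopenlocks", per the lock list itself) over the system catalogs. A minimal sketch of such a view follows, using only pg_locks and pg_class columns that exist in 7.4; everything beyond the view name is illustrative, not Jason's actual definition.]

    CREATE VIEW vopenlocks AS
    SELECT c.relname,
           l.mode,
           count(*) AS numlocks
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     GROUP BY c.relname, l.mode
     ORDER BY numlocks DESC;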
[
{
"msg_contents": "Hi everybody..\n\n \n\n Before anything else I would like to thank all those person who answers\nmy previous question. again thank you very much\n\n \n\nThis is my question .\n\n \n\n In my query .. Select * from table1 where lastname LIKE 'PUNCIA%'..\n\n \n\nIn the query plan ..it uses seq scan rather than index scan .. why ? I have\nindex on lastname, firtname. \n\n \n\n \n\nThanks\n\n\n\n\n\n\n\n\n\n\nHi everybody..\n \n Before anything else I would like to thank all those\nperson who answers my previous question… again thank you very much\n \nThis is my question …\n \n In my query .. Select * from table1 where lastname LIKE\n ‘PUNCIA%’..\n \nIn the query plan ..it uses seq scan rather than index scan\n.. why ? I have index on lastname, firtname… \n \n \nThanks",
"msg_date": "Wed, 12 May 2004 14:18:48 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using LIKE expression problem.."
},
{
"msg_contents": "> In the query plan ..it uses seq scan rather than index scan .. why ? I \n> have index on lastname, firtname�\n\nHave you run VACUUM ANALYZE; on the table recently?\n\nChris\n\n",
"msg_date": "Wed, 12 May 2004 14:48:29 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using LIKE expression problem.."
},
{
"msg_contents": "Yes , I already do that but the same result .. LIKE uses seq scan\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Christopher\nKings-Lynne\nSent: Wednesday, May 12, 2004 2:48 PM\nTo: Michael Ryan S. Puncia\nCc: [email protected]\nSubject: Re: [PERFORM] Using LIKE expression problem..\n\n> In the query plan ..it uses seq scan rather than index scan .. why ? I \n> have index on lastname, firtname.\n\nHave you run VACUUM ANALYZE; on the table recently?\n\nChris\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n\n\n",
"msg_date": "Wed, 12 May 2004 15:46:07 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using LIKE expression problem.."
},
{
"msg_contents": "Are you in a non-C locale?\n\nChris\n\nMichael Ryan S. Puncia wrote:\n\n> Yes , I already do that but the same result .. LIKE uses seq scan\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, May 12, 2004 2:48 PM\n> To: Michael Ryan S. Puncia\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Using LIKE expression problem..\n> \n> \n>>In the query plan ..it uses seq scan rather than index scan .. why ? I \n>>have index on lastname, firtname.\n> \n> \n> Have you run VACUUM ANALYZE; on the table recently?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n",
"msg_date": "Wed, 12 May 2004 15:59:10 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using LIKE expression problem.."
},
{
"msg_contents": "Sorry .. I am a newbie and I don't know :( \nHow can I know that I am in C locale ?\nHow can I change my database to use C locale?\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Christopher\nKings-Lynne\nSent: Wednesday, May 12, 2004 3:59 PM\nTo: Michael Ryan S. Puncia\nCc: [email protected]\nSubject: Re: [PERFORM] Using LIKE expression problem..\n\nAre you in a non-C locale?\n\nChris\n\nMichael Ryan S. Puncia wrote:\n\n> Yes , I already do that but the same result .. LIKE uses seq scan\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, May 12, 2004 2:48 PM\n> To: Michael Ryan S. Puncia\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Using LIKE expression problem..\n> \n> \n>>In the query plan ..it uses seq scan rather than index scan .. why ? I \n>>have index on lastname, firtname.\n> \n> \n> Have you run VACUUM ANALYZE; on the table recently?\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faqs/FAQ.html\n\n\n",
"msg_date": "Wed, 12 May 2004 16:17:36 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using LIKE expression problem.."
},
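[Editor's note: the locale question is not answered directly in this thread, so a hint for the reader: the collation locale a 7.4 cluster was initialized with is visible as a read-only setting, and it determines whether a plain btree index can serve LIKE prefix searches.]

    SHOW lc_collate;
    -- 'C' (or 'POSIX') means an ordinary btree index can be used for
    -- LIKE 'PUNCIA%'; any other locale cannot, which is why the planner
    -- falls back to a sequential scan here.  Changing the locale means
    -- re-running initdb (e.g. initdb --locale=C) and reloading the data;
    -- the operator-class index shown in the next reply avoids that entirely.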
{
"msg_contents": "Use the text_pattern_ops operator when creating the index, see:\nhttp://www.postgresql.org/docs/7.4/static/indexes-opclass.html\n\nMichael Ryan S. Puncia wrote:\n> Sorry .. I am a newbie and I don't know :( \n> How can I know that I am in C locale ?\n> How can I change my database to use C locale?\n> \n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, May 12, 2004 3:59 PM\n> To: Michael Ryan S. Puncia\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Using LIKE expression problem..\n> \n> Are you in a non-C locale?\n> \n> Chris\n> \n> Michael Ryan S. Puncia wrote:\n> \n> \n>>Yes , I already do that but the same result .. LIKE uses seq scan\n>>\n>>-----Original Message-----\n>>From: [email protected]\n>>[mailto:[email protected]] On Behalf Of Christopher\n>>Kings-Lynne\n>>Sent: Wednesday, May 12, 2004 2:48 PM\n>>To: Michael Ryan S. Puncia\n>>Cc: [email protected]\n>>Subject: Re: [PERFORM] Using LIKE expression problem..\n>>\n>>\n>>\n>>>In the query plan ..it uses seq scan rather than index scan .. why ? I \n>>>have index on lastname, firtname.\n>>\n>>\n>>Have you run VACUUM ANALYZE; on the table recently?\n>>\n>>Chris\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 7: don't forget to increase your free space map settings\n>>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n",
"msg_date": "Tue, 18 May 2004 22:51:30 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using LIKE expression problem.."
}
]
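[Editor's note: a short sketch of the operator-class index Joseph recommends above, following the 7.4 documentation he links. The index name is illustrative; text_pattern_ops applies to a text column, and varchar_pattern_ops would be used if lastname is varchar.]

    CREATE INDEX table1_lastname_like_idx
        ON table1 (lastname text_pattern_ops);

    -- With this index in place the prefix search can use an index scan even
    -- in a non-C locale:
    EXPLAIN SELECT * FROM table1 WHERE lastname LIKE 'PUNCIA%';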
[
{
"msg_contents": "Hello I'm tuning a postgresql (7.4.2) server for best performance .\nI have a question about the planner .\nI have two identical tables : one stores short data (about 2.000.000\nrecord now) and\nthe other historycal data ( about 8.000.000 record now and growing ...) \n\n \nA simple test query : select tag_id,valore_tag,data_tag from\nstorico_misure where (data_tag>'2004-05-03' and data_tag <'2004-05-12')\nand tag_id=37423 ;\n\nTakes 57,637 ms on the short table and 1321,448 ms (!!) on the\nhistorycal table .Tables are vacuumed and reindexed . \n\n\n\nTables and query plans :\n\n\\d storico_misure\n Table \"tenore.storico_misure\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"pk_storico_misure_2\" primary key, btree (data_tag, tag_id)\n \"pk_anagtstorico_misuree_idx_2\" btree (tag_id)\n \"storico_misure_data_tag_idx_2\" btree (data_tag)\n\nstorico=# \\d storico_misure_short\n Table \"tenore.storico_misure_short\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"storico_misure_short_pkey_2\" primary key, btree (data_tag, tag_id)\n \"pk_anagtstorico_misuree_short_idx_2\" btree (tag_id)\n \"storico_misure_short_data_tag_idx_2\" btree (data_tag)\n\nstorico=#\nstorico=#\nstorico=# explain select tag_id,valore_tag,data_tag from storico_misure\nwhere (data_tag>'2004-05-03' and data_tag <'2004-05-12') and\ntag_id=37423 ;\n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------\n Index Scan using pk_storico_misure_2 on storico_misure\n(cost=0.00..1984.64 rows=658 width=21)\n Index Cond: ((data_tag > '2004-05-03 00:00:00'::timestamp without\ntime zone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without time\nzone) AND (tag_id = 37423))\n(2 rows)\n\nTime: 1,667 ms\nstorico=# explain select tag_id,valore_tag,data_tag from\nstorico_misure_short where (data_tag>'2004-05-03' and data_tag\n<'2004-05-12') and tag_id=37423 ;\n QUERY\nPLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-\n Index Scan using pk_anagtstorico_misuree_short_idx_2 on\nstorico_misure_short (cost=0.00..1784.04 rows=629 width=20)\n Index Cond: (tag_id = 37423)\n Filter: ((data_tag > '2004-05-03 00:00:00'::timestamp without time\nzone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without time\nzone))\n\n\nHow can i force the planner to use the same query plan ? 
I'd like to\ntest if using the same query plan i've better performace .\n\nThanks in advance\n\n\n\n\nthis is my posgresql.conf\n\n#-----------------------------------------------------------------------\n----\n# CONNECTIONS AND AUTHENTICATION\n#-----------------------------------------------------------------------\n----\n\n# - Connection Settings -\n\ntcpip_socket = true\nmax_connections = 100\n\t# note: increasing max_connections costs about 500 bytes of\nshared\n\t# memory per connection slot, in addition to costs from\nshared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\nport = 5432\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777\t# octal\n#virtual_host = ''\t\t# what interface to listen on; defaults\nto any\n#rendezvous_name = ''\t\t# defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60\t# 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#-----------------------------------------------------------------------\n----\n# RESOURCE USAGE (except WAL)\n#-----------------------------------------------------------------------\n----\n\n# - Memory -\n\nshared_buffers = 3000\t\t# min 16, at least max_connections*2,\n8KB each\nsort_mem = 4096 \t\t# min 64, size in KB\nvacuum_mem = 32768\t\t# min 1024, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000\t\t# min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000\t# min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t# min 25\n#preload_libraries = ''\n\n\n#-----------------------------------------------------------------------\n----\n# WRITE AHEAD LOG\n#-----------------------------------------------------------------------\n----\n\n# - Settings -\n\nfsync = false\t\t\t# turns forced synchronization on or off\n#wal_sync_method = fsync\t# the default varies across platforms:\n\t\t\t\t# fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_buffers = 8\t\t# min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 12\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300\t# range 30-3600, in seconds\n#checkpoint_warning = 30\t# 0 is off, in seconds\n#commit_delay = 0\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t# range 1-1000\n\n\n#-----------------------------------------------------------------------\n----\n# QUERY TUNING\n#-----------------------------------------------------------------------\n----\n\n# - Planner Method Enabling -\n\nenable_hashagg = false\nenable_hashjoin = false\nenable_indexscan = true\nenable_mergejoin = true\nenable_nestloop = false\nenable_seqscan = true\nenable_sort = false\nenable_tidscan = false\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000\t# typically 8KB each\n#random_page_cost = 4\t\t# units are one sequential page fetch\ncost\n#cpu_tuple_cost = 0.01\t\t# (same)\n#cpu_index_tuple_cost = 0.001\t# (same)\n#cpu_operator_cost = 0.0025\t# (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0\t\t# default based on tables in statement,\n\t\t\t\t# range 128-1024\n#geqo_selection_bias = 2.0\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t# range 1-1000\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t# 1 disables collapsing of 
explicit\nJOINs\n\n\n#-----------------------------------------------------------------------\n----\n# ERROR REPORTING AND LOGGING\n#-----------------------------------------------------------------------\n----\n\n# - Syslog -\n\n#syslog = 0\t\t\t# range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\n#client_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2,\ndebug1,\n\t\t\t\t# log, info, notice, warning, error\n\n#log_min_messages = notice\t# Values, in order of decreasing detail:\n\t\t\t\t# debug5, debug4, debug3, debug2,\ndebug1,\n\t\t\t\t# info, notice, warning, error, log,\nfatal,\n\t\t\t\t# panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing\nseverity:\n\t\t\t\t # debug5, debug4, debug3, debug2,\ndebug1,\n\t\t\t\t # info, notice, warning, error,\npanic(off)\n\t\t\t\t \n#log_min_duration_statement = -1 # Log all statements whose\n\t\t\t\t # execution time exceeds the value, in\n\t\t\t\t # milliseconds. Zero prints all\nqueries.\n\t\t\t\t # Minus-one disables.\n\n#silent_mode = false\t\t # DO NOT USE without Syslog!\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\n\n#-----------------------------------------------------------------------\n----\n# RUNTIME STATISTICS\n#-----------------------------------------------------------------------\n----\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = true\nstats_command_string = true\n#stats_block_level = false\nstats_row_level = true\n#stats_reset_on_server_start = true\n\n\n#-----------------------------------------------------------------------\n----\n# CLIENT CONNECTION DEFAULTS\n#-----------------------------------------------------------------------\n----\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t# schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\nstatement_timeout = 360000\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown\t\t# actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#extra_float_digits = 0\t\t# min -15, max 2\n#client_encoding = sql_ascii\t# actually, defaults to database\nencoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'it_IT.UTF-8'\t\t# locale for system error\nmessage strings\nlc_monetary = 'it_IT.UTF-8'\t\t# locale for monetary formatting\nlc_numeric = 'it_IT.UTF-8'\t\t# locale for number formatting\nlc_time = 'it_IT.UTF-8'\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000\t\t# min 10\n\n\n#-----------------------------------------------------------------------\n----\n# LOCK MANAGEMENT\n#-----------------------------------------------------------------------\n----\n\n#deadlock_timeout = 1000\t# in milliseconds\n#max_locks_per_transaction = 64\t# min 10, 
~260*max_connections bytes\neach\n\n\n#-----------------------------------------------------------------------\n----\n# VERSION/PLATFORM COMPATIBILITY\n#-----------------------------------------------------------------------\n----\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced\t# advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n\n\n\n\n\n\n",
"msg_date": "Thu, 13 May 2004 14:42:51 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query plan on identical tables differs . Why ?"
},
{
"msg_contents": "Fabio Panizzutti wrote:\n> storico=# explain select tag_id,valore_tag,data_tag from storico_misure\n> where (data_tag>'2004-05-03' and data_tag <'2004-05-12') and\n> tag_id=37423 ;\n\nCan you please post explain analyze? That includes actual timings.\n\nLooking at the schema, can you try \"and tag_id=37423::integer\" instead?\n\n> enable_hashagg = false\n> enable_hashjoin = false\n> enable_indexscan = true\n> enable_mergejoin = true\n> enable_nestloop = false\n> enable_seqscan = true\n> enable_sort = false\n> enable_tidscan = false\n\nWhy do you have these off? AFAIK, 7.4 improved hash aggregates a lot. So you \nmight miss on these in this case.\n\n> # - Planner Cost Constants -\n> \n> #effective_cache_size = 1000\t# typically 8KB each\n\nYou might set it to something realistic.\n\nAnd what is your hardware setup? Disks/CPU/RAM?\n\nJust to be sure, you went thr.\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html and \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html?\n\nHTH\n\n Regards\n Shridhar\n",
"msg_date": "Thu, 13 May 2004 18:35:19 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query plan on identical tables differs . Why ?"
},
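[Editor's note: making "something realistic" concrete: in 7.4, effective_cache_size is counted in 8 kB pages and only informs the planner, so it can be tried per session before editing postgresql.conf. The figure below is an assumption sized for the 1 GB machine described later in the thread.]

    SET effective_cache_size = 65536;   -- 65536 x 8 kB = 512 MB of assumed OS cache
    -- then rerun EXPLAIN ANALYZE on the slow query and see whether the plan changes.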
{
"msg_contents": "\n\n>>>-----Messaggio originale-----\n>>>Da: [email protected] \n>>>[mailto:[email protected]] Per conto di \n>>>Shridhar Daithankar\n>>>Inviato: giovedì 13 maggio 2004 15.05\n>>>A: Fabio Panizzutti\n>>>Cc: [email protected]\n>>>Oggetto: Re: [PERFORM] Query plan on identical tables differs . Why ?\n>>>\n>>>\n>>>Fabio Panizzutti wrote:\n>>>> storico=# explain select tag_id,valore_tag,data_tag from \n>>>> storico_misure where (data_tag>'2004-05-03' and data_tag \n>>>> <'2004-05-12') and tag_id=37423 ;\n>>>\n>>>Can you please post explain analyze? That includes actual timings.\n\nstorico=# explain analyze select tag_id,valore_tag,data_tag from\nstorico_misure where (data_tag>'2004-05-03' and data_tag <'2004-05-12')\nand tag_id=37423 ;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------\n Index Scan using pk_storico_misure_2 on storico_misure\n(cost=0.00..1984.64 rows=658 width=21) (actual time=723.441..1858.107\nrows=835 loops=1)\n Index Cond: ((data_tag > '2004-05-03 00:00:00'::timestamp without\ntime zone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without time\nzone) AND (tag_id = 37423))\n Total runtime: 1860.641 ms\n(3 rows)\n\nstorico=# explain analyze select tag_id,valore_tag,data_tag from\nstorico_misure_short where (data_tag>'2004-05-03' and data_tag\n<'2004-05-12') and tag_id=37423 ;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n-------------------\n Index Scan using pk_anagtstorico_misuree_short_idx_2 on\nstorico_misure_short (cost=0.00..1783.04 rows=629 width=20) (actual\ntime=0.323..42.186 rows=864 loops=1)\n Index Cond: (tag_id = 37423)\n Filter: ((data_tag > '2004-05-03 00:00:00'::timestamp without time\nzone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without time\nzone))\n Total runtime: 43.166 ms\n\n\n\n\n>>>Looking at the schema, can you try \"and \n>>>tag_id=37423::integer\" instead?\n>>>\n\nI try : \nexplain analyze select tag_id,valore_tag,data_tag from storico_misure\nwhere (data_tag>'2004-05-03' and data_tag <'2004-05-12') and\ntag_id=37423::integer;\nIndex Scan using pk_storico_misure_2 on storico_misure\n(cost=0.00..1984.64 rows=658 width=21) (actual time=393.337..1303.998\nrows=835 loops=1)\n Index Cond: ((data_tag > '2004-05-03 00:00:00'::timestamp without\ntime zone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without time\nzone) AND (tag_id = 37423))\n Total runtime: 1306.484 ms\n\n>>>> enable_hashagg = false\n>>>> enable_hashjoin = false\n>>>> enable_indexscan = true\n>>>> enable_mergejoin = true\n>>>> enable_nestloop = false\n>>>> enable_seqscan = true\n>>>> enable_sort = false\n>>>> enable_tidscan = false\n>>>Why do you have these off? AFAIK, 7.4 improved hash \n>>>aggregates a lot. 
So you \n>>>might miss on these in this case.\n\nI try for debug purpose , now i reset all 'enable' to default :\n \nselect * from pg_settings where name like 'enable%';\n name | setting | context | vartype | source |\nmin_val | max_val\n------------------+---------+---------+---------+--------------------+--\n-------+---------\n enable_hashagg | on | user | bool | configuration file |\n|\n enable_hashjoin | on | user | bool | configuration file |\n|\n enable_indexscan | on | user | bool | configuration file |\n|\n enable_mergejoin | on | user | bool | configuration file |\n|\n enable_nestloop | on | user | bool | configuration file |\n|\n enable_seqscan | on | user | bool | configuration file |\n|\n enable_sort | on | user | bool | configuration file |\n|\n enable_tidscan | on | user | bool | configuration file |\n|\n(8 rows)\n\nThe query plan are the same ....\n\n>>>> # - Planner Cost Constants -\n>>>> \n>>>> #effective_cache_size = 1000\t# typically 8KB each\n>>>\n>>>You might set it to something realistic.\n>>>\n\nI try 10000 and 100000 but nothing change .\n\n\n\n>>>And what is your hardware setup? Disks/CPU/RAM?\n\n32GB SCSI/DUAL Intel(R) Pentium(R) III CPU family 1133MHz/ 1GB RAM\nand linux red-hat 9\n\n\nI don't understand why the planner chose a different query plan on\nidentical tables with same indexes . \n\nThanks a lot for help!.\n\nFabio\n\n",
"msg_date": "Thu, 13 May 2004 16:06:01 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: Query plan on identical tables differs . Why ?"
},
{
"msg_contents": "\"Fabio Panizzutti\" <[email protected]> writes:\n> I don't understand why the planner chose a different query plan on\n> identical tables with same indexes . \n\nDifferent data statistics; not to mention different table sizes\n(the cost equations are not linear).\n\nHave you ANALYZEd (or VACUUM ANALYZEd) both tables recently?\n\nIf the stats are up to date but still not doing the right thing,\nyou might try increasing the statistics target for the larger\ntable's tag_id column. See ALTER TABLE SET STATISTICS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 May 2004 11:01:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R: Query plan on identical tables differs . Why ? "
},
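[Editor's note: Tom's statistics-target suggestion spelled out as commands. The target of 100 is only an example; the 7.4 default is 10, and the follow-up below reports trying 1000.]

    ALTER TABLE storico_misure ALTER COLUMN tag_id SET STATISTICS 100;
    ANALYZE storico_misure;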
{
"msg_contents": "On Thu, 13 May 2004, Fabio Panizzutti wrote:\n\n\n> I don't understand why the planner chose a different query plan on\n> identical tables with same indexes .\n\nBecause it's more than table structure that affects the choice made by the\nplanner. In addition the statistics about the values that are there as\nwell as the estimated size of the table have effects. One way to see is\nto see what it thinks is best is to remove the indexes it is using and see\nwhat plan it gives then, how long it takes and the estimated costs for\nthose plans.\n\nIn other suggestions, I think having a (tag_id, data_tag) index rather\nthan (data_tag, tag_id) may be a win for queries like this. Also, unless\nyou're doing many select queries by only the first field of the composite\nindex and you're not doing very many insert/update/deletes, you may want\nto drop the other index on just that field.\n",
"msg_date": "Thu, 13 May 2004 08:16:34 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R: Query plan on identical tables differs . Why ?"
},
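[Editor's note: the column ordering Stephan suggests, shown here as a standalone index rather than a rebuilt primary key (the follow-up below rebuilds the primary key instead; the index name is illustrative). Leading with the equality column lets both the tag_id and the data_tag conditions become index conditions instead of a post-scan filter.]

    CREATE INDEX storico_misure_tag_data_idx
        ON storico_misure (tag_id, data_tag);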
{
"msg_contents": "\n\n>>>-----Messaggio originale-----\n>>>Da: Tom Lane [mailto:[email protected]] \n>>>Inviato: giovedì 13 maggio 2004 17.01\n>>>A: Fabio Panizzutti\n>>>Cc: 'Shridhar Daithankar'; [email protected]\n>>>Oggetto: Re: R: [PERFORM] Query plan on identical tables \n>>>differs . Why ? \n>>>\n>>>\n>>>\"Fabio Panizzutti\" <[email protected]> writes:\n>>>> I don't understand why the planner chose a different query plan on \n>>>> identical tables with same indexes .\n>>>\n>>>Different data statistics; not to mention different table \n>>>sizes (the cost equations are not linear).\n>>>\n>>>Have you ANALYZEd (or VACUUM ANALYZEd) both tables recently?\n>>>\n>>>If the stats are up to date but still not doing the right \n>>>thing, you might try increasing the statistics target for \n>>>the larger table's tag_id column. See ALTER TABLE SET STATISTICS.\n>>>\n>>>\t\t\tregards, tom lane\n>>>\n\nAll tables are vacumed and analyzed . \nI try so set statistics to 1000 to tag_id columns with ALTER TABLE SET\nSTATISTIC, revacuum analyze , but the planner choose the same query\nplan . \nI'm trying now to change the indexes .\n\nThanks \n\n\n\n",
"msg_date": "Fri, 14 May 2004 10:40:24 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: R: Query plan on identical tables differs . Why ? "
},
{
"msg_contents": "\n\n>>>-----Messaggio originale-----\n>>>Da: Stephan Szabo [mailto:[email protected]] \n>>>Inviato: giovedì 13 maggio 2004 17.17\n>>>A: Fabio Panizzutti\n>>>Cc: 'Shridhar Daithankar'; [email protected]\n>>>Oggetto: Re: R: [PERFORM] Query plan on identical tables \n>>>differs . Why ?\n>>>\n>>>\n>>>On Thu, 13 May 2004, Fabio Panizzutti wrote:\n>>>\n>>>\n>>>> I don't understand why the planner chose a different query plan on \n>>>> identical tables with same indexes .\n>>>\n>>>Because it's more than table structure that affects the \n>>>choice made by the planner. In addition the statistics \n>>>about the values that are there as well as the estimated \n>>>size of the table have effects. One way to see is to see \n>>>what it thinks is best is to remove the indexes it is using \n>>>and see what plan it gives then, how long it takes and the \n>>>estimated costs for those plans.\n>>>\n>>>In other suggestions, I think having a (tag_id, data_tag) \n>>>index rather than (data_tag, tag_id) may be a win for \n>>>queries like this. Also, unless you're doing many select \n>>>queries by only the first field of the composite index and \n>>>you're not doing very many insert/update/deletes, you may \n>>>want to drop the other index on just that field.\n>>>\n\nThanks for your attention , i change the indexes on the tables as you\nsuggested :\n\n storico=# \\d storico_misure_short\n Table \"tenore.storico_misure_short\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"storico_misure_short_idx\" primary key, btree (tag_id, data_tag)\n \"storico_misure_short_data_tag_idx2\" btree (data_tag)\n\nstorico=# \\d storico_misure\n Table \"tenore.storico_misure\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"storico_misure_idx\" primary key, btree (tag_id, data_tag)\n \"storico_misure_data_tag_idx2\" btree (data_tag)\n\nAnd now performance are similar and the planner works correctly :\n\nstorico=# \\d storico_misure_short\n Table \"tenore.storico_misure_short\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"storico_misure_short_idx\" primary key, btree (tag_id, data_tag)\n \"storico_misure_short_data_tag_idx2\" btree (data_tag)\n\nstorico=# \\d storico_misure\n Table \"tenore.storico_misure\"\n Column | Type | Modifiers\n-------------------------+-----------------------------+-----------\n data_tag | timestamp without time zone | not null\n tag_id | integer | not null\n unita_misura | character varying(6) | not null\n valore_tag | numeric(20,3) | not null\n qualita | integer | not null\n 
numero_campioni | numeric(5,0) |\n frequenza_campionamento | numeric(3,0) |\nIndexes:\n \"storico_misure_idx\" primary key, btree (tag_id, data_tag)\n \"storico_misure_data_tag_idx2\" btree (data_tag)\n\nstorico=# explain analyze select tag_id,valore_tag,data_tag from\nstorico_misure_short where (data_tag>'2004-05-03' and data_tag\n<'2004-05-12') and tag_id=37423 ;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------\n Index Scan using storico_misure_short_idx on storico_misure_short\n(cost=0.00..2104.47 rows=584 width=20) (actual time=0.232..39.932\nrows=864 loops=1)\n Index Cond: ((tag_id = 37423) AND (data_tag > '2004-05-03\n00:00:00'::timestamp without time zone) AND (data_tag < '2004-05-12\n00:00:00'::timestamp without time zone))\n Total runtime: 40.912 ms\n(3 rows)\n\nTime: 43,233 ms\nstorico=# explain analyze select tag_id,valore_tag,data_tag from\nstorico_misure where (data_tag>'2004-05-03' and data_tag <'2004-05-12')\nand tag_id=37423 ;\n \nQUERY PLAN\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n--------------------------\n Index Scan using storico_misure_idx on storico_misure\n(cost=0.00..2097.56 rows=547 width=21) (actual time=0.518..92.067\nrows=835 loops=1)\n Index Cond: ((tag_id = 37423) AND (data_tag > '2004-05-03\n00:00:00'::timestamp without time zone) AND (data_tag < '2004-05-12\n00:00:00'::timestamp without time zone))\n Total runtime: 93.459 ms\n(3 rows)\n\n\nI need the index on data_tag for other query ( last values on the last\ndate ) .\n\n\nRegards \n\nFabio \n\n\n\n",
"msg_date": "Fri, 14 May 2004 10:55:07 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: R: Query plan on identical tables differs . Why ?"
}
]
[
{
"msg_contents": "(Sorry if this ends up being a duplicate post, I sent a reply yesterday, \nbut it doesn't appear to have gone through... I think I typo'd the address \nbut never got a bounce.)\n\nHi,\n Thanks for your initial help. I have some more questions below.\n\nAt 05:02 AM 5/12/2004, Shridhar Daithankar wrote:\n>Doug Y wrote:\n>\n>>Hello,\n>> I've been having some performance issues with a DB I use. I'm trying \n>> to come up with some performance recommendations to send to the \"adminstrator\".\n>>\n>>Ok for what I'm uncertain of...\n>>shared_buffers:\n>>According to http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>>Its more of a staging area and more isn't necessarily better. That psql \n>>relies on the OS to cache data for later use.\n>>But according to \n>>http://www.ca.postgresql.org/docs/momjian/hw_performance/node3.html its \n>>where psql caches previous data for queries because the OS cache is \n>>slower, and should be as big as possible without causing swap.\n>>Those seem to be conflicting statements. In our case, the \"administrator\" \n>>kept increasing this until performance seemed to increase, which means \n>>its now 250000 (x 8k is 2G).\n>>Is this just a staging area for data waiting to move to the OS cache, or \n>>is this really the area that psql caches its data?\n>\n>It is the area where postgresql works. It updates data in this area and \n>pushes it to OS cache for disk writes later.\n>\n>By experience, larger does not mean better for this parameter. For \n>multi-Gig RAM machines, the best(on an average for wide variety of load) \n>value found to be around 10000-15000. May be even lower.\n>\n>It is a well known fact that raising this parameter unnecessarily \n>decreases the performance. You indicate that best performance occurred at \n>250000. This is very very large compared to other people's experience.\n\nOk. I think I understand a bit better now.\n\n>>effective_cache_size:\n>>Again, according to the Varlena guide this tells psql how much system \n>>memory is available for it to do its work in.\n>>until recently, this was set at the default value of 1000. It was just \n>>recently increased to 180000 (1.5G)\n>>according to \n>>http://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html \n>>it should be about 25% of memory?\n>\n>No rule of thumb. It is amount of memory OS will dedicate to psotgresql \n>data buffers. Depending uponn what else you run on machine, it could be \n>straight-forward or noodly value to calculate. For a 4GB machine, 1.5GB is \n>quite good but coupled with 2G of shared buffers it could push the \n>machines to swap storm. And swapping shared buffers is a big performance hit.\n\nWe don't seem to be swapping much:\n\n# top\n\n 2:21pm up 236 days, 19:12, 1 user, load average: 1.45, 1.09, 1.00\n53 processes: 51 sleeping, 2 running, 0 zombie, 0 stopped\nCPU0 states: 30.3% user, 9.1% system, 0.0% nice, 60.0% idle\nCPU1 states: 32.0% user, 9.3% system, 0.0% nice, 58.1% idle\nMem: 3863468K av, 3845844K used, 17624K free, 2035472K shrd, 198340K buff\nSwap: 1052248K av, 1092K used, 1051156K free 1465112K cached\n\nlooks like at some point it did swap a little, but from running vmstat, I \ncan't seem to catch it actively swapping.\n\n>>Finally sort_mem:\n>>Was until recently left at the default of 1000. Is now 16000.\n>\n>Sort memory is per sort not per query or per connection. So depending upon \n>how many concurrent connections you entertain, it could take quite a chuck \n>of RAM.\n\nRight I understand that. 
How does one calculate the size of a sort? Rows * \nwidth from an explain?\n\n>>Increasing the effective cache and sort mem didn't seem to make much of a \n>>difference. I'm guessing the eff cache was probably raised a bit too \n>>much, and shared_buffers is way to high.\n>\n>I agree. For shared buffers start with 5000 and increase in batches on \n>1000. Or set it to a high value and check with ipcs for maximum shared \n>memory usage. If share memory usage peaks at 100MB, you don't need more \n>than say 120MB of buffers.\n\nMy results from ipcs seems confusing... says its using the full 2G of \nshared cache:\n\n# ipcs\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x0052e2c1 6389760 postgres 600 2088370176 4\n\n------ Semaphore Arrays --------\nkey semid owner perms nsems status\n0x0052e2c1 424378368 postgres 600 17\n0x0052e2c2 424411137 postgres 600 17\n0x0052e2c3 424443906 postgres 600 17\n0x0052e2c4 424476675 postgres 600 17\n0x0052e2c5 424509444 postgres 600 17\n0x0052e2c6 424542213 postgres 600 17\n0x0052e2c7 424574982 postgres 600 17\n0x0052e2c8 424607751 postgres 600 17\n0x0052e2c9 424640520 postgres 600 17\n0x0052e2ca 424673289 postgres 600 17\n0x0052e2cb 424706058 postgres 600 17\n0x0052e2cc 424738827 postgres 600 17\n0x0052e2cd 424771596 postgres 600 17\n0x0052e2ce 424804365 postgres 600 17\n0x0052e2cf 424837134 postgres 600 17\n0x0052e2d0 424869903 postgres 600 17\n0x0052e2d1 424902672 postgres 600 17\n0x00018d45 505544721 root 777 1\n\n------ Message Queues --------\nkey msqid owner perms used-bytes messages\n\n\n>>What can I do to help determine what the proper settings should be and/or \n>>look at other possible choke points. What should I look for in iostat, \n>>mpstat, or vmstat as red flags that cpu, memory, or i/o bound?\n>\n>Yes. vmstat is usually a lot of help to locate the bottelneck.\n\nWhat would I be looking for here?\n\n# vmstat 2 10\n procs memory swap io system \n cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 0 0 1092 14780 198120 1467164 0 0 0 0 0 0 0 0 0\n 0 0 0 1092 19488 198120 \n1467204 0 0 0 0 240 564 11 5 84\n 0 0 0 1092 19520 198120 \n1467300 0 0 0 210 443 1094 29 8 63\n 0 0 0 1092 15832 198120 \n1467356 0 0 4 110 368 1455 27 5 68\n 3 0 0 1092 10956 198120 \n1467464 0 0 4 336 417 1679 33 10 57\n 1 0 0 1092 17840 198124 \n1465980 0 0 200 334 581 1914 63 14 23\n 1 0 0 1092 16556 198124 \n1466012 0 0 0 226 397 1069 30 4 66\n 0 0 0 1092 19096 198124 \n1466028 0 0 0 160 230 314 12 2 86\n 2 0 1 1092 16100 198128 \n1466748 0 0 28 1484 711 1578 23 12 65\n 0 0 0 1092 20140 198128 \n1466780 0 0 0 414 291 746 15 8 77\n\nI'm guessing what I should look at is the io: bi & bo ? when I run some \nparticularly large queries I see bo activity so I'm speculating that that \nmeans its reading pages from disk, correct?\n\n>>DB maintenance wise, I don't believe they were running vacuum full until \n>>I told them a few months ago that regular vacuum analyze no longer cleans \n>>out dead tuples. Now normal vac is run daily, vac full weekly \n>>(supposedly). How can I tell from the output of vacuum if the vac fulls \n>>aren't being done, or not done often enough? Or from the system tables, \n>>what can I read?\n>\n>In 7.4 you can do vacuum full verbose and it will tell you the stats at \n>the end. For 7.3.x, its not there.\n>\n>I suggest you vacuum full database once.(For large database, dumping \n>restoring might work faster. Dump/restore and vacuum full both lock the \n>database exclusively i.e. downtime. 
So I guess faster the better for you. \n>But there is no tool/guideline to determine which way to go.)\n\nOk they had not done a full vacuum in a long time. I them run vacuumdb \n--full --analyze --verbose and dump it into a file. What should I look for \nto see if it was useful?\n\nfor example:\nINFO: Pages 118200: Changed 74, reaped 117525, Empty 0, New 0; Tup 575298: \nVac 11006, Keep/VTL 0/0, UnUsed 2454159, MinLen 68, MaxLen 1911; Re-using: \nFree/Avai\nl. Space 774122944/774122944; EndEmpty/Avail. Pages 0/118200.\n CPU 9.41s/1.33u sec elapsed 97.35 sec.\n\nIs there any documentation on what those numbers represent?\n\nAlso do we need to use REINDEX on the indexes, or does vacuum full take \ncase of that?\n\n\n>>Is there anywhere else I can look for possible clues? I have access to \n>>the DB super-user, but not the system root/user.\n>\n>Other than hardware tuning, find out slow/frequent queries. Use explain \n>analyze to determine why they are so slow. Forgetting to typecast a where \n>clause and using sequential scan could cost you lot more than mistuned \n>postgresql configuration.\n\nRight. One example I can think of is one particular query takes about 120 \nseconds to run (explain analyze), but if I set enable_seqscan to off, it \ntakes about 10 seconds.\n\n>>Thank you for your time. Please let me know any help or suggestions you \n>>may have. Unfortunately upgrading postgres, OS, kernel, or re-writing \n>>schema is most likely not an option.\n>\n>I hope you can change your queries.\n\nFor the most part we're not having too much trouble, just some newer \nqueries were building for some new features is what we're seeing trouble with.\n\n\n>HTH\n>\n> Shridhar\n\n",
"msg_date": "Thu, 13 May 2004 15:42:20 -0400",
"msg_from": "Doug Y <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Clarification on some settings"
},
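[Editor's note: on the query that takes 120 seconds normally but 10 seconds with enable_seqscan = off: rather than forcing the planner globally, the usual first knob is random_page_cost, which tells it how expensive random (index) reads are relative to sequential ones. The value below is illustrative; the 7.4 default is 4.]

    SET random_page_cost = 2;
    -- then rerun the slow query under EXPLAIN ANALYZE and compare the chosen
    -- plan and timing against the enable_seqscan = off experiment.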
{
"msg_contents": "On Thu, 2004-05-13 at 14:42, Doug Y wrote:\n\n\n> We don't seem to be swapping much:\n> \n\nLinux aggressively swaps. If you have any process in memory which is\nsleeping a lot, Linux may actively attempt to page it out. This is true\neven when you are not low on memory. Just because you see some swap\nspace being used, does not mean that your actively running processes are\ncausing your system to swap.\n\nI didn't catch what kernel version you are running, so I'm tossing this\nout there. Depending on the kernel (I believe 2.6+, but there may be\nsomething like it in older kernels) that you are running, you can\nattempt to tune this buy setting a value of 0-100 in\n/proc/sys/vm/swappiness. The higher the number, the more aggressive the\nkernel will attempt to swap. Some misc. kernel patches attempt to\ndynamically tune this parameter.\n\nFor a dedicated DB server, a higher number will probably be better. \nThis is because it should result in the most cache being available to\nthe system. This, of course means, you may have to wait an tad bit long\nwhen you ssh into the system, assuming sshd was swapped out. I think\nyou get the idea.\n\n\n> Swap: 1052248K av, 1092K used, 1051156K free 1465112K cached\n> \n> looks like at some point it did swap a little, but from running vmstat, I \n> can't seem to catch it actively swapping.\n> \n\nChances are, you have some dormant process which is partially or\ncompletely paged out.\n\nFor an interesting read on Linux and swapping, you can find out more\nhere: http://kerneltrap.org/node/view/3080.\n\nCheers!\n\n-- \nGreg Copeland, Owner\[email protected]\nCopeland Computer Consulting\n940.206.8004\n\n\n",
"msg_date": "Thu, 13 May 2004 15:28:11 -0500",
"msg_from": "Greg Copeland <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Clarification on some settings"
}
]
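[Editor's note: Doug's REINDEX question is not answered in the excerpt above, so a note may help: on 7.x, VACUUM FULL compacts the table but does not rebuild its indexes (it can even leave them more bloated), so periodic index rebuilds are a separate step. The table name below is purely illustrative.]

    REINDEX TABLE big_table;   -- rebuilds every index on the named table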
[
{
"msg_contents": "\n\n>>>> Index Scan using pk_storico_misure_2 on storico_misure \n>>>>(cost=0.00..1984.64 rows=658 width=21) (actual \n>>>time=723.441..1858.107 \n>>>>rows=835 loops=1)\n>>>> Index Cond: ((data_tag > '2004-05-03 \n>>>00:00:00'::timestamp without \n>>>>time zone) AND (data_tag < '2004-05-12 00:00:00'::timestamp without \n>>>>time\n>>>>zone) AND (tag_id = 37423))\n>>>\n>>>Either most of the time is spent skipping index tuples in \n>>>the data_tag range 2004-05-03 to 2004-05-12 which have \n>>>tag_id <> 37423, or getting those 835 rows causes a lot of \n>>>disk seeks.\n>>>\n>>>If the former is true, an index on (tag_id, data_tag) will help.\n>>>\nIs true , i recreate the indexes making an index on (tag_id, data_tag)\nand works fine . \n\n\n>>>In your first message you wrote:\n>>>>fsync = false\n>>>\n>>>Do this only if you don't care for your data.\n>>>\n\nI set it to false , for performance tests .I've a stored procedure that\nmake about 2000 insert in 2 tables and 2000 delete in another and with\nfsync false perfomrmance are 2.000 -3.000 ms (stable) with fsync 3.000\nms to 15.000 ms . I trust in my hardware an O.S so fsync setting is a\nbig dubt for my production enviroment .\n\nThanks a lot\n\nBye \n\n",
"msg_date": "Fri, 14 May 2004 11:22:44 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: R: Query plan on identical tables differs . Why ?"
},
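[Editor's note: one practical detail behind the fsync comparison above: in 7.4 fsync is a server-wide setting read from postgresql.conf, so it cannot be flipped per session for an A/B test; only the value the running server was started with applies.]

    SHOW fsync;   -- confirms the setting the timings were measured under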
{
"msg_contents": "> I trust in my hardware an O.S so fsync setting is a\n> big dubt for my production enviroment .\n\nThen you are making a big mistake, loving your hardware more than your\ndata...\n\nChris\n\n\n",
"msg_date": "Fri, 14 May 2004 17:55:17 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: R: R: Query plan on identical tables differs . Why ?"
},
{
"msg_contents": "\n\n>>>-----Messaggio originale-----\n>>>Da: [email protected] \n>>>[mailto:[email protected]] Per conto di \n>>>Christopher Kings-Lynne\n>>>Inviato: venerdì 14 maggio 2004 11.55\n>>>A: Fabio Panizzutti\n>>>Cc: 'Manfred Koizar'; [email protected]\n>>>Oggetto: Re: R: R: [PERFORM] Query plan on identical tables \n>>>differs . Why ?\n>>>\n>>>\n>>>> I trust in my hardware an O.S so fsync setting is a\n>>>> big dubt for my production enviroment .\n>>>\n>>>Then you are making a big mistake, loving your hardware more \n>>>than your data...\n>>>\n>>>Chris\n>>>\n\n\nI'm testing for better performance in insert/delete so i turn off fsync\n, i don't love hardware more than data , so i'll set fsync on in the\nproduction enviroment .\nThanks a lot\n\nBest regards\n\nFabio \n\n",
"msg_date": "Fri, 14 May 2004 12:30:16 +0200",
"msg_from": "\"Fabio Panizzutti\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: R: R: Query plan on identical tables differs . Why ?"
}
]
[
{
"msg_contents": "Hi folks,\n\nI need some help in a TPCH 100GB benchmark.\n\nI described our settings in:\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00377.php\n\nSome queries are taking to long to finish (4, 8, 9,\n10, 19,20 and 22) and I need some help to increase the\nsystem performance.\nHere I put the query #19, the explain and the \"top\"\nfor it. \nThis query is running since yesterday 10 AM.\n\nQuery text is:\n\nselect\n sum(l_extendedprice* (1 - l_discount)) as\nrevenue\nfrom\n lineitem,\n part\nwhere\n (\n p_partkey = l_partkey\n and p_brand = 'Brand#32'\n and p_container in ('SM CASE', 'SM\nBOX', 'SM PACK', 'SM PKG')\n and l_quantity >= 2 and l_quantity <=\n2 + 10\n and p_size between 1 and 5\n and l_shipmode in ('AIR', 'AIR REG')\n and l_shipinstruct = 'DELIVER IN\nPERSON'\n )\n or\n (\n p_partkey = l_partkey\n and p_brand = 'Brand#42'\n and p_container in ('MED BAG', 'MED\nBOX', 'MED PKG', 'MED PACK')\n and l_quantity >= 11 and l_quantity <=\n11 + 10\n and p_size between 1 and 10\n and l_shipmode in ('AIR', 'AIR REG')\n and l_shipinstruct = 'DELIVER IN\nPERSON'\n )\n or\n (\n p_partkey = l_partkey\n and p_brand = 'Brand#54'\n and p_container in ('LG CASE', 'LG\nBOX', 'LG PACK', 'LG PKG')\n and l_quantity >= 27 and l_quantity <=\n27 + 10\n and p_size between 1 and 15\n and l_shipmode in ('AIR', 'AIR REG')\n and l_shipinstruct = 'DELIVER IN\nPERSON'\n );\n\n\n\nTasks: 57 total, 2 running, 55 sleeping, 0\nstopped, 0 zombie\nCpu(s): 16.5% user, 1.8% system, 0.0% nice, \n59.2% idle, 22.5% IO-wait\nMem: 4036184k total, 4025008k used, 11176k free,\n 4868k buffers\nSwap: 4088500k total, 13204k used, 4075296k free,\n 3770208k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM \nTIME+ COMMAND\n28118 postgres 25 0 372m 354m 335m R 99.4 9.0 \n1724:45 postmaster\n\n\n Aggregate \n(cost=6825900228313539.00..6825900228313539.00 rows=1\nwidth=22)\n -> Nested Loop \n(cost=887411.00..6825900228313538.00 rows=325\nwidth=22)\n -> Seq Scan on lineitem \n(cost=0.00..21797716.88 rows=600037888 width=79)\n -> Materialize (cost=887411.00..1263193.00\nrows=20000000 width=36)\n -> Seq Scan on part \n(cost=0.00..711629.00 rows=20000000 width=36)\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nSBC Yahoo! - Internet access at a great low price.\nhttp://promo.yahoo.com/sbc/\n",
"msg_date": "Fri, 14 May 2004 11:00:37 -0700 (PDT)",
"msg_from": "Eduardo Almeida <[email protected]>",
"msg_from_op": true,
"msg_subject": "TPCH 100GB - need some help"
},
{
"msg_contents": "On Fri, 2004-05-14 at 14:00, Eduardo Almeida wrote:\n> Hi folks,\n> \n> I need some help in a TPCH 100GB benchmark.\n\nPerformance with 7.5 is much improved over 7.4 for TPCH due to efforts\nof Tom Lane and OSDL. Give it a try with a recent snapshot of\nPostgreSQL.\n\nOtherwise, disable nested loops for that query. \n\n set enable_nestloop = off;\n\n\n",
"msg_date": "Fri, 14 May 2004 17:40:23 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPCH 100GB - need some help"
},
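A minimal sketch of the session-level workaround suggested above, written against the TPC-H lineitem and part tables quoted in the thread. Only one of the query's three OR branches is shown, so this is illustrative rather than a compliant TPC-H run; SET LOCAL confines the change to a single transaction instead of the whole session:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- planner hint lasts only until COMMIT
    EXPLAIN
    SELECT sum(l_extendedprice * (1 - l_discount)) AS revenue
    FROM lineitem
    JOIN part ON p_partkey = l_partkey
    WHERE p_brand = 'Brand#32'
      AND p_container IN ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')
      AND l_quantity BETWEEN 2 AND 12
      AND p_size BETWEEN 1 AND 5
      AND l_shipmode IN ('AIR', 'AIR REG')
      AND l_shipinstruct = 'DELIVER IN PERSON';
    COMMIT;

With nested loops disabled the planner should fall back to a hash or merge join between part and lineitem, which is roughly what the 7.5 optimizer work mentioned later in the thread produces automatically.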
{
"msg_contents": "Eduardo Almeida <[email protected]> writes:\n> I need some help in a TPCH 100GB benchmark.\n> Here I put the query #19, the explain and the \"top\"\n> for it. \n\nIIRC, this is one of the cases that inspired the work that's been done\non the query optimizer for 7.5. I don't think you will be able to get\n7.4 to generate a good plan for it (at least not without changing the\nquery, which is against the TPC rules). How do you feel about running\nCVS tip?\n\nBTW, are you aware that OSDL has already done a good deal of work with\nrunning TPC benchmarks for Postgres (and some other OS databases)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 May 2004 12:00:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: TPCH 100GB - need some help "
},
{
"msg_contents": "Mr. Tom Lane\n\n\n--- Tom Lane <[email protected]> wrote:\n> Eduardo Almeida <[email protected]> writes:\n> > I need some help in a TPCH 100GB benchmark.\n> > Here I put the query #19, the explain and the\n> \"top\"\n> > for it. \n> \n> IIRC, this is one of the cases that inspired the\n> work that's been done\n> on the query optimizer for 7.5. I don't think you\n> will be able to get\n> 7.4 to generate a good plan for it (at least not\n> without changing the\n> query, which is against the TPC rules). How do you\n> feel about running\n> CVS tip?\n\nWe are testing the postgre 7.4.2 to show results to\nsome projects here in Brazil. We are near the deadline\nfor these projects and we need to show results with a\nstable version.\n\nASAP I want and I will help the PG community testing\nthe CVS with VLDB.\n\n> \n> BTW, are you aware that OSDL has already done a good\n> deal of work with\n> running TPC benchmarks for Postgres (and some other\n> OS databases)?\n\nNo! Now I'm considering the use of OSDL because of\nquery rewrite. Yesterday the query #19 that I describe\nruns in the OSDL way.\n\nWe found some interesting patterns in queries that\ntake to long to finish in the 100 GB test.\n���\tSub-queries inside other sub-queries (Q20 and Q22);\n���\tExists and Not exists selection (Q4, Q21 and Q22);\n���\tAggregations with in-line views, that is queries\ninside FROM clause (Q7, Q8, Q9 and Q22);\n\nIn fact these queries were aborted by timeout \nstatement_timeout = 25000000\n\nI took off the timeout to Q20 and it finished in\n23:53:49 hs.\n\ntks a lot,\nEduardo\n\nps. sorry about my english\n\n> \n> \t\t\tregards, tom lane\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nSBC Yahoo! - Internet access at a great low price.\nhttp://promo.yahoo.com/sbc/\n",
"msg_date": "Tue, 18 May 2004 05:49:12 -0700 (PDT)",
"msg_from": "Eduardo Almeida <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: TPCH 100GB - need some help "
}
] |
[
{
"msg_contents": "Hi all,\n\ni have a question, is there any advantages in using numeric(1) or numeric(2) \nin place of smallint?\nis there any diff. in performance if i use smallint in place of integer?\n\nThanx in advance,\nJaime Casanova\n\n_________________________________________________________________\nHelp STOP SPAM with the new MSN 8 and get 2 months FREE* \nhttp://join.msn.com/?page=features/junkmail\n\n",
"msg_date": "Fri, 14 May 2004 21:08:19 +0000",
"msg_from": "\"Jaime Casanova\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "numeric data types"
},
{
"msg_contents": "\"Jaime Casanova\" <[email protected]> writes:\n> i have a question, is there any advantages in using numeric(1) or numeric(2) \n> in place of smallint?\n\nPerformance-wise, smallint is an order of magnitude better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 May 2004 00:03:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric data types "
},
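A tiny illustration of the gap being described; the table and column names are invented for the example:

    -- smallint is a fixed 2-byte binary integer; numeric(1) is a variable-length
    -- type whose comparisons go through the much slower numeric code path.
    CREATE TABLE flags_smallint (flag smallint   NOT NULL);
    CREATE TABLE flags_numeric  (flag numeric(1) NOT NULL);

    -- Load the same data into both, then compare timings of something like:
    --   SELECT count(*) FROM flags_smallint WHERE flag = 3;
    --   SELECT count(*) FROM flags_numeric  WHERE flag = 3;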
{
"msg_contents": "On Fri, 2004-05-14 at 17:08, Jaime Casanova wrote:\n> is there any diff. in performance if i use smallint in place of integer?\n\nAssuming you steer clear of planner deficiencies, smallint should be\nslightly faster (since it consumes less disk space), but the performance\ndifference should be very small. Also, alignment/padding considerations\nmay mean that smallint doesn't actually save any space anyway.\n\n-Neil\n\n\n",
"msg_date": "Sat, 15 May 2004 22:27:47 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: numeric data types"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have recently started evaluating Postgresql 7.4.2 to replace some *cough*\nmore proprietary database systems... Thanks to the _excellent_ documentation\n(a point I cannot overemphasize) I was up and running in no time, and got a\nfirst test application running on the native C interface.\n\nThere is just one point where I found the documentation lacking any\ndescription and practical hints (as opposed to all other topics), namely\nthat of how to tune a setup for maximum performance regarding the layout of\npartitions on hard-disks and their mount options.\n\nI gather that the pg_xlog directory contains the transaction log and would\nbenefit greatly from being put on a separate partition. I would then mount\nthat partition with the noatime and forcedirectio options (on Solaris, the\nlatter to circumvent the OS' buffer cache)? On the other hand the data\npartition should not be mounted with direct io, since Postgresql is\ndocumented as relying heavily on the OS' cache?\n\nThen I was wondering whether the fsync option refers only to the wal log (is\nthat another name for the xlog, or is one a subset of the other?), or also\nto data write operations? With forcedirectio for the wal, do I still need\nfsync (or O_SYNC...) because otherwise I could corrupt the data?\n\nAre there any other directories that might benefit from being put on a\ndedicated disk, and with which mount options? Even without things like\ntablespaces there should be some headroom over having everything on one\npartition like in the default setup.\n\nWhat I should add is that reliability is a premium for us, we do not want to\nsacrifice integrity for speed, and that we are tuning for a high commit rate\nof small, simple transactions...\n\nI would be greatly thankful if somebody could give me some hints or pointers\nto further documentation as my search on the web did not show up much.\n\nRegards, Colin\n",
"msg_date": "Sat, 15 May 2004 12:52:27 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "filesystem option tuning"
},
{
"msg_contents": "[email protected] wrote:\n> Hi All,\n> \n> I have recently started evaluating Postgresql 7.4.2 to replace some *cough*\n> more proprietary database systems... Thanks to the _excellent_ documentation\n> (a point I cannot overemphasize) I was up and running in no time, and got a\n> first test application running on the native C interface.\n\nIn no official capacity whatsoever, welcome aboard.\n\n> There is just one point where I found the documentation lacking any\n> description and practical hints (as opposed to all other topics), namely\n> that of how to tune a setup for maximum performance regarding the layout of\n> partitions on hard-disks and their mount options.\n\nI'm not a Sun user, so I can't give any OS-specific notes, but in general:\n - Don't bypass the filesystem, but feel free to tinker with mount \noptions if you think it will help\n - If you can put WAL on separate disk(s), all the better.\n - The general opinion seems to be RAID5 is slower than RAID10 unless \nyou have a lot of disks\n - Battery-backed write-cache for your SCSI controller can be a big \nperformance win\n - Tablespaces _should_ be available in the next release of PG, we'll \nknow for sure soon. That might make life simpler for you if you do want \nto spread your database around by hand,\n\n> What I should add is that reliability is a premium for us, we do not want to\n> sacrifice integrity for speed, and that we are tuning for a high commit rate\n> of small, simple transactions...\n\nMake sure the WAL is on fast disks I'd suggest. At a guess that'll be \nyour bottleneck.\n\nFor more info, your best bet is to check the archives on the \nplpgsql-performance list, and then post there. People will probably want \nto know more about your database size/number of concurrent \ntransactions/disk systems etc.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 17 May 2004 18:04:54 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem option tuning"
},
{
"msg_contents": "Hi!\n\nOn Mon, May 17, 2004 at 06:04:54PM +0100, Richard Huxton wrote:\n> [email protected] wrote:\n> > [...]\n> \n> In no official capacity whatsoever, welcome aboard.\n\nThanks ;-)\n\n> > There is just one point where I found the documentation lacking any\n> > description and practical hints (as opposed to all other topics), namely\n> > that of how to tune a setup for maximum performance regarding the layout of\n> > partitions on hard-disks and their mount options.\n> \n> I'm not a Sun user, so I can't give any OS-specific notes, but in general:\n> - Don't bypass the filesystem, but feel free to tinker with mount \n> options if you think it will help\n\nRight, raw partitions are too low-level for me these days anyhow...\nI assume that all postgres partitions can be mounted with noatime?\n\n> - If you can put WAL on separate disk(s), all the better.\n\nDoes that mean only the xlog, or also the clog? As far as I understand, the\nclog contains some meta-information on the xlog, so presumably it is flushed\nto disc synchronously together with the xlog? That would mean that they each\nneed a separate disk to prevent one disk having to seek too often...?\n\n> - Battery-backed write-cache for your SCSI controller can be a big \n> performance win\n\nI probably won't be able to get such a setup for this project; that's why I\nam bothering about which disk will be seeking how often.\n\n> - Tablespaces _should_ be available in the next release of PG, we'll \n> know for sure soon. That might make life simpler for you if you do want \n> to spread your database around by hand,\n\nOk, I think tablespaces are not the important thing - at least for this\nproject of ours.\n\n> > What I should add is that reliability is a premium for us, we do not want to\n> > sacrifice integrity for speed, and that we are tuning for a high commit rate\n> > of small, simple transactions...\n> \n> Make sure the WAL is on fast disks I'd suggest. At a guess that'll be \n> your bottleneck.\n> \n> For more info, your best bet is to check the archives on the \n> plpgsql-performance list, and then post there. People will probably want \n> to know more about your database size/number of concurrent \n> transactions/disk systems etc.\n\nHere goes ... we are talking about a database cluster with two tables where\nthings are happening, one is a kind of log that is simply \"appended\" to and\nwill expect to reach a size of several million entries in the time window\nthat is kept, the other is a persistent backing of application data that\nwill mostly see read-modify-writes of single records. Two writers to the\nhistory, one writer to the data table. The volume of data is not very high\nand RAM is enough...\n\nIf any more information is required feel free to ask - I would really\nappreciate getting this disk layout sorted out.\n\nThanks,\nColin\n",
"msg_date": "Wed, 19 May 2004 09:32:39 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: filesystem option tuning"
},
{
"msg_contents": "On Wednesday 19 May 2004 13:02, [email protected] wrote:\n> > - If you can put WAL on separate disk(s), all the better.\n>\n> Does that mean only the xlog, or also the clog? As far as I understand, the\n> clog contains some meta-information on the xlog, so presumably it is\n> flushed to disc synchronously together with the xlog? That would mean that\n> they each need a separate disk to prevent one disk having to seek too\n> often...?\n\nYou can put clog and xlog on same drive. That should be enough in most cases. \nxlog is written sequentially and never read back other than for recovery \nafter a crash. clog is typically 8KB or a page and should not be an IO \noverhead even in high traffic databases.\n\n> > - Battery-backed write-cache for your SCSI controller can be a big\n> > performance win\n>\n> I probably won't be able to get such a setup for this project; that's why I\n> am bothering about which disk will be seeking how often.\n\nAs I said earlier, xlog is written sequentially and if I am not mistaken clog \nas well. So there should not be much seeking if they are on a separate drive.\n\n(Please correct me if I am wrong)\n\n> > - Tablespaces _should_ be available in the next release of PG, we'll\n> > know for sure soon. That might make life simpler for you if you do want\n> > to spread your database around by hand,\n>\n> Ok, I think tablespaces are not the important thing - at least for this\n> project of ours.\n\nWell, if you have tablespaces, you don't have to mess with symlinking \nclog/xlog or use location facility which is bit rough. You should be able to \nmanage such a setup solely from postgresql. That is an advantage of \ntablespaces.\n\n> Here goes ... we are talking about a database cluster with two tables where\n> things are happening, one is a kind of log that is simply \"appended\" to and\n> will expect to reach a size of several million entries in the time window\n> that is kept, the other is a persistent backing of application data that\n> will mostly see read-modify-writes of single records. Two writers to the\n> history, one writer to the data table. The volume of data is not very high\n> and RAM is enough...\n\nEven if you have enough RAM, you should use pg_autovacuum so that your tables \nare in shape. This is especially required when your update/insert rate is \nhigh.\n\nIf your history logs needs to be rotated, you can take advantage of the fact \nthat DDL's in postgresql are fully transacted. So you can drop the table in a \ntransaction but nobody will notice anything unless it is committed. Makes a \ntransparent rotation.\n\nHTH\n\n Shridhar\n",
"msg_date": "Sat, 29 May 2004 15:01:45 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem option tuning"
},
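A sketch of the transparent rotation trick mentioned above. Because DDL is transactional in PostgreSQL, readers keep seeing the old table until COMMIT; the column list here is a made-up stand-in for the real history table:

    BEGIN;
    ALTER TABLE history RENAME TO history_archive;
    CREATE TABLE history (
        ftime timestamp NOT NULL DEFAULT now(),
        msg   text      NOT NULL
    );
    COMMIT;
    -- history_archive can now be dumped and dropped at leisure.

Writers that arrive during the swap queue briefly behind the rename and proceed as soon as the transaction commits.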
{
"msg_contents": "Shridhar Daithankar <[email protected]> writes:\n> On Wednesday 19 May 2004 13:02, [email protected] wrote:\n> - If you can put WAL on separate disk(s), all the better.\n>> \n>> Does that mean only the xlog, or also the clog?\n\n> You can put clog and xlog on same drive.\n\nYou can, but I think you shouldn't. The entire argument for giving xlog\nits own drive revolves around the fact that xlog is written\nsequentially, and so if it has its own spindle then you have near-zero\nseek requirements. As soon as you give that drive any other work to do,\nyou start losing the low-seek property.\n\nNow as Shridhar says, clog is not a very high-I/O-volume thing, so in\none sense it doesn't much matter which drive you put it on. But it\nseems to me that clog acts much more like ordinary table files than it\nacts like xlog.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 May 2004 11:18:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem option tuning "
}
] |
[
{
"msg_contents": "I'm working on a project using PostgreSQL as our database designing a budget\nsystem. We are still in the design and testing phases but I thought I would\nask advice about a platform to host this system.\n\nWe aren't a large system, probably no more than 50-75 users at any one time.\n>From a data standpoint I can't guess at the number of gigabytes but suffice\nto say it is not going to be that large. Our biggest table will probably\nhold about 1 million rows and is about 120 bytes (closer to about 100).\n\nDell and HP servers are being mentioned but we currently have no preference.\n\nAny help you could provide will be appreciated.\n\nThanks,\nDuane\n\nP.S. I've only just begun using PostgreSQL after having used (and still\nusing) DB2 on a mainframe for the past 14 years. My experience with\nUnix/Linux is limited to some community college classes I've taken but we do\nhave a couple of experienced Linux sysadmins on our team. I tell you this\nbecause my \"ignorance\" will probably show more than once in my inquiries.\n\n\n\n\n\n\nHardware Platform\n\n\nI'm working on a project using PostgreSQL as our database designing a budget system. We are still in the design and testing phases but I thought I would ask advice about a platform to host this system.\nWe aren't a large system, probably no more than 50-75 users at any one time. From a data standpoint I can't guess at the number of gigabytes but suffice to say it is not going to be that large. Our biggest table will probably hold about 1 million rows and is about 120 bytes (closer to about 100).\nDell and HP servers are being mentioned but we currently have no preference.\n\nAny help you could provide will be appreciated.\n\nThanks,\nDuane\n\nP.S. I've only just begun using PostgreSQL after having used (and still using) DB2 on a mainframe for the past 14 years. My experience with Unix/Linux is limited to some community college classes I've taken but we do have a couple of experienced Linux sysadmins on our team. I tell you this because my \"ignorance\" will probably show more than once in my inquiries.",
"msg_date": "Mon, 17 May 2004 16:08:34 -0700",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware Platform"
}
] |
[
{
"msg_contents": "Hello,\n (note best viewed in fixed-width font)\n\n I'm still trying to find where my performance bottle neck is...\nI have 4G ram, PG 7.3.4\nshared_buffers = 75000\neffective_cache_size = 75000\n\nRun a query I've been having trouble with and watch the output of vmstat \n(linux):\n\n$ vmstat 1\n procs memory swap io system \n cpu\n r b w swpd free buff cache si so bi bo in cs us \nsy id\n 0 0 0 148 8732 193652 \n2786668 0 0 0 0 292 151 0 2 98\n 2 0 2 148 7040 193652 \n2786668 0 0 0 208 459 697 45 10 45\n 0 0 0 148 9028 193652 \n2786684 0 0 16 644 318 613 25 4 71\n 1 0 0 148 5092 193676 \n2780196 0 0 12 184 441 491 37 5 58\n 0 1 0 148 5212 193684 \n2772512 0 0 112 9740 682 1063 45 12 43\n 1 0 0 148 5444 193684 \n2771584 0 0 120 4216 464 1303 44 3 52\n 1 0 0 148 12232 193660 \n2771620 0 0 244 628 340 681 43 20 38\n 1 0 0 148 12168 193664 \n2771832 0 0 196 552 332 956 42 2 56\n 1 0 0 148 12080 193664 \n2772248 0 0 272 204 371 201 40 1 59\n 1 1 0 148 12024 193664 \n2772624 0 0 368 0 259 127 42 3 55\n\nThats the first 10 lines or so... the query takes 60 seconds to run.\n\nI'm confused on the bo & bi parts of the io:\n IO\n bi: Blocks sent to a block device (blocks/s).\n bo: Blocks received from a block device (blocks/s).\n\nyet it seems to be opposite of that... bi only increases when doing a \nlargish query, while bo also goes up, I typically see periodic bo numbers \nin the low 100's, which I'd guess are log writes.\n\nI would think that my entire DB should end up cached since a raw pg_dump \nfile is about 1G in size, yet my performance doesn't indicate that that is \nthe case... running the same query a few minutes later, I'm not seeing a \nsignificant performance improvement.\n\nHere's a sample from iostat while the query is running:\n\n$ iostat -x -d 1\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949552.96 0.00 0.00 100.00\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949662.96 0.00 0.00 100.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949642.96 0.00 0.00 100.00\nsdb 0.00 428.00 0.00 116.00 0.00 \n4368.00 37.66 2844.40 296.55 86.21 100.00\nsdb1 0.00 428.00 0.00 116.00 0.00 \n4368.00 37.66 6874.40 296.55 86.21 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949552.96 0.00 0.00 100.00\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949662.96 0.00 0.00 100.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949642.96 0.00 0.00 100.00\nsdb 4.00 182.00 6.00 77.00 80.00 \n2072.00 25.93 2814.50 54.22 120.48 100.00\nsdb1 4.00 182.00 6.00 77.00 80.00 \n2072.00 25.93 6844.50 54.22 120.48 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949552.96 0.00 0.00 100.00\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949662.96 0.00 0.00 100.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949642.96 0.00 0.00 100.00\nsdb 0.00 43.00 0.00 \n11.00 0.00 432.00 39.27 2810.40 36.36 909.09 100.00\nsdb1 0.00 43.00 0.00 \n11.00 0.00 432.00 39.27 6840.40 36.36 909.09 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 15.84 0.00 17.82 0.00 269.31 15.11 \n42524309.47 44.44 561.11 100.00\nsda1 0.00 15.84 0.00 17.82 0.00 269.31 15.11 \n42524419.47 44.44 561.11 100.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42524398.67 0.00 0.00 100.00\nsdb 0.99 222.77 0.99 114.85 15.84 \n2700.99 23.45 2814.16 35.90 86.32 
100.00\nsdb1 0.99 222.77 0.99 114.85 15.84 \n2700.99 23.45 6844.16 35.90 86.32 100.00\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz \navgqu-sz await svctm %util\nsda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949551.76 0.00 0.00 101.00\nsda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949662.86 0.00 0.00 101.00\nsda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 \n42949642.66 0.00 0.00 101.00\nsdb 1.00 91.00 1.00 \n28.00 16.00 960.00 33.66 2838.40 10.34 348.28 101.00\nsdb1 1.00 91.00 1.00 \n28.00 16.00 960.00 33.66 6908.70 10.34 348.28 101.00\n\nThe DB files and logs are on sdb1.\n\nCan someone point me in the direction of some documentation on how to \ninterpret these numbers?\n\nAlso, I've tried to figure out what's getting cached by PostgreSQL by \nlooking at pg_statio_all_tables. What kind of ratio should I be seeing for \nheap_blks_read / heap_blks_hit ?\n\nThanks.\n\n",
"msg_date": "Tue, 18 May 2004 14:12:14 -0400",
"msg_from": "Doug Y <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interpreting vmstat"
},
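The pg_statio question at the end of the message above can be put into a query directly (stats_block_level = true is already set in the posted configuration). Note that this ratio only counts hits in PostgreSQL's own shared buffers; reads satisfied by the OS disk cache still show up as heap_blks_read:

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS buffer_hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read DESC;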
{
"msg_contents": "\n\n\n\nWell,\n\nSince I haven't seen any other responds, I'll offer a bit of advice and let\nothers correct me. :)\n\nYour shared buffers may be too big (?). It is much larger than the guide\non varlena.com recommends. All I can suggest is trying some experiments\nwith halving/doubling the numbers to see which way performance goes. Also,\nif you are counting on cache to improve performance, then the db has to be\nloaded into cache the first time. So, are subsequent re-queries faster?\n\nThom Dyson\nDirector of Information Services\nSybex, Inc.\n\n\n\[email protected] wrote on 05/18/2004 11:12:14 AM:\n\n> Hello,\n> (note best viewed in fixed-width font)\n>\n> I'm still trying to find where my performance bottle neck is...\n> I have 4G ram, PG 7.3.4\n> shared_buffers = 75000\n> effective_cache_size = 75000\n\n\n",
"msg_date": "Thu, 20 May 2004 09:00:22 -0700",
"msg_from": "Thom Dyson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting vmstat"
},
{
"msg_contents": "On Tue, 2004-05-18 at 14:12, Doug Y wrote:\n> Run a query I've been having trouble with and watch the output of vmstat \n> (linux):\n> \n> $ vmstat 1\n> procs memory swap io system \n> cpu\n> r b w swpd free buff cache si so bi bo in cs us \n> sy id\n> 0 0 0 148 8732 193652 \n> 2786668 0 0 0 0 292 151 0 2 98\n> 2 0 2 148 7040 193652 \n> 2786668 0 0 0 208 459 697 45 10 45\n> 0 0 0 148 9028 193652 \n> 2786684 0 0 16 644 318 613 25 4 71\n> 1 0 0 148 5092 193676 \n> 2780196 0 0 12 184 441 491 37 5 58\n> 0 1 0 148 5212 193684 \n> 2772512 0 0 112 9740 682 1063 45 12 43\n> 1 0 0 148 5444 193684 \n> 2771584 0 0 120 4216 464 1303 44 3 52\n> 1 0 0 148 12232 193660 \n> 2771620 0 0 244 628 340 681 43 20 38\n> 1 0 0 148 12168 193664 \n> 2771832 0 0 196 552 332 956 42 2 56\n> 1 0 0 148 12080 193664 \n> 2772248 0 0 272 204 371 201 40 1 59\n> 1 1 0 148 12024 193664 \n> 2772624 0 0 368 0 259 127 42 3 55\n> \n> Thats the first 10 lines or so... the query takes 60 seconds to run.\n> \n> I'm confused on the bo & bi parts of the io:\n> IO\n> bi: Blocks sent to a block device (blocks/s).\n> bo: Blocks received from a block device (blocks/s).\n> \n> yet it seems to be opposite of that... bi only increases when doing a \n> largish query, while bo also goes up, I typically see periodic bo numbers \n> in the low 100's, which I'd guess are log writes.\n> \n> I would think that my entire DB should end up cached since a raw pg_dump \n> file is about 1G in size, yet my performance doesn't indicate that that is \n> the case... running the same query a few minutes later, I'm not seeing a \n> significant performance improvement.\n> \n\nBeen meaning to try and address this thread since it touches on one of\nthe areas that I think is sorely lacking in the postgresql admin\nknowledge base; how to use various unix commands to deduce performance\ninformation. This would seem even more important given that PostgreSQL\nadmins are expected to use said tools to find out some information that\nthe commercial databases provide to there users... but alas this is\n-performance and not -advocacy so let me get on with it eh?\n\nAs you noted, bi and bo are actually reversed, and I believe if you\nsearch the kernel hackers mailing lists you'll find references to\nthis... here's some more empirical evidence though, the following\nvmstat was taken from a high-write traffic monitoring type database\napplication... 
\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 27412 593336 112036 1865936 0 1 0 1 1 0 1 0 0\n 5 1 1 27412 593336 112036 1865952 0 0 0 477 600 1346 53 7 40\n 4 0 0 27412 593336 112036 1865960 0 0 0 1296 731 2087 47 5 48\n 3 3 2 27412 594408 112052 1865972 4 0 4 2973 904 2957 32 20 48\n 3 1 1 26596 594544 112068 1865976 64 0 64 1433 770 2766 41 22 37\n 1 1 1 26596 594544 112072 1866004 0 0 5 959 702 1687 50 10 41\n 3 1 1 26596 594512 112072 1866024 0 0 0 1155 731 2209 52 12 37\n 2 0 0 26596 594512 112072 1866040 0 0 0 635 511 1293 48 5 46\n 0 1 1 26596 594472 112076 1866076 0 0 7 739 551 1248 49 8 43\n 1 0 0 26596 594472 112076 1866088 0 0 0 1048 598 1295 49 8 43\n 2 0 0 26596 594208 112084 1866696 0 0 203 1253 686 1506 42 16 41\n 1 0 0 26596 593920 112084 1866716 0 0 0 1184 599 1329 39 12 49\n 0 1 1 26596 593060 112084 1866740 0 0 3 1036 613 3442 48 8 44\n 0 1 2 26596 592920 112084 1866752 0 0 0 3825 836 1323 9 14 76\n 0 0 0 26596 593544 112084 1866788 0 0 0 1064 625 1197 9 15 76\n 0 1 1 26596 596300 112088 1866808 0 0 0 747 625 1558 7 13 79\n 0 0 1 26596 599600 112100 1866892 0 0 0 468 489 1331 6 4 91\n 0 0 0 26596 599600 112100 1866896 0 0 0 237 418 997 5 4 91\n 0 1 1 26596 599600 112104 1866896 0 0 0 1063 582 1371 7 7 86\n 0 0 0 26596 599612 112104 1866904 0 0 0 561 648 1556 6 4 89\n\nnotice all the bo as it continually writes data to disk. Also notice how\ngenerally speaking it has no bi since it does not have to pull data up\nfrom disk... you will notice that the couple of times it grabs\ninformation from swap space, you'll also find a corresponding pull on\nthe io. \n\ngetting back to your issue in order to determine if there is a problem\nin this case, you need to run explain analyze a few times repeatedly,\ntake the relative score given by these runs, and then come back 5-10\nminutes later and run explain analyze again and see if the results are\ndrastically different. troll vmstat while you do this to see if there is\nbi occurring. I probably should mention that just because you see\nactivity on bi doesn't mean that you'll notice any difference in\nperformance against running the query with no bi, it's dependent on a\nnumber of factors really. \n\nOh, and as the other poster alluded to, knock down your shared buffers\nby about 50% and see where that gets you. I might also knock *up* your\neffective cache size... try doubling that and see how things go. \n\nHope this helps... and others jump in with corrections if needed.\n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "25 May 2004 17:07:51 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Interpreting vmstat"
}
] |
[
{
"msg_contents": "All,\n\nDoes PG store when a table was last analyzed?\n\nThanks,\n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nSBC Yahoo! - Internet access at a great low price.\nhttp://promo.yahoo.com/sbc/\n",
"msg_date": "Tue, 18 May 2004 14:13:21 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": true,
"msg_subject": "where to find out when a table was last analyzed?"
},
{
"msg_contents": "On Tue, 2004-05-18 at 17:13, Litao Wu wrote:\n> All,\n> \n> Does PG store when a table was last analyzed?\n> \n> Thanks,\n> \n\nno. you can do something like select attname,s.* from pg_statistic s,\npg_attribute a, pg_class c where starelid = c.oid and attrelid = c.oid\nand staattnum = attnum and relname = 'mytable' to see the current\nstatistics on the table, but its not timestamped. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "26 May 2004 09:29:06 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: where to find out when a table was last analyzed?"
}
] |
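A related sanity check (also not a timestamp): compare the row and page counts the planner has on file with the table's real size. 'mytable' is a placeholder, as in the query above; if reltuples is far from the true count, the table probably has not been VACUUMed or ANALYZEd recently:

    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'mytable';

    -- compare against:
    -- SELECT count(*) FROM mytable;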
[
{
"msg_contents": "After reading the replies to this, it is clear that this is a \nLintel-centric question, but I will throw in my experience.\n\n > I am curious if there are any real life production\n > quad processor setups running postgresql out there.\n\nYes. We are running a 24/7 operation on a quad CPU Sun V880.\n\n > Since postgresql lacks a proper replication/cluster\n > solution, we have to buy a bigger machine.\n\nThis was a compelling reason for us to stick with SPARC and avoid \nIntel/AMD when picking a DB server. We moved off of an IBM mainframe in \n1993 to Sun gear and never looked back. We can upgrade to our heart's \ncontent with minimal disruption and are only on our third box in 11 \nyears with plenty of life left in our current one.\n\n > Right now we are running on a dual 2.4 Xeon, 3 GB Ram\n > and U160 SCSI hardware-raid 10.\n\nA couple people mentioned hardware RAID, which I completely agree with. \n I prefer an external box with a SCSI or FC connector. There are no \ndriver issues that way. We boot from our arrays.\n\nThe Nexsan ATABoy2 is a nice blend of performance, reliability and cost. \n Some of these with 1TB and 2TB of space were recently spotted on ebay \nfor under $5k. We run a VERY random i/o mix on ours and it will \nconsistently sustain 15 MB/s in blended read and write i/o, sustaining \nwell over 1200 io/s. These are IDE drives, so they fail more often than \nSCSI, so run RAID1 or RAID5. The cache on these pretty much eliminates \nthe RAID5 penalties.\n\n > The 30k+ setups from Dell etc. don't fit our budget.\n\nFor that kind of money you could get a lower end Sun box (or IBM RS/6000 \nI would imagine) and give yourself an astounding amount of headroom for \nfuture growth.\n\nSincerely,\nMarty\n\n",
"msg_date": "Tue, 18 May 2004 21:38:25 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Quad processor options"
}
] |
[
{
"msg_contents": "Hi Guys,\n\n \n\n My question is .. which is better design\n\n \n\n1.\tSingle Table with 50 million records or\n2.\tMultiple Table using inheritance to the parents table\n\n \n\n \n\nI will use this only for query purpose ..\n\n \n\nThanks ..\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi Guys,\n \n My question is .. which is better design\n \n\nSingle Table with 50 million\n records or\nMultiple Table using\n inheritance to the parents table\n\n \n \nI will use this only for query purpose ..\n \nThanks ..",
"msg_date": "Wed, 19 May 2004 15:37:06 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "DB Design"
},
{
"msg_contents": "On Wed, 2004-05-19 at 15:37 +0800, Michael Ryan S. Puncia wrote:\n> Hi Guys,\n> \n> \n> \n> My question is .. which is better design\n> \n> \n> \n> 1. Single Table with 50 million records or\n> 2. Multiple Table using inheritance to the parents table\n\nIt's not that simple.\n\nGiven your e-mail address I assume you want to store Philippines Census\ndata in such a table, but does Census data fit well in a single flat\ntable structure? Not from what I have seen here in NZ, but perhaps\nCensus is simpler there.\n\nSo to know what the best answer to that question is, people here will\nsurely need more and better information from you about database schema,\nrecord size, indexing and query characteristics, and so on.\n\n\n> I will use this only for query purpose ..\n\nThen you may quite possibly want to consider a different database.\nParticularly if it is single-user query purposes.\n\nFor example, there are some SQL databases that would load the entire\ndatabase into RAM from static files, and then allow query against this.\nThis can obviously give huge performance improvements in situations\nwhere volatility is not a problem.\n\nCheers,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)916-7201 MOB: +64(21)635-694 OFFICE: +64(4)499-2267\n Do not overtax your powers.\n-------------------------------------------------------------------------\n\n",
"msg_date": "Wed, 19 May 2004 23:21:22 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Design"
},
{
"msg_contents": "The complete answer is probably \"it depends\", but this does not help \nmuch...:-)\n\nI would try out the simple approach first (i.e one 50 million row \ntable), but read up about :\n\ni) partial indexes and maybe\nii) clustering\niii) think about presorting the data before loading to place \"likely to \nbe accessed\" rows \"close\" together in the table (if possible).\niv) get to know the analyze, explain, explain analyze commands....\n\nBest wishes\n\nMark\n\nMichael Ryan S. Puncia wrote:\n\n> Hi Guys,\n>\n> \n>\n> My question is .. which is better design\n>\n> \n>\n> 1. Single Table with 50 million records or\n> 2. Multiple Table using inheritance to the parents table\n>\n> \n>\n> \n>\n> I will use this only for query purpose ..\n>\n> \n>\n> Thanks ..\n>\n> \n>\n> \n>\n",
"msg_date": "Thu, 20 May 2004 21:28:28 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: DB Design"
}
] |
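For point (i) in the checklist above, a partial index is just an index with a WHERE clause, which keeps it small when queries mostly touch a known subset of the 50 million rows. The names below are hypothetical, not taken from the poster's schema:

    CREATE INDEX person_2000_hh_idx
        ON person (household_id)
        WHERE census_year = 2000;

    -- Matching queries must repeat the predicate for the planner to use it:
    -- SELECT * FROM person WHERE census_year = 2000 AND household_id = 12345;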
[
{
"msg_contents": "See http://kerneltrap.org/node/view/3148, about 40% down, under the \nheader \"2.6 -aa patchset, object-based reverse mapping\". Does this mean \nthat the more shared memory the bigger the potential for a swap storm?\n",
"msg_date": "Wed, 19 May 2004 15:26:31 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared buffer size on linux"
}
] |
[
{
"msg_contents": "Hello for all!\n\nI have PostgreSQL 7.4 under last version of Cygwin and have some\nproblems with performance :( It is very strange... I don't remember\nthis problem on previous version Cygwin and PostgreSQL 7.3\n\nI have only two simple tables:\n\nCREATE TABLE public.files_t\n(\n id int8 NOT NULL,\n parent int8,\n size int8 NOT NULL,\n dir bool NOT NULL DEFAULT false,\n ctime timestamp NOT NULL,\n ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n name text NOT NULL,\n access varchar(10) NOT NULL,\n host int4 NOT NULL,\n uname text NOT NULL,\n CONSTRAINT pk_files_k PRIMARY KEY (id),\n CONSTRAINT fk_files_k FOREIGN KEY (parent) REFERENCES public.files_t (id) ON UPDATE CASCADE ON DELETE CASCADE,\n CONSTRAINT fk_hosts_k FOREIGN KEY (host) REFERENCES public.hosts_t (id) ON UPDATE CASCADE ON DELETE CASCADE\n) WITH OIDS;\n\nand\n\nCREATE TABLE public.hosts_t\n(\n id int4 NOT NULL,\n ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n utime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n name text NOT NULL,\n address inet NOT NULL,\n CONSTRAINT pk_hosts_k PRIMARY KEY (id)\n) WITH OIDS;\n\nTable files_t has 249259 records and table hosts_t has only 59 records.\n\nI tries to run simple query:\n\nselect * from files_t where parent = 3333\n\nThis query works 0.256 seconds! It is very big time for this small\ntable!\nI have index for field \"parent\":\n\nCREATE INDEX files_parent_idx\n ON public.files_t\n USING btree\n (parent);\n\nBut if I tries to see query plan then I see following text:\n\nSeq Scan on files_t (cost=0.00..6103.89 rows=54 width=102)\n Filter: (parent = 3333)\n\nPostgreSQL do not uses index files_parent_idx!\n\nI have enabled all options of \"QUERY TUNING\" in postgresql.conf, I\nhave increased memory sizes for PostgreSQL:\n\nshared_buffers = 2000 # min 16, at least max_connections*2, 8KB each\nsort_mem = 32768 # min 64, size in KB\nvacuum_mem = 65536 # min 1024, size in KB\nfsync = false # turns forced synchronization on or off\ncheckpoint_segments = 3 # in logfile segments, min 1, 16MB each\nenable_hashagg = true\nenable_hashjoin = true\nenable_indexscan = true\nenable_mergejoin = true\nenable_nestloop = true\nenable_seqscan = true\nenable_sort = true\nenable_tidscan = true\ngeqo = true\ngeqo_threshold = 22\ngeqo_effort = 1\ngeqo_generations = 0\ngeqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\ngeqo_selection_bias = 2.0 # range 1.5-2.0\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = false\n\n\nPlease help me!\nMy database has a very small size (only 249259 records) but it works\nvery slowly :(\n\nBest regards\nEugeny\n\n\n\n",
"msg_date": "Thu, 20 May 2004 00:07:57 +0400",
"msg_from": "Eugeny Balakhonov <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL performance in simple queries"
},
{
"msg_contents": "Try using \n\nselect * from files_t where parent = 3333::int8\n\nYou have declared parent as int8, but the query will assume int4 for \"3333\" and may not \nuse the index.\n\nAlso make sure you have ANALYZEd this table.\n\nRegards,\nGary.\n\nOn 20 May 2004 at 0:07, Eugeny Balakhonov wrote:\n\n> Hello for all!\n> \n> I have PostgreSQL 7.4 under last version of Cygwin and have some\n> problems with performance :( It is very strange... I don't remember\n> this problem on previous version Cygwin and PostgreSQL 7.3\n> \n> I have only two simple tables:\n> \n> CREATE TABLE public.files_t\n> (\n> id int8 NOT NULL,\n> parent int8,\n> size int8 NOT NULL,\n> dir bool NOT NULL DEFAULT false,\n> ctime timestamp NOT NULL,\n> ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> name text NOT NULL,\n> access varchar(10) NOT NULL,\n> host int4 NOT NULL,\n> uname text NOT NULL,\n> CONSTRAINT pk_files_k PRIMARY KEY (id),\n> CONSTRAINT fk_files_k FOREIGN KEY (parent) REFERENCES public.files_t (id) ON UPDATE CASCADE ON DELETE CASCADE,\n> CONSTRAINT fk_hosts_k FOREIGN KEY (host) REFERENCES public.hosts_t (id) ON UPDATE CASCADE ON DELETE CASCADE\n> ) WITH OIDS;\n> \n> and\n> \n> CREATE TABLE public.hosts_t\n> (\n> id int4 NOT NULL,\n> ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> utime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> name text NOT NULL,\n> address inet NOT NULL,\n> CONSTRAINT pk_hosts_k PRIMARY KEY (id)\n> ) WITH OIDS;\n> \n> Table files_t has 249259 records and table hosts_t has only 59 records.\n> \n> I tries to run simple query:\n> \n> select * from files_t where parent = 3333\n> \n> This query works 0.256 seconds! It is very big time for this small\n> table!\n> I have index for field \"parent\":\n> \n> CREATE INDEX files_parent_idx\n> ON public.files_t\n> USING btree\n> (parent);\n> \n> But if I tries to see query plan then I see following text:\n> \n> Seq Scan on files_t (cost=0.00..6103.89 rows=54 width=102)\n> Filter: (parent = 3333)\n> \n> PostgreSQL do not uses index files_parent_idx!\n> \n> I have enabled all options of \"QUERY TUNING\" in postgresql.conf, I\n> have increased memory sizes for PostgreSQL:\n> \n> shared_buffers = 2000 # min 16, at least max_connections*2, 8KB each\n> sort_mem = 32768 # min 64, size in KB\n> vacuum_mem = 65536 # min 1024, size in KB\n> fsync = false # turns forced synchronization on or off\n> checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n> enable_hashagg = true\n> enable_hashjoin = true\n> enable_indexscan = true\n> enable_mergejoin = true\n> enable_nestloop = true\n> enable_seqscan = true\n> enable_sort = true\n> enable_tidscan = true\n> geqo = true\n> geqo_threshold = 22\n> geqo_effort = 1\n> geqo_generations = 0\n> geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n> geqo_selection_bias = 2.0 # range 1.5-2.0\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = false\n> \n> \n> Please help me!\n> My database has a very small size (only 249259 records) but it works\n> very slowly :(\n> \n> Best regards\n> Eugeny\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n\n\n\n\n\n\nTry 
using \n\n\nselect * from files_t where parent = 3333::int8\n\n\nYou have declared parent as int8, but the query will assume int4 for \"3333\" and may not \nuse the index.\n\n\nAlso make sure you have ANALYZEd this table.\n\n\nRegards,\nGary.\n\n\nOn 20 May 2004 at 0:07, Eugeny Balakhonov wrote:\n\n\n> Hello for all!\n> \n> I have PostgreSQL 7.4 under last version of Cygwin and have some\n> problems with performance :( It is very strange... I don't remember\n> this problem on previous version Cygwin and PostgreSQL 7.3\n> \n> I have only two simple tables:\n> \n> CREATE TABLE public.files_t\n> (\n> id int8 NOT NULL,\n> parent int8,\n> size int8 NOT NULL,\n> dir bool NOT NULL DEFAULT false,\n> ctime timestamp NOT NULL,\n> ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> name text NOT NULL,\n> access varchar(10) NOT NULL,\n> host int4 NOT NULL,\n> uname text NOT NULL,\n> CONSTRAINT pk_files_k PRIMARY KEY (id),\n> CONSTRAINT fk_files_k FOREIGN KEY (parent) REFERENCES public.files_t (id) \nON UPDATE CASCADE ON DELETE CASCADE,\n> CONSTRAINT fk_hosts_k FOREIGN KEY (host) REFERENCES public.hosts_t (id) ON \nUPDATE CASCADE ON DELETE CASCADE\n> ) WITH OIDS;\n> \n> and\n> \n> CREATE TABLE public.hosts_t\n> (\n> id int4 NOT NULL,\n> ftime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> utime timestamp NOT NULL DEFAULT ('now'::text)::timestamp(6) with time zone,\n> name text NOT NULL,\n> address inet NOT NULL,\n> CONSTRAINT pk_hosts_k PRIMARY KEY (id)\n> ) WITH OIDS;\n> \n> Table files_t has 249259 records and table hosts_t has only 59 records.\n> \n> I tries to run simple query:\n> \n> select * from files_t where parent = 3333\n> \n> This query works 0.256 seconds! It is very big time for this small\n> table!\n> I have index for field \"parent\":\n> \n> CREATE INDEX files_parent_idx\n> ON public.files_t\n> USING btree\n> (parent);\n> \n> But if I tries to see query plan then I see following text:\n> \n> Seq Scan on files_t (cost=0.00..6103.89 rows=54 width=102)\n> Filter: (parent = 3333)\n> \n> PostgreSQL do not uses index files_parent_idx!\n> \n> I have enabled all options of \"QUERY TUNING\" in postgresql.conf, I\n> have increased memory sizes for PostgreSQL:\n> \n> shared_buffers = 2000 # min \n16, at least max_connections*2, 8KB each\n> sort_mem = 32768 \n# min 64, size in KB\n> vacuum_mem = 65536 \n# min 1024, size in KB\n> fsync = false \n# turns forced synchronization on or off\n> checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n> enable_hashagg = true\n> enable_hashjoin = true\n> enable_indexscan = true\n> enable_mergejoin = true\n> enable_nestloop = true\n> enable_seqscan = true\n> enable_sort = true\n> enable_tidscan = true\n> geqo = true\n> geqo_threshold = 22\n> geqo_effort = 1\n> geqo_generations = 0\n> geqo_pool_size = 0 \n# default based on tables in statement,\n> \n# range 128-1024\n> geqo_selection_bias = 2.0 # range 1.5-2.0\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = false\n> \n> \n> Please help me!\n> My database has a very small size (only 249259 records) but it works\n> very slowly :(\n> \n> Best regards\n> Eugeny\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] \nso that your\n> message can get through to the mailing list cleanly",
"msg_date": "Wed, 19 May 2004 21:23:47 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries"
},
{
"msg_contents": "Eugeny Balakhonov wrote:\n> I tries to run simple query:\n> \n> select * from files_t where parent = 3333\n\nUse this instead:\n\nselect * from files_t where parent = '3333';\n\n(\"parent = 3333::int8\" would work as well.)\n\nPostgreSQL (< 7.5) won't consider using an indexscan when the predicate \ninvolves an integer literal and the column datatype is int2 or int8.\n\n-Neil\n",
"msg_date": "Wed, 19 May 2004 17:51:08 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries"
},
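Spelled out against the files_t table from this thread, the three spellings behave differently on 7.4 (the plans noted in the comments are what one would typically expect, not guaranteed output):

    EXPLAIN SELECT * FROM files_t WHERE parent = 3333;        -- int4 literal: sequential scan
    EXPLAIN SELECT * FROM files_t WHERE parent = 3333::int8;  -- can use files_parent_idx
    EXPLAIN SELECT * FROM files_t WHERE parent = '3333';      -- quoted literal resolves to int8: index usable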
{
"msg_contents": "Neil Conway wrote:\n\n> PostgreSQL (< 7.5) won't consider using an indexscan when the predicate \n> involves an integer literal and the column datatype is int2 or int8.\n\nIs this fixed for 7.5? It isn't checked off on the TODO list at\nhttp://developer.postgresql.org/todo.php\n",
"msg_date": "Thu, 20 May 2004 00:49:43 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries"
},
{
"msg_contents": "Joseph Shraibman <[email protected]> writes:\n> Neil Conway wrote:\n>> PostgreSQL (< 7.5) won't consider using an indexscan when the predicate \n>> involves an integer literal and the column datatype is int2 or int8.\n\n> Is this fixed for 7.5? It isn't checked off on the TODO list at\n> http://developer.postgresql.org/todo.php\n\nIt is. I don't know why Bruce hasn't checked it off.\n\n\nSome other stuff that needs work in TODO:\n\n: Bracketed items \"[]\" have more detailed.\n\nMore detailed what? Grammar please.\n\n: * Remove unreferenced table files and temp tables during database vacuum\n: or postmaster startup (Bruce)\n\nI'm not sure this is still needed given that we now log file deletion in\nWAL.\n\n: * Allow pg_dump to dump sequences using NO_MAXVALUE and NO_MINVALUE\n\nSeems to be done.\n\n: * Prevent whole-row references from leaking memory, e.g. SELECT COUNT(tab.*)\n\nDone.\n\n: * Make LENGTH() of CHAR() not count trailing spaces\n\nDone.\n\n: * Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,\n: float4, numeric/decimal too\n\nDone, per above.\n\n: * Allow more ISOLATION LEVELS to be accepted, but issue a warning for them\n\nPresently we accept all four with no warning ...\n\n: * Add GUC setting to make created tables default to WITHOUT OIDS\n\nSeems to be done, other than the argument about how pg_dump should work.\n\n: * Allow fastpast to pass values in portable format\n\nThis was done in 7.4.\n\n: * Move psql backslash database information into the backend, use nmumonic\n: commands? [psql]\n\nSpelling problem...\n\n: * JDBC\n\nWith JDBC out of the core, I'm not sure why we still have a JDBC section\nin the core TODO.\n\n: * Have pg_dump -c clear the database using dependency information\n\nI think this works now. Not really tested, but in principle it should\nwork.\n\n: * Cache last known per-tuple offsets to speed long tuple access\n\nThis sounds exactly like attcacheoff, which has been there since\nBerkeley. Either remove this or fix the description to give some\nidea what's really meant.\n\n: * Automatically place fixed-width, NOT NULL columns first in a table\n\nThis is not ever going to happen, given that we've rejected the idea of\nhaving separate logical and physical column positions.\n\n: * Change representation of whole-tuple parameters to functions\n\nDone. (However, you might want to add something about supporting\ncomposite types as table columns, which isn't done.)\n\n: * Allow the regression tests to start postmaster with -i so the tests\n: can be run on systems that don't support unix-domain sockets\n\nDone long ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 May 2004 11:33:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries "
},
{
"msg_contents": "Tom Lane wrote:\n> Joseph Shraibman <[email protected]> writes:\n> > Neil Conway wrote:\n> >> PostgreSQL (< 7.5) won't consider using an indexscan when the predicate \n> >> involves an integer literal and the column datatype is int2 or int8.\n> \n> > Is this fixed for 7.5? It isn't checked off on the TODO list at\n> > http://developer.postgresql.org/todo.php\n> \n> It is. I don't know why Bruce hasn't checked it off.\n> \n\n\nOK, marked as done:\n\n* -Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,\n float4, numeric/decimal too\n> \n> Some other stuff that needs work in TODO:\n> \n> : Bracketed items \"[]\" have more detailed.\n> \n> More detailed what? Grammar please.\n\nFixed. \"more detail\".\n\n> : * Remove unreferenced table files and temp tables during database vacuum\n> : or postmaster startup (Bruce)\n> \n> I'm not sure this is still needed given that we now log file deletion in\n> WAL.\n\nOK, removed.\n\n> \n> : * Allow pg_dump to dump sequences using NO_MAXVALUE and NO_MINVALUE\n> \n> Seems to be done.\n\n\nOK.\n\n\n> \n> : * Prevent whole-row references from leaking memory, e.g. SELECT COUNT(tab.*)\n> \n> Done.\n\nOK.\n\n> \n> : * Make LENGTH() of CHAR() not count trailing spaces\n> \n> Done.\n\nOK.\n\n> \n> : * Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,\n> : float4, numeric/decimal too\n> \n> Done, per above.\n\nGot it.\n\n> \n> : * Allow more ISOLATION LEVELS to be accepted, but issue a warning for them\n> \n> Presently we accept all four with no warning ...\n\nOK. Warning part removed.\n\n> \n> : * Add GUC setting to make created tables default to WITHOUT OIDS\n> \n> Seems to be done, other than the argument about how pg_dump should work.\n\nI did the pg_dump part using SET only where needed. That is done.\n\n> \n> : * Allow fastpast to pass values in portable format\n> \n> This was done in 7.4.\n\n\nRemoved.\n\n> \n> : * Move psql backslash database information into the backend, use nmumonic\n> : commands? [psql]\n> \n> Spelling problem...\n\nFixed.\n\n> \n> : * JDBC\n> \n> With JDBC out of the core, I'm not sure why we still have a JDBC section\n> in the core TODO.\n\nRemoved. If they want it they can get it from our CVS history.\n\n> : * Have pg_dump -c clear the database using dependency information\n> \n> I think this works now. Not really tested, but in principle it should\n> work.\n\nOK.\n> \n> : * Cache last known per-tuple offsets to speed long tuple access\n> \n> This sounds exactly like attcacheoff, which has been there since\n> Berkeley. Either remove this or fix the description to give some\n> idea what's really meant.\n\nAdded \"adjusting for NULLs and TOAST values. The issue is that when\nNULLs or TOAST is present, those aren't useful. I was thinking we could\nremember the pattern of the previous row and use those offsets if the\nTOAST/NULL pattern was the same, or something like that. Is that a\nvalid idea?\n\n> : * Automatically place fixed-width, NOT NULL columns first in a table\n> \n> This is not ever going to happen, given that we've rejected the idea of\n> having separate logical and physical column positions.\n\nRemoved.\n\n> \n> : * Change representation of whole-tuple parameters to functions\n> \n> Done. 
(However, you might want to add something about supporting\n> composite types as table columns, which isn't done.)\n\nOK, marked a done, and added new line:\n\n\t* Support composite types as table columns\n\n> : * Allow the regression tests to start postmaster with -i so the tests\n> : can be run on systems that don't support unix-domain sockets\n> \n> Done long ago.\n\nRemoved.\n\n\nThanks for the updates!\n\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Thu, 20 May 2004 11:56:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries"
},
{
"msg_contents": "Tom Lane wrote:\n\n> \n> : * JDBC\n> \n> With JDBC out of the core, I'm not sure why we still have a JDBC section\n> in the core TODO.\n\nSpeaking of which why is the jdbc site so hard to find? For that matter \nthe new foundry can only be found through the news article on the front \npage.\n",
"msg_date": "Thu, 20 May 2004 12:52:13 -0400",
"msg_from": "Joseph Shraibman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL performance in simple queries"
}
] |
[
{
"msg_contents": "Duane wrote:\n\n > P.S. I've only just begun using PostgreSQL after having\n > used (and still using) DB2 on a mainframe for the past 14\n > years. My experience with Unix/Linux is limited to some\n > community college classes I've taken but we do have\n > a couple of experienced Linux sysadmins on our team.\n > I tell you this because my \"ignorance\" will probably\n > show more than once in my inquiries.\n\nDuane,\n\nIf you've been actively using and developing in DB2, presumably under \nMVS or whatever big blue is calling it these days, for 14 years, then \nyou will bring a wealth of big system expertise to Pg.\n\nPlease stay involved and make suggestions where you thing Pg could be \nimproved.\n\nMarty\n\n",
"msg_date": "Thu, 20 May 2004 07:46:08 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Hardware Platform"
}
] |
[
{
"msg_contents": "Hello,\n\nI have the following problem:\n\nWhen I run some query after I just run the Postmaster, it takse\nseveral seconds to execute (sometimes more than 10), if I rerun it\nagain afterwards, it takes mere milliseconds.\n\nSo, I guess it has to do with PostgreSQL caching.. But how exactly\ndoes it work? What does it cache? And how can I control it?\n\nI would like to load selected information in the memory before a user\nruns the query. Can I do it somehow? As PostgreSQL is used in my case\nas webserver, it isn't really helping if the user has to wait 10\nseconds every time he goes to a new page (even if refreshing the page\nwould be really quick, sine Postgre already loaded the data to\nmemory).\n\nP.S If the query or its EXPLAIN are critical for a better\nunderstanding, let me know.\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\n",
"msg_date": "Fri, 21 May 2004 17:42:09 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL caching"
},
{
"msg_contents": "while you weren't looking, Vitaly Belman wrote:\n\n> So, I guess it has to do with PostgreSQL caching.. But how exactly\n> does it work? What does it cache? And how can I control it?\n\nPostgreSQL uses the operating system's disk cache. You can hint to\nthe postmaster how much memory is available for caching with the\neffective_cache_size directive in your postgresql.conf. If you're\nrunning a *nix OS, you can find this by watching `top` for a while;\nin the header, there's a \"cached\" value (or something to that effect).\nWatching this value, you can determine a rough average and set your\neffective_cache_size to that rough average, or perhaps slightly less.\nI'm not sure how to get this value on Windows.\n\nPgsql uses the OS's disk cache instead of its own cache management\nbecause the former is more likely to persist. If the postmaster\nmanaged the cache, as soon as the last connection died, the memory\nallocated for caching would be released, and all the cached data\nwould be lost. Relying instead on the OS to cache data means that,\nwhether or not there's a postmaster, so long as there has been one,\nthere'll be some data cached.\n\nYou can \"prepopulate\" the OS disk cache by periodically running a\nhandful of SELECT queries that pull from your most commonly accessed\ntables in a background process. (A good way of doing that is simply\nto run your most commonly executed SELECTS.) Those queries should\ntake the performance hit of fetching from disk, while your regular\nqueries hit the cache.\n\n/rls\n\n--\nRosser Schwarz\nTotal Card, Inc.\n\n",
"msg_date": "Fri, 21 May 2004 10:29:37 -0500",
"msg_from": "\"Rosser Schwarz\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
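A minimal sketch of the warm-up idea Rosser describes, assuming a cron job against a database named bookdb and the tables Vitaly later posts; the schedule, database name and statements are illustrative only, not from the thread:

# crontab entry: touch the hot tables every ten minutes so their blocks
# stay resident in the OS disk cache
*/10 * * * * psql -d bookdb -c "SELECT count(*) FROM bv_books; SELECT count(*) FROM bv_bookgenres;"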
{
"msg_contents": "Vitaly Belman wrote:\n> Hello,\n> \n> I have the following problem:\n> \n> When I run some query after I just run the Postmaster, it takse\n> several seconds to execute (sometimes more than 10), if I rerun it\n> again afterwards, it takes mere milliseconds.\n> \n> So, I guess it has to do with PostgreSQL caching.. But how exactly\n> does it work? What does it cache? And how can I control it?\n\nThere are two areas of cache - PostgreSQL's shared buffers and the \noperating system's disk-cache. You can't directly control what data is \ncached, it just keeps track of recently used data. It sounds like PG \nisn't being used for a while so your OS decides to use its cache for \nwebserver files.\n\n> I would like to load selected information in the memory before a user\n> runs the query. Can I do it somehow? As PostgreSQL is used in my case\n> as webserver, it isn't really helping if the user has to wait 10\n> seconds every time he goes to a new page (even if refreshing the page\n> would be really quick, sine Postgre already loaded the data to\n> memory).\n\nIf you could \"pin\" data in the cache it would run quicker, but at the \ncost of everything else running slower.\n\nSuggested steps:\n1. Read the configuration/tuning guide at:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n2. Post a sample query/explain analyse that runs very slowly when not \ncached.\n3. If needs be, you can write a simple timed script that performs a \nquery. Or, the autovacuum daemon might be what you want.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 21 May 2004 16:34:12 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "[email protected] (Richard Huxton) writes:\n> If you could \"pin\" data in the cache it would run quicker, but at the\n> cost of everything else running slower.\n>\n> Suggested steps:\n> 1. Read the configuration/tuning guide at:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> 2. Post a sample query/explain analyse that runs very slowly when not\n> cached.\n> 3. If needs be, you can write a simple timed script that performs a\n> query. Or, the autovacuum daemon might be what you want.\n\nI don't think this case will be anywhere near so simple to resolve.\n\nI have seen this phenomenon occur when a query needs to pull a\nmoderate number of blocks into memory to satisfy a query that involves\nsome moderate number of rows.\n\nLet's say you need 2000 rows, which fit into 400 blocks.\n\nThe first time the query runs, it needs to pull those 400 blocks off\ndisk, which requires 400 reads of 8K of data. That can easily take a\nfew seconds of I/O.\n\nThe second time, not only are those blocks cached, they are probably\ncached in the buffer cache, so that the I/O overhead disappears.\n\nThere's very likely no problem with the table statistics; they are\nleading to the right query plan, which happens to need to do 5 seconds\nof I/O to pull the data into memory.\n\nWhat is essentially required is the \"prescient cacheing algorithm,\"\nwhere the postmaster must consult /dev/esp in order to get a\nprediction of what blocks it may need to refer to in the next sixty\nseconds.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/linuxdistributions.html\n\"Normally, we don't do people's homework around here, but Venice is a\nvery beautiful city, so I'll make a small exception.\"\n--- Robert Redelmeier compromises his principles\n",
"msg_date": "Fri, 21 May 2004 13:22:50 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Hello Richard and Rosser,\n\nThank you both for the answers.\n\nI tried creating a semi cache by running all the queries and indeed it\nworked and I might use such way in the future if needed, yet though, I\ncan't help but to feel it isn't exactly the right way to work around\nthis problem. If I do, I might as well increase the effective_cache\nvalue as pointed by the config docs.\n\nAlso on this subject, previously I was only fighting with queries that\nrun poorly even if you run them 10 days in the row.. They don't seem\nto be cached at all. Does it cahce the query result? If so, it should\nmake any query run almost immediately the second time. If it doesn't\ncache the actual result, what does it cache?\n\nIf you'll be so kind though, I'd be glad if you could spot anything to\nspeed up in this query. Here's the query and its plan that happens\nwithout any caching:\n\n-------------------------------------------------------------------------------------------------------------\nQUERY\n-----\nSELECT bv_books. * ,\n vote_avg, \n vote_count \nFROM bv_bookgenres, \n bv_books \nWHERE bv_books.book_id = bv_bookgenres.book_id AND \n bv_bookgenres.genre_id = 5830\nORDER BY vote_avg DESC LIMIT 10 OFFSET 0; \n \nQUERY PLAN\n----------\nLimit (cost=2337.41..2337.43 rows=10 width=76) (actual time=7875.000..7875.000 rows=10 loops=1)\n -> Sort (cost=2337.41..2337.94 rows=214 width=76) (actual time=7875.000..7875.000 rows=10 loops=1)\n Sort Key: bv_books.vote_avg\n -> Nested Loop (cost=0.00..2329.13 rows=214 width=76) (actual time=16.000..7844.000 rows=1993 loops=1)\n -> Index Scan using i_bookgenres_genre_id on bv_bookgenres (cost=0.00..1681.54 rows=214 width=4) (actual time=16.000..3585.000 rows=1993 loops=1)\n Index Cond: (genre_id = 5830)\n -> Index Scan using bv_books_pkey on bv_books (cost=0.00..3.01 rows=1 width=76) (actual time=2.137..2.137 rows=1 loops=1993)\n Index Cond: (bv_books.book_id = \"outer\".book_id)\nTotal runtime: 7875.000 ms\n-------------------------------------------------------------------------------------------------------------\n\nSome general information:\n\nbv_books holds 17000 rows.\nbv_bookgenres holds 938797 rows.\n\nUsing the WHERE (genre_id == 5838) it cuts the number of book_ids to\naround 2000.\n\nAs far as indexes are concerned, there's an index on all the rows\nmentioned in the query (as can be seen from the explain), including\nthe vote_avg row.\n\nThanks and regards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\nFriday, May 21, 2004, 6:34:12 PM, you wrote:\n\nRH> Vitaly Belman wrote:\n>> Hello,\n>> \n>> I have the following problem:\n>> \n>> When I run some query after I just run the Postmaster, it takse\n>> several seconds to execute (sometimes more than 10), if I rerun it\n>> again afterwards, it takes mere milliseconds.\n>> \n>> So, I guess it has to do with PostgreSQL caching.. But how exactly\n>> does it work? What does it cache? And how can I control it?\n\nRH> There are two areas of cache - PostgreSQL's shared buffers and the\nRH> operating system's disk-cache. You can't directly control what data is\nRH> cached, it just keeps track of recently used data. It sounds like PG\nRH> isn't being used for a while so your OS decides to use its cache for\nRH> webserver files.\n\n>> I would like to load selected information in the memory before a user\n>> runs the query. Can I do it somehow? 
As PostgreSQL is used in my case\n>> as webserver, it isn't really helping if the user has to wait 10\n>> seconds every time he goes to a new page (even if refreshing the page\n>> would be really quick, sine Postgre already loaded the data to\n>> memory).\n\nRH> If you could \"pin\" data in the cache it would run quicker, but at the\nRH> cost of everything else running slower.\n\nRH> Suggested steps:\nRH> 1. Read the configuration/tuning guide at:\nRH> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nRH> 2. Post a sample query/explain analyse that runs very slowly when not\nRH> cached.\nRH> 3. If needs be, you can write a simple timed script that performs a\nRH> query. Or, the autovacuum daemon might be what you want.\n\n\n\n",
"msg_date": "Fri, 21 May 2004 20:33:37 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "> What is essentially required is the \"prescient cacheing algorithm,\"\n> where the postmaster must consult /dev/esp in order to get a\n> prediction of what blocks it may need to refer to in the next sixty\n> seconds.\n\nEasy enough. Television does it all the time with live shows. The guy\nwith the buzzer always seems to know what will be said before they say\nit. All we need is a 5 to 10 second delay...\n\n",
"msg_date": "Fri, 21 May 2004 14:33:54 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Rosser Schwarz wrote:\n> PostgreSQL uses the operating system's disk cache.\n\n... in addition to its own buffer cache, which is stored in shared \nmemory. You're correct though, in that the best practice is to keep the \nPostgreSQL cache small and give more memory to the operating system's \ndisk cache.\n\n> Pgsql uses the OS's disk cache instead of its own cache management\n> because the former is more likely to persist. If the postmaster\n> managed the cache, as soon as the last connection died, the memory\n> allocated for caching would be released, and all the cached data\n> would be lost.\n\nNo; the cache is stored in shared memory. It wouldn't persist over \npostmaster restarts (without some scheme of saving and restoring it), \nbut that has nothing to do with why the OS disk cache is usually kept \nlarger than the PG shared buffer cache.\n\n-Neil\n\n",
"msg_date": "Fri, 21 May 2004 15:02:32 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
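A hedged sketch of the split Neil describes, as it might look in a 7.x-era postgresql.conf: a modest PostgreSQL buffer cache plus a large effective_cache_size hint about the OS cache. The figures assume roughly 1 GB of RAM and are not taken from the thread:

shared_buffers = 8192           # 8192 x 8 kB pages = 64 MB kept by PostgreSQL itself
effective_cache_size = 98304    # ~768 MB assumed to be available as OS disk cache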
{
"msg_contents": "Vitaly Belman wrote:\n> \n> If you'll be so kind though, I'd be glad if you could spot anything to\n> speed up in this query. Here's the query and its plan that happens\n> without any caching:\n> \n> -------------------------------------------------------------------------------------------------------------\n> QUERY\n> -----\n> SELECT bv_books. * ,\n> vote_avg, \n> vote_count \n> FROM bv_bookgenres, \n> bv_books \n> WHERE bv_books.book_id = bv_bookgenres.book_id AND \n> bv_bookgenres.genre_id = 5830\n> ORDER BY vote_avg DESC LIMIT 10 OFFSET 0; \n> \n> QUERY PLAN\n> ----------\n> Limit (cost=2337.41..2337.43 rows=10 width=76) (actual time=7875.000..7875.000 rows=10 loops=1)\n> -> Sort (cost=2337.41..2337.94 rows=214 width=76) (actual time=7875.000..7875.000 rows=10 loops=1)\n> Sort Key: bv_books.vote_avg\n> -> Nested Loop (cost=0.00..2329.13 rows=214 width=76) (actual time=16.000..7844.000 rows=1993 loops=1)\n> -> Index Scan using i_bookgenres_genre_id on bv_bookgenres (cost=0.00..1681.54 rows=214 width=4) (actual time=16.000..3585.000 rows=1993 loops=1)\n> Index Cond: (genre_id = 5830)\n> -> Index Scan using bv_books_pkey on bv_books (cost=0.00..3.01 rows=1 width=76) (actual time=2.137..2.137 rows=1 loops=1993)\n> Index Cond: (bv_books.book_id = "outer".book_id)\n> Total runtime: 7875.000 ms\n\nPresuming that vote_avg is a field in the table bv_bookgenres, \ntry a composite index on genre_id and vote_avg and then see if \nyou can use the limit clause to reduce the number of loop \niterations from 1993 to 10.\n\nCREATE INDEX test_idx ON bv_bookgenres (genre_id, vote_avg);\n\n\nThe following query tries to force that execution lan and, \npresuming there is a foreign key relation between \nbv_books.book_id AND bv_bookgenres.book_id, I expect it will give \nthe same results, but be carefull with NULL's:\n\nSELECT\tbv_books. * ,\n\tvote_avg,\n\tvote_count\nFROM \t(\n\t\tSELECT\tbg.*\n\t\tFROM \tbv_bookgenres bg\n\t\tWHERE\tbg.genre_id = 5830\n\t\tORDER BY\n\t\t\tbg.vote_avg DESC\n\t\tLIMIT\t10\n\t) bv_bookgenres,\n\tbv_books\nWHERE\tbv_books.book_id = bv_bookgenres.book_id\nORDER BY\n\tvote_avg DESC\nLIMIT\t10;\n\nJochem\n\n\n-- \nI don't get it\nimmigrants don't work\nand steal our jobs\n - Loesje\n\n",
"msg_date": "Tue, 25 May 2004 17:37:44 +0200",
"msg_from": "Jochem van Dieten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Hello Jochem and Marty,\n\nI guess I should have posted the table structure before =(:\n\nTable structure + Indexes\n-------------------------\n\nCREATE TABLE public.bv_books\n(\n book_id serial NOT NULL,\n book_title varchar(255) NOT NULL,\n series_id int4,\n series_index int2,\n annotation_desc_id int4,\n description_desc_id int4,\n book_picture varchar(255) NOT NULL,\n vote_avg float4 NOT NULL,\n vote_count int4 NOT NULL,\n CONSTRAINT bv_books_pkey PRIMARY KEY (book_id)\n) WITH OIDS;\n\nCREATE INDEX i_books_vote_avg\n ON public.bv_books\n USING btree\n (vote_avg);\n\nCREATE INDEX i_books_vote_count\n ON public.bv_books\n USING btree\n (vote_count);\n\n-------------------------\n\nCREATE TABLE public.bv_bookgenres\n(\n book_id int4 NOT NULL,\n genre_id int4 NOT NULL,\n CONSTRAINT bv_bookgenres_pkey PRIMARY KEY (book_id, genre_id),\n CONSTRAINT fk_bookgenres_book_id FOREIGN KEY (book_id) REFERENCES public.bv_books (book_id) ON UPDATE RESTRICT ON DELETE RESTRICT\n) WITH OIDS;\n\nCREATE INDEX i_bookgenres_book_id\n ON public.bv_bookgenres\n USING btree\n (book_id);\n\nCREATE INDEX i_bookgenres_genre_id\n ON public.bv_bookgenres\n USING btree\n (genre_id);\n-------------------------\n\nMS> I didn't see the table structure, but I assume that the vote_avg and\nMS> vote_count fields are in bv_bookgenres. If no fields are actually \nMS> needed from bv_bookgenres, then the query might be constructed in a way \nMS> that only the index would be read, without loading any row data.\n\nI didn't understand you. vote_avg is stored in bv_books.. So yes, the\nonly thing I need from bv_bookgenres is the id of the book, but I can't\nstore this info in bv_books because there is N to N relationship\nbetween them - every book can belong to a number of genres... If\nthat's what you meant.\n\nMS> I think that you mentioned this was for a web app. Do you actually have\nMS> a web page that displays 2000 rows of data?\n\nWell.. It is all \"paginated\", you can access 2000 items of the data\n(as there are actually 2000 books in the genre) but you only see 10\nitems at a time.. I mean, probably no one would go over the 2000\nbooks, but I can't just hide them =\\.\n\nJvD> Presuming that vote_avg is a field in the table bv_bookgenres,\nJvD> try a composite index on genre_id and vote_avg and then see if \nJvD> you can use the limit clause to reduce the number of loop \nJvD> iterations from 1993 to 10.\n\nI'm afraid your idea is invalid in my case =\\... Naturally I could\neventually do data coupling to gain perforemnce boost if this issue\nwill not be solved in other ways. I'll keep your idea in mind anyway,\nthanks.\n\nOnce again thanks for you feedback.\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\nTuesday, May 25, 2004, 6:37:44 PM, you wrote:\n\nJvD> Vitaly Belman wrote:\n>> \n>> If you'll be so kind though, I'd be glad if you could spot anything to\n>> speed up in this query. Here's the query and its plan that happens\n>> without any caching:\n>> \n>> -------------------------------------------------------------------------------------------------------------\n>> QUERY\n>> -----\n>> SELECT bv_books. 
* ,\n>> vote_avg, \n>> vote_count \n>> FROM bv_bookgenres, \n>> bv_books \n>> WHERE bv_books.book_id = bv_bookgenres.book_id AND \n>> bv_bookgenres.genre_id = 5830\n>> ORDER BY vote_avg DESC LIMIT 10 OFFSET 0; \n>> \n>> QUERY PLAN\n>> ----------\n>> Limit (cost=2337.41..2337.43 rows=10 width=76) (actual\n>> time=7875.000..7875.000 rows=10 loops=1)\n>> -> Sort (cost=2337.41..2337.94 rows=214 width=76) (actual\n>> time=7875.000..7875.000 rows=10 loops=1)\n>> Sort Key: bv_books.vote_avg\n>> -> Nested Loop (cost=0.00..2329.13 rows=214 width=76)\n>> (actual time=16.000..7844.000 rows=1993 loops=1)\n>> -> Index Scan using i_bookgenres_genre_id on\n>> bv_bookgenres (cost=0.00..1681.54 rows=214 width=4) (actual\n>> time=16.000..3585.000 rows=1993 loops=1)\n>> Index Cond: (genre_id = 5830)\n>> -> Index Scan using bv_books_pkey on bv_books \n>> (cost=0.00..3.01 rows=1 width=76) (actual time=2.137..2.137 rows=1\n>> loops=1993)\n>> Index Cond: (bv_books.book_id = "outer".book_id)\n>> Total runtime: 7875.000 ms\n\nJvD> Presuming that vote_avg is a field in the table bv_bookgenres, \nJvD> try a composite index on genre_id and vote_avg and then see if \nJvD> you can use the limit clause to reduce the number of loop \nJvD> iterations from 1993 to 10.\n\nJvD> CREATE INDEX test_idx ON bv_bookgenres (genre_id, vote_avg);\n\n\nJvD> The following query tries to force that execution lan and, \nJvD> presuming there is a foreign key relation between \nJvD> bv_books.book_id AND bv_bookgenres.book_id, I expect it will give\nJvD> the same results, but be carefull with NULL's:\n\nJvD> SELECT\tbv_books. * ,\nJvD> \tvote_avg,\nJvD> \tvote_count\nJvD> FROM \t(\nJvD> \t\tSELECT\tbg.*\nJvD> \t\tFROM \tbv_bookgenres bg\nJvD> \t\tWHERE\tbg.genre_id = 5830\nJvD> \t\tORDER BY\nJvD> \t\t\tbg.vote_avg DESC\nJvD> \t\tLIMIT\t10\nJvD> \t) bv_bookgenres,\nJvD> \tbv_books\nJvD> WHERE\tbv_books.book_id = bv_bookgenres.book_id\nJvD> ORDER BY\nJvD> \tvote_avg DESC\nJvD> LIMIT\t10;\n\nJvD> Jochem\n\n\n\n",
"msg_date": "Tue, 25 May 2004 22:53:05 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Vitaly,\n\nThis looks like there might be some room for performance improvement...\n\n > MS> I didn't see the table structure, but I assume\n > MS> that the vote_avg and\n > MS> vote_count fields are in bv_bookgenres.\n >\n > I didn't understand you. vote_avg is stored in bv_books.\n\nOk. That helps. The confusion (on my end) came from the SELECT clause \nof the query you provided:\n\n > SELECT bv_books. * ,\n > vote_avg,\n > vote_count\n\nAll fields from bv_books were selected (bv_books.*) along with vote_agv \nand vote_count. My assumption was that vote_avg and vote_count were \ntherefore not in bv_books.\n\nAt any rate, a query with an IN clause should help quite a bit:\n\nSELECT bv_books. *\nFROM bv_books\nWHERE bv_books.book_id IN (\n SELECT book_id\n FROM bv_genres\n WHERE bv_bookgenres.genre_id = 5830\n )\nORDER BY vote_avg DESC LIMIT 10 OFFSET 0;\n\nGive it a whirl.\n\nMarty\n\n",
"msg_date": "Tue, 25 May 2004 16:24:18 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
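For pre-7.4 servers, where IN (subquery) was often planned poorly, the equivalent EXISTS form (the suggestion quoted later in the thread as coming from Nick) would look roughly like this; a sketch only, written against the table names already posted:

SELECT bv_books.*
FROM bv_books
WHERE EXISTS (
    SELECT 1
    FROM bv_bookgenres
    WHERE bv_bookgenres.book_id = bv_books.book_id
      AND bv_bookgenres.genre_id = 5830
)
ORDER BY vote_avg DESC LIMIT 10 OFFSET 0;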
{
"msg_contents": "On Tue, 2004-05-25 at 15:53, Vitaly Belman wrote:\n> >> \n> >> QUERY PLAN\n> >> ----------\n> >> Limit (cost=2337.41..2337.43 rows=10 width=76) (actual\n> >> time=7875.000..7875.000 rows=10 loops=1)\n> >> -> Sort (cost=2337.41..2337.94 rows=214 width=76) (actual\n> >> time=7875.000..7875.000 rows=10 loops=1)\n> >> Sort Key: bv_books.vote_avg\n> >> -> Nested Loop (cost=0.00..2329.13 rows=214 width=76)\n> >> (actual time=16.000..7844.000 rows=1993 loops=1)\n> >> -> Index Scan using i_bookgenres_genre_id on\n> >> bv_bookgenres (cost=0.00..1681.54 rows=214 width=4) (actual\n> >> time=16.000..3585.000 rows=1993 loops=1)\n> >> Index Cond: (genre_id = 5830)\n> >> -> Index Scan using bv_books_pkey on bv_books \n> >> (cost=0.00..3.01 rows=1 width=76) (actual time=2.137..2.137 rows=1\n> >> loops=1993)\n> >> Index Cond: (bv_books.book_id = "outer".book_id)\n> >> Total runtime: 7875.000 ms\n> \n\nA question and two experiments... what version of postgresql is this?\n\nTry reindexing i_bookgenres_genre_id and capture the explain analyze for\nthat. If it doesn't help try doing set enable_indexscan = false and\ncapture the explain analyze for that. \n\nRobert Treat\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "26 May 2004 09:13:58 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Hello Marty, Nick and Robert,\n\nNB> Depending on what version of PG you are running, IN might take a while\nNB> to complete. If so try an EXISTS instead\n\nRT> A question and two experiments... what version of postgresql is this?\n\nI am using the newer 7.5dev native Windows port. For this reason I\ndon't think that IN will cause any trouble (I read that this issue was\nresolved in 7.4).\n\nMS> At any rate, a query with an IN clause should help quite a bit\n\nMS> SELECT bv_books. *\nMS> FROM bv_books\nMS> WHERE bv_books.book_id IN (\nMS> SELECT book_id\nMS> FROM bv_genres\nMS> WHERE bv_bookgenres.genre_id = 5830\nMS> )\nMS> ORDER BY vote_avg DESC LIMIT 10 OFFSET 0;\n\nIt looks like it helps a bit (though you meant \"FROM bv_bookgenres\",\nright?). I can't tell you how MUCH it helped though, because of two\nreasons:\n\n1) As soon as I run a query, it is cached in the memory and I can't\nreally find a good way to flush it out of there to test again except a\nfull computer reset (shutting postmaster down doesn't help). If you\nhave a better idea on this, do tell me =\\ (Reminding again, I am on\nWindows).\n\n2) I *think* I resolved this issue, at least for most of the genre_ids\n(didn't go through them all, but tried a few with different book count\nand the results looked quite good). The fault was partly mine, a few\nweeks ago I increase the statistics for the genre_id column a bit too\nmuch (from 10 to 70), I was unsure how exactly it works (and still am)\nbut it helped for a few genre_ids that had a high book count, yet it\nalso hurt the performence for the genres without as much ids. I now\nhalved the stastics (to 58) and almost everything looks good now.\n\nBecause of that I'll stop working on that query for a while (unless\nyou have some more performance tips on the subject). Big thanks to\neveryone who helped.. And I might bring this issue later again, it it\nstill will cause too much troubles.\n\nRT> Try reindexing i_bookgenres_genre_id and capture the explain\nRT> analyze for that.\n\nIs that's what you meant \"REINDEX INDEX i_bookgenres_genre_id\"? But it\nreturns no messages what-so-ever =\\. 
I can EXPLAIN it either.\n\nRT> If it doesn't help try doing set enable_indexscan = false and\nRT> capture the explain analyze for that.\n\nHere it is:\n\n------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN\nLimit (cost=41099.93..41099.96 rows=10 width=76) (actual time=6734.000..6734.000 rows=10 loops=1)\n -> Sort (cost=41099.93..41100.45 rows=208 width=76) (actual time=6734.000..6734.000 rows=10 loops=1)\n Sort Key: bv_books.vote_count\n -> Merge Join (cost=40229.21..41091.92 rows=208 width=76) (actual time=6078.000..6593.000 rows=1993 loops=1)\n Merge Cond: (\"outer\".book_id = \"inner\".book_id)\n -> Sort (cost=16817.97..16818.49 rows=208 width=4) (actual time=1062.000..1062.000 rows=1993 loops=1)\n Sort Key: bv_bookgenres.book_id\n -> Seq Scan on bv_bookgenres (cost=0.00..16809.96 rows=208 width=4) (actual time=0.000..1047.000 rows=1993 loops=1)\n Filter: (genre_id = 5830)\n -> Sort (cost=23411.24..23841.04 rows=171918 width=76) (actual time=5016.000..5189.000 rows=171801 loops=1)\n Sort Key: bv_books.book_id\n -> Seq Scan on bv_books (cost=0.00..4048.18 rows=171918 width=76) (actual time=0.000..359.000 rows=171918 loops=1)\nTotal runtime: 6734.000 ms\n------------------------------------------------------------------------------------------------------------------------------------------\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\nWednesday, May 26, 2004, 1:24:18 AM, you wrote:\n\nMS> Vitaly,\n\nMS> This looks like there might be some room for performance improvement...\n\n >> MS> I didn't see the table structure, but I assume\n >> MS> that the vote_avg and\n >> MS> vote_count fields are in bv_bookgenres.\n >>\n >> I didn't understand you. vote_avg is stored in bv_books.\n\nMS> Ok. That helps. The confusion (on my end) came from the SELECT clause\nMS> of the query you provided:\n\n >> SELECT bv_books. * ,\n >> vote_avg,\n >> vote_count\n\nMS> All fields from bv_books were selected (bv_books.*) along with vote_agv\nMS> and vote_count. My assumption was that vote_avg and vote_count were\nMS> therefore not in bv_books.\n\nMS> At any rate, a query with an IN clause should help quite a bit:\n\nMS> SELECT bv_books. *\nMS> FROM bv_books\nMS> WHERE bv_books.book_id IN (\nMS> SELECT book_id\nMS> FROM bv_genres\nMS> WHERE bv_bookgenres.genre_id = 5830\nMS> )\nMS> ORDER BY vote_avg DESC LIMIT 10 OFFSET 0;\n\nMS> Give it a whirl.\n\nMS> Marty\n\n\nMS> ---------------------------(end of\nMS> broadcast)---------------------------\nMS> TIP 6: Have you searched our list archives?\n\nMS> http://archives.postgresql.org\n\n",
"msg_date": "Wed, 26 May 2004 17:33:56 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Vitaly,\n\n> I am using the newer 7.5dev native Windows port. For this reason I\n> don't think that IN will cause any trouble (I read that this issue was\n> resolved in 7.4).\n\nWell, for performance, all bets are off for the dev Windows port. Last I \nchecked, the Win32 team was still working on *stability* and hadn't yet even \nlooked at performance. Not that you can't improve the query, just that it \nmight not fix the problem.\n\nTherefore ... your detailed feedback is appreciated, especially if you can \ncompare stuff to the same database running on a Linux, Unix, or BSD machine.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 26 May 2004 09:17:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Hello Josh,\r\n\r\nJB> Not that you can't improve the query, just that it might not fix\r\nJB> the problem.\r\n\r\nYes, I'm aware it might be slower than the Linux version, but then, as\r\nyou said, I still can improve the query (as I did with your help now).\r\n\r\nBut true, if there's something awfully wrong with Win32 port\r\nperformance, I might be doing some overwork...\r\n\r\nJB> Therefore ... your detailed feedback is appreciated, especially if you can\r\nJB> compare stuff to the same database running on a Linux, Unix, or BSD machine.\r\n\r\nI can't easily install Linux right now.. But I am considering using it\r\nthrough VMWare. Do you think it would suffice as a comprasion?\r\n\r\nFrom what I saw (e.g\r\nhttp://usuarios.lycos.es/hernandp/articles/vpcvs.html) the performance\r\nare bad only when it's coming to graphics, otherwise it looks pretty\r\ngood.\r\n\r\nRegards,\r\n Vitaly Belman\r\n \r\n ICQ: 1912453\r\n AIM: VitalyB1984\r\n MSN: [email protected]\r\n Yahoo!: VitalyBe\r\n\r\nWednesday, May 26, 2004, 7:17:35 PM, you wrote:\r\n\r\nJB> Vitaly,\r\n\r\n>> I am using the newer 7.5dev native Windows port. For this reason I\r\n>> don't think that IN will cause any trouble (I read that this issue was\r\n>> resolved in 7.4).\r\n\r\nJB> Well, for performance, all bets are off for the dev Windows port. Last I\r\nJB> checked, the Win32 team was still working on *stability* and hadn't yet even\r\nJB> looked at performance. Not that you can't improve the query, just that it\r\nJB> might not fix the problem.\r\n\r\nJB> Therefore ... your detailed feedback is appreciated, especially if you can\r\nJB> compare stuff to the same database running on a Linux, Unix, or BSD machine.\r\n\r\n",
"msg_date": "Wed, 26 May 2004 21:00:34 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "> \n> Hello Josh,\n> \n> JB> Not that you can't improve the query, just that it might not fix\n> JB> the problem.\n> \n> Yes, I'm aware it might be slower than the Linux version, but then, as\n> you said, I still can improve the query (as I did with your help now).\n> \n> But true, if there's something awfully wrong with Win32 port\n> performance, I might be doing some overwork...\n> \n> JB> Therefore ... your detailed feedback is appreciated, especially if you\n> can\n> JB> compare stuff to the same database running on a Linux, Unix, or BSD\n> machine.\n> \n> I can't easily install Linux right now.. But I am considering using it\n> through VMWare. Do you think it would suffice as a comprasion?\n> \n> From what I saw (e.g\n> http://usuarios.lycos.es/hernandp/articles/vpcvs.html) the performance\n> are bad only when it's coming to graphics, otherwise it looks pretty\n> good.\n> \n> Regards,\n> Vitaly Belman\n> \n\nAn interesting alternative that I've been using lately is colinux\n(http://colinux.sf.net). It lets you run linux in windows and compared to\nvmware, I find it remarkably faster and when it is idle less resource\nintensive. I have vmware but if I'm only going to use a console based\nprogram, colinux seems to outperform it. \n\nNote that it may simply be interactive processes that run better because it\nhas a simpler interface and does not try to emulate the display hardware.\n(Therefore no X unless you use vmware) It seems though that there is less\noverhead and if that's the case, then everything should run faster.\n\nAlso note that getting it installed is a little more work than vmware. If\nyou're running it on a workstation that you use for normal day-to-day tasks\nthough I think you'll like it because you can detach the terminal and let it\nrun in the background. When I do that I often forget it is running because\nit produces such a low load on the system. If you are going to give it a\ntry, the one trick I used to get things going was to download the newest\nbeta of winpcap and then the networking came up easily. Everything else was\na piece of cake.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n\n",
"msg_date": "Thu, 27 May 2004 09:56:00 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
}
] |
[
{
"msg_contents": "All,\n\nI have a particularly troublesome table in my 7.3.4 database. It \ntypically has less than 50k rows, and a usage pattern of about 1k \nINSERTs, 50-100k UPDATEs, and no DELETEs per day. It is vacuumed and \nanalyzed three times per week. However, the performance of queries \nperformed on this table slowly degrades over a period of weeks, until \neven a \"select count(*)\" takes several seconds. The only way I've found \nto restore performance is to VACUUM FULL the table, which is highly \nundesireable in our application due to the locks it imposes.\n\nHere is the output of a psql session demonstrating the problem/solution. \nNote the \\timing output after each of the SELECTs:\n\nqqqqqqqq=> vacuum analyze xxxx;\nNOTICE: VACUUM will be committed automatically\nVACUUM\nTime: 715900.74 ms\nqqqqqqqq=> select count(*) from xxxx;\n count\n-------\n 17978\n(1 row)\n\nTime: 171789.08 ms\nqqqqqqqq=> vacuum full verbose xxxx;\nNOTICE: VACUUM will be committed automatically\nINFO: --Relation public.xxxx--\nINFO: Pages 188903: Changed 60, reaped 188896, Empty 0, New 0; Tup \n17987: Vac 1469, Keep/VTL 0/0, UnUsed 9120184, MinLen 92, MaxLen 468; \nRe-using: Free/Avail. Space 1504083956/1504083872; EndEmpty/Avail. Pages \n0/188901.\n CPU 6.23s/1.07u sec elapsed 55.02 sec.\nINFO: Index xxxx_yyyy_idx: Pages 29296; Tuples 17987: Deleted 1469.\n CPU 1.08s/0.20u sec elapsed 61.68 sec.\nINFO: Index xxxx_zzzz_idx: Pages 18412; Tuples 17987: Deleted 1469.\n CPU 0.67s/0.05u sec elapsed 17.90 sec.\nINFO: Rel xxxx: Pages: 188903 --> 393; Tuple(s) moved: 17985.\n CPU 15.97s/19.11u sec elapsed 384.49 sec.\nINFO: Index xxxx_yyyy_idx: Pages 29326; Tuples 17987: Deleted 17985.\n CPU 1.14s/0.65u sec elapsed 32.34 sec.\nINFO: Index xxxx_zzzz_idx: Pages 18412; Tuples 17987: Deleted 17985.\n CPU 0.43s/0.32u sec elapsed 13.37 sec.\nVACUUM\nTime: 566313.54 ms\nqqqqqqqq=> select count(*) from xxxx;\n count\n-------\n 17987\n(1 row)\n\nTime: 22.82 ms\n\n\nIs there any way to avoid doing a periodic VACUUM FULL on this table, \ngiven the fairly radical usage pattern? Or is the (ugly) answer to \nredesign our application to avoid this usage pattern?\n\nAlso, how do I read the output of VACUUM FULL? \nhttp://www.postgresql.org/docs/7.3/interactive/sql-vacuum.html does not \nexplain how to interpret the output, nor has google helped. I have a \nfeeling that the full vacuum is compressing hundreds of thousands of \npages of sparse data into tens of thousands of pages of dense data, thus \nreducing the number of block reads by an order of magnitude, but I'm not \nquite sure how to read the output.\n\nFWIW, this is last night's relevant output from the scheduled VACUUM \nANALYZE. 24 days have passed since the VACUUM FULL above:\n\nINFO: --Relation public.xxx--\nINFO: Index xxx_yyy_idx: Pages 30427; Tuples 34545: Deleted 77066.\n CPU 1.88s/0.51u sec elapsed 95.39 sec.\nINFO: Index xxx_zzz_idx: Pages 19049; Tuples 34571: Deleted 77066.\n CPU 0.83s/0.40u sec elapsed 27.92 sec.\nINFO: Removed 77066 tuples in 3474 pages.\n CPU 0.38s/0.32u sec elapsed 1.33 sec.\nINFO: Pages 13295: Changed 276, Empty 0; Tup 34540: Vac 77066, Keep 0, \nUnUsed 474020.\n Total CPU 3.34s/1.29u sec elapsed 125.00 sec.\nINFO: Analyzing public.xxx\n\n\nBest Regards,\n\nBill Montgomery\n",
"msg_date": "Fri, 21 May 2004 11:59:11 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoiding vacuum full on an UPDATE-heavy table"
},
{
"msg_contents": "> Is there any way to avoid doing a periodic VACUUM FULL on this table,\n> given the fairly radical usage pattern? Or is the (ugly) answer to\n> redesign our application to avoid this usage pattern?\n\nYes, you should be able to doing avoid periodic VACUUM FULL. The problem\nis that your table needs to be vacuumed MUCH more often. What should\nhappen is that assuming you have enough FSM space allocated and assuming\nyou vacuum the \"right\" amount, your table will reach a steady state size. \nAs you could see your from you vacumm verbose output your table was almost\nentriely dead space.\n\npg_autovacuum would probably help as it monitors activity and vacuumus\ntables accordingly. It is not included with 7.3.x but if you download it\nand compile yourself it will work against a 7.3.x server.\n\nGood luck,\n\nMatthew\n\n\n",
"msg_date": "Fri, 21 May 2004 15:22:40 -0400 (EDT)",
"msg_from": "\"Matthew T. O'Connor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding vacuum full on an UPDATE-heavy table"
},
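A sketch of the FSM knobs Matthew is referring to, as they would appear in a 7.3 postgresql.conf; the values below are assumptions sized for a table seeing on the order of 100k updates per day, not figures from the thread:

max_fsm_relations = 1000        # distinct tables/indexes tracked by the free space map
max_fsm_pages = 200000          # pages with reclaimable space remembered between vacuums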
{
"msg_contents": ">>>>> \"BM\" == Bill Montgomery <[email protected]> writes:\n\nBM> Is there any way to avoid doing a periodic VACUUM FULL on this table,\nBM> given the fairly radical usage pattern? Or is the (ugly) answer to\nBM> redesign our application to avoid this usage pattern?\n\nI'll bet upgrading to 7.4.2 clears up your problems. I'm not sure if\nit was in 7.3 or 7.4 where the index bloat problem was solved. Try to\nsee if just reindexing will help your performance. Also, run a plain\nvacuum at least nightly so that your table size stays reasonable. It\nwon't take much time on a table with only 50k rows in it.\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n",
"msg_date": "Fri, 21 May 2004 16:36:18 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding vacuum full on an UPDATE-heavy table"
},
{
"msg_contents": "Matthew T. O'Connor wrote:\n\n>>Is there any way to avoid doing a periodic VACUUM FULL on this table,\n>>given the fairly radical usage pattern? Or is the (ugly) answer to\n>>redesign our application to avoid this usage pattern?\n>> \n>>\n>pg_autovacuum would probably help as it monitors activity and vacuumus\n>tables accordingly. It is not included with 7.3.x but if you download it\n>and compile yourself it will work against a 7.3.x server.\n> \n>\nAs a quick fix, since we're upgrading to 7.4.2 in a few weeks anyhow \n(which includes pg_autovacuum), I've simply set up an hourly vacuum on \nthis table. It only takes ~4 seconds to execute when kept up on an \nhourly basis. Is there any penalty to vacuuming too frequently, other \nthan the time wasted in an unnecessary vacuum operation?\n\nMy hourly VACUUM VERBOSE output now looks like this:\n\nINFO: --Relation public.xxxx--\nINFO: Index xxxx_yyyy_idx: Pages 30452; Tuples 34990: Deleted 1226.\n CPU 0.67s/0.18u sec elapsed 0.87 sec.\nINFO: Index xxxx_yyyy_idx: Pages 19054; Tuples 34991: Deleted 1226.\n CPU 0.51s/0.13u sec elapsed 1.35 sec.\nINFO: Removed 1226 tuples in 137 pages.\n CPU 0.01s/0.00u sec elapsed 1.30 sec.\nINFO: Pages 13709: Changed 31, Empty 0; Tup 34990: Vac 1226, Keep 0, \nUnUsed 567233.\n Total CPU 1.58s/0.31u sec elapsed 3.91 sec.\nINFO: Analyzing public.xxxx\nVACUUM\n\nWith regards to Vivek's post about index bloat, I tried REINDEXing \nbefore I did a VACUUM FULL a month ago when performance had gotten \ndismal. It didn't help :-(\n\nBest Regards,\n\nBill Montgomery\n",
"msg_date": "Fri, 21 May 2004 17:29:33 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding vacuum full on an UPDATE-heavy table"
},
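One plausible way to schedule such an hourly vacuum from cron (a sketch; the database name is a placeholder and "xxxx" is the obfuscated table name from the post):

0 * * * * psql -d mydb -c "VACUUM ANALYZE xxxx;"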
{
"msg_contents": "Bill Montgomery <[email protected]> writes:\n> I have a particularly troublesome table in my 7.3.4 database. It \n> typically has less than 50k rows, and a usage pattern of about 1k \n> INSERTs, 50-100k UPDATEs, and no DELETEs per day. It is vacuumed and \n> analyzed three times per week.\n\nYou probably want to vacuum (non-FULL) once a day, if not more often.\nAlso take a look at your FSM settings --- it seems like a good bet that\nthey're not large enough to remember all the free space in your\ndatabase.\n\nWith adequate FSM the table should stabilize at a physical size\ncorresponding to number-of-live-rows + number-of-updates-between-VACUUMs, \nwhich would be three times the minimum possible size if you vacuum once\na day (50K + 100K) or five times if you stick to every-other-day\n(50K + 200K). Your VACUUM FULL output shows that the table had bloated\nto hundreds of times the minimum size:\n\n> INFO: Rel xxxx: Pages: 188903 --> 393; Tuple(s) moved: 17985.\n\nand AFAIK the only way that will happen is if you fail to vacuum at all\nor don't have enough FSM.\n\nThe indexes are looking darn large as well. In 7.3 about the only thing\nyou can do about this is REINDEX the table every so often. 7.4 should\nbehave better though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 May 2004 18:09:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding vacuum full on an UPDATE-heavy table "
},
{
"msg_contents": "Hi, \n\nAfter a table analyzed a table, the table's relpages \nof pg_class gets updated, but not those of associated \nindexes, which can be updated by \"vacuum analyze\".\n\nIs this a feature or a bug?\n\nI have some tables and there are almost only\ninserts. So I do not care about the \"dead tuples\",\nbut do care about the statistics. \n\nDoes the above \"future/bug\" affect the performance?\n\nMy PG version is 7.3.2.\n\nThanks,\n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Domains ��� Claim yours for only $14.70/year\nhttp://smallbusiness.promotions.yahoo.com/offer \n",
"msg_date": "Mon, 24 May 2004 09:51:12 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "index's relpages after table analyzed"
},
{
"msg_contents": ">From PG\nhttp://developer.postgresql.org/docs/postgres/diskusage.html:\n\n\"(Remember, relpages is only updated by VACUUM and\nANALYZE.)\"\n\n\n--- Litao Wu <[email protected]> wrote:\n> Hi, \n> \n> After a table analyzed a table, the table's relpages\n> \n> of pg_class gets updated, but not those of\n> associated \n> indexes, which can be updated by \"vacuum analyze\".\n> \n> Is this a feature or a bug?\n> \n> I have some tables and there are almost only\n> inserts. So I do not care about the \"dead tuples\",\n> but do care about the statistics. \n> \n> Does the above \"future/bug\" affect the performance?\n> \n> My PG version is 7.3.2.\n> \n> Thanks,\n> \n> \n> \n> \n> \t\n> \t\t\n> __________________________________\n> Do you Yahoo!?\n> Yahoo! Domains ?Claim yours for only $14.70/year\n> http://smallbusiness.promotions.yahoo.com/offer \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Domains ��� Claim yours for only $14.70/year\nhttp://smallbusiness.promotions.yahoo.com/offer \n",
"msg_date": "Mon, 24 May 2004 10:20:56 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index's relpages after table analyzed"
},
{
"msg_contents": "Litao,\n\n> I have some tables and there are almost only\n> inserts. So I do not care about the \"dead tuples\",\n> but do care about the statistics. \n\nThen just run ANALYZE on those tables, and not VACUUM.\nANALYZE <table-name>;\n\n> My PG version is 7.3.2.\n\nI would suggest upgrading to 7.3.6; the version you are using has several \nknown bugs.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 24 May 2004 12:05:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index's relpages after table analyzed"
},
{
"msg_contents": "Bill,\n\n> As a quick fix, since we're upgrading to 7.4.2 in a few weeks anyhow \n> (which includes pg_autovacuum), I've simply set up an hourly vacuum on \n> this table. It only takes ~4 seconds to execute when kept up on an \n> hourly basis. Is there any penalty to vacuuming too frequently, other \n> than the time wasted in an unnecessary vacuum operation?\n\nNope, no penalty other than the I/O and CPU load while vacuuming. If you \nhave a lot of transactions involving serial writes to many tables, sometimes \nyou can get into a deadlock situation, which is annoying, but I wouldn't \nassume this to be a problem until it crops up.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 24 May 2004 12:08:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding vacuum full on an UPDATE-heavy table"
},
{
"msg_contents": "Hi Josh,\n\nI know that and that is what I am using now.\nThe problem is I also need to know\nthe relpages each indexe takes and \"analyze\"\nseems not update relpages though vacuum and\nvacuum analyze do. \n\nAccording to PG doc:\n\"Remember, relpages is only updated by VACUUM and\nANALYZE\" \n\nMy question is why relpages of indexes\ndo not get updated after \"analyze\".\n\nHere is a quick test:\ncreate table test as select * from pg_class where 1=2;\ncreate index test_idx on test (relname);\ninsert into test select * from pg_class;\nselect relname, relpages from pg_class\nwhere relname in ('test', 'test_idx');\n relname | relpages\n----------+----------\n test | 10\n test_idx | 1\n(2 rows)\n\nanalyze test;\nselect relname, relpages from pg_class\nwhere relname in ('test', 'test_idx');\n relname | relpages\n----------+----------\n test | 27\n test_idx | 1\n(2 rows)\n-- Analyze only updates table's relpage, not index's!\n\nvacuum analyze test;\nselect relname, relpages from pg_class\nwhere relname in ('test', 'test_idx');\n relname | relpages\n----------+----------\n test | 27\n test_idx | 22\n(2 rows)\n-- \"acuum analzye\" updates both\n-- \"vacuum\" only also updates both\n\nThank you for your help!\n\n\n--- Josh Berkus <[email protected]> wrote:\n> Litao,\n> \n> > I have some tables and there are almost only\n> > inserts. So I do not care about the \"dead tuples\",\n> > but do care about the statistics. \n> \n> Then just run ANALYZE on those tables, and not\n> VACUUM.\n> ANALYZE <table-name>;\n> \n> > My PG version is 7.3.2.\n> \n> I would suggest upgrading to 7.3.6; the version you\n> are using has several \n> known bugs.\n> \n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nFriends. Fun. Try the all-new Yahoo! Messenger.\nhttp://messenger.yahoo.com/ \n",
"msg_date": "Mon, 24 May 2004 12:48:03 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index's relpages after table analyzed"
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> My question is why relpages of indexes\n> do not get updated after \"analyze\".\n\nIt's an oversight, which just got fixed in CVS tip a few weeks ago.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 24 May 2004 21:22:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index's relpages after table analyzed "
}
] |
[
{
"msg_contents": "Not knowing a whole lot about the internals of Pg, one thing jumped out \nat me, that each trip to get data from bv_books took 2.137 ms, which \ncame to over 4.2 seconds right there.\n\nThe problem \"seems\" to be the 1993 times that the nested loop spins, as \nalmost all of the time is spent there.\n\nPersonally, I am amazed that it takes 3.585 seconds to index scan \ni_bookgenres_genre_id. Is that a composite index? Analyzing the \ntaables may help, as the optimizer appears to mispredict the number of \nrows returned.\n\nI would be curious to see how it performs with an \"IN\" clause, which I \nwould suspect would go quite a bit fasrer. Try the following:\n\nSELECT bv_books. * ,\n vote_avg,\n vote_count\nFROM bv_bookgenres,\n bv_books\nWHERE bv_books.book_id IN (\n SELECT book_id\n FROM bv_genres\n WHERE bv_bookgenres.genre_id = 5830\n )\nAND bv_bookgenres.genre_id = 5830\nORDER BY vote_avg DESC LIMIT 10 OFFSET 0;\n\nIn this query, all of the book_id values are pulled at once.\n\nWho knows?\n\nIf you get statisctics on this, please post.\n\nMarty\n\n",
"msg_date": "Fri, 21 May 2004 14:10:56 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
},
{
"msg_contents": "Hello Marty,\n\nMS> Is that a composite index?\n\nIt is a regular btree index. What is a composite index?\n\nMS> Analyzing the taables may help, as the optimizer appears to\nMS> mispredict the number of rows returned.\n\nI'll try analyzing, but I highly doubt that it would help. I analyzed\nonce already and haven't changed the data since.\n\nMS> I would be curious to see how it performs with an \"IN\" clause,\nMS> which I would suspect would go quite a bit fasrer.\n\nActually it reached 20s before I canceled it... Here's the explain:\n\nQUERY PLAN\nLimit (cost=3561.85..3561.88 rows=10 width=76)\n -> Sort (cost=3561.85..3562.39 rows=214 width=76)\n Sort Key: bv_books.vote_avg\n -> Nested Loop (cost=1760.75..3553.57 rows=214 width=76)\n -> Index Scan using i_bookgenres_genre_id on bv_bookgenres (cost=0.00..1681.54 rows=214 width=0)\n Index Cond: (genre_id = 5830)\n -> Materialize (cost=1760.75..1761.01 rows=26 width=76)\n -> Nested Loop (cost=1682.07..1760.75 rows=26 width=76)\n -> HashAggregate (cost=1682.07..1682.07 rows=26 width=4)\n -> Index Scan using i_bookgenres_genre_id on bv_bookgenres (cost=0.00..1681.54 rows=214 width=4)\n Index Cond: (genre_id = 5830)\n -> Index Scan using bv_books_pkey on bv_books (cost=0.00..3.01 rows=1 width=76)\n Index Cond: (bv_books.book_id = \"outer\".book_id)\n\n \nThank you for your try.\n\nRegards,\nVitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\nFriday, May 21, 2004, 11:10:56 PM, you wrote:\n\nMS> Not knowing a whole lot about the internals of Pg, one thing jumped out\nMS> at me, that each trip to get data from bv_books took 2.137 ms, which\nMS> came to over 4.2 seconds right there.\n\nMS> The problem \"seems\" to be the 1993 times that the nested loop spins, as\nMS> almost all of the time is spent there.\n\nMS> Personally, I am amazed that it takes 3.585 seconds to index scan \nMS> i_bookgenres_genre_id. Is that a composite index? Analyzing the \nMS> taables may help, as the optimizer appears to mispredict the number of\nMS> rows returned.\n\nMS> I would be curious to see how it performs with an \"IN\" clause, which I\nMS> would suspect would go quite a bit fasrer. Try the following:\n\nMS> SELECT bv_books. * ,\nMS> vote_avg,\nMS> vote_count\nMS> FROM bv_bookgenres,\nMS> bv_books\nMS> WHERE bv_books.book_id IN (\nMS> SELECT book_id\nMS> FROM bv_genres\nMS> WHERE bv_bookgenres.genre_id = 5830\nMS> )\nMS> AND bv_bookgenres.genre_id = 5830\nMS> ORDER BY vote_avg DESC LIMIT 10 OFFSET 0;\n\nMS> In this query, all of the book_id values are pulled at once.\n\nMS> Who knows?\n\nMS> If you get statisctics on this, please post.\n\nMS> Marty\n\n\nMS> ---------------------------(end of\nMS> broadcast)---------------------------\nMS> TIP 4: Don't 'kill -9' the postmaster\n\n",
"msg_date": "Sun, 23 May 2004 01:22:09 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL caching"
}
] |
[
{
"msg_contents": "I will soon have at my disposal a new IBM pSeries server. The main \nmission for this box will be to serve several pg databases. I have \nordered 8GB of RAM and want to learn the best way to tune pg and AIX for \nthis configuration. Specifically, I am curious about shared memory \nlimitations. I've had to tune the shmmax on linux machines before but \nI'm new to AIX and not sure if this is even required on that platform? \nGoogle has not been much help for specifics here. \n\nHoping someone else here has a similar platform and can offer some advice..\n\nThanks!\n\n-Dan Harris\n",
"msg_date": "Fri, 21 May 2004 17:23:36 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "tuning for AIX 5L with large memory"
},
{
"msg_contents": "Clinging to sanity, [email protected] (Dan Harris) mumbled into her beard:\n> I will soon have at my disposal a new IBM pSeries server. The main\n> mission for this box will be to serve several pg databases. I have\n> ordered 8GB of RAM and want to learn the best way to tune pg and AIX\n> for this configuration. Specifically, I am curious about shared\n> memory limitations. I've had to tune the shmmax on linux machines\n> before but I'm new to AIX and not sure if this is even required on\n> that platform? Google has not been much help for specifics here.\n>\n> Hoping someone else here has a similar platform and can offer some advice..\n\nWe have a couple of these at work; they're nice and fast, although the\nprocess of compiling things, well, \"makes me feel a little unclean.\"\n\nOne of our sysadmins did all the \"configuring OS stuff\" part; I don't\nrecall offhand if there was a need to twiddle something in order to\nget it to have great gobs of shared memory.\n\nA quick Google on this gives me the impression that AIX supports, out\nof the box, multiple GB of shared memory without special kernel\nconfiguration. A DB/2 configuration guide tells users of Solaris and\nHP/UX that they need to set shmmax in sundry config files and reboot.\nNo such instruction for AIX.\n\nIf it needs configuring, it's probably somewhere in SMIT. And you can\nalways try starting up an instance to see how big it'll let you make\nshared memory.\n\nThe usual rule of thumb has been that having substantially more than\n10000 blocks worth of shared memory is unworthwhile. I don't think\nanyone has done a detailed study on AIX to see if bigger numbers play\nwell or not. I would think that having more than about 1 to 1.5GB of\nshared memory in use for buffer cache would start playing badly, but I\nhave no numbers.\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://www3.sympatico.ca/cbbrowne/sap.html\nWould-be National Mottos:\nUSA: \"We don't care where you come from. We can't find our *own*\ncountry on a map...\"\n",
"msg_date": "Fri, 21 May 2004 21:28:08 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning for AIX 5L with large memory"
},
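A rough starting point along the lines Christopher sketches for the 8 GB pSeries: keep PostgreSQL's own shared buffers modest and leave the bulk of RAM to the AIX file cache. Every number below is an assumption for illustration, not a measurement from the thread:

shared_buffers = 20000          # ~160 MB in 8 kB pages; well under the 1-1.5 GB ceiling mentioned
effective_cache_size = 786432   # ~6 GB assumed to remain available as OS file cache
sort_mem = 16384                # per-sort working memory in kB; raise cautiously with many connections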
{
"msg_contents": "Christopher Browne wrote:\n> One of our sysadmins did all the \"configuring OS stuff\" part; I don't\n> recall offhand if there was a need to twiddle something in order to\n> get it to have great gobs of shared memory.\n\nFWIW, the section on configuring kernel resources under various \nUnixen[1] doesn't have any documentation for AIX. If someone out there \nknows which knobs need to be tweaked, would they mind sending in a doc \npatch? (Or just specifying what needs to be done, and I'll add the SGML.)\n\n-Neil\n\n[1] \nhttp://developer.postgresql.org/docs/postgres/kernel-resources.html#SYSVIPC\n",
"msg_date": "Fri, 21 May 2004 22:31:15 -0400",
"msg_from": "Neil Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning for AIX 5L with large memory"
},
{
"msg_contents": "Christopher Browne wrote:\n\n>We have a couple of these at work; they're nice and fast, although the\n>process of compiling things, well, \"makes me feel a little unclean.\"\n>\n>\n> \n>\nThanks very much for your detailed reply, Christopher. Would you mind \nelaborating on the \"makes me feel a little unclean\" statement? Also, I'm \ncurious which models you are running and if you have any anecdotal \ncomparisons for perfomance? I'm completely unfamiliar with AIX, so if \nthere are dark corners that await me, I'd love to hear a little more so \nI can be prepared. I'm going out on a limb here and jumping to an \nunfamiliar architecture as well as OS, but the IO performance of these \nsystems has convinced me that it's what I need to break out of my I/O \nlimited x86 systems.\n\nI suppose when I do get it, I'll just experiment with different sizes of \nshared memory and run some benchmarks. For the price of these things, \nthey better be some good marks!\n\nThanks again\n\n-Dan Harris\n",
"msg_date": "Sat, 22 May 2004 21:30:25 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: tuning for AIX 5L with large memory"
},
{
"msg_contents": "[email protected] (Neil Conway) writes:\n> Christopher Browne wrote:\n>> One of our sysadmins did all the \"configuring OS stuff\" part; I don't\n>> recall offhand if there was a need to twiddle something in order to\n>> get it to have great gobs of shared memory.\n>\n> FWIW, the section on configuring kernel resources under various\n> Unixen[1] doesn't have any documentation for AIX. If someone out there\n> knows which knobs need to be tweaked, would they mind sending in a doc\n> patch? (Or just specifying what needs to be done, and I'll add the\n> SGML.)\n\nAfter verifying that nobody wound up messing with the kernel\nparameters, here's a docs patch...\n\nIndex: runtime.sgml\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.263\ndiff -c -u -r1.263 runtime.sgml\n--- runtime.sgml\t29 Apr 2004 04:37:09 -0000\t1.263\n+++ runtime.sgml\t26 May 2004 16:35:43 -0000\n@@ -3557,6 +3557,26 @@\n </listitem>\n </varlistentry>\n \n+ <varlistentry>\n+ <term><systemitem class=\"osname\">AIX</></term>\n+ <indexterm><primary>AIX</><secondary>IPC configuration</></>\n+ <listitem>\n+ <para>\n+ At least as of version 5.1, it should not be necessary to do\n+ any special configuration for such parameters as\n+ <varname>SHMMAX</varname>, as it appears this is configured to\n+ allow all memory to be used as shared memory. That is the\n+ sort of configuration commonly used for other databases such\n+ as <application>DB/2</application>.</para>\n+\n+ <para> It may, however, be necessary to modify the global\n+ <command>ulimit</command> information in\n+ <filename>/etc/security/limits</filename>, as the default hard\n+ limits for filesizes (<varname>fsize</varname>) and numbers of\n+ files (<varname>nofiles</varname>) may be too low.\n+ </para>\n+ </listitem>\n+ </varlistentry> \n \n <varlistentry>\n <term><systemitem class=\"osname\">Solaris</></term>\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www.ntlug.org/~cbbrowne/linuxxian.html\nHail to the sun god, he sure is a fun god, Ra, Ra, Ra!! \n",
"msg_date": "Wed, 26 May 2004 12:37:24 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning for AIX 5L with large memory"
},
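As a rough illustration of the ulimit change described in the patch above, the global limits on AIX live in per-user stanzas in /etc/security/limits. A minimal sketch, with purely illustrative values (check the AIX documentation before changing anything):

    default:
            fsize = -1
            nofiles = 2000

Here fsize = -1 lifts the file-size limit (the value is otherwise counted in 512-byte blocks) and nofiles sets the per-process open-file limit; the new limits apply to sessions started after the change.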
{
"msg_contents": "[email protected] (Dan Harris) writes:\n> Christopher Browne wrote:\n>\n>>We have a couple of these at work; they're nice and fast, although the\n>>process of compiling things, well, \"makes me feel a little unclean.\"\n>>\n> Thanks very much for your detailed reply, Christopher. Would you mind\n> elaborating on the \"makes me feel a little unclean\" statement? \n\nThe way AIX manages symbol tables for shared libraries is fairly\nastounding in its verbosity.\n\nGo and try to compile, by hand, a shared library, and you'll see :-).\n\n> Also, I'm curious which models you are running and if you have any\n> anecdotal comparisons for perfomance? I'm completely unfamiliar\n> with AIX, so if there are dark corners that await me, I'd love to\n> hear a little more so I can be prepared. I'm going out on a limb\n> here and jumping to an unfamiliar architecture as well as OS, but\n> the IO performance of these systems has convinced me that it's what\n> I need to break out of my I/O limited x86 systems.\n\nIt would probably be better for Andrew Sullivan to speak to the\ndetails on that. The main focus of comparison has been between AIX\nand Solaris, and the AIX systems have looked generally pretty good.\n\nWe haven't yet had AIX under what could be truly assessed as \"heavy\nload.\" That comes, in part, from the fact that brand-new\nlatest-generation pSeries hardware is _way_ faster than three-year-old\nSolaris hardware. Today's top-of-the-line is faster than what was\nhigh-end three years ago, so the load that the Sun boxes can cope with\n\"underwhelms\" the newer IBM hardware :-).\n\n> I suppose when I do get it, I'll just experiment with different\n> sizes of shared memory and run some benchmarks. For the price of\n> these things, they better be some good marks!\n\nWell, there's more than one way of looking at these things. One of\nthe important perspectives to me is the one of reliability. A system\nthat is Way Fast, but which crashes once in a while with some hardware\nfault is no good.\n\nI have been getting accustomed to Sun and Dell systems crashing way\ntoo often :-(. One of the merits of the pSeries hardware is that it's\ngot the maturity of IBM's long term experience at building reliable\nservers. If the IBM hardware was a bit slower (unlikely, based on it\nbeing way newer than the older Suns), but had suitable reliability,\nthat would seem a reasonable tradeoff to me.\n\nI take the very same perspective on the discussions of \"which\nfilesystem is best?\" Raw speed is NOT the only issue; it is\nsecondary, as far as I am concerned, to \"Is It Reliable?\"\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"ntlug.org\")\nhttp://cbbrowne.com/info/lsf.html\nAppendium to the Rules of the Evil Overlord #1: \"I will not build\nexcessively integrated security-and-HVAC systems. They may be Really\nCool, but are far too vulnerable to breakdowns.\"\n",
"msg_date": "Wed, 26 May 2004 12:58:55 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: tuning for AIX 5L with large memory"
}
] |
[
{
"msg_contents": " > Hello Marty,\n >\n > MS> Is that a composite index?\n >\n > It is a regular btree index. What is a composite index?\n\nMy apologies. A composite index is one that consists of multiple fields \n(aka multicolumn index). The reason I ask is that it was spending \nalmost half the time just searching bv_bookgenres, which seemed odd.\n\nI may be speaking out of turn since I am not overly familiar with Pg's \nquirks and internals.\n\nA composite index, or any index of a large field, will lower the number \nof index items stored per btree node, thereby lowering the branching \nfactor and increasing the tree depth. On tables with many rows, this \ncan result in many more disk accesses for reading the index. An index \nbtree that is 6 levels deep will require at least seven disk accesses (6 \nfor the index, one for the table row) per row retrieved.\n\nNot knowing the structure of the indexes, it's hard to say too much \nabout it. The fact that a 1993 row select from an indexed table took \n3.5 seconds caused me to take notice.\n\n > MS> I would be curious to see how it performs with an \"IN\" clause,\n > MS> which I would suspect would go quite a bit fasrer.\n >\n > Actually it reached 20s before I canceled it... Here's the explain:\n\nI believe that. The code I posted had a nasty join bug. If my math is \nright, the query was trying to return 1993*1993, or just under 4 million \nrows.\n\nI didn't see the table structure, but I assume that the vote_avg and \nvote_count fields are in bv_bookgenres. If no fields are actually \nneeded from bv_bookgenres, then the query might be constructed in a way \nthat only the index would be read, without loading any row data.\n\nI think that you mentioned this was for a web app. Do you actually have \na web page that displays 2000 rows of data?\n\nGood luck,\nMarty\n\n",
"msg_date": "Mon, 24 May 2004 16:52:02 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL caching"
}
] |
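For readers wondering about the terminology above: a composite (multicolumn) index is simply one declared over more than one field. A minimal sketch of the syntax, using hypothetical column names for the table mentioned in the thread:

    -- hypothetical columns; shows the multicolumn syntax only
    CREATE INDEX bv_bookgenres_genre_book_idx
        ON bv_bookgenres (genre_id, book_id);

Each additional field makes the index entries wider, which is what drives the lower branching factor and deeper btree that the post describes.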
[
{
"msg_contents": "Hi,\n\n \n\nHow can I automatically kill a process in the database (ex a select or\nexplain) if it exceeds my limit of 2 or 3 mins ..\n\n For example : I have a query that already running for 3 or 4 mins I want to\nkill that process for a reason and return a\n\nSignal to the user.\n\n \n\nThanks\n\nMichael \n\n\n\n\n\n\n\n\n\n\nHi,\n \nHow can I automatically kill a process in the database (ex a\nselect or explain) if it exceeds my limit of 2 or 3 mins ..\n For example : I have a query that already running for\n3 or 4 mins I want to kill that process for a reason and return a\nSignal to the user.\n \nThanks\nMichael",
"msg_date": "Tue, 25 May 2004 10:54:46 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Server process"
},
{
"msg_contents": "Read the docs on going SET statement_timeout TO ...;\n\nChris\n\nMichael Ryan S. Puncia wrote:\n\n> \n> \n> Hi,\n> \n> \n> \n> How can I automatically kill a process in the database (ex a select or \n> explain) if it exceeds my limit of 2 or 3 mins ..\n> \n> For example : I have a query that already running for 3 or 4 mins I \n> want to kill that process for a reason and return a\n> \n> Signal to the user.\n> \n> \n> \n> Thanks\n> \n> Michael\n> \n",
"msg_date": "Tue, 25 May 2004 11:05:37 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Server process"
}
] |
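A minimal sketch of the setting Christopher points to; the value is in milliseconds, 0 disables the limit, and when the timeout fires the statement is cancelled with an error that the client sees:

    -- cancel any statement in this session that runs longer than 3 minutes
    SET statement_timeout TO 180000;

The same option can be set in postgresql.conf to make it the default for every session.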
[
{
"msg_contents": "I can't understand what's going on in this simple query:\n\nselect c.name from Candidate C where\n C.candidate_id in (select candidate_id from REFERRAL R\n where r.employee_id = 3000);\n\n\nWhere Candidate.CANDIDATE_ID is the primary key for Candidate.\nHere's the EXPLAN ANALYZE:\n\nSeq Scan on candidate c (cost=100000000.00..100705078.06 rows=143282 width=18) \n (actual time=2320.01..2320.01\nrows=0 loops=1)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=2.42..2.42 rows=3 width=4) \n (actual time=0.00..0.00 rows=0 loops=286563)\n -> Index Scan using referral_employee_id_index on referral r \n (cost=0.00..2.42 rows=3 width=4) (actual\ntime=0.48..0.48 rows=0 loops=1)\n Index Cond: (employee_id = 3000)\n\n\nIt seems to be accurately estimating the number of rows returned by\nthe sub-query (3), but then it thinks that 143282 rows are going to be\nreturned by the main query, even though we are querying based on the\nPRIMARY KEY!\n\n\nTo prove that in index query is possible, I tried:\nselect c.name from Candidate C where\n C.candidate_id in (99, 22, 23123, 2344) which resulted in:\n\nIndex Scan using candidate_id_index, candidate_id_index,\ncandidate_id_index, candidate_id_index on candidate c\n (cost=0.00..17.52 rows=4 width=18) (actual time=24.437..29.088\nrows=3 loops=1)\n Index Cond:\n ((candidate_id = 99) OR (candidate_id = 22) OR \n (candidate_id = 23123) OR (candidate_id = 2344))\n\n\nAny ideas what's causing the query planner to make such a simple and\ndrastic error?\n\nThanks,\nJosh\n",
"msg_date": "Tue, 25 May 2004 11:37:55 -0700",
"msg_from": "Josh Sacks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Not using Primary Key in query"
},
{
"msg_contents": "Josh Sacks <[email protected]> writes:\n> I can't understand what's going on in this simple query:\n\nIf you are using anything older than PG 7.4, you should not expect good\nperformance from WHERE ... IN (sub-SELECT) queries. There's essentially\nno optimization happening there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 29 May 2004 01:06:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Not using Primary Key in query "
}
] |
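For anyone stuck on a release older than 7.4, the usual workaround is to recast the IN (sub-SELECT) as an EXISTS (or as a plain join); a sketch against the tables from the post above:

    -- equivalent formulation that pre-7.4 planners handle far better
    SELECT c.name
      FROM Candidate c
     WHERE EXISTS (SELECT 1
                     FROM referral r
                    WHERE r.employee_id  = 3000
                      AND r.candidate_id = c.candidate_id);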
[
{
"msg_contents": "Hi. i hava a postresql 7.4.2 in a production server.\n\ntha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\n\nThe postresql.conf say:\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 1000 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 1024 # min 64, size in KB\nvacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\nmax_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = true # turns forced synchronization on or off\nwal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\nwal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\ncheckpoint_segments = 3 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\ncheckpoint_warning = 30 # 0 is off, in seconds\ncommit_delay = 0 # range 0-100000, in microseconds\ncommit_siblings = 5 # range 1-1000\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\nenable_hashagg = true\nenable_hashjoin = true\nenable_indexscan = true\nenable_mergejoin = true\nenable_nestloop = true\nenable_seqscan = true\nenable_sort = true\nenable_tidscan = true\n\n# - Planner Cost Constants -\n\neffective_cache_size = 1000 # typically 8KB each\nrandom_page_cost = 4 # units are one sequential page fetch cost\ncpu_tuple_cost = 0.01 # (same)\ncpu_index_tuple_cost = 0.001 # (same)\ncpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\ngeqo = true\ngeqo_threshold = 11\ngeqo_effort = 1\ngeqo_generations = 0\ngeqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\ngeqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100 # range 1-1000\nfrom_collapse_limit = 30\njoin_collapse_limit = 30 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\n#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\n#log_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose messages\n\n#log_min_error_statement = panic # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n\n#log_min_duration_statement = -1 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. 
Zero prints all queries.\n # Minus-one disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\n\n\ndebug_print_parse = true\ndebug_print_rewritten = true\ndebug_print_plan = true\ndebug_pretty_print = true\nlog_connections = true\nlog_duration = true\nlog_pid = true\nlog_statement = true\nlog_timestamp = true\nlog_hostname = true\nlog_source_port = true\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\nlog_parser_stats = true\nlog_planner_stats = true\nlog_executor_stats = true\n#log_statement_stats = true\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment\nsetting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'es_VE.UTF-8' # locale for system error message\nstrings\nlc_monetary = 'es_VE.UTF-8' # locale for monetary formatting\nlc_numeric = 'es_VE.UTF-8' # locale for number formatting\nlc_time = 'es_VE.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\nexplain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n\n\n\nBUT THE PERFORMANCE IT�S VERY SLOW\n\nwhat can do ?????\n\nThank\n\n\nMario Soto\n\n\n",
"msg_date": "Wed, 26 May 2004 11:26:30 -0400 (VET)",
"msg_from": "\"Mario Soto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance very slow"
},
{
"msg_contents": "Mario Soto wrote:\n\n>Hi. i hava a postresql 7.4.2 in a production server.\n>\n>tha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\n> \n>\nMario,\n\nStart with reading this:\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nWithout knowing anything about the size of your database, your usage \npatterns, or your disk subsystem (the most important part of a database \nserver, imho) I would suggest you first increase the number of \nshared_buffers allocated to Postgres. Most recommend keeping this number \nbelow 10000, but I've found I get the best performance with about 24000 \nshared_buffers with a ~5GB database on a machine with 4GB of ram, \ndedicated to Postgres. You'll have to experiment to see what works best \nfor you.\n\nAlso, make sure you VACUUM and ANALYZE on a regular basis. Again, the \nfrequency of this really depends on your data and usage patterns. More \nfrequent write operations require more frequent vacuuming.\n\nGood luck.\n\nBest Regards,\n\nBill Montgomery\n\n>The postresql.conf say:\n>\n>#---------------------------------------------------------------------------\n># RESOURCE USAGE (except WAL)\n>#---------------------------------------------------------------------------\n>\n># - Memory -\n>\n>shared_buffers = 1000 # min 16, at least max_connections*2, 8KB\n>each\n>sort_mem = 1024 # min 64, size in KB\n>vacuum_mem = 8192 # min 1024, size in KB\n>\n># - Free Space Map -\n>\n>max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n>max_fsm_relations = 1000 # min 100, ~50 bytes each\n>\n># - Kernel Resource Usage -\n>\n>max_files_per_process = 1000 # min 25\n>#preload_libraries = ''\n>\n>\n>#---------------------------------------------------------------------------\n># WRITE AHEAD LOG\n>#---------------------------------------------------------------------------\n>\n># - Settings -\n>\n>fsync = true # turns forced synchronization on or off\n>wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n>open_datasync\n>wal_buffers = 8 # min 4, 8KB each\n>\n># - Checkpoints -\n>\n>checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n>checkpoint_timeout = 300 # range 30-3600, in seconds\n>checkpoint_warning = 30 # 0 is off, in seconds\n>commit_delay = 0 # range 0-100000, in microseconds\n>commit_siblings = 5 # range 1-1000\n>\n>#---------------------------------------------------------------------------\n># QUERY TUNING\n>#---------------------------------------------------------------------------\n>\n># - Planner Method Enabling -\n>\n>enable_hashagg = true\n>enable_hashjoin = true\n>enable_indexscan = true\n>enable_mergejoin = true\n>enable_nestloop = true\n>enable_seqscan = true\n>enable_sort = true\n>enable_tidscan = true\n>\n># - Planner Cost Constants -\n>\n>effective_cache_size = 1000 # typically 8KB each\n>random_page_cost = 4 # units are one sequential page fetch cost\n>cpu_tuple_cost = 0.01 # (same)\n>cpu_index_tuple_cost = 0.001 # (same)\n>cpu_operator_cost = 0.0025 # (same)\n>\n># - Genetic Query Optimizer -\n>\n>geqo = true\n>geqo_threshold = 11\n>geqo_effort = 1\n>geqo_generations = 0\n>geqo_pool_size = 0 # default based on tables in statement,\n> # range 128-1024\n>geqo_selection_bias = 2.0 # range 1.5-2.0\n>\n># - Other Planner Options -\n>\n>default_statistics_target = 100 # range 1-1000\n>from_collapse_limit = 30\n>join_collapse_limit = 30 # 1 disables collapsing of explicit 
JOINs\n>\n>\n>#---------------------------------------------------------------------------\n># ERROR REPORTING AND LOGGING\n>#---------------------------------------------------------------------------\n>\n># - Syslog -\n>\n>#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\n>#syslog_facility = 'LOCAL0'\n>#syslog_ident = 'postgres'\n>\n># - When to Log -\n>\n>#client_min_messages = notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # log, info, notice, warning, error\n>\n>#log_min_messages = notice # Values, in order of decreasing detail:\n> # debug5, debug4, debug3, debug2, debug1,\n> # info, notice, warning, error, log, fatal,\n> # panic\n>\n>#log_error_verbosity = default # terse, default, or verbose messages\n>\n>#log_min_error_statement = panic # Values in order of increasing severity:\n> # debug5, debug4, debug3, debug2, debug1,\n> # info, notice, warning, error, panic(off)\n>\n>#log_min_duration_statement = -1 # Log all statements whose\n> # execution time exceeds the value, in\n> # milliseconds. Zero prints all queries.\n> # Minus-one disables.\n>\n>#silent_mode = false # DO NOT USE without Syslog!\n>\n># - What to Log -\n>\n>\n>\n>debug_print_parse = true\n>debug_print_rewritten = true\n>debug_print_plan = true\n>debug_pretty_print = true\n>log_connections = true\n>log_duration = true\n>log_pid = true\n>log_statement = true\n>log_timestamp = true\n>log_hostname = true\n>log_source_port = true\n>\n>\n>#---------------------------------------------------------------------------\n># RUNTIME STATISTICS\n>#---------------------------------------------------------------------------\n>\n># - Statistics Monitoring -\n>\n>log_parser_stats = true\n>log_planner_stats = true\n>log_executor_stats = true\n>#log_statement_stats = true\n>\n># - Query/Index Statistics Collector -\n>\n>stats_start_collector = true\n>stats_command_string = true\n>stats_block_level = true\n>stats_row_level = true\n>stats_reset_on_server_start = true\n>\n>\n>#---------------------------------------------------------------------------\n># CLIENT CONNECTION DEFAULTS\n>#---------------------------------------------------------------------------\n>\n># - Statement Behavior -\n>\n>#search_path = '$user,public' # schema names\n>#check_function_bodies = true\n>#default_transaction_isolation = 'read committed'\n>#default_transaction_read_only = false\n>#statement_timeout = 0 # 0 is disabled, in milliseconds\n>\n># - Locale and Formatting -\n>\n>#datestyle = 'iso, mdy'\n>#timezone = unknown # actually, defaults to TZ environment\n>setting\n>#australian_timezones = false\n>#extra_float_digits = 0 # min -15, max 2\n>#client_encoding = sql_ascii # actually, defaults to database encoding\n>\n># These settings are initialized by initdb -- they may be changed\n>lc_messages = 'es_VE.UTF-8' # locale for system error message\n>strings\n>lc_monetary = 'es_VE.UTF-8' # locale for monetary formatting\n>lc_numeric = 'es_VE.UTF-8' # locale for number formatting\n>lc_time = 'es_VE.UTF-8' # locale for time formatting\n>\n># - Other Defaults -\n>\n>explain_pretty_print = true\n>#dynamic_library_path = '$libdir'\n>#max_expr_depth = 10000 # min 10\n>\n>\n>#---------------------------------------------------------------------------\n># LOCK MANAGEMENT\n>#---------------------------------------------------------------------------\n>\n>#deadlock_timeout = 1000 # in milliseconds\n>#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes 
each\n>\n>\n>#---------------------------------------------------------------------------\n># VERSION/PLATFORM COMPATIBILITY\n>#---------------------------------------------------------------------------\n>\n># - Previous Postgres Versions -\n>\n>#add_missing_from = true\n>#regex_flavor = advanced # advanced, extended, or basic\n>#sql_inheritance = true\n>\n># - Other Platforms & Clients -\n>\n>#transform_null_equals = false\n>\n>\n>\n>BUT THE PERFORMANCE IT�S VERY SLOW\n>\n>what can do ?????\n>\n>Thank\n>\n>\n>Mario Soto\n>\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n",
"msg_date": "Wed, 26 May 2004 12:25:45 -0400",
"msg_from": "Bill Montgomery <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] performance very slow"
},
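As a concrete illustration of the advice above, a rough starting point for a 1 GB machine dedicated to PostgreSQL might look like the lines below; the numbers are only a sketch and should be validated against the actual workload:

    shared_buffers = 10000          # ~80 MB of 8 KB buffers, up from the 1000 in the posted config
    sort_mem = 8192                 # per-sort memory, in KB
    effective_cache_size = 65536    # ~512 MB in 8 KB pages; how much the OS is expected to cache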
{
"msg_contents": "OK. Thank fou your help.\n\nIn this moment the size of database its 2GB.\n\nAnd the machine it�s only to postgresql.\n\nGracias\n\n\n> Mario Soto wrote:\n>\n>>Hi. i hava a postresql 7.4.2 in a production server.\n>>\n>>tha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\n>>\n>>\n> Mario,\n>\n> Start with reading this:\n>\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> Without knowing anything about the size of your database, your usage\n> patterns, or your disk subsystem (the most important part of a database\n> server, imho) I would suggest you first increase the number of\n> shared_buffers allocated to Postgres. Most recommend keeping this number\n> below 10000, but I've found I get the best performance with about 24000\n> shared_buffers with a ~5GB database on a machine with 4GB of ram,\n> dedicated to Postgres. You'll have to experiment to see what works best\n> for you.\n>\n> Also, make sure you VACUUM and ANALYZE on a regular basis. Again, the\n> frequency of this really depends on your data and usage patterns. More\n> frequent write operations require more frequent vacuuming.\n>\n> Good luck.\n>\n> Best Regards,\n>\n> Bill Montgomery\n>\n>>The postresql.conf say:\n>>\n>>#---------------------------------------------------------------------------\n>> # RESOURCE USAGE (except WAL)\n>>#---------------------------------------------------------------------------\n>>\n>># - Memory -\n>>\n>>shared_buffers = 1000 # min 16, at least max_connections*2,\n>> 8KB each\n>>sort_mem = 1024 # min 64, size in KB\n>>vacuum_mem = 8192 # min 1024, size in KB\n>>\n>># - Free Space Map -\n>>\n>>max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes\n>> each max_fsm_relations = 1000 # min 100, ~50 bytes each\n>>\n>># - Kernel Resource Usage -\n>>\n>>max_files_per_process = 1000 # min 25\n>>#preload_libraries = ''\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # WRITE AHEAD LOG\n>>#---------------------------------------------------------------------------\n>>\n>># - Settings -\n>>\n>>fsync = true # turns forced synchronization on or\n>> off wal_sync_method = fsync # the default varies across platforms:\n>> # fsync, fdatasync, open_sync, or\n>>open_datasync\n>>wal_buffers = 8 # min 4, 8KB each\n>>\n>># - Checkpoints -\n>>\n>>checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n>>checkpoint_timeout = 300 # range 30-3600, in seconds\n>>checkpoint_warning = 30 # 0 is off, in seconds\n>>commit_delay = 0 # range 0-100000, in microseconds\n>> commit_siblings = 5 # range 1-1000\n>>\n>>#---------------------------------------------------------------------------\n>> # QUERY TUNING\n>>#---------------------------------------------------------------------------\n>>\n>># - Planner Method Enabling -\n>>\n>>enable_hashagg = true\n>>enable_hashjoin = true\n>>enable_indexscan = true\n>>enable_mergejoin = true\n>>enable_nestloop = true\n>>enable_seqscan = true\n>>enable_sort = true\n>>enable_tidscan = true\n>>\n>># - Planner Cost Constants -\n>>\n>>effective_cache_size = 1000 # typically 8KB each\n>>random_page_cost = 4 # units are one sequential page fetch\n>> cost cpu_tuple_cost = 0.01 # (same)\n>>cpu_index_tuple_cost = 0.001 # (same)\n>>cpu_operator_cost = 0.0025 # (same)\n>>\n>># - Genetic Query Optimizer -\n>>\n>>geqo = true\n>>geqo_threshold = 11\n>>geqo_effort = 1\n>>geqo_generations = 0\n>>geqo_pool_size = 0 # default based on tables in statement,\n>> # range 128-1024\n>>geqo_selection_bias = 2.0 # 
range 1.5-2.0\n>>\n>># - Other Planner Options -\n>>\n>>default_statistics_target = 100 # range 1-1000\n>>from_collapse_limit = 30\n>>join_collapse_limit = 30 # 1 disables collapsing of explicit\n>> JOINs\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # ERROR REPORTING AND LOGGING\n>>#---------------------------------------------------------------------------\n>>\n>># - Syslog -\n>>\n>>#syslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\n>> #syslog_facility = 'LOCAL0'\n>>#syslog_ident = 'postgres'\n>>\n>># - When to Log -\n>>\n>>#client_min_messages = notice # Values, in order of decreasing\n>> detail:\n>> # debug5, debug4, debug3, debug2,\n>> debug1, # log, info, notice, warning,\n>> error\n>>\n>>#log_min_messages = notice # Values, in order of decreasing\n>> detail:\n>> # debug5, debug4, debug3, debug2,\n>> debug1, # info, notice, warning,\n>> error, log, fatal, # panic\n>>\n>>#log_error_verbosity = default # terse, default, or verbose messages\n>>\n>>#log_min_error_statement = panic # Values in order of increasing\n>> severity:\n>> # debug5, debug4, debug3, debug2,\n>> debug1, # info, notice, warning,\n>> error, panic(off)\n>>\n>>#log_min_duration_statement = -1 # Log all statements whose\n>> # execution time exceeds the value, in\n>> # milliseconds. Zero prints all\n>> queries. # Minus-one disables.\n>>\n>>#silent_mode = false # DO NOT USE without Syslog!\n>>\n>># - What to Log -\n>>\n>>\n>>\n>>debug_print_parse = true\n>>debug_print_rewritten = true\n>>debug_print_plan = true\n>>debug_pretty_print = true\n>>log_connections = true\n>>log_duration = true\n>>log_pid = true\n>>log_statement = true\n>>log_timestamp = true\n>>log_hostname = true\n>>log_source_port = true\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # RUNTIME STATISTICS\n>>#---------------------------------------------------------------------------\n>>\n>># - Statistics Monitoring -\n>>\n>>log_parser_stats = true\n>>log_planner_stats = true\n>>log_executor_stats = true\n>>#log_statement_stats = true\n>>\n>># - Query/Index Statistics Collector -\n>>\n>>stats_start_collector = true\n>>stats_command_string = true\n>>stats_block_level = true\n>>stats_row_level = true\n>>stats_reset_on_server_start = true\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # CLIENT CONNECTION DEFAULTS\n>>#---------------------------------------------------------------------------\n>>\n>># - Statement Behavior -\n>>\n>>#search_path = '$user,public' # schema names\n>>#check_function_bodies = true\n>>#default_transaction_isolation = 'read committed'\n>>#default_transaction_read_only = false\n>>#statement_timeout = 0 # 0 is disabled, in milliseconds\n>>\n>># - Locale and Formatting -\n>>\n>>#datestyle = 'iso, mdy'\n>>#timezone = unknown # actually, defaults to TZ environment\n>> setting\n>>#australian_timezones = false\n>>#extra_float_digits = 0 # min -15, max 2\n>>#client_encoding = sql_ascii # actually, defaults to database\n>> encoding\n>>\n>># These settings are initialized by initdb -- they may be changed\n>> lc_messages = 'es_VE.UTF-8' # locale for system error\n>> message strings\n>>lc_monetary = 'es_VE.UTF-8' # locale for monetary\n>> formatting lc_numeric = 'es_VE.UTF-8' # locale for number\n>> formatting lc_time = 'es_VE.UTF-8' # locale for time\n>> formatting\n>>\n>># - Other Defaults -\n>>\n>>explain_pretty_print = true\n>>#dynamic_library_path = '$libdir'\n>>#max_expr_depth = 10000 # min 
10\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # LOCK MANAGEMENT\n>>#---------------------------------------------------------------------------\n>>\n>>#deadlock_timeout = 1000 # in milliseconds\n>>#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes\n>> each\n>>\n>>\n>>#---------------------------------------------------------------------------\n>> # VERSION/PLATFORM COMPATIBILITY\n>>#---------------------------------------------------------------------------\n>>\n>># - Previous Postgres Versions -\n>>\n>>#add_missing_from = true\n>>#regex_flavor = advanced # advanced, extended, or basic\n>>#sql_inheritance = true\n>>\n>># - Other Platforms & Clients -\n>>\n>>#transform_null_equals = false\n>>\n>>\n>>\n>>BUT THE PERFORMANCE IT�S VERY SLOW\n>>\n>>what can do ?????\n>>\n>>Thank\n>>\n>>\n>>Mario Soto\n>>\n>>\n>>\n>>---------------------------(end of\n>> broadcast)--------------------------- TIP 2: you can get off all lists\n>> at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to\n>> [email protected])\n>>\n\n\n\n",
"msg_date": "Wed, 26 May 2004 12:35:57 -0400 (VET)",
"msg_from": "\"Mario Soto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] performance very slow"
},
{
"msg_contents": "Hi,\r\n\r\nshared_buffers seems quite low for a server to me. For best performance, you\r\nshould read and follow the optimisation articles on\r\nhttp://techdocs.postgresql.org/.\r\n\r\nRegards, Frank \r\n\r\n\r\n\r\nOn Wed, 26 May 2004 11:26:30 -0400 (VET) \"Mario Soto\"\r\n<[email protected]> sat down, thought long and then wrote:\r\n\r\n> Hi. i hava a postresql 7.4.2 in a production server.\r\n> \r\n> tha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\r\n> \r\n\r\n...\r\n\r\n> \r\n> \r\n> BUT THE PERFORMANCE IT´S VERY SLOW\r\n> \r\n> what can do ?????\r\n> \r\n> Thank\r\n> \r\n> \r\n> Mario Soto\r\n> \r\n>",
"msg_date": "Wed, 26 May 2004 19:46:32 +0200",
"msg_from": "Frank Finner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance very slow"
},
{
"msg_contents": "On Wed, 26 May 2004, Mario Soto wrote:\n\n> tha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\n> \n> BUT THE PERFORMANCE IT�S VERY SLOW\n\nHow often do you run VACUUM ANALYZE? You might want to do that every night \nor every hour (depending on how much updates you have).\n\nSome of your config values could and should be tuned, read something like\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nStill, if it's very slow it's probably not just a little tweaking of these\nvariables that solves everything.\n\nIf that is the case you need to find a slow query, run EXPLAIN ANALYZE on\nit and try to figure out why it is slow. There is a list to help with\nperformance issues called pgsql-performance that you might want to post to\n(and read its archive).\n\nBut before anything else, make sure you run VACUUM ANALYZE regulary.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Wed, 26 May 2004 21:03:56 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance very slow"
},
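One common way to schedule the regular VACUUM ANALYZE recommended above is a cron entry run as the postgres user; a sketch (adjust the time and frequency to the update rate):

    # nightly VACUUM ANALYZE of every database at 03:30
    30 3 * * *   vacuumdb --all --analyze --quiet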
{
"msg_contents": "OK.\n\ni see the link and change parameters in postgresql.conf\n\ni.e.\n\nWhen excecute a insert statement the memory up to 90% to use .\nit's normal ???????\n\n\nThank for yor help and sorry for my bad englis\n\nRegards\n\nMario Soto\n\n\n> On Wed, 26 May 2004, Mario Soto wrote:\n>\n>> tha machine is a Pentium IV 2,6 GHZ AND 1 GB IN RAM with lINUX RH 9.0.\n>>\n>> BUT THE PERFORMANCE IT�S VERY SLOW\n>\n> How often do you run VACUUM ANALYZE? You might want to do that every\n> night or every hour (depending on how much updates you have).\n>\n> Some of your config values could and should be tuned, read something\n> like\n>\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n>\n> Still, if it's very slow it's probably not just a little tweaking of\n> these variables that solves everything.\n>\n> If that is the case you need to find a slow query, run EXPLAIN ANALYZE\n> on it and try to figure out why it is slow. There is a list to help with\n> performance issues called pgsql-performance that you might want to post\n> to (and read its archive).\n>\n> But before anything else, make sure you run VACUUM ANALYZE regulary.\n>\n> --\n> /Dennis Bj�rklund\n\n\n\n",
"msg_date": "Wed, 26 May 2004 15:13:14 -0400 (VET)",
"msg_from": "\"Mario Soto\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance very slow"
}
] |
[
{
"msg_contents": "I wanted to solicit some opinions on architecture and performance from \nyou guys.\n\nI am torn right now between these two systems to replace my aging DB server:\n\n4 x 2.2 GHz Opteron\n8GB RAM\nUltra320 15kRPM RAID5 with 128MB cache\n\nand\n\n2-way 1.2GHz POWER4+ IBM pSeries 615\n8GB RAM\nUltra320 15kRPM RAID5 with 64MB cache\n\nI plan on serving ~80GB of pgsql database on this machine. The current \nmachine handles around 1.5 million queries per day.\n\nI am having some trouble finding direct comparisons between the two \narchitectures. The OS will most likely be Linux ( I'm hedging on AIX \nfor now ). The pSeries has 8MB cache per CPU card ( 2 CPU on a card ) \nwhile the Opteron has 1MB for each processor. I know the POWER4+ is a \nvery fast machine but I wonder if having more processors in the Opteron \nsystem would beat it for database serving? FWIW, they are very close in \nprice.\n\nIgnoring the fault-tolerance features of the pSeries, which one would \nyou pick for performance?\n\nThanks,\nDan\n",
"msg_date": "Thu, 27 May 2004 16:40:49 -0600",
"msg_from": "Dan Harris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware opinions wanted"
},
{
"msg_contents": "Dan Harris wrote:\n> I am torn right now between these two systems to replace my aging DB \n> server:\n> \n> 4 x 2.2 GHz Opteron\n> 8GB RAM\n> Ultra320 15kRPM RAID5 with 128MB cache\n> \n> and\n> \n> 2-way 1.2GHz POWER4+ IBM pSeries 615\n> 8GB RAM\n> Ultra320 15kRPM RAID5 with 64MB cache\n\nI don't know anything about the pSeries, but have a look in the \narchives, there was recently a rather long thread about Xeon vs. \nOpteron. The Opteron was the clear winner.\n\nPersonally I think that you can't be wrong with the 4-way Opteron. It \nscales very well and if you don't need the fault tolerance of the \npSeries platform, then you should be able to save one or two bucks with \nopteron way.\n\nBtw: If you want to save a few more bucks, then drop the 15k and take \n10k drives. They are of almost same speed.\n\nRegards,\nBjoern\n\n\n",
"msg_date": "Fri, 28 May 2004 12:43:11 +0200",
"msg_from": "Bjoern Metzdorf <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware opinions wanted"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI'm working through the aquisition process for a quad Opteron box right\nnow. I'll be benchmarking it against a quad processor p630 as well as a\nquad Xeon after we get it and posting results here. But that's about a\nmonth or two from now.\n\nI expect that the results will be strongly in favour of the Opetron,\nespecially the price / performance since the Opteron box is being quoted\nat about half the price of the p630 systems.\n\nOne thing you may wish to consider is going with lots of 10kRPM SATA\ndisks instead of 15kRPM SCSI disks. Two companies that I'm aware of\noffer quad Opteron solutions with SATA raid:\n\nhttp://www.quatopteron.com/\nhttp://alltec.com/home.php\n\nAndrew Hammond\nDBA - Afilias\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFAvjYKgfzn5SevSpoRAvq0AJkBDXOKL52HXg43mQ6rXe/i9RzFkQCfYQn8\nHpHP2U0jvjfYIvihNLFLbzA=\n=LyqB\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 02 Jun 2004 16:18:19 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware opinions wanted"
}
] |
[
{
"msg_contents": "Hai all,\n\n I want to log my queries send to the Postgresql server. I heard\nthat we can do this my specifying the file name in \n\n/etc/rc.d/init.d/Postgresql initiating file. But I don't know. If this is\nthe way means how to do that. Or anyother way is there.\n\n \n\nThanks is advance\n\nRaja \n\n\n\n\n\n\n\n\n\n\nHai all,\n I want to log my queries send to the Postgresql\nserver. I heard that we can do this my specifying the file name in \n/etc/rc.d/init.d/Postgresql initiating file. But I don’t\nknow. If this is the way means how to do that. Or anyother way is there.\n \nThanks is advance\nRaja",
"msg_date": "Sat, 29 May 2004 09:28:35 +0300",
"msg_from": "\"rajaguru\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Logging all query in one seperate File"
}
] |
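A sketch of how statement logging is usually switched on in a 7.x postgresql.conf rather than in the init script (the same options appear in the configuration posted elsewhere in this archive); where the log ends up is a separate choice:

    log_statement = true        # log the text of every statement
    log_duration = true         # also log how long each one took
    syslog = 2                  # 0=stdout, 1=both, 2=syslog only

If syslog is left at 0, the messages go to the postmaster's stderr, so starting the server with something like pg_ctl start -l /path/to/logfile captures them in a file instead.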
[
{
"msg_contents": "I'm trying to create a trigger (AFTER INSERT, UPDATE, DELETE) as an audit\nroutine inserting into an audit table the \"before\" and \"after\" views of the\nrow being acted upon. My problem is I defined the \"before\" and \"after\"\nfields in the audit table as TEXT and when I try to move NEW or OLD into\nthese fields I get the error \"NEW used in query that is not in a rule\". I\ntried defining a variable as RECORD type but when I tried executing I would\nget a \"syntax error at or near...\" the variable when it was referenced in\nthe code (new_fld := NEW for instance).\n\nI'm currently stumped. I don't want an audit table for each and every table\nI want to audit. I want a single audit table to handle multiple tables. Do\nany of you more astute users have any ideas to help me? Can you tell me\nwhere I'm going wrong? Is my wish to have a single audit table for multiple\ntables all folly? An explanation of the \"rule\" error shown above would help\nas well.\n\nAny help will be appreciated.\n\nTIA,\nDuane\n\nHere is the function definition:\n\nCREATE OR REPLACE FUNCTION func_aud_tst01() RETURNS trigger AS '\n DECLARE\n action char(1);\n b4 text;\n aftr text;\n BEGIN\n IF TG_OP = ''INSERT'' THEN\n action := ''I'';\n b4 := '''';\n aftr := NEW;\n-- b4 := ''Test b4 I'';\n-- aftr := ''Test aftr I'';\n ELSIF TG_OP = ''UPDATE'' THEN\n action := ''U'';\n-- b4 := OLD;\n-- aftr := NEW;\n b4 := ''Test b4 U'';\n aftr := ''Test aftr U'';\n ELSE\n action := ''D'';\n-- b4 := OLD;\n-- aftr := '''';\n b4 := ''Test b4 D'';\n aftr := ''Test aftr D'';\n END IF;\n insert into audtst(table_name, act_type, before_look, after_look)\n values(TG_RELNAME, action, b4, aftr);\n RETURN NEW;\n END;\n' LANGUAGE plpgsql;\n--\n COMMIT WORK;\n\n\n\n\n\nTrigger & Function\n\n\nI'm trying to create a trigger (AFTER INSERT, UPDATE, DELETE) as an audit routine inserting into an audit table the \"before\" and \"after\" views of the row being acted upon. My problem is I defined the \"before\" and \"after\" fields in the audit table as TEXT and when I try to move NEW or OLD into these fields I get the error \"NEW used in query that is not in a rule\". I tried defining a variable as RECORD type but when I tried executing I would get a \"syntax error at or near...\" the variable when it was referenced in the code (new_fld := NEW for instance).\nI'm currently stumped. I don't want an audit table for each and every table I want to audit. I want a single audit table to handle multiple tables. Do any of you more astute users have any ideas to help me? Can you tell me where I'm going wrong? Is my wish to have a single audit table for multiple tables all folly? An explanation of the \"rule\" error shown above would help as well.\nAny help will be appreciated.\n\nTIA,\nDuane\n\nHere is the function definition:\n\nCREATE OR REPLACE FUNCTION func_aud_tst01() RETURNS trigger AS '\n DECLARE\n action char(1);\n b4 text;\n aftr text;\n BEGIN\n IF TG_OP = ''INSERT'' THEN\n action := ''I'';\n b4 := '''';\n aftr := NEW;\n-- b4 := ''Test b4 I'';\n-- aftr := ''Test aftr I'';\n ELSIF TG_OP = ''UPDATE'' THEN\n action := ''U'';\n-- b4 := OLD;\n-- aftr := NEW;\n b4 := ''Test b4 U'';\n aftr := ''Test aftr U'';\n ELSE\n action := ''D'';\n-- b4 := OLD;\n-- aftr := '''';\n b4 := ''Test b4 D'';\n aftr := ''Test aftr D'';\n END IF;\n insert into audtst(table_name, act_type, before_look, after_look)\n values(TG_RELNAME, action, b4, aftr);\n RETURN NEW;\n END;\n' LANGUAGE plpgsql;\n--\n COMMIT WORK;",
"msg_date": "Tue, 1 Jun 2004 14:03:40 -0700 ",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Trigger & Function"
},
{
"msg_contents": "> My problem is I defined the \"before\" and \"after\"\n> fields in the audit table as TEXT and when I try to move NEW or OLD into\n> these fields I get the error \"NEW used in query that is not in a rule\". \n\nYou're trying to insert record data into a text field, that doesn't work.\nOLD and NEW can be used as either record identifiers (as in RETURN OLD)\nor column qualifiers (as in OLD.colname), but you can't intermingle them.\n\nI don't think postgres (pl/pgsql) has row-to-variable and variable-to-row \nfunctions like serialize and unserialize, that's probably what you'd need. \nIt would probably be necessary to write something like that in C, since \nat this point pl/perl cannot be used for trigger functions. \n\nI've not tried using pl/php yet, the announcement for it says it can be \nused for trigger functions. \n\nMy first thought is that even if there was a serialize/unserialize \ncapabiity you might be able to write something using it that creates \nthe log entry but not anything that allows you to query the log for \nspecific column or row entries.\n\nIt would probably require a MAJOR extension of SQL to add it to pg,\nas there would need to be qualifiers that can be mapped to specific\ntables and columns. Even if we had that, storing values coming from \nmultiple tables into a single audit table would present huge challenges.\n\nI've found only two ways to implement audit logs:\n\n1. Have separate log tables that match the structure of\n the tables they are logging.\n\n2. Write a trigger function that converts columns to something you can\n store in a common log table. (I've not found a way to do this without\n inserting one row for each column being logged, though.)\n--\nMike Nolan\n",
"msg_date": "Tue, 1 Jun 2004 17:04:19 -0500 (CDT)",
"msg_from": "Mike Nolan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Trigger & Function"
}
] |
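A minimal sketch of Mike's second option, the one-row-per-column log, written for a single hypothetical table with one audited column called name. It assumes the audited column is a character type (cast to text otherwise); a real version would repeat the INSERT for each column and needs one such function per audited table, since PL/pgSQL of this era cannot walk a row's columns generically:

    CREATE TABLE audtst2 (
        table_name  name,
        act_type    char(1),
        col_name    name,
        before_look text,
        after_look  text
    );

    CREATE OR REPLACE FUNCTION func_aud_cols() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            INSERT INTO audtst2 VALUES (TG_RELNAME, ''I'', ''name'', NULL, NEW.name);
        ELSIF TG_OP = ''UPDATE'' THEN
            INSERT INTO audtst2 VALUES (TG_RELNAME, ''U'', ''name'', OLD.name, NEW.name);
        ELSE
            INSERT INTO audtst2 VALUES (TG_RELNAME, ''D'', ''name'', OLD.name, NULL);
            RETURN OLD;
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER trg_aud_cols AFTER INSERT OR UPDATE OR DELETE
        ON some_audited_table FOR EACH ROW EXECUTE PROCEDURE func_aud_cols();

Note that OLD.colname and NEW.colname are legal here, which is exactly the "column qualifier" usage Mike describes; it is assigning the whole NEW record to a text variable that fails.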
[
{
"msg_contents": "Hello pgsql-performance,\n\n I was using the native windows PostgreSQL 7.5Dev and was adviced by\n several people to use an emulated PostgreSQL instead, as it is just\n a beta.\n\n Well, I give it a whirl and tried both commercial VMWare and the\n freeweare open-source CoLinux, both work under Windows and both\n emulate Linux, that's a quick review of my experience with them, may\n someone in need learn from it.\n\n This might be not the best place for such a post, but since the\n subject was brought up here, I'll post it here as well. If someone\n thinks it should be posted somewhere else, let me know.\n\n Installation & Configuration\n ----------------------------\n \n VMWare:\n\n On the bright side, the installation went quite smoothly, VMWare\n configured all the network stuff by itself and I had no trouble\n using the net right away. On the grim side, the installation itself\n took ages, compared to the plug & play feel of CoLinux.\n\n Installing PostgreSQL on VMWare was quite straightforward, just as\n the the PostgreSQL documention goes.\n\n CoLinux:\n\n As I said, with CoLinux the installation itself goes very quickly.\n To get Linux running you need to download practically less than 20mb\n which include the distribution (Debian in my case) and the CoLinux\n setup. Configuring CoLinux took a bit longer than VMWare, yet, not\n long as I thought it would take. In fact, it can be very easy if you\n just follow the documention of CoLinux Wiki stuff, there are some\n very easy to follow tutorials there.\n\n Installing PostgreSQL on CoLinux proved a little more difficult\n (again, Debian), but I posted a quick tutorial that should smooth\n the process: http://www.colinux.org/wiki/index.php/PostgreSQL.\n\n Performance\n -----------\n\n This was a totally subjective test (especially since one of the\n participants is in a beta stage), yet, that's what I tested and that's\n what I needed to know.\n\n To make the test as fair as possible, I did an exact dump of the\n same database. I ran the SQLs (around 10) in the same order on all\n of them and repeated the test several times. I also did an EXPLAIN\n on the queries to make sure all the databases work on the query the\n same way. It wasn't a full test though, I didn't test mass select\n load, nor inserts, nor work under heavy load, nor I tried different\n types of joins. All I did was to run some heavy (in execution time)\n queries. So you should take these \"tests\" just for what they are.\n\n That's what I got:\n\n The native window port performed poorly lagging\n 30%-50% behind the VMWare/CoLinux solutions in execution times,\n rather sad, but not unexpected, I guess.\n\n CoLinux and VMWare give AROUND the same results, yet CoLinux did\n give slightly better performance (I'd say 5%-10%) but with such\n slight improvement and inconsistency I wouldn't count it as much.\n\n Conclusion\n ----------\n\n With all that said, VMWare is badly suited for running a database,\n while CoLinux can be run as a service (didn't try it yet though),\n VMWare always sits there, it is slow to go up, slow to go down and\n generally feels like a system hog.\n\n I'll go on with CoLinux for now and hope it will act as good as it\n looks.\n\n http://www.vmware.com/\n http://www.colinux.org/\n\n Thanks to Bryan and Matthew for their advices regarding the emulations.\n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\n",
"msg_date": "Wed, 2 Jun 2004 01:56:07 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "\nVitaly Belman <[email protected]> writes:\n\n> With all that said, VMWare is badly suited for running a database,\n> while CoLinux can be run as a service (didn't try it yet though),\n> VMWare always sits there, it is slow to go up, slow to go down and\n> generally feels like a system hog.\n\nUhm, it sounds like you're using VMWare Workstation? VMWare has a range of\ndifferent versions including some that are specifically targeted towards\nserver situations. I think they had the idea that hosting companies would run\nhundreds of virtual machines on a server and provide their hosting clients\nwith a virtual machine to play with.\n\nThat said, I'm curious why the emulated servers performed better than the\nNative Windows port. My first thought is that they probably aren't syncing\nevery write to disk so effectively they're defeating the fsyncs, allowing the\nhost OS to buffer disk writes.\n\nI would be curious to see better stats on things like a pgbench run which\nwould give some idea of the context switch efficiency, and a large select or\nupdate, which would give some idea of the i/o throughput. Really there's no\nexcuse for the Windows port to be slower than an emulator. Barring effects\nlike the disk caching I mentioned, it should far outpace the emulators.\n\n-- \ngreg\n\n",
"msg_date": "02 Jun 2004 10:27:36 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> That said, I'm curious why the emulated servers performed better than the\n> Native Windows port. My first thought is that they probably aren't syncing\n> every write to disk so effectively they're defeating the fsyncs, allowing the\n> host OS to buffer disk writes.\n\nIt would be fairly easy to check this by repeating the comparisons with\nfsync = off in postgresql.conf. A performance number that doesn't\nchange much would be a smoking gun ;-).\n\nThe native port hasn't had any performance testing done on it yet, and\nI wouldn't be surprised to hear of a gotcha or two. Perhaps with the\nrecent schedule change there will be some time for performance tuning\nbefore we go beta.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 2004 11:24:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux "
},
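For anyone repeating the comparison Tom suggests, it is a one-line change; a sketch (turning fsync off is only safe on data you can afford to lose):

    fsync = false          # in postgresql.conf; re-run the same workload and compare

Then restart the postmaster so the setting takes effect.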
{
"msg_contents": "Using VMware myself quite extensively, I wonder what the disk \nconfiguration was that you created for the VM. Where the disks \npreallocated and did you make sure that they are contiguous on the NTFS \nfilesystem? Did you install the VMware tools in the guest operating system?\n\nWhat did you use to measure the \"performance\"?\n\n\nJan\n\nOn 6/1/2004 6:56 PM, Vitaly Belman wrote:\n\n> Hello pgsql-performance,\n> \n> I was using the native windows PostgreSQL 7.5Dev and was adviced by\n> several people to use an emulated PostgreSQL instead, as it is just\n> a beta.\n> \n> Well, I give it a whirl and tried both commercial VMWare and the\n> freeweare open-source CoLinux, both work under Windows and both\n> emulate Linux, that's a quick review of my experience with them, may\n> someone in need learn from it.\n> \n> This might be not the best place for such a post, but since the\n> subject was brought up here, I'll post it here as well. If someone\n> thinks it should be posted somewhere else, let me know.\n> \n> Installation & Configuration\n> ----------------------------\n> \n> VMWare:\n> \n> On the bright side, the installation went quite smoothly, VMWare\n> configured all the network stuff by itself and I had no trouble\n> using the net right away. On the grim side, the installation itself\n> took ages, compared to the plug & play feel of CoLinux.\n> \n> Installing PostgreSQL on VMWare was quite straightforward, just as\n> the the PostgreSQL documention goes.\n> \n> CoLinux:\n> \n> As I said, with CoLinux the installation itself goes very quickly.\n> To get Linux running you need to download practically less than 20mb\n> which include the distribution (Debian in my case) and the CoLinux\n> setup. Configuring CoLinux took a bit longer than VMWare, yet, not\n> long as I thought it would take. In fact, it can be very easy if you\n> just follow the documention of CoLinux Wiki stuff, there are some\n> very easy to follow tutorials there.\n> \n> Installing PostgreSQL on CoLinux proved a little more difficult\n> (again, Debian), but I posted a quick tutorial that should smooth\n> the process: http://www.colinux.org/wiki/index.php/PostgreSQL.\n> \n> Performance\n> -----------\n> \n> This was a totally subjective test (especially since one of the\n> participants is in a beta stage), yet, that's what I tested and that's\n> what I needed to know.\n> \n> To make the test as fair as possible, I did an exact dump of the\n> same database. I ran the SQLs (around 10) in the same order on all\n> of them and repeated the test several times. I also did an EXPLAIN\n> on the queries to make sure all the databases work on the query the\n> same way. It wasn't a full test though, I didn't test mass select\n> load, nor inserts, nor work under heavy load, nor I tried different\n> types of joins. All I did was to run some heavy (in execution time)\n> queries. 
So you should take these \"tests\" just for what they are.\n> \n> That's what I got:\n> \n> The native window port performed poorly lagging\n> 30%-50% behind the VMWare/CoLinux solutions in execution times,\n> rather sad, but not unexpected, I guess.\n> \n> CoLinux and VMWare give AROUND the same results, yet CoLinux did\n> give slightly better performance (I'd say 5%-10%) but with such\n> slight improvement and inconsistency I wouldn't count it as much.\n> \n> Conclusion\n> ----------\n> \n> With all that said, VMWare is badly suited for running a database,\n> while CoLinux can be run as a service (didn't try it yet though),\n> VMWare always sits there, it is slow to go up, slow to go down and\n> generally feels like a system hog.\n> \n> I'll go on with CoLinux for now and hope it will act as good as it\n> looks.\n> \n> http://www.vmware.com/\n> http://www.colinux.org/\n> \n> Thanks to Bryan and Matthew for their advices regarding the emulations.\n> \n> Regards,\n> Vitaly Belman\n> \n> ICQ: 1912453\n> AIM: VitalyB1984\n> MSN: [email protected]\n> Yahoo!: VitalyBe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n\n",
"msg_date": "Wed, 02 Jun 2004 11:30:22 -0400",
"msg_from": "Jan Wieck <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "I have colinux running on a Fedora Core 1 image. I have the rhdb 3 (or\nPostgreSQL RedHat Edition 3) on it running. Here are tests with fsync on\nand off:\n FSYNC OFF\t \t FSYNC ON\t\tRUN\n136.9\t 142.0\t124.5\t 149.1\t1\n122.1\t 126.7\t140.1\t 169.7\t2\n125.7\t 148.7\t147.4\t 180.4\t3\n103.3\t 136.7\t136.8\t 166.3\t4\n126.5\t 146.1\t152.3\t 187.9\t5\n114.4\t 133.3\t144.8\t 176.7\t6\n124.0\t 146.5\t143.3\t 175.0\t7\n121.7\t 166.8\t147.8\t 180.5\t8\n127.3\t 151.8\t146.7\t 180.0\t9\n124.6\t 143.0\t137.2\t 167.5\t10\n--------------------------------------\n122.7\t 144.2\t142.1\t 173.3\tAVG\n\nI hope those numbers' formatting come through all right. \n\nThis computer is an AMD Athlon 900MHz with 448MB Ram running XP Pro SP1\nThis is using Colinux 0.60 (not the recently released 0.61) and 96MB of RAM\nallocated to linux.\n\nThe computer was idle but it was running Putty, Excel and Task Manager\nduring the process. (I prefer to use Putty to SSH into the virtual computer\nthan to run the fltk console)\n\nIt occurs to me that the fsync may be performed to the linux filesystem, but\nthis filesystem is merely a file on the windows drive. Would Windows cache\nthis file? It's 2GB in size, so if it did, it would only be able to cache\npart of it.\n\nI'd like to run a more difficult test personally. It seems like this test\ngoes too fast to be very useful.\n\nIf someone would like me to try something more specific, e-mail me right\naway and I'll do it. I must leave my office at 4:15 EDT and will not return\nuntil Friday, although I can do another test on my home computer Thursday.\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Tom Lane\n> Sent: Wednesday, June 02, 2004 11:25 AM\n> To: Greg Stark\n> Cc: Vitaly Belman; [email protected]; Bryan Encina; Matthew\n> Nuzum\n> Subject: Re: [PERFORM] PostgreSQL on VMWare vs Windows vs CoLinux\n> \n> Greg Stark <[email protected]> writes:\n> > That said, I'm curious why the emulated servers performed better than\n> the\n> > Native Windows port. My first thought is that they probably aren't\n> syncing\n> > every write to disk so effectively they're defeating the fsyncs,\n> allowing the\n> > host OS to buffer disk writes.\n> \n> It would be fairly easy to check this by repeating the comparisons with\n> fsync = off in postgresql.conf. A performance number that doesn't\n> change much would be a smoking gun ;-).\n> \n> The native port hasn't had any performance testing done on it yet, and\n> I wouldn't be surprised to hear of a gotcha or two. Perhaps with the\n> recent schedule change there will be some time for performance tuning\n> before we go beta.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 2 Jun 2004 14:51:37 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
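Tom's suggestion quoted above boils down to ruling caching in or out by toggling fsync and seeing whether the numbers move. For anyone repeating the test, the effective setting can be confirmed from a live session rather than trusting the config file; a minimal sketch, with nothing specific to Matthew's setup:

    SHOW fsync;
    -- prints "on" or "off"; if it does not match what postgresql.conf says,
    -- the server has not been restarted since the file was edited.
    -- To flip it for a test run: set fsync = false in postgresql.conf,
    -- restart the postmaster, and re-run pgbench with otherwise identical settings.

If the fsync=on and fsync=off runs come out nearly identical, that is the "smoking gun" Tom describes: something below PostgreSQL is absorbing the syncs.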
{
"msg_contents": "Greg Stark wrote:\n\n>That said, I'm curious why the emulated servers performed better than the\n>Native Windows port. My first thought is that they probably aren't syncing\n>every write to disk so effectively they're defeating the fsyncs, allowing the\n>host OS to buffer disk writes.\n> \n>\nI havn't tested it, and it's certanly possible. However, please bear in \nmind that it is also possible that it just gives better performance.\n\nThe reason this may be possible is that the emulation layer gets the CPU \n(and other resources) from the OS in bulk, and decides on it's own how \nto allocate it to the various processes running within the emulation. \nInparticular, this \"on it's own\" is done using the stock Linux kernel. \nAs Postgresql works sufficiently better on Linux than on Windows, this \nyields better performance.\n\nAgain - speculation only. Someone should defenitely make sure that no \ncaching takes place where it shouldn't.\n\nAs a side note, I have had a chance to talk to Dan Aloni (coLinux \nmaintainer) about running PostgreSQL on coLinux. He said that he knows \nthat this particular use is high on people's priority list, but he feels \nit is totally unsafe to run a production database on alpha grade \nsoftware. Then again, free software projects being what they are, this \nis usually what a maintainer would say.\n\n Shachar\n\n-- \nShachar Shemesh\nLingnu Open Source Consulting\nhttp://www.lingnu.com/\n\n",
"msg_date": "Wed, 02 Jun 2004 23:24:19 +0300",
"msg_from": "Shachar Shemesh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "\"Matthew Nuzum\" <[email protected]> writes:\n\n> I have colinux running on a Fedora Core 1 image. I have the rhdb 3 (or\n> PostgreSQL RedHat Edition 3) on it running. Here are tests with fsync on\n> and off:\n> FSYNC OFF\t \t FSYNC ON\t\tRUN\n> 136.9\t 142.0\t124.5\t 149.1\t1\n> 122.1\t 126.7\t140.1\t 169.7\t2\n> 125.7\t 148.7\t147.4\t 180.4\t3\n> 103.3\t 136.7\t136.8\t 166.3\t4\n> 126.5\t 146.1\t152.3\t 187.9\t5\n> 114.4\t 133.3\t144.8\t 176.7\t6\n> 124.0\t 146.5\t143.3\t 175.0\t7\n> 121.7\t 166.8\t147.8\t 180.5\t8\n> 127.3\t 151.8\t146.7\t 180.0\t9\n> 124.6\t 143.0\t137.2\t 167.5\t10\n> --------------------------------------\n> 122.7\t 144.2\t142.1\t 173.3\tAVG\n> \n> I hope those numbers' formatting come through all right. \n\nNo, they didn't. You used tabs? Are they four space tabs or 8 space tabs?\nI assume 4 space tabs, but then what is the meaning of the four columns?\nYou have two columns for each fsync setting? One's under Windows and one's\nunder Vmware? Which is which?\n\n> It occurs to me that the fsync may be performed to the linux filesystem, but\n> this filesystem is merely a file on the windows drive. Would Windows cache\n> this file? It's 2GB in size, so if it did, it would only be able to cache\n> part of it.\n\nWell VMWare certainly doesn't know that the linux process called fsync. For\nall it knows the Linux kernel just schedule the i/o because it felt it was\ntime.\n\nSo the question is how does VMWare alway handle i/o normally. Does it always\nhandle i/o from the Guest OS synchronously or does it buffer it via the\nHost OS's i/o system. \n\nI'm actually not sure which it does, it could be doing something strange. But\ndoes seem most likely that it lets Windows buffer the writes, or does so\nitself. It might also depend on whether you're using raw disks or a virtual\ndisk file. Undoable disks would throw another wrench in the works entirely.\n\nNote that \"caching\" isn't really the question. It doesn't have to cache the\nentire 2GB file or even very much of it. It just has to store the block that\nlinux wants to write and report success to linux without waiting for the disk\nto report success. Linux will then think the file is sync'd to disk and allow\npostgres to continue with the next transaction without actually waiting for\nthe physical disk to spin around to the right place and the head to seek and\nperform the write.\n\n-- \ngreg\n\n",
"msg_date": "02 Jun 2004 17:39:04 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "On Wed, 2004-06-02 at 17:39, Greg Stark wrote:\n> \"Matthew Nuzum\" <[email protected]> writes:\n> \n> > I have colinux running on a Fedora Core 1 image. I have the rhdb 3 (or\n> > PostgreSQL RedHat Edition 3) on it running. Here are tests with fsync on\n> > and off:\n> > FSYNC OFF\t \t FSYNC ON\t\tRUN\n> > 136.9\t 142.0\t124.5\t 149.1\t1\n> > 122.1\t 126.7\t140.1\t 169.7\t2\n> > 125.7\t 148.7\t147.4\t 180.4\t3\n> > 103.3\t 136.7\t136.8\t 166.3\t4\n> > 126.5\t 146.1\t152.3\t 187.9\t5\n> > 114.4\t 133.3\t144.8\t 176.7\t6\n> > 124.0\t 146.5\t143.3\t 175.0\t7\n> > 121.7\t 166.8\t147.8\t 180.5\t8\n> > 127.3\t 151.8\t146.7\t 180.0\t9\n> > 124.6\t 143.0\t137.2\t 167.5\t10\n> > --------------------------------------\n> > 122.7\t 144.2\t142.1\t 173.3\tAVG\n> > \n> > I hope those numbers' formatting come through all right. \n> \n> No, they didn't. You used tabs? Are they four space tabs or 8 space tabs?\n> I assume 4 space tabs, but then what is the meaning of the four columns?\n> You have two columns for each fsync setting? One's under Windows and one's\n> under Vmware? Which is which?\n> \nSorry that wasn't clear. The pgbench program puts out two numbers,\ncan't remember what they are, I think one number included the time to\nmake the connection. Therefore, the first two columns represent the two\nvalues presented from pgbench with FSYNC off. The second two columns\nare those same to figures but with FSYNC ON. The 5th column is the\nrun. I did 10 runs and included the output of all runs so that incase\nanything significant could be gleaned from the details, the data would\nbe there.\n\nThe executive summary is this:\nTom was curious if colinux might be deceiving the applications that\nexpect the fsync to occur. He suspected that pgbench run with and\nwithout fsync enabled might reveal something. Therefore:\nFSYNC ON: 142.1\nFSYNC OFF: 122.7\n\nHaving FSYNC off seems to yield faster results.\n\nI'd like some input on a more demanding test though, because these tests\nrun so quickly I can't help but be suspicious of their accuracy. When\nthere are two OSs involved, it seems like the momentary activity of a\nbackground process could skew these results.\n\n> > It occurs to me that the fsync may be performed to the linux filesystem, but\n> > this filesystem is merely a file on the windows drive. Would Windows cache\n> > this file? It's 2GB in size, so if it did, it would only be able to cache\n> > part of it.\n> \n> Well VMWare certainly doesn't know that the linux process called fsync. For\n> all it knows the Linux kernel just schedule the i/o because it felt it was\n> time.\n> \n> So the question is how does VMWare alway handle i/o normally. Does it always\n> handle i/o from the Guest OS synchronously or does it buffer it via the\n> Host OS's i/o system. \nWe probably will never know what the internal workings of VMWare are\nlike because it is a closed source program. I'm not slighting them, I\nhave purchased a license of VMWare and use it for my software testing. \nHowever, colinux is an open source project and we can easily find out\nhow they handle this. I have little interest in this as I use this\nmerely as a tool to speed up my application development and do not run\nany critical services what-so-ever.\n\n> \n> I'm actually not sure which it does, it could be doing something strange. But\n> does seem most likely that it lets Windows buffer the writes, or does so\n> itself. It might also depend on whether you're using raw disks or a virtual\n> disk file. 
Undoable disks would throw another wrench in the works entirely.\nIn these tests I'm using a virtual disk file. This is a 2GB file on the\nhard drive that linux sees as a disk partition. Colinux does not\nsupport undoable disks in the way that vmware does. Their wiky site\ndoes not mention anything tricky being done to force disk writes to\nactually be written; the implication therefore is that it leaves the i/o\ncompletely at the discretion of XP. Also note that XP Pro and 2000 Pro\nboth offer different caching options for the user to choose so unless it\ndoes something to actually force a write the answer is probably \"who\nknows.\"\n\n> \n> Note that \"caching\" isn't really the question. It doesn't have to cache the\n> entire 2GB file or even very much of it. It just has to store the block that\n> linux wants to write and report success to linux without waiting for the disk\n> to report success. Linux will then think the file is sync'd to disk and allow\n> postgres to continue with the next transaction without actually waiting for\n> the physical disk to spin around to the right place and the head to seek and\n> perform the write.\n\nThat's interesting to know. I wondered about that.\n\nSo, my summary is this:\nIf you develop applications in windows that run in linux and you need a\ntesting platform you may like colinux a lot because of the following:\n * It's purchase price is 0\n * It's seems to be capable of running any (or at least many)\ndistribution based on 2.4 kernel\n * It appears to run much faster than VMWare (maybe because it has far\nfewer features, including the ability to run X apps unless you use VNC\nor a separate X server)\n* When idle it causes no apparent load on the CPU. This is the best\npart in my opinion. VMWare causes a drag on the system even when idle\nand especially when active. With colinux you can simply minimize the\nconsole or run it as a service and forget about it until you need it.\n\nYou may prefer VMWare if you need:\n * GUI setup tools\n * Full PC hardware support (sound, fb, etc)\n * Undoable disk support\n * Painless install (colinux isn't hard but it's not \"shrink wrap\"\nquality yet)\n * Need special kernel features. Colinux uses its own kernel which has\nbeen ported to windows.\n\nOK, this horse is dead... anyone else want to kick it? Just kidding...\n\nIf you want me to test something for you, I'd be happy to. Be specific\nand give me the details and I'll run it through the wringer on Friday. \nOh, and don't ask me to download anything big please. 25 of us share a\n56k modem there so bandwidth is limited.\n\nHope this is helpful to someone, have a nice day.\n\n-- \nMatthew Nuzum <[email protected]>\nFollowers.net, Inc.\n\n",
"msg_date": "Wed, 02 Jun 2004 22:00:33 -0400",
"msg_from": "Matthew Nuzum <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "Matthew Nuzum <[email protected]> writes:\n> I'd like some input on a more demanding test though, because these tests\n> run so quickly I can't help but be suspicious of their accuracy.\n\nSo increase the number of transactions tested (-t switch to pgbench).\n\nBe aware also that you really want -s (database size scale factor) to\nexceed -c (number of concurrent clients) for meaningful results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 2004 22:12:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux "
}
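For readers trying to apply that advice to the runs posted earlier in the thread: with the 7.4-era pgbench schema (tables branches, tellers, accounts, history), the scaling factor of an already-initialized test database can be read back with plain SQL, since -s creates one row in branches and 100,000 rows in accounts per scale unit. A sketch, assuming the database was created by pgbench -i:

    SELECT count(*) AS scaling_factor FROM branches;
    -- cross-check: accounts holds 100000 rows per scale unit
    SELECT count(*) / 100000 AS scaling_factor FROM accounts;

If the result is smaller than the -c client count you intend to use, re-initialize with a larger -s (for example pgbench -i -s 10 on the test database) before trusting the numbers.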
] |
[
{
"msg_contents": "This was a lively debate on what was faster, single spindles or RAID.\n\nThis is important, because I keep running into people who do not \nunderstand the performance dynamics of a RDBMS like Oracle or Pg.\n\nPg and Oracle make a zillion tiny reads and writes and fsync() \nregularly. If your drive will copy a 4GB file at a sustained rate of 72 \nMB/s, that tells you nothing about how it will do with an RDBMS.\n\nI will throw in my experience on RAID vs spindles.\n\nWith the RAID write cache disabled, a well balanced set of spindles will \nkill a RAID system any day.\n\nEnable the cache, and the RAID starts inching ahead. My experience is \nthat no one can continuously keep I/O properly balanced across several \nspindles on a production system. Things change and the I/O mix changes. \n Then, the RAID is outperforming the spindles. If you want to spend \nthe rest of your career constantly balancing I/O across spindles, then \ndo so.\n\nFor the rest of us, with a write cache, a hardware RAID wins hands down \nover the long haul.\n\nIt might make sense to provide some sort of benchmarking tool for \nvarious systems so that we can predict I/O performance.\n\nRun the following code both on a hardware RAID and on a single spindle.\n\n#include <stdlib.h>\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <time.h>\n\nvoid makefile(int fs)\n{\n int i;\n char buf[8192];\n int ld;\n int blocks=4096;\n int pos;\n time_t stim;\n time_t etim;\n float avg;\n\n unlink(\"dump.out\");\n ld=open(\"dump.out\", O_WRONLY | O_CREAT);\n\n printf(\"Writing %d blocks sequentially\\n\", blocks);\n time(&stim);\n for (i=0; i<blocks; i++) {\n write(ld, buf, sizeof(buf));\n if (fs) {\n fsync(ld);\n }\n }\n time(&etim);\n avg = (blocks+0.0)/(etim-stim-0.0);\n printf(\"Took %d seconds, avg %f iops\\n\\n\", etim-stim, avg);\n\n // purge the write cache\n fsync(ld);\n\n printf(\"Writing %d blocks (somewhat randomly)\\n\", blocks);\n time(&stim);\n for (i=0; i<blocks; i++) {\n pos = (rand()%blocks)*sizeof(buf);\n lseek(ld, pos, SEEK_SET);\n write(ld, buf, sizeof(buf));\n if (fs) {\n fsync(ld);\n }\n }\n time(&etim);\n avg = (blocks+0.0)/(etim-stim-0.0);\n printf(\"Took %d seconds, avg %f iops\\n\\n\", etim-stim, avg);\n\n close(ld);\n unlink(\"dump.out\");\n}\n\nint main()\n{\n printf(\"No fsync()\\n\");\n makefile(0);\n\n printf(\"With fsync()\\n\");\n makefile(1);\n\n return 0;\n}\n\n\nThe first operation shows how well the OS write cache is doing. The \nsecond shows how poorly everything runs with fsync(), which is what Pg \nand Oracle do.\n\nMy RAID produced the following, but was also running production when I \nran it:\n\nNo fsync()\nWriting 4096 blocks sequentially\nTook 1 seconds, avg 4096.000000 iops\n\nWriting 4096 blocks (somewhat randomly)\nTook 4 seconds, avg 1024.000000 iops\n\nWith fsync()\nWriting 4096 blocks sequentially\nTook 40 seconds, avg 102.400002 iops\n\nWriting 4096 blocks (somewhat randomly)\nTook 66 seconds, avg 62.060608 iops\n\nWhen I ran this on a decent fibre channel drive, I got:\n\nNo fsync()\nWriting 4096 blocks sequentially\nTook 1 seconds, avg 4096.000000 iops\n\nWriting 4096 blocks (somewhat randomly)\nTook 7 seconds, avg 585.142883 iops\n\nWith fsync()\nWriting 4096 blocks sequentially\nTook 106 seconds, avg 38.641510 iops\n\nWriting 4096 blocks (somewhat randomly)\nTook 115 seconds, avg 35.617390 iops\n\n\nYou can see that the RAID array really helps out with small writes.\n\n\n",
"msg_date": "Tue, 01 Jun 2004 18:08:34 -0600",
"msg_from": "Marty Scholes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk performance, was Re: tablespaces and DB administration"
}
] |
[
{
"msg_contents": "\nDear all,\n\nHave anyone compiled PostgreSQL with kernel 2.6.x \nif YES\n1. Was their any performance gains\nElse\n1. Is it possible\n2. What problems would keeping us away from compiling on kernel 2.6\n\n-- \nBest Regards,\nVishal Kashyap\nDirector / Lead Software Developer,\nSai Hertz And Control Systems Pvt Ltd,\nhttp://saihertz.rediffblogs.com\nJabber IM: vishalkashyap[ a t ]jabber.org\nICQ : 264360076\nYahoo IM: mailforvishal[ a t ]yahoo.com\n-----------------------------------------------\nYou yourself, as much as anybody in the entire\nuniverse, deserve your love and affection.\n- Buddha\n---------------\npgsql=# select marital_status from vishals_life;\n\nmarital_status\n------------------\nSingle not looking\n\n1 Row(s) affected\n\n\n\n\n\n",
"msg_date": "Wed, 02 Jun 2004 08:46:59 +0530",
"msg_from": "\"V i s h a l Kashyap @ [Sai Hertz And Control Systems]\"\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and Kernel 2.6.x"
},
{
"msg_contents": "On Tue, 2004-06-01 at 23:16, V i s h a l Kashyap @ [Sai Hertz And\nControl Systems] wrote:\n> Dear all,\n> \n> Have anyone compiled PostgreSQL with kernel 2.6.x \n> if YES\n> 1. Was their any performance gains\n\nOSDL reports approx 20% improvement. I've seen similar with some data\naccess patterns.\n\n> 2. What problems would keeping us away from compiling on kernel 2.6\n\nNothing that I know of assuming you have vendor support for it.\n\n\n",
"msg_date": "Tue, 01 Jun 2004 23:31:50 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Kernel 2.6.x"
},
{
"msg_contents": "V i s h a l Kashyap @ [Sai Hertz And Control Systems] wrote:\n\n>\n> Dear all,\n>\n> Have anyone compiled PostgreSQL with kernel 2.6.x if YES\n> 1. Was their any performance gains\n> Else\n> 1. Is it possible\n> 2. What problems would keeping us away from compiling on kernel 2.6\n>\nWe run pgsql on 2.6.6 there was upto 30% improvement in performance\nfor certain queries. None, everything works just fine.\n\n\nRegds\nMallah.\n",
"msg_date": "Wed, 02 Jun 2004 21:08:23 +0530",
"msg_from": "Rajesh Kumar Mallah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Kernel 2.6.x"
}
] |
[
{
"msg_contents": "Dear reader, \n\nI am investigating whether it is useful to directly query a database\ncontaining a rather large text corpus (order of magnitude 100k - 1m\nnewspaper articles, so around 100 million words), or whether I should\nuse third party text indexing services. I want to know things such as:\nhow often is a certain word (or pattern) mentioned in an article and how\noften it is mentioned with the condition that another word is nearby\n(same article or n words distant).\n\nI created a table listing the words one word per row, and created an\nindex on the word and wordnr columns. An example query would be:\n\nsimple: select articleid, count(*) as count from words w where articleid\nin (select id from articles where batchid in (84,85,100,101,118,121))\nand (word like '<PATTERN>') group by articleid\ncomplex: select articleid, count(*) as count from words w where\narticleid in (select id from articles where batchid in\n(84,85,100,101,118,121)) and (word like '<PATTERN>') and exists (select\n* from words w2 where w.articleid = w2.articleid and (word like\n'<PATTERN2>')) group by articleid\n\nAccording to the diagnostics, the database does use the indices for the\nquery, but it is still rather slow (around 10 minutes for a 'simple\nquery', x seconds for a complex one)\n\nIt is important that the complex query only counts instances where the\nPATTERN is found and PATTERN2 only functions as a criterium and does not\nadd to the count.\n\nMy questions are: (technical details provided below)\n- Does anyone disagree with the general setup?\n- Is there a more sensible way to phrase my SQL?\n- Any other ideas to improve performance?\n\nThanks,\n\nWouter van Atteveldt\nFree University Amsterdam\n\n------\n\nTechnicalities:\n\nI am using a Postgresql 7.4.1 database on a linux machine (uname -a:\nLinux swpc450.cs.vu.nl 2.4.22-1.2115.nptl #1 Wed Oct 29 15:31:21 EST\n2003 i686 athlon i386 GNU/Linux). 
The table of interest is: (lemma, pos,\nsimplepos currently not used)\n\n Table \"public.words\"\n Column | Type | Modifiers\n------------+------------------------+----------------------------------\n---------------------\n id | integer | not null default\nnextval('public.words_id_seq'::text)\n articleid | integer | not null\n sentencenr | integer | not null\n word | character varying(255) | not null\n lemma | character varying(255) |\n pos | character varying(255) |\n simplepos | character(1) |\n wordnr | integer | not null\n parnr | integer | not null\nIndexes:\n \"words_pkey\" primary key, btree (id)\n \"words_aid\" btree (articleid)\n \"words_word\" btree (word)\n \"words_word_ptrn\" btree (word varchar_pattern_ops)\n \"words_wordnr\" btree (wordnr)\n\nQuery plans:\n\nanoko=> explain select articleid, count(*) as count from words w where\narticleid in (select id from articles where batchid in\n(84,85,100,101,118,121)) and (word like 'integratie%') group by\narticleid;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------\n HashAggregate (cost=937959.21..937959.22 rows=2 width=4)\n -> Hash IN Join (cost=95863.70..937816.01 rows=28640 width=4)\n Hash Cond: (\"outer\".articleid = \"inner\".id)\n -> Index Scan using words_word_ptrn on words w\n(cost=0.00..836604.62 rows=208886 width=4)\n Index Cond: (((word)::text ~>=~ 'integratie'::character\nvarying) AND ((word)::text ~<~ 'integratif'::character varying))\n Filter: ((word)::text ~~ 'integratie%'::text)\n -> Hash (cost=94998.60..94998.60 rows=146041 width=4)\n -> Index Scan using articles_batchid, articles_batchid,\narticles_batchid, articles_batchid, articles_batchid, articles_batchid\non articles (cost=0.00..94998.60 rows=146041 width=4)\n Index Cond: ((batchid = 84) OR (batchid = 85) OR\n(batchid = 100) OR (batchid = 101) OR (batchid = 118) OR (batchid =\n121))\n\nexplain select articleid, count(*) as count from words w where articleid\nin (select id from articles where batchid in (84,85,100,101,118,121))\nand (word like '<PATTERN>') and exists (select * from words w2 where\nw.articleid = w2.articleid and (word like '<PATTERN2>')) group by\narticleid\nanoko-> ;\n \nQUERY PLAN \n------------------------------------------------------------------------\n------------------------------------------------------------------------\n------------------------------------------------------\n GroupAggregate (cost=168253089.23..168254556.46 rows=1 width=4)\n -> Merge IN Join (cost=168253089.23..168254484.85 rows=14320\nwidth=4)\n Merge Cond: (\"outer\".articleid = \"inner\".id)\n -> Sort (cost=168144438.23..168144699.33 rows=104443 width=4)\n Sort Key: w.articleid\n -> Index Scan using words_word_ptrn on words w\n(cost=0.00..168134972.17 rows=104443 width=4)\n Index Cond: ((word)::text ~=~\n'<PATTERN>'::character varying)\n Filter: (((word)::text ~~ '<PATTERN>'::text) AND\n(subplan))\n SubPlan\n -> Index Scan using words_aid on words w2\n(cost=0.00..836948.84 rows=1045 width=460)\n Index Cond: ($0 = articleid)\n Filter: ((word)::text ~~\n'<PATTERN2>'::text)\n -> Sort (cost=108651.01..109016.11 rows=146041 width=4)\n Sort Key: articles.id\n -> Index Scan using articles_batchid, articles_batchid,\narticles_batchid, articles_batchid, articles_batchid, articles_batchid\non articles (cost=0.00..94998.60 rows=146041 width=4)\n Index Cond: ((batchid = 84) OR (batchid = 85) OR\n(batchid = 100) 
OR (batchid = 101) OR (batchid = 118) OR (batchid =\n121))\n",
"msg_date": "Wed, 2 Jun 2004 13:57:44 +0200",
"msg_from": "\"W.H. van Atteveldt\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres query optimization with varchar fields"
},
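On the "more sensible way to phrase my SQL" question: the correlated EXISTS in the complex query is what produces the per-row SubPlan against words_aid in the second plan above. One rephrasing that 7.4 can often turn into a single hashed subplan is to express the second pattern as another IN restriction on articleid. A sketch only, reusing the poster's table names; 'integratie%' is taken from his EXPLAIN output, while 'allochtoon%' is merely an invented stand-in for the second pattern:

    SELECT w.articleid, count(*) AS count
      FROM words w
     WHERE w.articleid IN (SELECT id FROM articles
                            WHERE batchid IN (84,85,100,101,118,121))
       AND w.word LIKE 'integratie%'
       AND w.articleid IN (SELECT w2.articleid FROM words w2
                            WHERE w2.word LIKE 'allochtoon%')
     GROUP BY w.articleid;

This keeps the counting restricted to the first pattern, with the second pattern acting purely as a filter on which articles qualify, which matches the stated requirement.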
{
"msg_contents": "> I am investigating whether it is useful to directly query a database\n> containing a rather large text corpus (order of magnitude 100k - 1m\n> newspaper articles, so around 100 million words), or whether I should\n> use third party text indexing services. I want to know things such as:\n> how often is a certain word (or pattern) mentioned in an article and how\n> often it is mentioned with the condition that another word is nearby\n> (same article or n words distant).\n\nYou really want to use the contrib/tsearch2 module that comes already \nwith PostgreSQL.\n\ncd contrib/tsearch2\ngmake install\npsql <mydb> < tsearch2.sql\nmore README.tsearch2\n\nChris\n\n",
"msg_date": "Thu, 03 Jun 2004 09:42:46 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres query optimization with varchar fields"
}
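To make that suggestion concrete, the usual tsearch2 setup looks roughly like the following. It assumes the article text is available as a single column (called body here; the poster's schema instead stores text one word per row in words) and that the 'default' configuration installed by tsearch2.sql is acceptable:

    -- add a searchable column, populate and index it
    ALTER TABLE articles ADD COLUMN fti tsvector;
    UPDATE articles SET fti = to_tsvector('default', body);
    CREATE INDEX articles_fti_idx ON articles USING gist (fti);
    VACUUM ANALYZE articles;

    -- articles in the selected batches mentioning both terms
    -- (both search terms are placeholders, not taken from the original post)
    SELECT id
      FROM articles
     WHERE fti @@ to_tsquery('default', 'integratie & beleid')
       AND batchid IN (84,85,100,101,118,121);

Word counts and n-words-apart proximity would still need the one-word-per-row table, but tsearch2 can cut the candidate article set down before that work starts.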
] |
[
{
"msg_contents": "Thanks for the response. I was pretty sure it couldn't be done the way I\nwanted to but felt I would ask anyway.\n\nThanks again,\nDuane\n\n-----Original Message-----\nFrom: Mike Nolan [mailto:[email protected]]\nSent: Tuesday, June 01, 2004 3:04 PM\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Trigger & Function\n\n\n> My problem is I defined the \"before\" and \"after\"\n> fields in the audit table as TEXT and when I try to move NEW or OLD into\n> these fields I get the error \"NEW used in query that is not in a rule\". \n\nYou're trying to insert record data into a text field, that doesn't work.\nOLD and NEW can be used as either record identifiers (as in RETURN OLD)\nor column qualifiers (as in OLD.colname), but you can't intermingle them.\n\nI don't think postgres (pl/pgsql) has row-to-variable and variable-to-row \nfunctions like serialize and unserialize, that's probably what you'd need. \nIt would probably be necessary to write something like that in C, since \nat this point pl/perl cannot be used for trigger functions. \n\nI've not tried using pl/php yet, the announcement for it says it can be \nused for trigger functions. \n\nMy first thought is that even if there was a serialize/unserialize \ncapabiity you might be able to write something using it that creates \nthe log entry but not anything that allows you to query the log for \nspecific column or row entries.\n\nIt would probably require a MAJOR extension of SQL to add it to pg,\nas there would need to be qualifiers that can be mapped to specific\ntables and columns. Even if we had that, storing values coming from \nmultiple tables into a single audit table would present huge challenges.\n\nI've found only two ways to implement audit logs:\n\n1. Have separate log tables that match the structure of\n the tables they are logging.\n\n2. Write a trigger function that converts columns to something you can\n store in a common log table. (I've not found a way to do this without\n inserting one row for each column being logged, though.)\n--\nMike Nolan\n\n\n\n\n\nRE: [PERFORM] Trigger & Function\n\n\nThanks for the response. I was pretty sure it couldn't be done the way I wanted to but felt I would ask anyway.\n\nThanks again,\nDuane\n\n-----Original Message-----\nFrom: Mike Nolan [mailto:[email protected]]\nSent: Tuesday, June 01, 2004 3:04 PM\nTo: [email protected]\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Trigger & Function\n\n\n> My problem is I defined the \"before\" and \"after\"\n> fields in the audit table as TEXT and when I try to move NEW or OLD into\n> these fields I get the error \"NEW used in query that is not in a rule\". \n\nYou're trying to insert record data into a text field, that doesn't work.\nOLD and NEW can be used as either record identifiers (as in RETURN OLD)\nor column qualifiers (as in OLD.colname), but you can't intermingle them.\n\nI don't think postgres (pl/pgsql) has row-to-variable and variable-to-row \nfunctions like serialize and unserialize, that's probably what you'd need. \nIt would probably be necessary to write something like that in C, since \nat this point pl/perl cannot be used for trigger functions. \n\nI've not tried using pl/php yet, the announcement for it says it can be \nused for trigger functions. 
\n\nMy first thought is that even if there was a serialize/unserialize \ncapabiity you might be able to write something using it that creates \nthe log entry but not anything that allows you to query the log for \nspecific column or row entries.\n\nIt would probably require a MAJOR extension of SQL to add it to pg,\nas there would need to be qualifiers that can be mapped to specific\ntables and columns. Even if we had that, storing values coming from \nmultiple tables into a single audit table would present huge challenges.\n\nI've found only two ways to implement audit logs:\n\n1. Have separate log tables that match the structure of\n the tables they are logging.\n\n2. Write a trigger function that converts columns to something you can\n store in a common log table. (I've not found a way to do this without\n inserting one row for each column being logged, though.)\n--\nMike Nolan",
"msg_date": "Wed, 2 Jun 2004 10:01:10 -0700 ",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Trigger & Function"
}
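A minimal sketch of Mike's second approach (one audit row per changed column) in plain plpgsql follows; every name below (the audit_log table, the members table and its email and name columns) is hypothetical and not taken from the original posts, and NULL-to-value changes are deliberately ignored to keep the comparison simple:

    CREATE TABLE audit_log (
        table_name  text,
        column_name text,
        old_value   text,
        new_value   text,
        changed_by  text        DEFAULT current_user,
        changed_at  timestamptz DEFAULT now()
    );

    CREATE OR REPLACE FUNCTION audit_members() RETURNS trigger AS '
    BEGIN
        IF OLD.email <> NEW.email THEN
            INSERT INTO audit_log (table_name, column_name, old_value, new_value)
            VALUES (TG_RELNAME, ''email'', OLD.email, NEW.email);
        END IF;
        IF OLD.name <> NEW.name THEN
            INSERT INTO audit_log (table_name, column_name, old_value, new_value)
            VALUES (TG_RELNAME, ''name'', OLD.name, NEW.name);
        END IF;
        RETURN NEW;
    END;
    ' LANGUAGE plpgsql;

    CREATE TRIGGER members_audit
        BEFORE UPDATE ON members
        FOR EACH ROW EXECUTE PROCEDURE audit_members();

The columns still have to be listed out by hand, which is exactly the limitation Mike describes.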
] |
[
{
"msg_contents": "Hi All,\n\nI think it would actually be interesting to see the performance of the Cygwin version for these same benchmarks, then we've covered all ways to run PostgreSQL on Windows systems. (I expect though that performance of Cygwin-PostgreSQL will improve considerably when an updated version is released that uses Cygwin native IPC instead of the ipc-daemon.)\n\nregards,\n\n--Tim\n",
"msg_date": "Wed, 2 Jun 2004 21:03:33 +0200",
"msg_from": "\"Leeuw van der, Tim\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
}
] |
[
{
"msg_contents": "Folks,\n\nI've been testing varying SPINS_PER_DELAY in a client's installation of \nPostgreSQL against a copy of a production database, to test varying this \nstatistic as a way of fixing the issue. \n\nIt does not seem to work.\n\nI've tested all of the following graduated levels:\n\n100 (the original)\n250\n500\n1000\n2000\n5000\n10000\n20000\n30000\n50000\n\nNone of these quantities seem to make any difference at all in the number of \ncontext switches -- neither down nor up. Seems to me like this is a dead \nend. Does anyone have test results that show otherwise?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 2 Jun 2004 12:25:25 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Context Switching issue: Spinlock doesn't fix."
},
{
"msg_contents": "\nHello,\n\nIs context switching problem resolved in 8.0?\n\nCan I drop in another Xeon?\n\nThanks,\nJelle\n\n\nOn Wed, 2 Jun 2004, Josh Berkus wrote:\n\n> Folks,\n> \n> I've been testing varying SPINS_PER_DELAY in a client's installation of \n> PostgreSQL against a copy of a production database, to test varying this \n> statistic as a way of fixing the issue. \n> \n> It does not seem to work.\n> \n> I've tested all of the following graduated levels:\n> \n> 100 (the original)\n> 250\n> 500\n> 1000\n> 2000\n> 5000\n> 10000\n> 20000\n> 30000\n> 50000\n> \n> None of these quantities seem to make any difference at all in the number of \n> context switches -- neither down nor up. Seems to me like this is a dead \n> end. Does anyone have test results that show otherwise?\n> \n> \n\n-- \n\nhttp://www.jibjab.com\n\n",
"msg_date": "Tue, 31 Aug 2004 14:20:41 -0700 (PDT)",
"msg_from": "jelle <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Context Switching issue: Spinlock doesn't fix."
},
{
"msg_contents": "Jellej,\n\n> Is context switching problem resolved in 8.0?\n>\n> Can I drop in another Xeon?\n\nNope, not solved yet. However, it only affects certain data access patterns. \nSo don't use it as a reason not to go multi-processor.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 31 Aug 2004 15:03:05 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Context Switching issue: Spinlock doesn't fix."
}
] |
[
{
"msg_contents": "Shachar Shemesh wrote:\n> Greg Stark wrote:\n> \n> >That said, I'm curious why the emulated servers performed better than\nthe\n> >Native Windows port. My first thought is that they probably aren't\n> syncing\n> >every write to disk so effectively they're defeating the fsyncs,\nallowing\n> the\n> >host OS to buffer disk writes.\n> >\n> >\n> I havn't tested it, and it's certanly possible. However, please bear\nin\n> mind that it is also possible that it just gives better performance.\n> \n> The reason this may be possible is that the emulation layer gets the\nCPU\n> (and other resources) from the OS in bulk, and decides on it's own how\n> to allocate it to the various processes running within the emulation.\n> Inparticular, this \"on it's own\" is done using the stock Linux kernel.\n> As Postgresql works sufficiently better on Linux than on Windows, this\n> yields better performance.\n\n'better' does not mean 'faster'. Win32 has a pretty decent journaling\nfilesytem (ntfs) and a good I/O subsystem which includes IPC. Process\nmanagement is poor compared to newer linux kernels but this is\nunimportant except in extreme cases. Right now the win32 native does\nnot sync() (but does fsync()). So, the performance is somewhere between\nfsync = off and fsync = on (probably much closer to fsync = on). It is\nreasonable to assume that the win32 port will outperform the unix\nversions at many tasks (at the expense of safety) until the new sync()\ncode is put in.\n\nIf tested on the same source base, 40-60% differences can only be coming\nfrom the I/O subsystem. There are other factors which aren't clear from\nthis exchange like what version of gcc, etc.\n\nMerlin\n",
"msg_date": "Wed, 2 Jun 2004 16:45:27 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "On 2 Jun 2004 at 16:45, Merlin Moncure wrote:\n\n> \n> 'better' does not mean 'faster'. Win32 has a pretty decent journaling\n> filesytem (ntfs) and a good I/O subsystem which includes IPC. Process\n> management is poor compared to newer linux kernels but this is\n> unimportant except in extreme cases. Right now the win32 native does\n> not sync() (but does fsync()). So, the performance is somewhere between\n> fsync = off and fsync = on (probably much closer to fsync = on). It is\n> reasonable to assume that the win32 port will outperform the unix\n> versions at many tasks (at the expense of safety) until the new sync()\n> code is put in.\n> \n> If tested on the same source base, 40-60% differences can only be coming\n> from the I/O subsystem. There are other factors which aren't clear from\n> this exchange like what version of gcc, etc.\n> \n\nHmm, interesting.\n\nI've been running the Win32 port for a couple of weeks now. Using the \nsame database as a Linux 2.6 system. Same processor and memory \nbut different disks.\n\nLinux system has 10K rpm SCSI disks\nWindows has 7200 rpm serial ATA disks.\n\nWhen a lot of IO is involved the performance differences are very mixed \nas I would expect. Sometimes Windows wins, sometimes Linux.\n\nBUT, very consistently, when NO IO is involved then the Win32 port is \nalways around 20% slower than Linux. In cases where the EXPLAIN \nANALYZE results are different I have disregarded. In all the cases that \nthe EXPLAIN ANALYZE results are the same and no IO is involved the \nWin32 port is slower.\n\nCurrently I am putting this down to the build/gcc differences. I can't see \nwhy there should be this difference otherwise. (memory \nmanagement??)\n\nRegards,\nGary.\n\n",
"msg_date": "Wed, 02 Jun 2004 22:31:27 +0100",
"msg_from": "\"Gary Doades\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> ... Right now the win32 native does\n> not sync() (but does fsync()). So, the performance is somewhere between\n> fsync = off and fsync = on (probably much closer to fsync = on). It is\n> reasonable to assume that the win32 port will outperform the unix\n> versions at many tasks (at the expense of safety) until the new sync()\n> code is put in.\n\n... which was three days ago. Why are we still speculating?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 02 Jun 2004 23:51:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL on VMWare vs Windows vs CoLinux "
}
] |
[
{
"msg_contents": "Hello all,\n\n I have an import function that I have been working on for some time now, and \nit performed well up until recently. It is doing a lot, and because the \nqueries are not cached, I am not sure if that is what the problem is. If a \nfunction takes a while, does it lock any of the tables it is accessing, even \nfor SELECT?\n\nBelow is the bulk of the function:\n\n-- set sql statement variables\n create_import_file_sql := ''COPY '' || container_table || '' ('' || \nfiltered_container_columns || '') TO '' || \nquote_literal(formatted_import_file) || '' WITH NULL AS '' || \nnull_single_quotes;\n upload_to_import_table_sql := ''COPY '' || import_table || '' ('' || \nfield_names || '') FROM '' || quote_literal(formatted_import_file) || '' WITH \nNULL AS '' || null_single_quotes;\n clean_personalization_fields_sql := ''UPDATE '' || import_table || '' \nSET emma_member_email = btrim(emma_member_email, '' || \nquote_literal(quoted_single_quote) || '') , emma_member_name_first = \nbtrim(emma_member_name_first, '' || quote_literal(quoted_single_quote) || \n'') , emma_member_name_last = btrim(emma_member_name_last, '' || \nquote_literal(quoted_single_quote) || '') ;'';\n clean_personalization_fields_sql2 := ''UPDATE '' || import_table || '' \nSET emma_member_email = btrim(emma_member_email) , emma_member_name_first = \nbtrim(emma_member_name_first) , emma_member_name_last = \nbtrim(emma_member_name_last) ;'';\n set_account_id_sql := ''UPDATE '' || import_table || '' SET \nemma_account_id = '' || account_id;\n set_default_active_status_sql := ''UPDATE '' || import_table || '' SET \nemma_member_status_id = 1'';\n set_errors_for_null_email_sql := ''UPDATE '' || import_table || '' SET \nemma_member_status_id = 2 WHERE emma_member_email IS NULL'';\n record_null_email_count_sql := ''UPDATE '' || import_history_table || \n'' SET emma_import_null_email_count = (SELECT COUNT(*) FROM '' || \nimport_table || '' WHERE emma_member_email IS NULL) WHERE \nemma_import_history_id ='' || import_history_id;\n set_errors_for_invalid_email_sql := ''UPDATE '' || import_table || '' \nSET emma_member_status_id = 2 WHERE emma_member_email !~* '' || email_regex;\n record_invalid_email_count_sql := ''UPDATE '' || import_history_table \n|| '' SET emma_import_invalid_email_count = ( SELECT COUNT(*) FROM '' || \nimport_table || '' WHERE emma_member_email !~* '' || email_regex || '' ) \nWHERE emma_import_history_id ='' || import_history_id;\n get_dupes_in_import_sql := ''SELECT emma_member_email, \nemma_member_status_id FROM '' || import_table || '' GROUP BY \nemma_member_email, emma_member_status_id having count(*) > 1'';\n insert_dupes_sql := ''INSERT INTO '' || dupe_table || '' SELECT * \nFROM '' || import_table || '' WHERE LOWER(emma_member_email) = LOWER('' || \nmember_table || ''.emma_member_email)'';\n record_table_dupe_count_sql := ''UPDATE '' || import_history_table || \n'' SET emma_import_table_dupe_email_count = (SELECT COUNT(*) FROM '' || \nimport_table || '' WHERE emma_member_email = LOWER('' || member_table || \n''.emma_member_email)) WHERE emma_import_history_id ='' || import_history_id;\n remove_dupes_from_import_table_sql := ''DELETE FROM '' || import_table \n|| '' WHERE LOWER(emma_member_email) = LOWER('' || member_table || \n''.emma_member_email)'';\n create_clean_import_file_sql := ''COPY '' || import_table || '' TO '' \n|| quote_literal(clean_import_file) || '' WITH NULL AS '' || \nnull_single_quotes;\n create_members_groups_ids_file_sql := ''COPY '' || import_table || \n'' 
(emma_member_id) TO '' || quote_literal(members_groups_ids_file) || '' \nWITH NULL AS '' || null_single_quotes;\n empty_import_table_sql := ''TRUNCATE '' || import_table;\n upload_clean_import_sql := ''COPY '' || member_table || '' FROM '' || \nquote_literal(clean_import_file) || '' WITH NULL AS '' || \nnull_single_quotes;\n upload_members_groups_ids_sql := ''COPY '' || members_groups_ids_table \n|| '' (emma_member_id) FROM '' || quote_literal(members_groups_ids_file) || \n'' WITH NULL AS '' || null_single_quotes;\n empty_members_groups_ids_sql := ''TRUNCATE '' || \nmembers_groups_ids_table;\n empty_members_dupes_sql := ''TRUNCATE '' || dupe_table;\n vacuum_sql := ''VACUUM '' || member_table || ''; VACUUM '' || \nimport_table || ''; VACUUM '' || container_table || ''; VACUUM '' || \nmembers_groups_table || ''; VACUUM '' || members_groups_ids_table || ''; \nVACUUM '' || dupe_table;\n\n -- BEGIN ACTIVITY\n -- Create the filtered import file with the\n EXECUTE create_import_file_sql;\n -- Load data from the filtered file to the import table\n EXECUTE upload_to_import_table_sql;\n -- Set account id in import table\n EXECUTE set_account_id_sql;\n -- Set the status of all the records to 1\n EXECUTE set_default_active_status_sql;\n -- Clean personalization data\n EXECUTE clean_personalization_fields_sql;\n EXECUTE clean_personalization_fields_sql2;\n -- Set the status to error for all NULL emails\n EXECUTE set_errors_for_null_email_sql;\n -- Record the count of null emails\n EXECUTE record_null_email_count_sql;\n -- Set the status to error for all invalid emails\n EXECUTE set_errors_for_invalid_email_sql;\n -- Record the count of invalid emails\n EXECUTE record_invalid_email_count_sql;\n\n -- Remove duplicates in import table (originally in file)\n FOR duplicate_record IN EXECUTE get_dupes_in_import_sql LOOP\n IF duplicate_record.emma_member_email IS NOT NULL THEN\n FOR replacement_record IN EXECUTE '' SELECT * FROM '' || \nimport_table || '' WHERE emma_member_email = '' || \nquote_literal(duplicate_record.emma_member_email) || '' ORDER BY \nemma_member_id LIMIT 1'' LOOP\n escape_first_name := quote_literal \n(replacement_record.emma_member_name_first);\n escape_last_name := quote_literal \n(replacement_record.emma_member_name_last);\n escape_email := quote_literal \n(replacement_record.emma_member_email);\n escape_status_id := \nquote_literal(replacement_record.emma_member_status_id);\n -- Record count of dupes\n FOR dupe_record_count IN EXECUTE ''SELECT COUNT(*) \nAS count FROM '' || import_table || '' WHERE LOWER(emma_member_email) = \nLOWER('' || escape_email || '')'' LOOP\n EXECUTE ''UPDATE '' || \nimport_history_table || '' SET emma_import_file_dupe_email_count ='' || \ndupe_record_count.count;\n END LOOP;\n FOR primary_dupe_record IN EXECUTE ''SELECT \nMAX(emma_member_id) AS max_id FROM '' || import_table || '' WHERE \nLOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n EXECUTE ''UPDATE '' || import_table || '' \nSET emma_member_status_id = 5 WHERE emma_member_id = '' || \nprimary_dupe_record.max_id;\n EXECUTE ''DELETE FROM '' || import_table \n|| '' WHERE emma_member_email = '' || \nquote_literal(duplicate_record.emma_member_email) || '' AND \nemma_member_status_id != 5'';\n EXECUTE ''UPDATE '' || import_table || '' \nSET emma_member_status_id = 1 WHERE emma_member_status_id = 5'';\n END LOOP;\n import_dupe_count := import_dupe_count + 1;\n END LOOP;\n END IF;\n END LOOP;\n\n -- Move dupes over to the dupe table\n EXECUTE insert_dupes_sql;\n -- Record the count of dupes 
from import to members\n EXECUTE record_table_dupe_count_sql;\n -- Delete the dupes from the import table\n EXECUTE remove_dupes_from_import_table_sql;\n -- Create clean import file\n EXECUTE create_clean_import_file_sql;\n -- Create groups_id file\n EXECUTE create_members_groups_ids_file_sql;\n -- Empty import table\n EXECUTE empty_import_table_sql;\n -- Upload clean members from import\n EXECUTE upload_clean_import_sql;\n -- Upload group ids\n EXECUTE upload_members_groups_ids_sql;\n\n -- Associate to groups\n groups := string_to_array(group_list, '','');\n if array_lower(groups, 1) IS NOT NULL THEN\n FOR i IN array_lower(groups, 1)..array_upper(groups, 1) LOOP\n EXECUTE ''INSERT INTO '' || members_groups_ids_table || \n'' SELECT '' || member_table || ''.emma_member_id FROM ONLY '' || \nmember_table || '' WHERE LOWER('' || member_table || ''.emma_member_email) = \nLOWER('' || dupe_table || ''.emma_member_email) AND '' || member_table || \n''.emma_member_id NOT IN (SELECT '' || members_groups_table || \n''.emma_member_id FROM '' || members_groups_table || '' WHERE '' || \nmembers_groups_table || ''.emma_group_id = '' || groups[i] || '') AND '' || \nmember_table || ''.emma_member_id NOT IN (SELECT emma_member_id FROM '' || \nmembers_groups_ids_table || '')'';\n EXECUTE ''DELETE FROM '' || members_groups_ids_table || \n'' WHERE emma_member_id IN (SELECT emma_member_id FROM '' || \nmembers_groups_table || '' WHERE emma_group_id = '' || groups[i] || '' )'';\n EXECUTE ''INSERT INTO '' || members_groups_table || '' \nSELECT DISTINCT '' || groups[i] || '' AS emma_group_id, emma_member_id FROM \n'' || members_groups_ids_table;\n END LOOP;\n END IF;\n\nAny pointers on large plpgsql operations are appreciated. Especially when \nmore than one instance is runinng. Thanks.\n\n\n\n-- \nmarcus whitney\n\nchief architect : cold feet creative\n\nwww.coldfeetcreative.com\n\n800.595.4401\n \n\n\ncold feet presents emma\n\nemail marketing for discriminating\n\norganizations everywhere\n\nvisit www.myemma.com\n",
"msg_date": "Wed, 2 Jun 2004 16:08:56 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pl/Pgsql Functions running simultaneously"
},
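On the locking part of the question: plain SELECTs inside the function only take AccessShareLock, which never blocks other readers or writers, but the TRUNCATEs take AccessExclusiveLock, and the whole function body runs as one transaction, so those locks are held until the function returns. One way to see what a running import is actually holding is to watch pg_locks from a second session; a sketch, with nothing specific to the schema above:

    SELECT c.relname, l.pid, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE l.relation IS NOT NULL
     ORDER BY c.relname, l.mode;

Any row with granted = f is a backend waiting on someone else's lock, which is the first thing to look for when two copies of the function run at once.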
{
"msg_contents": "Am I on the wrong list to ask this question, or does this list usually have \nlow activity? Just asking because I am new and I need to know where to ask \nthis question. Thanks.\n\nOn Wednesday 02 June 2004 16:08, Marcus Whitney wrote:\n> Hello all,\n>\n> I have an import function that I have been working on for some time now,\n> and it performed well up until recently. It is doing a lot, and because\n> the queries are not cached, I am not sure if that is what the problem is. \n> If a function takes a while, does it lock any of the tables it is\n> accessing, even for SELECT?\n>\n> Below is the bulk of the function:\n>\n> -- set sql statement variables\n> create_import_file_sql := ''COPY '' || container_table || '' ('' ||\n> filtered_container_columns || '') TO '' ||\n> quote_literal(formatted_import_file) || '' WITH NULL AS '' ||\n> null_single_quotes;\n> upload_to_import_table_sql := ''COPY '' || import_table || '' (''\n> || field_names || '') FROM '' || quote_literal(formatted_import_file) || ''\n> WITH NULL AS '' || null_single_quotes;\n> clean_personalization_fields_sql := ''UPDATE '' || import_table ||\n> '' SET emma_member_email = btrim(emma_member_email, '' ||\n> quote_literal(quoted_single_quote) || '') , emma_member_name_first =\n> btrim(emma_member_name_first, '' || quote_literal(quoted_single_quote) ||\n> '') , emma_member_name_last = btrim(emma_member_name_last, '' ||\n> quote_literal(quoted_single_quote) || '') ;'';\n> clean_personalization_fields_sql2 := ''UPDATE '' || import_table ||\n> '' SET emma_member_email = btrim(emma_member_email) ,\n> emma_member_name_first = btrim(emma_member_name_first) , \n> emma_member_name_last =\n> btrim(emma_member_name_last) ;'';\n> set_account_id_sql := ''UPDATE '' || import_table || '' SET\n> emma_account_id = '' || account_id;\n> set_default_active_status_sql := ''UPDATE '' || import_table || ''\n> SET emma_member_status_id = 1'';\n> set_errors_for_null_email_sql := ''UPDATE '' || import_table || ''\n> SET emma_member_status_id = 2 WHERE emma_member_email IS NULL'';\n> record_null_email_count_sql := ''UPDATE '' || import_history_table\n> || '' SET emma_import_null_email_count = (SELECT COUNT(*) FROM '' ||\n> import_table || '' WHERE emma_member_email IS NULL) WHERE\n> emma_import_history_id ='' || import_history_id;\n> set_errors_for_invalid_email_sql := ''UPDATE '' || import_table ||\n> '' SET emma_member_status_id = 2 WHERE emma_member_email !~* '' ||\n> email_regex; record_invalid_email_count_sql := ''UPDATE '' ||\n> import_history_table\n>\n> || '' SET emma_import_invalid_email_count = ( SELECT COUNT(*) FROM '' ||\n>\n> import_table || '' WHERE emma_member_email !~* '' || email_regex || '' )\n> WHERE emma_import_history_id ='' || import_history_id;\n> get_dupes_in_import_sql := ''SELECT emma_member_email,\n> emma_member_status_id FROM '' || import_table || '' GROUP BY\n> emma_member_email, emma_member_status_id having count(*) > 1'';\n> insert_dupes_sql := ''INSERT INTO '' || dupe_table || '' SELECT *\n> FROM '' || import_table || '' WHERE LOWER(emma_member_email) = LOWER('' ||\n> member_table || ''.emma_member_email)'';\n> record_table_dupe_count_sql := ''UPDATE '' || import_history_table\n> || '' SET emma_import_table_dupe_email_count = (SELECT COUNT(*) FROM '' ||\n> import_table || '' WHERE emma_member_email = LOWER('' || member_table ||\n> ''.emma_member_email)) WHERE emma_import_history_id ='' ||\n> import_history_id; remove_dupes_from_import_table_sql := ''DELETE FROM ''\n> || import_table\n>\n> || '' WHERE 
LOWER(emma_member_email) = LOWER('' || member_table ||\n>\n> ''.emma_member_email)'';\n> create_clean_import_file_sql := ''COPY '' || import_table || '' TO\n> ''\n>\n> || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n>\n> null_single_quotes;\n> create_members_groups_ids_file_sql := ''COPY '' || import_table ||\n> '' (emma_member_id) TO '' || quote_literal(members_groups_ids_file) || ''\n> WITH NULL AS '' || null_single_quotes;\n> empty_import_table_sql := ''TRUNCATE '' || import_table;\n> upload_clean_import_sql := ''COPY '' || member_table || '' FROM ''\n> || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n> null_single_quotes;\n> upload_members_groups_ids_sql := ''COPY '' ||\n> members_groups_ids_table\n>\n> || '' (emma_member_id) FROM '' || quote_literal(members_groups_ids_file) ||\n>\n> '' WITH NULL AS '' || null_single_quotes;\n> empty_members_groups_ids_sql := ''TRUNCATE '' ||\n> members_groups_ids_table;\n> empty_members_dupes_sql := ''TRUNCATE '' || dupe_table;\n> vacuum_sql := ''VACUUM '' || member_table || ''; VACUUM '' ||\n> import_table || ''; VACUUM '' || container_table || ''; VACUUM '' ||\n> members_groups_table || ''; VACUUM '' || members_groups_ids_table || '';\n> VACUUM '' || dupe_table;\n>\n> -- BEGIN ACTIVITY\n> -- Create the filtered import file with the\n> EXECUTE create_import_file_sql;\n> -- Load data from the filtered file to the import table\n> EXECUTE upload_to_import_table_sql;\n> -- Set account id in import table\n> EXECUTE set_account_id_sql;\n> -- Set the status of all the records to 1\n> EXECUTE set_default_active_status_sql;\n> -- Clean personalization data\n> EXECUTE clean_personalization_fields_sql;\n> EXECUTE clean_personalization_fields_sql2;\n> -- Set the status to error for all NULL emails\n> EXECUTE set_errors_for_null_email_sql;\n> -- Record the count of null emails\n> EXECUTE record_null_email_count_sql;\n> -- Set the status to error for all invalid emails\n> EXECUTE set_errors_for_invalid_email_sql;\n> -- Record the count of invalid emails\n> EXECUTE record_invalid_email_count_sql;\n>\n> -- Remove duplicates in import table (originally in file)\n> FOR duplicate_record IN EXECUTE get_dupes_in_import_sql LOOP\n> IF duplicate_record.emma_member_email IS NOT NULL THEN\n> FOR replacement_record IN EXECUTE '' SELECT * FROM ''\n> || import_table || '' WHERE emma_member_email = '' ||\n> quote_literal(duplicate_record.emma_member_email) || '' ORDER BY\n> emma_member_id LIMIT 1'' LOOP\n> escape_first_name := quote_literal\n> (replacement_record.emma_member_name_first);\n> escape_last_name := quote_literal\n> (replacement_record.emma_member_name_last);\n> escape_email := quote_literal\n> (replacement_record.emma_member_email);\n> escape_status_id :=\n> quote_literal(replacement_record.emma_member_status_id);\n> -- Record count of dupes\n> FOR dupe_record_count IN EXECUTE ''SELECT\n> COUNT(*) AS count FROM '' || import_table || '' WHERE\n> LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> EXECUTE ''UPDATE '' ||\n> import_history_table || '' SET emma_import_file_dupe_email_count ='' ||\n> dupe_record_count.count;\n> END LOOP;\n> FOR primary_dupe_record IN EXECUTE ''SELECT\n> MAX(emma_member_id) AS max_id FROM '' || import_table || '' WHERE\n> LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> EXECUTE ''UPDATE '' || import_table ||\n> '' SET emma_member_status_id = 5 WHERE emma_member_id = '' ||\n> primary_dupe_record.max_id;\n> EXECUTE ''DELETE FROM '' ||\n> import_table\n>\n> || '' WHERE 
emma_member_email = '' ||\n>\n> quote_literal(duplicate_record.emma_member_email) || '' AND\n> emma_member_status_id != 5'';\n> EXECUTE ''UPDATE '' || import_table ||\n> '' SET emma_member_status_id = 1 WHERE emma_member_status_id = 5'';\n> END LOOP;\n> import_dupe_count := import_dupe_count + 1;\n> END LOOP;\n> END IF;\n> END LOOP;\n>\n> -- Move dupes over to the dupe table\n> EXECUTE insert_dupes_sql;\n> -- Record the count of dupes from import to members\n> EXECUTE record_table_dupe_count_sql;\n> -- Delete the dupes from the import table\n> EXECUTE remove_dupes_from_import_table_sql;\n> -- Create clean import file\n> EXECUTE create_clean_import_file_sql;\n> -- Create groups_id file\n> EXECUTE create_members_groups_ids_file_sql;\n> -- Empty import table\n> EXECUTE empty_import_table_sql;\n> -- Upload clean members from import\n> EXECUTE upload_clean_import_sql;\n> -- Upload group ids\n> EXECUTE upload_members_groups_ids_sql;\n>\n> -- Associate to groups\n> groups := string_to_array(group_list, '','');\n> if array_lower(groups, 1) IS NOT NULL THEN\n> FOR i IN array_lower(groups, 1)..array_upper(groups, 1) LOOP\n> EXECUTE ''INSERT INTO '' || members_groups_ids_table\n> || '' SELECT '' || member_table || ''.emma_member_id FROM ONLY '' ||\n> member_table || '' WHERE LOWER('' || member_table || ''.emma_member_email)\n> = LOWER('' || dupe_table || ''.emma_member_email) AND '' || member_table ||\n> ''.emma_member_id NOT IN (SELECT '' || members_groups_table ||\n> ''.emma_member_id FROM '' || members_groups_table || '' WHERE '' ||\n> members_groups_table || ''.emma_group_id = '' || groups[i] || '') AND '' ||\n> member_table || ''.emma_member_id NOT IN (SELECT emma_member_id FROM '' ||\n> members_groups_ids_table || '')'';\n> EXECUTE ''DELETE FROM '' || members_groups_ids_table\n> || '' WHERE emma_member_id IN (SELECT emma_member_id FROM '' ||\n> members_groups_table || '' WHERE emma_group_id = '' || groups[i] || '' )'';\n> EXECUTE ''INSERT INTO '' || members_groups_table || ''\n> SELECT DISTINCT '' || groups[i] || '' AS emma_group_id, emma_member_id\n> FROM '' || members_groups_ids_table;\n> END LOOP;\n> END IF;\n>\n> Any pointers on large plpgsql operations are appreciated. Especially when\n> more than one instance is runinng. Thanks.\n\n-- \nmarcus whitney\n\nchief architect : cold feet creative\n\nwww.coldfeetcreative.com\n\n800.595.4401\n \n\n\ncold feet presents emma\n\nemail marketing for discriminating\n\norganizations everywhere\n\nvisit www.myemma.com\n",
"msg_date": "Thu, 3 Jun 2004 16:38:03 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pl/Pgsql Functions running simultaneously"
},
{
"msg_contents": "Uh... I don't think this is necessarily the wrong list, sometimes people\ndon't have much to chime in. You could try reposting to -sql or -general\nI suppose. \n\nAs for my take on your questions, I wasn't exactly clear on what the\nproblem is. If its just that things seem slow, make sure you have done\nthe appropriate vacuum/analyze/reindex tech and then try adding some\ndebug info to the function to determine where in the function it is\nslowing down. \n\nqueries inside plpgsql functions will take locks as needed, but they are\nno different than regular statements, just keep in mind that the queries\ninside the function will work like an implicit transaction.\n\nRobert Treat\n\nOn Thu, 2004-06-03 at 17:38, Marcus Whitney wrote:\n> Am I on the wrong list to ask this question, or does this list usually have \n> low activity? Just asking because I am new and I need to know where to ask \n> this question. Thanks.\n> \n> On Wednesday 02 June 2004 16:08, Marcus Whitney wrote:\n> > Hello all,\n> >\n> > I have an import function that I have been working on for some time now,\n> > and it performed well up until recently. It is doing a lot, and because\n> > the queries are not cached, I am not sure if that is what the problem is. \n> > If a function takes a while, does it lock any of the tables it is\n> > accessing, even for SELECT?\n> >\n> > Below is the bulk of the function:\n> >\n> > -- set sql statement variables\n> > create_import_file_sql := ''COPY '' || container_table || '' ('' ||\n> > filtered_container_columns || '') TO '' ||\n> > quote_literal(formatted_import_file) || '' WITH NULL AS '' ||\n> > null_single_quotes;\n> > upload_to_import_table_sql := ''COPY '' || import_table || '' (''\n> > || field_names || '') FROM '' || quote_literal(formatted_import_file) || ''\n> > WITH NULL AS '' || null_single_quotes;\n> > clean_personalization_fields_sql := ''UPDATE '' || import_table ||\n> > '' SET emma_member_email = btrim(emma_member_email, '' ||\n> > quote_literal(quoted_single_quote) || '') , emma_member_name_first =\n> > btrim(emma_member_name_first, '' || quote_literal(quoted_single_quote) ||\n> > '') , emma_member_name_last = btrim(emma_member_name_last, '' ||\n> > quote_literal(quoted_single_quote) || '') ;'';\n> > clean_personalization_fields_sql2 := ''UPDATE '' || import_table ||\n> > '' SET emma_member_email = btrim(emma_member_email) ,\n> > emma_member_name_first = btrim(emma_member_name_first) , \n> > emma_member_name_last =\n> > btrim(emma_member_name_last) ;'';\n> > set_account_id_sql := ''UPDATE '' || import_table || '' SET\n> > emma_account_id = '' || account_id;\n> > set_default_active_status_sql := ''UPDATE '' || import_table || ''\n> > SET emma_member_status_id = 1'';\n> > set_errors_for_null_email_sql := ''UPDATE '' || import_table || ''\n> > SET emma_member_status_id = 2 WHERE emma_member_email IS NULL'';\n> > record_null_email_count_sql := ''UPDATE '' || import_history_table\n> > || '' SET emma_import_null_email_count = (SELECT COUNT(*) FROM '' ||\n> > import_table || '' WHERE emma_member_email IS NULL) WHERE\n> > emma_import_history_id ='' || import_history_id;\n> > set_errors_for_invalid_email_sql := ''UPDATE '' || import_table ||\n> > '' SET emma_member_status_id = 2 WHERE emma_member_email !~* '' ||\n> > email_regex; record_invalid_email_count_sql := ''UPDATE '' ||\n> > import_history_table\n> >\n> > || '' SET emma_import_invalid_email_count = ( SELECT COUNT(*) FROM '' ||\n> >\n> > import_table || '' WHERE emma_member_email !~* '' || email_regex || '' 
)\n> > WHERE emma_import_history_id ='' || import_history_id;\n> > get_dupes_in_import_sql := ''SELECT emma_member_email,\n> > emma_member_status_id FROM '' || import_table || '' GROUP BY\n> > emma_member_email, emma_member_status_id having count(*) > 1'';\n> > insert_dupes_sql := ''INSERT INTO '' || dupe_table || '' SELECT *\n> > FROM '' || import_table || '' WHERE LOWER(emma_member_email) = LOWER('' ||\n> > member_table || ''.emma_member_email)'';\n> > record_table_dupe_count_sql := ''UPDATE '' || import_history_table\n> > || '' SET emma_import_table_dupe_email_count = (SELECT COUNT(*) FROM '' ||\n> > import_table || '' WHERE emma_member_email = LOWER('' || member_table ||\n> > ''.emma_member_email)) WHERE emma_import_history_id ='' ||\n> > import_history_id; remove_dupes_from_import_table_sql := ''DELETE FROM ''\n> > || import_table\n> >\n> > || '' WHERE LOWER(emma_member_email) = LOWER('' || member_table ||\n> >\n> > ''.emma_member_email)'';\n> > create_clean_import_file_sql := ''COPY '' || import_table || '' TO\n> > ''\n> >\n> > || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n> >\n> > null_single_quotes;\n> > create_members_groups_ids_file_sql := ''COPY '' || import_table ||\n> > '' (emma_member_id) TO '' || quote_literal(members_groups_ids_file) || ''\n> > WITH NULL AS '' || null_single_quotes;\n> > empty_import_table_sql := ''TRUNCATE '' || import_table;\n> > upload_clean_import_sql := ''COPY '' || member_table || '' FROM ''\n> > || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n> > null_single_quotes;\n> > upload_members_groups_ids_sql := ''COPY '' ||\n> > members_groups_ids_table\n> >\n> > || '' (emma_member_id) FROM '' || quote_literal(members_groups_ids_file) ||\n> >\n> > '' WITH NULL AS '' || null_single_quotes;\n> > empty_members_groups_ids_sql := ''TRUNCATE '' ||\n> > members_groups_ids_table;\n> > empty_members_dupes_sql := ''TRUNCATE '' || dupe_table;\n> > vacuum_sql := ''VACUUM '' || member_table || ''; VACUUM '' ||\n> > import_table || ''; VACUUM '' || container_table || ''; VACUUM '' ||\n> > members_groups_table || ''; VACUUM '' || members_groups_ids_table || '';\n> > VACUUM '' || dupe_table;\n> >\n> > -- BEGIN ACTIVITY\n> > -- Create the filtered import file with the\n> > EXECUTE create_import_file_sql;\n> > -- Load data from the filtered file to the import table\n> > EXECUTE upload_to_import_table_sql;\n> > -- Set account id in import table\n> > EXECUTE set_account_id_sql;\n> > -- Set the status of all the records to 1\n> > EXECUTE set_default_active_status_sql;\n> > -- Clean personalization data\n> > EXECUTE clean_personalization_fields_sql;\n> > EXECUTE clean_personalization_fields_sql2;\n> > -- Set the status to error for all NULL emails\n> > EXECUTE set_errors_for_null_email_sql;\n> > -- Record the count of null emails\n> > EXECUTE record_null_email_count_sql;\n> > -- Set the status to error for all invalid emails\n> > EXECUTE set_errors_for_invalid_email_sql;\n> > -- Record the count of invalid emails\n> > EXECUTE record_invalid_email_count_sql;\n> >\n> > -- Remove duplicates in import table (originally in file)\n> > FOR duplicate_record IN EXECUTE get_dupes_in_import_sql LOOP\n> > IF duplicate_record.emma_member_email IS NOT NULL THEN\n> > FOR replacement_record IN EXECUTE '' SELECT * FROM ''\n> > || import_table || '' WHERE emma_member_email = '' ||\n> > quote_literal(duplicate_record.emma_member_email) || '' ORDER BY\n> > emma_member_id LIMIT 1'' LOOP\n> > escape_first_name := quote_literal\n> > 
(replacement_record.emma_member_name_first);\n> > escape_last_name := quote_literal\n> > (replacement_record.emma_member_name_last);\n> > escape_email := quote_literal\n> > (replacement_record.emma_member_email);\n> > escape_status_id :=\n> > quote_literal(replacement_record.emma_member_status_id);\n> > -- Record count of dupes\n> > FOR dupe_record_count IN EXECUTE ''SELECT\n> > COUNT(*) AS count FROM '' || import_table || '' WHERE\n> > LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> > EXECUTE ''UPDATE '' ||\n> > import_history_table || '' SET emma_import_file_dupe_email_count ='' ||\n> > dupe_record_count.count;\n> > END LOOP;\n> > FOR primary_dupe_record IN EXECUTE ''SELECT\n> > MAX(emma_member_id) AS max_id FROM '' || import_table || '' WHERE\n> > LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> > EXECUTE ''UPDATE '' || import_table ||\n> > '' SET emma_member_status_id = 5 WHERE emma_member_id = '' ||\n> > primary_dupe_record.max_id;\n> > EXECUTE ''DELETE FROM '' ||\n> > import_table\n> >\n> > || '' WHERE emma_member_email = '' ||\n> >\n> > quote_literal(duplicate_record.emma_member_email) || '' AND\n> > emma_member_status_id != 5'';\n> > EXECUTE ''UPDATE '' || import_table ||\n> > '' SET emma_member_status_id = 1 WHERE emma_member_status_id = 5'';\n> > END LOOP;\n> > import_dupe_count := import_dupe_count + 1;\n> > END LOOP;\n> > END IF;\n> > END LOOP;\n> >\n> > -- Move dupes over to the dupe table\n> > EXECUTE insert_dupes_sql;\n> > -- Record the count of dupes from import to members\n> > EXECUTE record_table_dupe_count_sql;\n> > -- Delete the dupes from the import table\n> > EXECUTE remove_dupes_from_import_table_sql;\n> > -- Create clean import file\n> > EXECUTE create_clean_import_file_sql;\n> > -- Create groups_id file\n> > EXECUTE create_members_groups_ids_file_sql;\n> > -- Empty import table\n> > EXECUTE empty_import_table_sql;\n> > -- Upload clean members from import\n> > EXECUTE upload_clean_import_sql;\n> > -- Upload group ids\n> > EXECUTE upload_members_groups_ids_sql;\n> >\n> > -- Associate to groups\n> > groups := string_to_array(group_list, '','');\n> > if array_lower(groups, 1) IS NOT NULL THEN\n> > FOR i IN array_lower(groups, 1)..array_upper(groups, 1) LOOP\n> > EXECUTE ''INSERT INTO '' || members_groups_ids_table\n> > || '' SELECT '' || member_table || ''.emma_member_id FROM ONLY '' ||\n> > member_table || '' WHERE LOWER('' || member_table || ''.emma_member_email)\n> > = LOWER('' || dupe_table || ''.emma_member_email) AND '' || member_table ||\n> > ''.emma_member_id NOT IN (SELECT '' || members_groups_table ||\n> > ''.emma_member_id FROM '' || members_groups_table || '' WHERE '' ||\n> > members_groups_table || ''.emma_group_id = '' || groups[i] || '') AND '' ||\n> > member_table || ''.emma_member_id NOT IN (SELECT emma_member_id FROM '' ||\n> > members_groups_ids_table || '')'';\n> > EXECUTE ''DELETE FROM '' || members_groups_ids_table\n> > || '' WHERE emma_member_id IN (SELECT emma_member_id FROM '' ||\n> > members_groups_table || '' WHERE emma_group_id = '' || groups[i] || '' )'';\n> > EXECUTE ''INSERT INTO '' || members_groups_table || ''\n> > SELECT DISTINCT '' || groups[i] || '' AS emma_group_id, emma_member_id\n> > FROM '' || members_groups_ids_table;\n> > END LOOP;\n> > END IF;\n> >\n> > Any pointers on large plpgsql operations are appreciated. Especially when\n> > more than one instance is runinng. 
Thanks.\n> \n> -- \n> marcus whitney\n> \n> chief architect : cold feet creative\n> \n> www.coldfeetcreative.com\n> \n> 800.595.4401\n> \n> \n> \n> cold feet presents emma\n> \n> email marketing for discriminating\n> \n> organizations everywhere\n> \n> visit www.myemma.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n-- \nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n\n",
"msg_date": "04 Jun 2004 17:39:01 -0400",
"msg_from": "Robert Treat <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pl/Pgsql Functions running simultaneously"
},
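A rough sketch of the maintenance and in-function tracing Robert suggests above; the table name is a stand-in for whatever import_table or member_table expands to, and the doubled quotes follow the single-quoted function-body style used in this thread:

    VACUUM ANALYZE emma_import;
    REINDEX TABLE emma_import;
    -- inside the function body, between EXECUTE steps, so the log timestamps show which step is slow:
    RAISE NOTICE ''finished upload step'';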
{
"msg_contents": "Thanks for your reply. My comments are below.\n\nOn Friday 04 June 2004 16:39, you wrote:\n> Uh... I don't think this is necessarily the wrong list, sometimes people\n> don't have much to chime in. You could try reposting to -sql or -general\n> I suppose.\n\nI'll try one of those.\n\n>\n> As for my take on your questions, I wasn't exactly clear on what the\n> problem is. If its just that things seem slow, make sure you have done\n> the appropriate vacuum/analyze/reindex tech and then try adding some\n> debug info to the function to determine where in the function it is\n> slowing down.\n\nYeah, I do a fair amount of vacuum/analyze , but I am unclear as to when I \nshould run REINDEX. Is their a way to tell that indexes have become corrupt, \nor need to be reindexed?\n\n>\n> queries inside plpgsql functions will take locks as needed, but they are\n> no different than regular statements, just keep in mind that the queries\n> inside the function will work like an implicit transaction.\n\nI've noticed. Thanks for the info.\n\n>\n> Robert Treat\n>\n> On Thu, 2004-06-03 at 17:38, Marcus Whitney wrote:\n> > Am I on the wrong list to ask this question, or does this list usually\n> > have low activity? Just asking because I am new and I need to know where\n> > to ask this question. Thanks.\n> >\n> > On Wednesday 02 June 2004 16:08, Marcus Whitney wrote:\n> > > Hello all,\n> > >\n> > > I have an import function that I have been working on for some time\n> > > now, and it performed well up until recently. It is doing a lot, and\n> > > because the queries are not cached, I am not sure if that is what the\n> > > problem is. If a function takes a while, does it lock any of the tables\n> > > it is accessing, even for SELECT?\n> > >\n> > > Below is the bulk of the function:\n> > >\n> > > -- set sql statement variables\n> > > create_import_file_sql := ''COPY '' || container_table || ''\n> > > ('' || filtered_container_columns || '') TO '' ||\n> > > quote_literal(formatted_import_file) || '' WITH NULL AS '' ||\n> > > null_single_quotes;\n> > > upload_to_import_table_sql := ''COPY '' || import_table || ''\n> > > (''\n> > >\n> > > || field_names || '') FROM '' || quote_literal(formatted_import_file)\n> > > || || ''\n> > >\n> > > WITH NULL AS '' || null_single_quotes;\n> > > clean_personalization_fields_sql := ''UPDATE '' || import_table\n> > > || '' SET emma_member_email = btrim(emma_member_email, '' ||\n> > > quote_literal(quoted_single_quote) || '') , emma_member_name_first =\n> > > btrim(emma_member_name_first, '' || quote_literal(quoted_single_quote)\n> > > || '') , emma_member_name_last = btrim(emma_member_name_last, '' ||\n> > > quote_literal(quoted_single_quote) || '') ;'';\n> > > clean_personalization_fields_sql2 := ''UPDATE '' ||\n> > > import_table || '' SET emma_member_email = btrim(emma_member_email) ,\n> > > emma_member_name_first = btrim(emma_member_name_first) ,\n> > > emma_member_name_last =\n> > > btrim(emma_member_name_last) ;'';\n> > > set_account_id_sql := ''UPDATE '' || import_table || '' SET\n> > > emma_account_id = '' || account_id;\n> > > set_default_active_status_sql := ''UPDATE '' || import_table ||\n> > > '' SET emma_member_status_id = 1'';\n> > > set_errors_for_null_email_sql := ''UPDATE '' || import_table ||\n> > > '' SET emma_member_status_id = 2 WHERE emma_member_email IS NULL'';\n> > > record_null_email_count_sql := ''UPDATE '' || import_history_table\n> > >\n> > > || '' SET emma_import_null_email_count = (SELECT COUNT(*) FROM '' ||\n> > >\n> > > import_table 
|| '' WHERE emma_member_email IS NULL) WHERE\n> > > emma_import_history_id ='' || import_history_id;\n> > > set_errors_for_invalid_email_sql := ''UPDATE '' || import_table\n> > > || '' SET emma_member_status_id = 2 WHERE emma_member_email !~* '' ||\n> > > email_regex; record_invalid_email_count_sql := ''UPDATE '' ||\n> > > import_history_table\n> > >\n> > > || '' SET emma_import_invalid_email_count = ( SELECT COUNT(*) FROM ''\n> > > || ||\n> > >\n> > > import_table || '' WHERE emma_member_email !~* '' || email_regex || ''\n> > > ) WHERE emma_import_history_id ='' || import_history_id;\n> > > get_dupes_in_import_sql := ''SELECT emma_member_email,\n> > > emma_member_status_id FROM '' || import_table || '' GROUP BY\n> > > emma_member_email, emma_member_status_id having count(*) > 1'';\n> > > insert_dupes_sql := ''INSERT INTO '' || dupe_table || ''\n> > > SELECT * FROM '' || import_table || '' WHERE LOWER(emma_member_email) =\n> > > LOWER('' || member_table || ''.emma_member_email)'';\n> > > record_table_dupe_count_sql := ''UPDATE '' ||\n> > > import_history_table\n> > >\n> > > || '' SET emma_import_table_dupe_email_count = (SELECT COUNT(*) FROM ''\n> > > || ||\n> > >\n> > > import_table || '' WHERE emma_member_email = LOWER('' || member_table\n> > > || ''.emma_member_email)) WHERE emma_import_history_id ='' ||\n> > > import_history_id; remove_dupes_from_import_table_sql := ''DELETE FROM\n> > > ''\n> > >\n> > > || import_table\n> > > ||\n> > > || '' WHERE LOWER(emma_member_email) = LOWER('' || member_table ||\n> > >\n> > > ''.emma_member_email)'';\n> > > create_clean_import_file_sql := ''COPY '' || import_table || ''\n> > > TO ''\n> > >\n> > > || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n> > >\n> > > null_single_quotes;\n> > > create_members_groups_ids_file_sql := ''COPY '' || import_table\n> > > || '' (emma_member_id) TO '' || quote_literal(members_groups_ids_file)\n> > > || '' WITH NULL AS '' || null_single_quotes;\n> > > empty_import_table_sql := ''TRUNCATE '' || import_table;\n> > > upload_clean_import_sql := ''COPY '' || member_table || '' FROM\n> > > ''\n> > >\n> > > || quote_literal(clean_import_file) || '' WITH NULL AS '' ||\n> > >\n> > > null_single_quotes;\n> > > upload_members_groups_ids_sql := ''COPY '' ||\n> > > members_groups_ids_table\n> > >\n> > > || '' (emma_member_id) FROM '' ||\n> > > || quote_literal(members_groups_ids_file) ||\n> > >\n> > > '' WITH NULL AS '' || null_single_quotes;\n> > > empty_members_groups_ids_sql := ''TRUNCATE '' ||\n> > > members_groups_ids_table;\n> > > empty_members_dupes_sql := ''TRUNCATE '' || dupe_table;\n> > > vacuum_sql := ''VACUUM '' || member_table || ''; VACUUM '' ||\n> > > import_table || ''; VACUUM '' || container_table || ''; VACUUM '' ||\n> > > members_groups_table || ''; VACUUM '' || members_groups_ids_table ||\n> > > ''; VACUUM '' || dupe_table;\n> > >\n> > > -- BEGIN ACTIVITY\n> > > -- Create the filtered import file with the\n> > > EXECUTE create_import_file_sql;\n> > > -- Load data from the filtered file to the import table\n> > > EXECUTE upload_to_import_table_sql;\n> > > -- Set account id in import table\n> > > EXECUTE set_account_id_sql;\n> > > -- Set the status of all the records to 1\n> > > EXECUTE set_default_active_status_sql;\n> > > -- Clean personalization data\n> > > EXECUTE clean_personalization_fields_sql;\n> > > EXECUTE clean_personalization_fields_sql2;\n> > > -- Set the status to error for all NULL emails\n> > > EXECUTE set_errors_for_null_email_sql;\n> > > -- Record the count of null emails\n> > > 
EXECUTE record_null_email_count_sql;\n> > > -- Set the status to error for all invalid emails\n> > > EXECUTE set_errors_for_invalid_email_sql;\n> > > -- Record the count of invalid emails\n> > > EXECUTE record_invalid_email_count_sql;\n> > >\n> > > -- Remove duplicates in import table (originally in file)\n> > > FOR duplicate_record IN EXECUTE get_dupes_in_import_sql LOOP\n> > > IF duplicate_record.emma_member_email IS NOT NULL THEN\n> > > FOR replacement_record IN EXECUTE '' SELECT * FROM\n> > > ''\n> > >\n> > > || import_table || '' WHERE emma_member_email = '' ||\n> > >\n> > > quote_literal(duplicate_record.emma_member_email) || '' ORDER BY\n> > > emma_member_id LIMIT 1'' LOOP\n> > > escape_first_name := quote_literal\n> > > (replacement_record.emma_member_name_first);\n> > > escape_last_name := quote_literal\n> > > (replacement_record.emma_member_name_last);\n> > > escape_email := quote_literal\n> > > (replacement_record.emma_member_email);\n> > > escape_status_id :=\n> > > quote_literal(replacement_record.emma_member_status_id);\n> > > -- Record count of dupes\n> > > FOR dupe_record_count IN EXECUTE ''SELECT\n> > > COUNT(*) AS count FROM '' || import_table || '' WHERE\n> > > LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> > > EXECUTE ''UPDATE '' ||\n> > > import_history_table || '' SET emma_import_file_dupe_email_count =''\n> > > || dupe_record_count.count;\n> > > END LOOP;\n> > > FOR primary_dupe_record IN EXECUTE ''SELECT\n> > > MAX(emma_member_id) AS max_id FROM '' || import_table || '' WHERE\n> > > LOWER(emma_member_email) = LOWER('' || escape_email || '')'' LOOP\n> > > EXECUTE ''UPDATE '' || import_table\n> > > || '' SET emma_member_status_id = 5 WHERE emma_member_id = '' ||\n> > > primary_dupe_record.max_id;\n> > > EXECUTE ''DELETE FROM '' ||\n> > > import_table\n> > >\n> > > || '' WHERE emma_member_email = '' ||\n> > >\n> > > quote_literal(duplicate_record.emma_member_email) || '' AND\n> > > emma_member_status_id != 5'';\n> > > EXECUTE ''UPDATE '' || import_table\n> > > || '' SET emma_member_status_id = 1 WHERE emma_member_status_id = 5'';\n> > > END LOOP;\n> > > import_dupe_count := import_dupe_count + 1;\n> > > END LOOP;\n> > > END IF;\n> > > END LOOP;\n> > >\n> > > -- Move dupes over to the dupe table\n> > > EXECUTE insert_dupes_sql;\n> > > -- Record the count of dupes from import to members\n> > > EXECUTE record_table_dupe_count_sql;\n> > > -- Delete the dupes from the import table\n> > > EXECUTE remove_dupes_from_import_table_sql;\n> > > -- Create clean import file\n> > > EXECUTE create_clean_import_file_sql;\n> > > -- Create groups_id file\n> > > EXECUTE create_members_groups_ids_file_sql;\n> > > -- Empty import table\n> > > EXECUTE empty_import_table_sql;\n> > > -- Upload clean members from import\n> > > EXECUTE upload_clean_import_sql;\n> > > -- Upload group ids\n> > > EXECUTE upload_members_groups_ids_sql;\n> > >\n> > > -- Associate to groups\n> > > groups := string_to_array(group_list, '','');\n> > > if array_lower(groups, 1) IS NOT NULL THEN\n> > > FOR i IN array_lower(groups, 1)..array_upper(groups, 1)\n> > > LOOP EXECUTE ''INSERT INTO '' || members_groups_ids_table\n> > >\n> > > || '' SELECT '' || member_table || ''.emma_member_id FROM ONLY '' ||\n> > >\n> > > member_table || '' WHERE LOWER('' || member_table ||\n> > > ''.emma_member_email) = LOWER('' || dupe_table || ''.emma_member_email)\n> > > AND '' || member_table || ''.emma_member_id NOT IN (SELECT '' ||\n> > > members_groups_table || ''.emma_member_id FROM '' ||\n> > > 
members_groups_table || '' WHERE '' || members_groups_table ||\n> > > ''.emma_group_id = '' || groups[i] || '') AND '' || member_table ||\n> > > ''.emma_member_id NOT IN (SELECT emma_member_id FROM '' ||\n> > > members_groups_ids_table || '')'';\n> > > EXECUTE ''DELETE FROM '' ||\n> > > members_groups_ids_table\n> > >\n> > > || '' WHERE emma_member_id IN (SELECT emma_member_id FROM '' ||\n> > >\n> > > members_groups_table || '' WHERE emma_group_id = '' || groups[i] || ''\n> > > )''; EXECUTE ''INSERT INTO '' || members_groups_table || '' SELECT\n> > > DISTINCT '' || groups[i] || '' AS emma_group_id, emma_member_id FROM\n> > > '' || members_groups_ids_table;\n> > > END LOOP;\n> > > END IF;\n> > >\n> > > Any pointers on large plpgsql operations are appreciated. Especially\n> > > when more than one instance is runinng. Thanks.\n> >\n> > --\n> > marcus whitney\n> >\n> > chief architect : cold feet creative\n> >\n> > www.coldfeetcreative.com\n> >\n> > 800.595.4401\n> >\n> >\n> >\n> > cold feet presents emma\n> >\n> > email marketing for discriminating\n> >\n> > organizations everywhere\n> >\n> > visit www.myemma.com\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 7: don't forget to increase your free space map settings\n\n-- \nmarcus whitney\n\nchief architect : cold feet creative\n\nwww.coldfeetcreative.com\n\n800.595.4401\n \n\n\ncold feet presents emma\n\nemail marketing for discriminating\n\norganizations everywhere\n\nvisit www.myemma.com\n",
"msg_date": "Mon, 7 Jun 2004 11:43:41 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pl/Pgsql Functions running simultaneously"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nMarcus Whitney wrote:\n| Am I on the wrong list to ask this question, or does this list usually\nhave\n| low activity? Just asking because I am new and I need to know where\nto ask\n| this question. Thanks.\n\nYour .sig may hold the reason why people are not responding. You seem\nlike an intelligent guy and you asked an interesting question, but...\n\n| cold feet presents emma\n|\n| email marketing for discriminating\n| ^^^^^^^^^^^^^^^\n| organizations everywhere\n|\n| visit www.myemma.com\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFA1zstgfzn5SevSpoRApgdAKCtjL4qMwNQ9mZN57RHmHJi5Ana0wCggXhb\n7HYFtE3S9zQ2hSGR9vYdXYQ=\n=Kfqd\n-----END PGP SIGNATURE-----",
"msg_date": "Mon, 21 Jun 2004 15:46:55 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Pl/Pgsql Functions running simultaneously"
}
] |
[
{
"msg_contents": "Is there any crossover in performance with sibling inherited tables? \n\nFor Ex. \n\nif I have a parent table called : people\n\nA child of 'people' called: Adults\n\tand\nA child of 'people' called: Kids\n\nDoes the work I do to Adults, namely copies, huge updates and such ever affect \nthe performance of Kids?\n\nThanks.\n\n\n-- \nmarcus whitney\n\nchief architect : cold feet creative\n\nwww.coldfeetcreative.com\n\n800.595.4401\n \n\n\ncold feet presents emma\n\nemail marketing for discriminating\n\norganizations everywhere\n\nvisit www.myemma.com\n",
"msg_date": "Wed, 2 Jun 2004 16:27:28 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inherited Tables Performance"
}
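A minimal sketch of the layout Marcus describes, with column names invented for illustration; statements aimed directly at one child table touch only that child's heap, while queries against the parent scan every child unless ONLY is used:

    CREATE TABLE people (person_id integer PRIMARY KEY, name text);
    CREATE TABLE adults (job text) INHERITS (people);
    CREATE TABLE kids (school text) INHERITS (people);

    UPDATE adults SET job = 'unknown' WHERE job IS NULL;  -- touches only the adults heap
    SELECT count(*) FROM people;                          -- scans people plus both children
    SELECT count(*) FROM ONLY people;                     -- scans just the parent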
] |
[
{
"msg_contents": "This is a very general question but what is the largest linux box anyone\nhas run PostgreSQL on and what kind of concurrent transactions per\nsecond have you seen? \n\n \n\nWe have a client who has huge bursts of activity, coinciding with high\nrated TV appearances, meaning hundreds of thousands of users registering\nand logging in at the same time.\n\n \n\nCurrently we are running a dual cpu dell blade server on redhat linux\n(2.4?) and PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC 2.96,\nraid5 and am using sqlrelay for connection pooling. It works fine under\nordinary load but bogs down too much under these huge loads. \n\n \n\nI would love to be able to tell them that a specific box like a 2 ghz\nquad xenon with 10 GB of ram, on a raid 10 server will get them x\nconcurrent transactions per second with x threads running. From that I\ncan give them a great estimate on how many registrations per minute\nthere money can buy, but right now all I can tell them is that if you\nspend more money you will get more tps. \n\n \n\nThanks for any info you may have. \n\n\n\n\n\n\n\n\n\n\nThis is a very general question but what is the largest linux\nbox anyone has run PostgreSQL on and what kind of concurrent transactions per\nsecond have you seen? \n \nWe have a client who has huge bursts of activity, coinciding\nwith high rated TV appearances, meaning hundreds of thousands of users\nregistering and logging in at the same time.\n \nCurrently we are running a dual cpu dell blade server on\nredhat linux (2.4?) and PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC\n2.96, raid5 and am using sqlrelay for connection pooling. It works fine\nunder ordinary load but bogs down too much under these huge loads. \n \nI would love to be able to tell them that a specific box\nlike a 2 ghz quad xenon with 10 GB of ram, on a raid 10 server will get them x concurrent\ntransactions per second with x threads running. From that I can give them a great\nestimate on how many registrations per minute there money can buy, but right\nnow all I can tell them is that if you spend more money you will get more tps. \n \nThanks for any info you may have.",
"msg_date": "Thu, 3 Jun 2004 17:10:26 -0600",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Most transactions per second on largest box?"
},
{
"msg_contents": "\n<[email protected]> writes:\n\n> Currently we are running a dual cpu dell blade server on redhat linux\n> (2.4?) and PostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC 2.96,\n> raid5 and am using sqlrelay for connection pooling. It works fine under\n> ordinary load but bogs down too much under these huge loads. \n\nI can't answer the question you asked. But I would suggest you upgrade your\ncompiler. 2.96 is just about the worst version of gcc to be using for\nanything. It has known bugs where it just flatly miscompiles code, resulting\nin programs that crash or behave strangely.\n\nAlso postgresql 7.4 has lots of very nice performance improvements including\neveryone's favourite, \"IN\" optimization. So you might consider upgrading to\n7.4.2.\n\n> I would love to be able to tell them that a specific box like a 2 ghz\n> quad xenon with 10 GB of ram, on a raid 10 server will get them x\n> concurrent transactions per second with x threads running. From that I\n> can give them a great estimate on how many registrations per minute\n> there money can buy, but right now all I can tell them is that if you\n> spend more money you will get more tps. \n\nUnfortunately nobody will be able to give you precise numbers since it will\ndepend a *lot* on the precise queries and database design you're working with.\nThere are also a lot of variables in the hardware design where you trade off\nincreasing cpus vs improving the disk subsystem and such.\n\nYou're going to have to do some performance testing of your application on\ndifferent hardware and extrapolate from that. Unfortunately (or fortunately\ndepending on your billing model:) this can be a lot of work. It involves\nsetting up a database with a realistic quantity of data, and some sort of test\nharness simulating users.\n\nIn general terms though, several people have mentioned being happy with\nOpteron servers, and generally people find spending money on good hardware\nRAID cards with battery backed caches and more disks to spread data over to be\nmore effective than spending money on more or faster processors.\n\nIf you benchmark your application on a given set of hardware and analyze where\nyour bottleneck is then people may be able to suggest what alternatives to\nconsider and may be able to give some idea of what type of improvement to\nexpect. But it takes actual data from vmstat, iostat, et al to do this\nanalysis.\n\n-- \ngreg\n\n",
"msg_date": "04 Jun 2004 01:57:43 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Most transactions per second on largest box?"
}
] |
[
{
"msg_contents": "Hi!\n\nI am having a bit of problem with the plan that the planner produces.\n\nFirst a little info:\n\nOS: Linux Debian\nVersion: PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc GCC) \n3.3.3 (Debian 20040401)\nCPU: AMD Athlon XP 2000+\nMemory: 1GB\nDisk: 4 SCSI-3 UW 10000rpm in RAID 1 (mirror) mode\n\nshared_mem: 20000\nSort mem: 8192\neffective_cache_size: 80000\n\n\nRow count for the tables involved:\npriceavailable: 8564\nproduct: 3\nproductCodeAlias: 7\nlocationIATA: 3402\nlocationConnection: 64\nprice: 4402\n\n\nI have runned both vacuum full and vacuum analyze before I run my query \nand shared_mem is set to 20000\n\n\nHere is the result I am getting:\n\nEXPLAIN ANALYZE\nSELECT\n CASE WHEN 'D' = 'A' THEN price.priceArrival ELSE price.priceDeparture \nEND AS price,\n price.vatPercent AS vatPercent\nFROM\n priceavailable pa,\n product product,\n productCodeAlias productAlias,\n locationiata la,\n locationconnection lc,\n price price\nWHERE\n pa.direction IN('D', 'B')\n AND pa.productId = product.productId\n AND product.productId = productAlias.productId\n AND productAlias.productCode = 'TAXI'\n AND pa.locationConnectionId = lc.locationConnectionId\n AND lc.locationConnectionCode = 'RNB'\n AND pa.locationId = la.locationId\n AND la.iataCode = 'KB8'\n AND price.pricegroupId = pa.priceGroupId\n AND price.productId = product.productId\n AND '2004-06-01 05:30:00.000000+02' BETWEEN price.validstartdate AND \nprice.validstopdate\n AND product.organizationId = 1\n AND price.organizationId = product.organizationId\n AND pa.deletionDate IS NULL\n AND product.deletionDate IS NULL\n AND la.deletionDate IS NULL\n AND lc.deletionDate IS NULL\n AND price.deletionDate IS NULL\n;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=2.14..17.16 rows=1 width=8) (actual \ntime=3274.196..4620.299 rows=1 loops=1)\n -> Nested Loop (cost=2.14..13.49 rows=1 width=12) (actual \ntime=2643.441..4617.246 rows=47 loops=1)\n Join Filter: ((\"outer\".productid = \"inner\".productid) AND \n(\"inner\".pricegroupid = \"outer\".pricegroupid))\n -> Nested Loop (cost=0.00..7.72 rows=1 width=12) (actual \ntime=0.238..5.455 rows=111 loops=1)\n -> Seq Scan on locationconnection lc (cost=0.00..1.80 \nrows=1 width=4) (actual time=0.153..0.245 rows=1 loops=1)\n Filter: ((locationconnectioncode = 'RNB'::text) \nAND (deletiondate IS NULL))\n -> Index Scan using priceavailableix1 on priceavailable \npa (cost=0.00..5.91 rows=1 width=16) (actual time=0.067..4.182 rows=111 \nloops=1)\n Index Cond: (pa.locationconnectionid = \n\"outer\".locationconnectionid)\n Filter: (((direction = 'D'::text) OR (direction = \n'B'::text)) AND (deletiondate IS NULL))\n -> Hash Join (cost=2.14..5.74 rows=2 width=24) (actual \ntime=0.058..39.116 rows=1243 loops=111)\n Hash Cond: (\"outer\".productid = \"inner\".productid)\n -> Index Scan using priceix2 on price (cost=0.00..3.41 \nrows=23 width=20) (actual time=0.042..25.811 rows=4402 loops=111)\n Index Cond: ('2004-06-01'::date <= validstopdate)\n Filter: (('2004-06-01'::date >= validstartdate) \nAND (deletiondate IS NULL) AND (organizationid = 1))\n -> Hash (cost=2.14..2.14 rows=1 width=12) (actual \ntime=0.132..0.132 rows=0 loops=1)\n -> Nested Loop (cost=0.00..2.14 rows=1 width=12) \n(actual time=0.088..0.123 rows=1 loops=1)\n Join Filter: (\"outer\".productid = \n\"inner\".productid)\n -> Seq Scan on product (cost=0.00..1.04 \nrows=1 
width=8) (actual time=0.013..0.022 rows=3 loops=1)\n Filter: ((organizationid = 1) AND \n(deletiondate IS NULL))\n -> Seq Scan on productcodealias \nproductalias (cost=0.00..1.09 rows=1 width=4) (actual time=0.022..0.024 \nrows=1 loops=3)\n Filter: (productcode = 'TAXI'::text)\n -> Index Scan using locationiataix4 on locationiata la \n(cost=0.00..3.66 rows=1 width=4) (actual time=0.052..0.052 rows=0 loops=47)\n Index Cond: (\"outer\".locationid = la.locationid)\n Filter: ((iatacode = 'KB8'::text) AND (deletiondate IS NULL))\n Total runtime: 4620.852 ms\n\n\nIf I do an \"set enable_nestloop = 0\" and run the exact same question I get:\n \n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=150.55..154.20 rows=1 width=8) (actual \ntime=68.095..103.006 rows=1 loops=1)\n Hash Cond: (\"outer\".locationid = \"inner\".locationid)\n -> Hash Join (cost=145.65..149.29 rows=1 width=12) (actual \ntime=66.304..102.709 rows=47 loops=1)\n Hash Cond: ((\"outer\".productid = \"inner\".productid) AND \n(\"outer\".pricegroupid = \"inner\".pricegroupid))\n -> Hash Join (cost=2.15..5.75 rows=2 width=24) (actual \ntime=0.343..38.844 rows=1243 loops=1)\n Hash Cond: (\"outer\".productid = \"inner\".productid)\n -> Index Scan using priceix2 on price (cost=0.00..3.41 \nrows=23 width=20) (actual time=0.053..26.225 rows=4402 loops=1)\n Index Cond: ('2004-06-01'::date <= validstopdate)\n Filter: (('2004-06-01'::date >= validstartdate) \nAND (deletiondate IS NULL) AND (organizationid = 1))\n -> Hash (cost=2.14..2.14 rows=1 width=12) (actual \ntime=0.191..0.191 rows=0 loops=1)\n -> Hash Join (cost=1.09..2.14 rows=1 width=12) \n(actual time=0.174..0.184 rows=1 loops=1)\n Hash Cond: (\"outer\".productid = \n\"inner\".productid)\n -> Seq Scan on product (cost=0.00..1.04 \nrows=1 width=8) (actual time=0.017..0.025 rows=3 loops=1)\n Filter: ((organizationid = 1) AND \n(deletiondate IS NULL))\n -> Hash (cost=1.09..1.09 rows=1 width=4) \n(actual time=0.073..0.073 rows=0 loops=1)\n -> Seq Scan on productcodealias \nproductalias (cost=0.00..1.09 rows=1 width=4) (actual time=0.042..0.045 \nrows=1 loops=1)\n Filter: (productcode = 'TAXI'::text)\n -> Hash (cost=143.50..143.50 rows=1 width=12) (actual \ntime=61.650..61.650 rows=0 loops=1)\n -> Merge Join (cost=1.81..143.50 rows=1 width=12) \n(actual time=59.914..61.419 rows=111 loops=1)\n Merge Cond: (\"outer\".locationconnectionid = \n\"inner\".locationconnectionid)\n -> Index Scan using priceavailableix1 on \npriceavailable pa (cost=0.00..141.57 rows=43 width=16) (actual \ntime=0.048..48.739 rows=6525 loops=1)\n Filter: (((direction = 'D'::text) OR \n(direction = 'B'::text)) AND (deletiondate IS NULL))\n -> Sort (cost=1.81..1.81 rows=1 width=4) (actual \ntime=0.215..0.290 rows=1 loops=1)\n Sort Key: lc.locationconnectionid\n -> Seq Scan on locationconnection lc \n(cost=0.00..1.80 rows=1 width=4) (actual time=0.125..0.194 rows=1 loops=1)\n Filter: ((locationconnectioncode = \n'RNB'::text) AND (deletiondate IS NULL))\n -> Hash (cost=4.89..4.89 rows=1 width=4) (actual time=0.122..0.122 \nrows=0 loops=1)\n -> Index Scan using locationiataix1 on locationiata la \n(cost=0.00..4.89 rows=1 width=4) (actual time=0.108..0.112 rows=1 loops=1)\n Index Cond: (iatacode = 'KB8'::text)\n Filter: (deletiondate IS NULL)\nTotal runtime: 103.563 ms\n\nWhich is a lot faster than the original question.\n\nIf I then enable nestloop again and lower the 
geqo_threshold from 11 \n(default) to 5 and run the same query again I get:\n\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=7.72..17.19 rows=1 width=8) (actual \ntime=9.752..42.278 rows=1 loops=1)\n -> Nested Loop (cost=7.72..13.52 rows=1 width=12) (actual \ntime=5.698..41.079 rows=47 loops=1)\n Join Filter: (\"outer\".productid = \"inner\".productid)\n -> Nested Loop (cost=7.72..12.47 rows=1 width=28) (actual \ntime=5.663..39.992 rows=47 loops=1)\n Join Filter: (\"inner\".productid = \"outer\".productid)\n -> Hash Join (cost=7.72..11.37 rows=1 width=24) \n(actual time=5.470..36.848 rows=111 loops=1)\n Hash Cond: ((\"outer\".pricegroupid = \n\"inner\".pricegroupid) AND (\"outer\".productid = \"inner\".productid))\n -> Index Scan using priceix2 on price \n(cost=0.00..3.41 rows=23 width=20) (actual time=0.050..26.475 rows=4402 \nloops=1)\n Index Cond: ('2004-06-01'::date <= \nvalidstopdate)\n Filter: (('2004-06-01'::date >= \nvalidstartdate) AND (deletiondate IS NULL) AND (organizationid = 1))\n -> Hash (cost=7.72..7.72 rows=1 width=12) \n(actual time=1.809..1.809 rows=0 loops=1)\n -> Nested Loop (cost=0.00..7.72 rows=1 \nwidth=12) (actual time=0.253..1.587 rows=111 loops=1)\n -> Seq Scan on locationconnection lc \n (cost=0.00..1.80 rows=1 width=4) (actual time=0.163..0.234 rows=1 loops=1)\n Filter: ((locationconnectioncode \n= 'RNB'::text) AND (deletiondate IS NULL))\n -> Index Scan using priceavailableix1 \non priceavailable pa (cost=0.00..5.91 rows=1 width=16) (actual \ntime=0.070..0.965 rows=111 loops=1)\n Index Cond: \n(pa.locationconnectionid = \"outer\".locationconnectionid)\n Filter: (((direction = \n'D'::text) OR (direction = 'B'::text)) AND (deletiondate IS NULL))\n -> Seq Scan on productcodealias productalias \n(cost=0.00..1.09 rows=1 width=4) (actual time=0.020..0.022 rows=1 loops=111)\n Filter: (productcode = 'TAXI'::text)\n -> Seq Scan on product (cost=0.00..1.04 rows=1 width=8) \n(actual time=0.005..0.013 rows=3 loops=47)\n Filter: ((organizationid = 1) AND (deletiondate IS NULL))\n -> Index Scan using locationiataix4 on locationiata la \n(cost=0.00..3.66 rows=1 width=4) (actual time=0.022..0.022 rows=0 loops=47)\n Index Cond: (\"outer\".locationid = la.locationid)\n Filter: ((iatacode = 'KB8'::text) AND (deletiondate IS NULL))\nTotal runtime: 42.852 ms\n\nWhich is even faster than disable:ing nestloop.\n\nSo my question is why is the planner making such a bad choice and how \ncan I make it choose a better plan?\n\n\n/Mikael\n\n",
"msg_date": "Fri, 04 Jun 2004 14:10:41 +0200",
"msg_from": "=?ISO-8859-1?Q?Mikael_Kjellstr=F6m?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planner problem"
},
{
"msg_contents": "=?ISO-8859-1?Q?Mikael_Kjellstr=F6m?= <[email protected]> writes:\n> I am having a bit of problem with the plan that the planner produces.\n\nActually, your problem is with the row-count estimates. Some of them\nare pretty wildly off, which inevitably leads to bad plan choices.\nIn particular the price row estimate is off by a factor of 200 in all\nthree plans:\n\n> -> Index Scan using priceix2 on price (cost=0.00..3.41 rows=23 width=20) (actual time=0.042..25.811 rows=4402 loops=111)\n> Index Cond: ('2004-06-01'::date <= validstopdate)\n> Filter: (('2004-06-01'::date >= validstartdate) AND (deletiondate IS NULL) AND (organizationid = 1))\n\n> -> Index Scan using priceix2 on price (cost=0.00..3.41 rows=23 width=20) (actual time=0.053..26.225 rows=4402 loops=1)\n> Index Cond: ('2004-06-01'::date <= validstopdate)\n> Filter: (('2004-06-01'::date >= validstartdate) AND (deletiondate IS NULL) AND (organizationid = 1))\n\n> -> Index Scan using priceix2 on price (cost=0.00..3.41 rows=23 width=20) (actual time=0.050..26.475 rows=4402 loops=1)\n> Index Cond: ('2004-06-01'::date <= validstopdate)\n> Filter: (('2004-06-01'::date >= validstartdate) AND (deletiondate IS NULL) AND (organizationid = 1))\n\nand priceavailable is off by a factor of 100:\n\n> -> Index Scan using priceavailableix1 on priceavailable pa (cost=0.00..5.91 rows=1 width=16) (actual time=0.067..4.182 rows=111 loops=1)\n> Index Cond: (pa.locationconnectionid = \"outer\".locationconnectionid)\n> Filter: (((direction = 'D'::text) OR (direction = 'B'::text)) AND (deletiondate IS NULL))\n\n> -> Index Scan using priceavailableix1 on priceavailable pa (cost=0.00..141.57 rows=43 width=16) (actual time=0.048..48.739 rows=6525 loops=1)\n> Filter: (((direction = 'D'::text) OR (direction = 'B'::text)) AND (deletiondate IS NULL))\n\n> -> Index Scan using priceavailableix1 on priceavailable pa (cost=0.00..5.91 rows=1 width=16) (actual time=0.070..0.965 rows=111 loops=1)\n> Index Cond: (pa.locationconnectionid = \"outer\".locationconnectionid)\n> Filter: (((direction = 'D'::text) OR (direction = 'B'::text)) AND (deletiondate IS NULL))\n\nAre you sure you've vacuum analyzed these two tables recently? If so,\nwhat may be needed is to increase ANALYZE's statistics target for\nthe columns used in the conditions. (See ALTER TABLE SET STATISTICS)\n\nI suspect that part of the story here has to do with cross-column\ncorrelations, which the present planner will never figure out since it\nhas no cross-column statistics. But it's hard to believe that that's\nthe problem for cases as simple as\n\n> Filter: (((direction = 'D'::text) OR (direction = 'B'::text)) AND (deletiondate IS NULL))\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 2004 10:41:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planner problem "
}
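A sketch of the statistics-target change Tom points to, aimed at the columns whose estimates look furthest off in the plans above; 200 is only an example value:

    ALTER TABLE price ALTER COLUMN validstopdate SET STATISTICS 200;
    ALTER TABLE priceavailable ALTER COLUMN direction SET STATISTICS 200;
    ANALYZE price;
    ANALYZE priceavailable;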
] |
[
{
"msg_contents": "Hi!\n\n> > Does that mean only the xlog, or also the clog? As far as I understand, the\n> > clog contains some meta-information on the xlog, so presumably it is\n> > flushed to disc synchronously together with the xlog? That would mean that\n> > they each need a separate disk to prevent one disk having to seek too\n> > often...?\n> \n> You can put clog and xlog on same drive. That should be enough in most cases. \n> xlog is written sequentially and never read back other than for recovery \n> after a crash. clog is typically 8KB or a page and should not be an IO \n> overhead even in high traffic databases.\n\nFor many small commits it's usually the disk seek, the moving of the head,\nthat produces the overhead. Even if there is only a single byte changed in\nthe clog, it's an independent operation for the harddisk to seek to, and\nwrite, the block.\n\n> Well, if you have tablespaces, you don't have to mess with symlinking \n> clog/xlog or use location facility which is bit rough. You should be able to \n> manage such a setup solely from postgresql. That is an advantage of \n> tablespaces.\n\nRight, hadn't thought of that.\n\n> > Here goes ... we are talking about a database cluster with two tables where\n> > things are happening, one is a kind of log that is simply \"appended\" to and\n> > will expect to reach a size of several million entries in the time window\n> > that is kept, the other is a persistent backing of application data that\n> > will mostly see read-modify-writes of single records. Two writers to the\n> > history, one writer to the data table. The volume of data is not very high\n> > and RAM is enough...\n> \n> Even if you have enough RAM, you should use pg_autovacuum so that your tables \n> are in shape. This is especially required when your update/insert rate is \n> high.\n\nI haven't looked at pg_autovacuum yet, but had planned on doing a vacuum or\nvacuum flush once every day (or rather, once every night).\n\n[next message]\n\n> > > Does that mean only the xlog, or also the clog?\n> \n> > You can put clog and xlog on same drive.\n> \n> You can, but I think you shouldn't. [...]\n\nSo the clog is not written to every time the xlog is written to?\n\nOn a related issue, what's the connection between the \"fsync\" and the\n\"wal_sync_method\" configuration switches? Does fsync apply to the wal,\nto checkpointing, to regular data writes (assuming data blocks are\nwritten to between checkpoints if there's \"nothing better to do\"), or\nto all? Does it select the wal_sync_method to be used for other writes\ntoo?\n\n(The other issue I am trying to sort out is the interference with journalled\nfilesystems, in this case SUN UFS with the logging option. That's another\nlayer of transaction log, and I do not know all details yet to get an\nimpression of how it affects safety and performance wrt. all these partition\nquestions...)\n\n(Why are these not common questions covered in the documentation? Is nobody\ninterested, or does everybody have battery-backed cache on the SCSI HA and\ndoes not care how often the disk would have to seek without?)\n\nThanks & Regards,\nColin\n",
"msg_date": "Fri, 4 Jun 2004 16:21:11 +0200",
"msg_from": "CH <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: filesystem option tuning"
},
{
"msg_contents": "CH <[email protected]> writes:\n> So the clog is not written to every time the xlog is written to?\n\nNo. One clog page holds 32000 transactions' worth of transaction status\nvalues, so on average we need only one clog page write per 32000\ntransactions.\n\n> On a related issue, what's the connection between the \"fsync\" and the\n> \"wal_sync_method\" configuration switches?\n\nfsync is the master circuit breaker: turn it off, there is no power at\nthe wal_sync_method socket ;-). We stop doing anything special about\nenforcing write ordering for any files, but just assume that the kernel\n+ hardware can be trusted not to lose data they've been given.\n\nWith fsync on, wal_sync_method means something.\n\nfsync on also enables fsync/sync for the data files at checkpoint\ntimes. We don't need to force writes for those between checkpoints,\nso wal_sync_method doesn't apply to data files (nor clog).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 2004 11:33:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: filesystem option tuning "
}
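A quick way to check both settings on a running server; the values returned are simply whatever the local postgresql.conf provides, not recommendations:

    SHOW fsync;            -- if this is off, wal_sync_method has no effect
    SHOW wal_sync_method;  -- only consulted when fsync is on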
] |
[
{
"msg_contents": "I have two instances of a production application that uses Postgres 7.2,\ndeployed in two different data centers for about the last 6 months. The\nsizes, schemas, configurations, hardware, and access patterns of the two\ndatabases are nearly identical, but one consistently takes at least 5x\nlonger than the other for some common operations. During this time, CPU\nusage and IO on the slow database are both high (sustained); I'm not\nsure about the fast database. These common operations are chatty - at\nleast tens of thousands of queries over a 5 to 60 minute stretch - but\nthe queries themselves are fairly simple. The query plans are identical\nacross both databases, and the data distribution is comparable. The\ntables involved in these common operations change frequently, and are\nindexed according to these queries. The queries use the indexes as\nexpected. The tables involved have 50k-500k rows.\n\nWe 'vacuum analyze' nightly, and we recently rebuilt the indexes on the\nslow database (using reindex table). This cut the number of index pages\ndramatically: from ~1800 to ~50, but didn't noticeably change the time\nor CPU utilization for the common operations described above.\n\nWhen running pgbench, both databases have very similar results (200-260\nover multiple runs with 5 concurrent threads).\n\nI know of a few things I can do to make this operation marginally\nsimpler, but I'm most interested in the difference between the two\ndatabases.\n\nI haven't come up with a theory that explains each of these things.\nWhat are some things I can look into to track this down further?\n\nmike\n",
"msg_date": "Fri, 4 Jun 2004 10:42:59 -0500",
"msg_from": "\"Michael Nonemacher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres performance: comparing 2 data centers"
},
{
"msg_contents": "\"Michael Nonemacher\" <[email protected]> writes:\n> I have two instances of a production application that uses Postgres 7.2,\n> deployed in two different data centers for about the last 6 months. The\n> sizes, schemas, configurations, hardware, and access patterns of the two\n> databases are nearly identical, but one consistently takes at least 5x\n> longer than the other for some common operations.\n\nDoes VACUUM VERBOSE show comparable physical sizes (in pages) for the\nkey tables in both databases? Maybe the slow one has lots of dead space\nin the tables (not indexes). It would be useful to look at EXPLAIN\nANALYZE output of both databases for some of those common ops, too.\nIt could be that you're getting different plans in the two cases for\nsome reason.\n\n> We 'vacuum analyze' nightly, and we recently rebuilt the indexes on the\n> slow database (using reindex table). This cut the number of index pages\n> dramatically: from ~1800 to ~50, but didn't noticeably change the time\n> or CPU utilization for the common operations described above.\n\nThat's pretty suspicious.\n\nIf it's not dead space or plan choice, the only other thing I can think\nof is physical tuple ordering. You might try CLUSTERing on the\nmost-heavily-used index of each table.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Jun 2004 12:07:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres performance: comparing 2 data centers "
}
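A sketch of the checks Tom suggests; the table and index names are placeholders, since the thread has not named the tables at this point:

    VACUUM VERBOSE some_table;              -- compare the reported page counts between the two sites
    CLUSTER some_table_pkey ON some_table;  -- rewrite the table in index order (7.x syntax)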
] |
[
{
"msg_contents": "Michael wrote:\n> I have two instances of a production application that uses Postgres\n7.2,\n> deployed in two different data centers for about the last 6 months.\nThe\n> sizes, schemas, configurations, hardware, and access patterns of the\ntwo\n> databases are nearly identical, but one consistently takes at least 5x\n> longer than the other for some common operations. During this time,\nCPU\n> usage and IO on the slow database are both high (sustained); I'm not\n> sure about the fast database. These common operations are chatty - at\n> least tens of thousands of queries over a 5 to 60 minute stretch - but\n> the queries themselves are fairly simple. The query plans are\nidentical\n> across both databases, and the data distribution is comparable. The\n> tables involved in these common operations change frequently, and are\n> indexed according to these queries. The queries use the indexes as\n> expected. The tables involved have 50k-500k rows.\n\nHave you isolated any hardware issues? For example, if you are using\nATA cables, and one is kinked or too long, you could be having ATA\nerrors which cause bizarre and intermittent slowdowns and pauses,\nespecially in raid systems. Do a filesystem diagnostic to check this.\n\nMerlin\n",
"msg_date": "Fri, 4 Jun 2004 12:02:41 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
}
] |
[
{
"msg_contents": "Slight update:\n\nThanks for the replies; this is starting to make a little more sense...\n\nI've managed to track down the root of the problem to a single query on\na single table. I have a query that looks like this:\n select count(*) from members where group_id = ? and member_id >\n0;\n\nThe members table contains about 500k rows. It has an index on\n(group_id, member_id) and on (member_id, group_id).\n\nIt seems like the statistics are wildly different depending on whether\nthe last operation on the table was a 'vacuum analyze' or an 'analyze'.\nVacuum or vacuum-analyze puts the correct number (~500k) in\npg_class.reltuples, but analyze puts 7000 in pg_class.reltuples. The\nreltuples numbers associated with this table's indexes are unchanged.\n\nAfter a vacuum-analyze, the query correctly uses the index on (group_id,\nmember_id), and runs very fast (sub-millisecond reported by explain\nanalyze). After an analyze, the query uses the (member_id, group_id)\nindex, and the query takes much longer (150ms reported by explain\nanalyze). (Yes, I said the 2 databases were using the same query plan;\nit turns out they're only sometimes using the same query plan. :( )\n\nA similar problem happens to some of my other tables (according to\npg_class.reltuples), although I haven't seen query performance change as\nmuch.\n\nAny idea what could cause this bad analyze behavior? Any guesses why\nthis has happened in one of my data centers but not both? (Coincidence\nisn't a big stretch here.) What can I do to stop or change this\nbehavior? Apologies if this is a known problem...\n\nmike\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael\nNonemacher\nSent: Friday, June 04, 2004 10:43 AM\nTo: [email protected]\nSubject: [PERFORM] postgres performance: comparing 2 data centers\n\n\nI have two instances of a production application that uses Postgres 7.2,\ndeployed in two different data centers for about the last 6 months. The\nsizes, schemas, configurations, hardware, and access patterns of the two\ndatabases are nearly identical, but one consistently takes at least 5x\nlonger than the other for some common operations. During this time, CPU\nusage and IO on the slow database are both high (sustained); I'm not\nsure about the fast database. These common operations are chatty - at\nleast tens of thousands of queries over a 5 to 60 minute stretch - but\nthe queries themselves are fairly simple. The query plans are identical\nacross both databases, and the data distribution is comparable. The\ntables involved in these common operations change frequently, and are\nindexed according to these queries. The queries use the indexes as\nexpected. The tables involved have 50k-500k rows.\n\nWe 'vacuum analyze' nightly, and we recently rebuilt the indexes on the\nslow database (using reindex table). This cut the number of index pages\ndramatically: from ~1800 to ~50, but didn't noticeably change the time\nor CPU utilization for the common operations described above.\n\nWhen running pgbench, both databases have very similar results (200-260\nover multiple runs with 5 concurrent threads).\n\nI know of a few things I can do to make this operation marginally\nsimpler, but I'm most interested in the difference between the two\ndatabases.\n\nI haven't come up with a theory that explains each of these things. 
What\nare some things I can look into to track this down further?\n\nmike\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if\nyour\n joining column's datatypes do not match\n",
"msg_date": "Fri, 4 Jun 2004 17:07:52 -0500",
"msg_from": "\"Michael Nonemacher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
},
{
"msg_contents": "> The members table contains about 500k rows. It has an index on\n> (group_id, member_id) and on (member_id, group_id).\n\n\nYes, bad stats are causing it to pick a poor plan, but you're giving it\ntoo many options (which doesn't help) and using space up unnecessarily.\n\nKeep (group_id, member_id)\nRemove (member_id, group_id)\nAdd (member_id)\n\nAn index on just member_id is actually going to perform better than\nmember_id, group_id since it has a smaller footprint on the disk.\n\nAnytime where both group_id and member_id are in the query, the\n(group_id, member_id) index will likely be used.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc\n\n",
"msg_date": "Fri, 04 Jun 2004 18:27:08 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
},
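The index change Rod describes, sketched with made-up index names (the thread never gives the real ones):

    -- keep the existing (group_id, member_id) index as-is
    DROP INDEX members_member_id_group_id_idx;                   -- the reversed two-column index
    CREATE INDEX members_member_id_idx ON members (member_id);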
{
"msg_contents": "On Fri, 2004-06-04 at 18:07, Michael Nonemacher wrote:\n> Slight update:\n> \n> Thanks for the replies; this is starting to make a little more sense...\n> \n> I've managed to track down the root of the problem to a single query on\n> a single table. I have a query that looks like this:\n> select count(*) from members where group_id = ? and member_id >\n> 0;\n> \n> The members table contains about 500k rows. It has an index on\n> (group_id, member_id) and on (member_id, group_id).\n> \n> It seems like the statistics are wildly different depending on whether\n> the last operation on the table was a 'vacuum analyze' or an 'analyze'.\n\n\nYes, bad stats are causing it to pick a poor plan (might be better in\n7.5), but you're giving it too many options (which doesn't help) and\nusing diskspace up unnecessarily.\n\nKeep (group_id, member_id)\nRemove (member_id, group_id)\nAdd (member_id)\n\nAn index on just member_id is actually going to perform better than\nmember_id, group_id since it has a smaller footprint on the disk.\n\nAnytime where both group_id and member_id are in the query, the\n(group_id, member_id) index will likely be used.\n\n\n",
"msg_date": "Fri, 04 Jun 2004 18:29:29 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
},
{
"msg_contents": "\"Michael Nonemacher\" <[email protected]> writes:\n> It seems like the statistics are wildly different depending on whether\n> the last operation on the table was a 'vacuum analyze' or an 'analyze'.\n> Vacuum or vacuum-analyze puts the correct number (~500k) in\n> pg_class.reltuples, but analyze puts 7000 in pg_class.reltuples.\n\nOkay, this is a known issue: in 7.4 and earlier, ANALYZE is easily\nfooled as to the total number of rows in the table. It samples the\ninitial portion of the table and assumes that the density of live rows\nper page in that section is representative of the rest of the table.\nEvidently that assumption is way off for your table. There's an\nimproved sampling algorithm in CVS tip that we hope will avoid this\nerror in 7.5 and beyond, but the immediate problem for you is what\nto do in 7.4. I'd suggest either VACUUM FULL or CLUSTER to clean out\nthe existing dead space; then you should look into whether you need\nto increase your vacuum frequency and/or FSM settings to keep it from\ngetting into this state again. Ideally the average dead space per\npage *should* be consistent over the whole table, and the fact that\nit isn't suggests strongly that you've got space-management issues\nto deal with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Jun 2004 00:17:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres performance: comparing 2 data centers "
}
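A sketch of the cleanup Tom suggests for the bloated table; the index name is made up, and the FSM numbers are placeholders that only show where those settings live:

    VACUUM FULL ANALYZE members;
    -- or CLUSTER, which also rewrites the table in index order:
    -- CLUSTER members_group_id_member_id_idx ON members;

    -- postgresql.conf, sized to cover the dead space that routine VACUUMs must track:
    --   max_fsm_pages     = 100000
    --   max_fsm_relations = 1000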
] |
[
{
"msg_contents": "Agreed.\n\nWe originally created the indexes this way because we sometimes do\nsearches where one of the columns is constrained using =, and the other\nusing a range search, but it's not clear to me how much Postgres\nunderstands multi-column indexes. Will I get the gain I'd expect from a\n(member_id, group_id) index on a query like \"where member_id = ? and\ngroup_id > ?\"?\n\nI've since found a few other often-used tables where the reltuples\ncounts generated by 'analyze' are off by a factor of 5 or more. In the\nshort term, I'm just trying to eliminate the automatic-analyzes where\npossible and make sure they're followed up quickly with a 'vacuum' where\nit's not possible.\n\nIs \"analyze generating bad stats\" a known issue? Is there anything I\ncould be doing to aggravate or work around the problem?\n\nmike\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Friday, June 04, 2004 5:27 PM\nTo: Michael Nonemacher\nCc: Postgresql Performance\nSubject: Re: [PERFORM] postgres performance: comparing 2 data centers\n\n\n> The members table contains about 500k rows. It has an index on \n> (group_id, member_id) and on (member_id, group_id).\n\n\nYes, bad stats are causing it to pick a poor plan, but you're giving it\ntoo many options (which doesn't help) and using space up unnecessarily.\n\nKeep (group_id, member_id)\nRemove (member_id, group_id)\nAdd (member_id)\n\nAn index on just member_id is actually going to perform better than\nmember_id, group_id since it has a smaller footprint on the disk.\n\nAnytime where both group_id and member_id are in the query, the\n(group_id, member_id) index will likely be used.\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL PGP Key:\nhttp://www.rbt.ca/signature.asc\n\n",
"msg_date": "Fri, 4 Jun 2004 18:12:52 -0500",
"msg_from": "\"Michael Nonemacher\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
},
{
"msg_contents": "\"Michael Nonemacher\" <[email protected]> writes:\n\n> Agreed.\n> \n> We originally created the indexes this way because we sometimes do\n> searches where one of the columns is constrained using =, and the other\n> using a range search, but it's not clear to me how much Postgres\n> understands multi-column indexes. Will I get the gain I'd expect from a\n> (member_id, group_id) index on a query like \"where member_id = ? and\n> group_id > ?\"?\n\nIt will use them, whether you see a gain depends on the distribution of your\ndata. Does the group_id > ? exclude enough records that it's worth having to\ndo all the extra i/o the bigger index would require?\n\nPersonally I think the other poster was a bit hasty to assert unconditionally\nthat it's never worth it. If you have a lot of records for every member_id and\nvery few of which will be greater than '?' then it might be worth it. If\nhowever you'll only ever have on the order of a hundred or fewer records per\nmember_id and a significant chunk of them will have group_id > '?' then it\nwill probably be a wash or worse.\n\nThere's another side to the story though. In a web site or other OLTP\napplication you may find you're better off with the multi-column index. Even\nif it performs less well on average than the smaller single column index when\nusers have reasonable numbers of groups. That's becuase you're guaranteed\n(assuming postgres is using it) that even if a user someday has an obscene\nnumber of groups he won't suddenly break your web site by driving your\ndatabase into the ground.\n\nThere is a difference between latency and bandwidth, and between average and\nworst-case. Sometimes it's necessary to keep an eye on worst-case scenarios\nand not just average bandwidth. \n\nBut that said. If you are reasonably certain that you'll never or rarely have\nthousands of groups per user you're probably better off with the indexes the\nother person described.\n\n> I've since found a few other often-used tables where the reltuples\n> counts generated by 'analyze' are off by a factor of 5 or more. In the\n> short term, I'm just trying to eliminate the automatic-analyzes where\n> possible and make sure they're followed up quickly with a 'vacuum' where\n> it's not possible.\n>\n> Is \"analyze generating bad stats\" a known issue? Is there anything I\n> could be doing to aggravate or work around the problem?\n\nI would suggest trying a VACUUM FULL and then retrying the ANALYZE. I suspect\nyou might have a lot of dead tuples at the beginning of your table which is\nconfusing the sampling. If that's it, then yes it's known and in fact already\nimproved in what will be 7.5. You may be able to avoid the situation by\nvacuuming more frequently.\n\nIf that doesn't solve it then I would suggest trying to raise the statistics\ntargets for the columns in question with \n\n ALTER TABLE name ALTER column SET STATISTICS integer\n\nThe default is 100 iirc. You could try 200 or even more.\n\n-- \ngreg\n\n",
"msg_date": "04 Jun 2004 20:33:26 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres performance: comparing 2 data centers"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm using postgresql 7.4.2, and I have this view:\nslooze=# \\d userpictures2\n Vue �public.userpictures2�\n Colonne | Type | Modificateurs \n-------------+--------------------------+---------------\n pictureid | integer | \n rollid | character varying(64) | \n frameid | character varying(64) | \n description | character varying(255) | \n filename | character varying(255) | \n owner | integer | \n entrydate | timestamp with time zone | \n date | timestamp with time zone | \n nbclick | integer | \n nbrates | integer | \n maxrate | smallint | \n minrate | smallint | \n averagerate | real | \n sumrates | integer | \n userid | integer | \nD�finition de la vue\n SELECT DISTINCT ON (permissions.pictureid, userid) pictures.pictureid, rollid, frameid, description, filename, \"owner\", entrydate, date, nbclick, nbrates, maxrate, minrate, averagerate, sumrates, userid\n FROM permissions\n JOIN groupsdef USING (groupid)\n JOIN pictures USING (pictureid)\n WHERE groupsdef.groupid = permissions.groupid\n ORDER BY permissions.pictureid, userid;\n\n\nNow consider this query:\n\nSELECT count(*) FROM userpictures JOIN topicscontent using(PictureID) WHERE TopicID=137 and UserID=2;\n\nThe pictures table is scanned, but it's not needed. (see plan at the end).\n\nI believe it's not need because my tables are as follow:\n\nCREATE TABLE pictures (\n\tPictureID serial PRIMARY KEY,\n\tRollID character varying(64) NOT NULL REFERENCES rolls,\n\tFrameID character varying(64) NOT NULL,\n\tDescription character varying(255),\n\tFilename character varying(255),\n\tOwner integer NOT NULL REFERENCES users,\n\tEntryDate datetime DEFAULT now(),\n\tDate datetime,\n\tNbClick integer DEFAULT 0,\n\tNbRates integer DEFAULT 0,\n\tMaxRate int2,\n\tMinRate int2,\n\tAverageRate float4 DEFAULT 5,\n\tSumRates integer DEFAULT 0);\n\nCREATE TABLE permissions (\n\tGroupID integer NOT NULL REFERENCES groups ON DELETE cascade,\n\tPictureID integer NOT NULL REFERENCES pictures ON DELETE cascade,\n\tUNIQUE (GroupID, PictureID));\n\nCREATE TABLE groupsdef (\n\tUserID integer REFERENCES users,\n\tGroupID integer REFERENCES groups,\n\tPRIMARY KEY (UserID,GroupID));\n\nCREATE TABLE topicscontent (\n\tTopicID integer REFERENCES topics ON DELETE cascade,\n\tPictureID integer REFERENCES pictures ON DELETE cascade,\n\tDirect boolean NOT NULL,\n\tPRIMARY KEY (TopicID,PictureID) );\n\nSo obviously, the join on pictures is not adding any rows, since\npermissions.PictureID references pictures.PictureID and\npictures.PictureID is the primary key. \n\nI can workaround with a second view:\n\nslooze=# \\d userpictures2\n Vue �public.userpictures2�\n Colonne | Type | Modificateurs \n-----------+---------+---------------\n pictureid | integer | \n userid | integer | \nD�finition de la vue\n SELECT DISTINCT pictureid, userid\n FROM permissions\n JOIN groupsdef USING (groupid)\n WHERE groupsdef.groupid = permissions.groupid\n ORDER BY pictureid, userid;\n\nBut it would be better if Postgresql could figure it out itself. 
Is\nthere a way to currently avoid the 2nd view ?\n\nQUERY PLAN for SELECT count(*) FROM userpictures JOIN topicscontent using(PictureID) WHERE TopicID=137 and UserID=2;\n--------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1195.15..1195.15 rows=1 width=0) (actual time=89.252..89.253 rows=1 loops=1)\n -> Merge Join (cost=1096.05..1194.98 rows=66 width=0) (actual time=84.574..89.202 rows=8 loops=1)\n Merge Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Subquery Scan userpictures (cost=995.78..1081.47 rows=4897 width=4) (actual time=84.386..88.530 rows=841 loops=1)\n -> Unique (cost=995.78..1032.50 rows=4897 width=105) (actual time=84.377..87.803 rows=841 loops=1)\n -> Sort (cost=995.78..1008.02 rows=4897 width=105) (actual time=84.369..84.786 rows=1433 loops=1)\n Sort Key: permissions.pictureid, groupsdef.userid\n -> Hash Join (cost=371.82..695.65 rows=4897 width=105) (actual time=23.328..56.498 rows=5076 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Index Scan using pictures_pkey on pictures (cost=0.00..164.87 rows=2933 width=97) (actual time=0.015..4.591 rows=2933 loops=1)\n -> Hash (cost=359.58..359.58 rows=4897 width=8) (actual time=23.191..23.191 rows=0 loops=1)\n -> Merge Join (cost=10.16..359.58 rows=4897 width=8) (actual time=0.110..19.365 rows=5076 loops=1)\n Merge Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Sort (cost=10.16..10.19 rows=12 width=8) (actual time=0.080..0.088 rows=11 loops=1)\n Sort Key: groupsdef.groupid\n -> Index Scan using groupsdef_userid_key on groupsdef (cost=0.00..9.94 rows=12 width=8) (actual time=0.038..0.056 rows=11 loops=1)\n Index Cond: (userid = 2)\n -> Index Scan using permissions_groupid_key on permissions (cost=0.00..279.63 rows=8305 width=8) (actual time=0.015..9.801 rows=7633 loops=1)\n -> Sort (cost=100.28..100.37 rows=38 width=4) (actual time=0.114..0.118 rows=8 loops=1)\n Sort Key: topicscontent.pictureid\n -> Index Scan using topicscontent_topicid on topicscontent (cost=0.00..99.28 rows=38 width=4) (actual time=0.052..0.072 rows=8 loops=1)\n Index Cond: (topicid = 137)\n Total runtime: 91.096 ms\n\n\nQUERY PLAN for SELECT count(*) FROM userpictures JOIN topicscontent using(PictureID) WHERE TopicID=137 and UserID=2;\n--------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=859.09..859.09 rows=1 width=4) (actual time=30.488..30.489 rows=1 loops=1)\n -> Merge Join (cost=759.99..858.92 rows=66 width=4) (actual time=27.845..30.466 rows=8 loops=1)\n Merge Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Subquery Scan userpictures2 (cost=659.71..745.41 rows=4897 width=4) (actual time=27.707..29.853 rows=841 loops=1)\n -> Unique (cost=659.71..696.44 rows=4897 width=8) (actual time=27.701..29.121 rows=841 loops=1)\n -> Sort (cost=659.71..671.95 rows=4897 width=8) (actual time=27.696..28.153 rows=1433 loops=1)\n Sort Key: permissions.pictureid, groupsdef.userid\n -> Merge Join (cost=10.16..359.58 rows=4897 width=8) (actual time=0.101..20.682 rows=5076 loops=1)\n Merge Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Sort (cost=10.16..10.19 rows=12 width=8) (actual time=0.074..0.078 rows=11 loops=1)\n Sort Key: groupsdef.groupid\n -> Index Scan using groupsdef_userid_key on groupsdef (cost=0.00..9.94 rows=12 width=8) (actual time=0.035..0.055 rows=11 loops=1)\n Index Cond: (userid = 2)\n -> Index Scan using permissions_groupid_key on permissions 
(cost=0.00..279.63 rows=8305 width=8) (actual time=0.014..10.093 rows=7633 loops=1)\n -> Sort (cost=100.28..100.37 rows=38 width=4) (actual time=0.091..0.094 rows=8 loops=1)\n Sort Key: topicscontent.pictureid\n -> Index Scan using topicscontent_topicid on topicscontent (cost=0.00..99.28 rows=38 width=4) (actual time=0.039..0.057 rows=8 loops=1)\n Index Cond: (topicid = 137)\n Total runtime: 31.376 ms\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n",
"msg_date": "Sat, 05 Jun 2004 16:37:26 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unused table of view"
},
{
"msg_contents": "Laurent Martelli <[email protected]> writes:\n> The pictures table is scanned, but it's not needed.\n\nYes it is. For example, if pictures is empty then the view yields\nzero rows. Omitting the join to pictures could give a different result.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 05 Jun 2004 13:36:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unused table of view "
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> Laurent Martelli <[email protected]> writes:\n >> The pictures table is scanned, but it's not needed.\n\n Tom> Yes it is. For example, if pictures is empty then the view\n Tom> yields zero rows. Omitting the join to pictures could give a\n Tom> different result.\n\nSince Permission is like this:\n\nCREATE TABLE permissions (\n\tGroupID integer NOT NULL REFERENCES groups ON DELETE cascade,\n\tPictureID integer NOT NULL REFERENCES pictures ON DELETE cascade,\n\tUNIQUE (GroupID, PictureID));\n\nif the pictures table is empty, so is permissions, because\npermissions.PictureID references pictures. \n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n",
"msg_date": "Sat, 05 Jun 2004 21:01:29 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unused table of view"
}
] |
[
{
"msg_contents": "\nI've got a simple database (no indices, 6 columns) that I need\nto write data quickly into through JDBC connections from\nmultiple such connections simultaneously in a distributed\nenvironment. (This is going to be a message logging service\nfor software generated messages.)\n\nUsing a PreparedStatement, I can get about 400/s inserted. If I\n(on the java side) buffer up the entries and dump them in large\ntransaction blocks I can push this up to about 1200/s. I'd\nlike to go faster. One approach that I think might be\npromising would be to try using a COPY command instead of\nan INSERT, but I don't have a file for input, I have a \nJava collection, so COPY isn't quite right. Is there anyway to\nefficiently use COPY without having to create a file (remember\nthat the java apps are distributed on a LAN and aren't running\non the DB server.) Is this a dead end because of the way\nCOPY is implemented to only use a file?\n\nIs there something else I can do? Ultimately, this will end\nup on a machine running 1+0 RAID, so I expect that will give\nme some performance boost as well, but I'd like to push it\nup as best I can with my current hardware setup.\n\nThanks for any advice!\n-Steve\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Sat, 05 Jun 2004 13:12:29 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "Hi, Steve,\n\nOn Sat, 05 Jun 2004 13:12:29 -0700\nSteve Wampler <[email protected]> wrote:\n\n> I've got a simple database (no indices, 6 columns) that I need\n> to write data quickly into through JDBC connections from\n> multiple such connections simultaneously in a distributed\n> environment. (This is going to be a message logging service\n> for software generated messages.)\n> Using a PreparedStatement, I can get about 400/s inserted. If I\n> (on the java side) buffer up the entries and dump them in large\n> transaction blocks I can push this up to about 1200/s. I'd\n> like to go faster. One approach that I think might be\n> promising would be to try using a COPY command instead of\n> an INSERT, but I don't have a file for input, I have a \n> Java collection, so COPY isn't quite right. Is there anyway to\n> efficiently use COPY without having to create a file (remember\n> that the java apps are distributed on a LAN and aren't running\n> on the DB server.) Is this a dead end because of the way\n> COPY is implemented to only use a file?\n\nWe also found that using the psql frontend, using COPY seems to give a\nfactor 10 or more speedup. Sadly, as far as I learned, the current JDBC\ndriver does not support COPY ... FROM STDIN.\n\nAs a very bad workaround, it might be acceptable to use Runtime.exec()\nto start the psql command line tool, and issue the statement there, or\neven add a C-lib via JNI. Of course, the best \"workaround\" would be to\nimplement COPY support for the driver, and send the Patches to the\nPGJDBC team for inclusion :-)\n\nWe also had to do some trickery to get instertion of lots of rows fast.\nWe dit lots of benchmarks, and currently use the following method:\n\nOur input data is divided into chunks (the optimal size depends on the\nmachine, and seems to be between 250 and 3000). As the current pgjdbc\npreparedStatements implementation just does a text replacement, but we\nwantedto get the last bit of speed out of the machine, we issue a\n\"PREPARE\" statement for the insertion on connection setup, and then\naddBatch() a \"EXECUTE blubb (data, row, values)\" statement.\n\nThen we have several concurrent threads, all running essentially a {get\nbatch, write batch, commit} loop on their own connection. Increasing\nthe thread number to more than three did not show further substantial\nperformance improvements. This lead us to the conclusion that\nconcurrency can compensate for the time the postmaster is forced to wait\nwhile it syncs the WAL to disk, but there's still a concurrency limit\ninside of postgres for inserts (I presume they have to lock at some\ntimes, the multiversioning seems not to cover inserts very well).\n\nAlso, we surprisingly found that setting the transaction isolation to\n\"serializable\" can speed things remarkably in some cases...\n\n> Is there something else I can do? 
Ultimately, this will end\n> up on a machine running 1+0 RAID, so I expect that will give\n> me some performance boost as well, but I'd like to push it\n> up as best I can with my current hardware setup.\n\nAs any sane setup runs with syncing enabled in the backend, and each\nsync (and so each commit) at least has to write at least one block, you\ncan calculate the theoretical maximum number of commits your machine can\nachieve.\n\nIf you have 15k rpm disks (AFAIK, the fastest one currently available),\nthey spin at 250 rotations per second, so you cannot have more than 250\ncommits per second.\n\nRegarding the fact that your machine has to do some works between the\nsync() calls (e. G. processing the whole next batch), it is very likely\nthat it misses the next turn, so that you're likely to get a factor 2 or\n3 number in reality.\n\nOne way to overcome this limit is using multiple writer threads, and\n(having a highly capable I/O sybsystem) enabling commit delay in your\nbackend so that you can have more than one commit during the same write\noperation.\n\nIt might also help to put the WAL log to a different disk (just link or\nmount or mount --bind the appropriate subdirectory in your database), or\neven put the indices on a third disk (needs ugly trickery) - it's a\nshame that postmaster does not really support this techniques which are\nconsidered standard in any professional database.\n\nIf you really need much more speed, that you could try to put the WAL\non a Solid State Disk (essentially a battery-backed RAM) so you can\novercome this physical limit, or (if you really trust your hardware and\nyour power supply) put the WAL into a RAMDISK or switch of syncing in\nyour postmaster configuration.\n\nOne thing you should check is whether I/O or CPU is the limiting factor.\nIf you have a cpu utilization higher than 90%, than all the tricks I\ntold you won't help much. (But using COPY still could help a lot as it\ncut's down the CPU usage very much.)\n\nWe tested with two machines, a single-processor developer machine, and a\n2-way 64-Bit Itanium SMP machine. On the desktop machine, a single\nthread already utilized 80% CPU, and so only small improvement was\npossible using 2 or more threads. \n\nOn the SMP machine, we had substantial improvements using 2 or 3\nthreads, but then going up to 8 threads gave no more remarkable speedup\nconstantly utilizing about 120% CPU (Remember we have a 2-way machine).\nI think that there are some internal postgres locks that prohibit\nfurther concurrency for inserts in the same table.\n\n> Thanks for any advice!\n\nHope, that helps,\nMarkus Schaber\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
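The prepare-then-batch pattern described above, sketched at the SQL level; the statement name, table, and column types are illustrative only, since the real schema isn't given in this message:

    -- issued once per connection at setup time
    PREPARE log_insert (bigint, double precision, timestamp with time zone) AS
        INSERT INTO messages (source_id, value, stamp) VALUES ($1, $2, $3);

    -- each worker thread then loops over chunks like this
    BEGIN;
    EXECUTE log_insert(1, 0.5, now());
    EXECUTE log_insert(2, 0.7, now());
    -- ... a few hundred to a few thousand EXECUTEs per chunk ...
    COMMIT;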
"msg_date": "Mon, 7 Jun 2004 00:08:00 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "Hi, Steve,\n\nHere are the results of some benchmarks we did inserting 30k rows into a\ntable, using \"time psql -f blubb.sql -q dbname\":\n\nFile kingfisher skate\n30kinsert.sql 39.359s 762r/s 335.024s 90r/s\n30kcommonce.sql 11.402s 2631r/s 7.086s 4233r/s\n30kwithids.sql 10.138s 2959r/s 6.936s 4325r/s\n30kprepare.sql 8.173s 3670r/s 5.189s 5781r/s\n30kdump.sql 1.286s 23328r/s 0.785s 38216r/s\n30kdumpseq.sql 1.498s 20026r/s 0.927s 32362r/s\n\n\nKingfisher is the single processor machine I mentioned yesterday night,\nskate the SMP machine.\n\nThe table has five rows (bigint, bigint, double, double, timestamp\nwithout time zone). The first of them has a \"default nextval('sequence'\n::text)\" definition, and there are no further constraints or indices.\n\nThe 30kinsert.sql uses simple insert statements with autocommit on, and\nwe insert all but the first column which is filled by the default\nsequence. With this test, kingfisher seems to have an irrealistic high\nvalue of commits (syncs) per second (see what I wrote yesterday) [1],\nskate has a more realistic value.\n\n30kcommonce.sql, as suspected, gives a rather high boost by\nencapsulating all into a single commit statement.\n\n30kwithids gives a small boost by inserting pre-calculated sequence\nnumbers, so it seems not worth the effort to move this logic into the\napplication.\n\n30kprepare prepares the insert statement, and then issues 30k EXECUTE\nstatements within one transaction, the speedup is noticeable.\n\n30kdump simply inserts the 30k rows as a dump via COPY FROM STDIN. (as\nwith 30kwithids, the first column is contained in the insert data, so\nthe default value sequence is not used). Naturally, this is by far the\nfastest method.\n\n30kdumpseq.sql uses COPY, too, but omits the first column and such\nutilizes the sequence generation again. This gives a noticeable 15%\nslowdown, but seems to be still fast enough for our purposes. Sadly, it\nis not available within jdbc.\n\nThanks for your patience.\n\nFootnotes: \n[1] We suspect this to be some strange interaction between ide,\ncryptoloop and ext3fs, so that the sync() call somehow does not really\nwait for the data to be physically written to the disk. (I really can't\nimagine a crypto-looped notebook harddisk to do more syncs/second than a\nSCSI-Based RAID in a server machine. We did some small benches on the\nsync() / fsync() calls that seem to prove this conclusion.)\n\n-- \nmarkus schaber | dipl. informatiker\nlogi-track ag | rennweg 14-16 | ch 8001 zürich\nphone +41-43-888 62 52 | fax +41-43-888 62 53\nmailto:[email protected] | www.logi-track.com\n",
"msg_date": "Mon, 7 Jun 2004 09:47:58 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "\n\nOn Sat, 5 Jun 2004, Steve Wampler wrote:\n\n> \n> [I want to use copy from JDBC]\n> \n\nI made a patch to the driver to support COPY as a PG extension. The\npatch required properly encoded and formatted copy data available\nfrom an InputStream. Following some feedback from others I began adding\nthe ability to handle different encodings and the ability to read and\nwrite objects without requiring any knowledge of the copy data format. I\ngot hung up on the object read/write part because of some issues with how\ntype conversions are done in the driver.\n\nAt the moment there is a big push being made by Oliver Jowett to get true \nV3 protocol support implemented which is currently my highest priority. \nGetting copy support into the JDBC driver is something I'd like to see for \n7.5, but I couldn't say if that will happen or how complete it may be. \nDepending on your needs perhaps the initial patch is sufficient.\n\nhttp://archives.postgresql.org/pgsql-jdbc/2003-12/msg00186.php\n\nKris Jurka\n\n",
"msg_date": "Mon, 7 Jun 2004 04:26:38 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "On Mon, 2004-06-07 at 02:26, Kris Jurka wrote:\n> On Sat, 5 Jun 2004, Steve Wampler wrote:\n> \n> > \n> > [I want to use copy from JDBC]\n> > \n> \n> I made a patch to the driver to support COPY as a PG extension.\n...\n> http://archives.postgresql.org/pgsql-jdbc/2003-12/msg00186.php\n\nThanks Kris - that patch worked beautifully and bumped the\ninsert rate from ~1000 entries/second to ~9000 e/s in my\ntest code.\n\nHere's hoping it makes it into 7.5.\n\nI do have a little concern about what's happening in the\nback end during the copy - I suspect the entire table is\nlocked, which may impact the performance when multiple\nclients are saving entries into the table. Anyone know\nif that's how COPY works? (For that matter, would that\nalso be true of a transaction consisting of a set of\ninserts?)\n\nThanks again!\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 07 Jun 2004 10:40:56 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "On Mon, 2004-06-07 at 10:40, Steve Wampler wrote:\n\n> Thanks Kris - that patch worked beautifully and bumped the\n> insert rate from ~1000 entries/second to ~9000 e/s in my\n> test code.\n\nAs a followup - that 9000 e/s becomes ~21,000 e/s if I don't\nhave the java code also dump the message to standard output!\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n",
"msg_date": "Mon, 07 Jun 2004 10:59:25 -0700",
"msg_from": "Steve Wampler <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Using a COPY...FROM through JDBC?"
},
{
"msg_contents": "\n\nOn Mon, 7 Jun 2004, Steve Wampler wrote:\n\n> I do have a little concern about what's happening in the\n> back end during the copy - I suspect the entire table is\n> locked, which may impact the performance when multiple\n> clients are saving entries into the table. Anyone know\n> if that's how COPY works? (For that matter, would that\n> also be true of a transaction consisting of a set of\n> inserts?)\n> \n\nThe table is not locked in either the copy or the insert case.\n\nKris Jurka\n",
"msg_date": "Mon, 7 Jun 2004 16:38:01 -0500 (EST)",
"msg_from": "Kris Jurka <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Using a COPY...FROM through JDBC?"
}
] |
[
{
"msg_contents": "Hello again,\n\nThis question is related to my previous one (Unused table of view, see\nhttp://archives.postgresql.org/pgsql-performance/2004-06/msg00043.php).\nFor the moment, I have queries where I join tables by hand. Since a\nfew tables are always joined together, I thought I could define a view\nto centralize this and make my queries more readable. But I then\nobserve a drop in performances on some queries because it seems the\nview is not \"broken\" by the planner, so some optimizations cannot\noccur anymore. Maybe this assertion is plain wrong, it's just my\nfeeling of the situation.\n\nI'm using postgresql 7.4.2 on Debian GNU/Linux.\n\nHere are the details of my tables, queries and views:\n\nCREATE TABLE pictures (\n\tPictureID serial PRIMARY KEY,\n\tRollID character varying(64) NOT NULL REFERENCES rolls,\n\tFrameID character varying(64) NOT NULL,\n\tDescription character varying(255),\n\tFilename character varying(255),\n\tOwner integer NOT NULL REFERENCES users,\n\tEntryDate datetime DEFAULT now(),\n\tDate datetime,\n\tNbClick integer DEFAULT 0,\n\tNbRates integer DEFAULT 0,\n\tMaxRate int2,\n\tMinRate int2,\n\tAverageRate float4 DEFAULT 5,\n\tSumRates integer DEFAULT 0);\n\n-- Each picture can belong to a number of topics\nCREATE TABLE topicscontent (\n\tTopicID integer REFERENCES topics ON DELETE cascade,\n\tPictureID integer REFERENCES pictures ON DELETE cascade,\n\tDirect boolean NOT NULL,\n\tPRIMARY KEY (TopicID,PictureID) );\n\n-- Each picture can be viewed by a number of groups\nCREATE TABLE permissions (\n\tGroupID integer NOT NULL REFERENCES groups ON DELETE cascade,\n\tPictureID integer NOT NULL REFERENCES pictures ON DELETE cascade,\n\tUNIQUE (GroupID, PictureID));\n\n-- Each user can belong to a number of groups\nCREATE TABLE groupsdef (\n\tUserID integer REFERENCES users,\n\tGroupID integer REFERENCES groups,\n\tPRIMARY KEY (UserID,GroupID));\n\n-- Each picture can have a number of keywords\nCREATE TABLE keywords (\n Type integer,\n PictureID integer NOT NULL REFERENCES pictures ON DELETE cascade,\n Value character varying(128) NOT NULL,\n\tUNIQUE (Type,PictureID,Value));\n\n\nWithout views, if I want all the picture with a keyword value of\n'laurent' that a user with ID of 2 can see, sorted by AverageRate:\n\nSELECT DISTINCT ON (AverageRate,PictureID) P.*\n FROM Pictures AS P, GroupsDef AS G, Permissions AS A, Keywords AS K\n WHERE P.PictureID=A.PictureID \n\t AND G.GroupID=A.GroupID \n\t AND K.Value in ('laurent') \n\t AND K.PictureID=P.PictureID \n\t AND UserID=2\n ORDER BY AverageRate,PictureID;\n\nQUERY PLAN \n\n Unique (cost=528.93..532.71 rows=504 width=97) (actual time=32.447..33.062 rows=274 loops=1)\n -> Sort (cost=528.93..530.19 rows=504 width=97) (actual time=32.443..32.590 rows=505 loops=1)\n Sort Key: p.averagerate, p.pictureid\n -> Hash Join (cost=297.36..506.31 rows=504 width=97) (actual time=12.495..29.312 rows=505 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Hash Join (cost=292.14..466.79 rows=900 width=101) (actual time=12.056..26.180 rows=750 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on permissions a (cost=0.00..125.05 rows=8305 width=8) (actual time=0.007..6.271 rows=8305 loops=1)\n -> Hash (cost=291.43..291.43 rows=285 width=101) (actual time=11.961..11.961 rows=0 loops=1)\n -> Hash Join (cost=110.26..291.43 rows=285 width=101) (actual time=6.378..11.573 rows=274 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on pictures p 
(cost=0.00..68.33 rows=2933 width=97) (actual time=0.007..2.426 rows=2933 loops=1)\n -> Hash (cost=109.55..109.55 rows=285 width=4) (actual time=6.163..6.163 rows=0 loops=1)\n -> Seq Scan on keywords k (cost=0.00..109.55 rows=285 width=4) (actual time=0.032..5.929 rows=274 loops=1)\n Filter: ((value)::text = 'laurent'::text)\n -> Hash (cost=5.19..5.19 rows=12 width=4) (actual time=0.217..0.217 rows=0 loops=1)\n -> Seq Scan on groupsdef g (cost=0.00..5.19 rows=12 width=4) (actual time=0.038..0.197 rows=11 loops=1)\n Filter: (userid = 2)\n Total runtime: 33.554 ms\n\nNow, if I use the following view to abstract access rights:\n\nCREATE VIEW userpictures (\n PictureID,RollID,FrameID,Description,Filename,\n Owner,EntryDate,Date,\n NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n UserID) \n AS SELECT DISTINCT ON (Permissions.PictureID,UserID) \n\t Pictures.PictureID,RollID,FrameID,Description,Filename,Owner,\n\t EntryDate,Date,NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n\t UserID \n\t FROM Permissions \n\t JOIN Groupsdef using (GroupID) \n\t JOIN pictures using (PictureID);\n\nThe query I use is:\n\nSELECT P.*\n FROM UserPictures AS P, Keywords AS K\n WHERE P.PictureID=K.PictureID \n\t AND K.Value in ('laurent') \n\t AND UserID=2\n ORDER BY AverageRate,PictureID;\n\nQUERY PLAN \n\n Sort (cost=1126.98..1128.54 rows=622 width=438) (actual time=142.053..142.132 rows=274 loops=1)\n Sort Key: p.averagerate, p.pictureid\n -> Merge Join (cost=995.75..1098.12 rows=622 width=438) (actual time=116.412..140.481 rows=274 loops=1)\n Merge Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Subquery Scan p (cost=874.58..955.99 rows=4652 width=438) (actual time=108.709..130.049 rows=2361 loops=1)\n -> Unique (cost=874.58..909.47 rows=4652 width=105) (actual time=108.685..119.661 rows=2361 loops=1)\n -> Sort (cost=874.58..886.21 rows=4652 width=105) (actual time=108.676..110.185 rows=4403 loops=1)\n Sort Key: permissions.pictureid, groupsdef.userid\n -> Hash Join (cost=388.35..591.19 rows=4652 width=105) (actual time=32.031..63.322 rows=5076 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on pictures (cost=0.00..68.33 rows=2933 width=97) (actual time=0.011..2.836 rows=2933 loops=1)\n -> Hash (cost=376.72..376.72 rows=4652 width=8) (actual time=31.777..31.777 rows=0 loops=1)\n -> Merge Join (cost=5.40..376.72 rows=4652 width=8) (actual time=0.285..27.805 rows=5076 loops=1)\n Merge Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Index Scan using permissions_groupid_key on permissions (cost=0.00..280.77 rows=8305 width=8) (actual time=0.031..13.018 rows=7633 loops=1)\n -> Sort (cost=5.40..5.43 rows=12 width=8) (actual time=0.237..1.762 rows=5078 loops=1)\n Sort Key: groupsdef.groupid\n -> Seq Scan on groupsdef (cost=0.00..5.19 rows=12 width=8) (actual time=0.034..0.203 rows=11 loops=1)\n Filter: (userid = 2)\n -> Sort (cost=121.17..121.88 rows=285 width=4) (actual time=6.987..7.065 rows=274 loops=1)\n Sort Key: k.pictureid\n -> Seq Scan on keywords k (cost=0.00..109.55 rows=285 width=4) (actual time=0.056..6.656 rows=274 loops=1)\n Filter: ((value)::text = 'laurent'::text)\n Total runtime: 144.045 ms\n\nTo me the only difference between the queries is that second one\nincludes a UserID column. One strange thing is that width from 105\njumps to 438 during \"Subquery Scan p\".\n\nI think it's because of DISTINCT ON in the view. 
If I use this view:\n\n\nCREATE VIEW userpictures (\n PictureID,RollID,FrameID,Description,Filename,\n Owner,EntryDate,Date,\n NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n UserID) \n AS SELECT Pictures.PictureID,RollID,FrameID,Description,Filename,Owner,\n\t EntryDate,Date,NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n\t UserID \n\t FROM Permissions \n\t JOIN Groupsdef using (GroupID) \n\t JOIN pictures using (PictureID);\n\nand this query:\n\nSELECT DISTINCT ON (AverageRate,PictureID) P.*\n FROM UserPictures AS P, Keywords AS K\n WHERE P.PictureID=K.PictureID \n\t AND K.Value in ('laurent') \n\t AND UserID=2\n ORDER BY AverageRate,PictureID;\n\nThe result is similar to the query without the view:\n\nQUERY PLAN \n\n Unique (cost=525.39..529.17 rows=504 width=101) (actual time=34.689..35.287 rows=274 loops=1)\n -> Sort (cost=525.39..526.65 rows=504 width=101) (actual time=34.686..34.828 rows=505 loops=1)\n Sort Key: pictures.averagerate, pictures.pictureid\n -> Hash Join (cost=297.36..502.76 rows=504 width=101) (actual time=11.786..31.739 rows=505 loops=1)\n Hash Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Hash Join (cost=292.14..466.79 rows=807 width=101) (actual time=11.409..28.389 rows=750 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on permissions (cost=0.00..125.05 rows=8305 width=8) (actual time=0.004..9.177 rows=8305 loops=1)\n -> Hash (cost=291.43..291.43 rows=285 width=101) (actual time=11.319..11.319 rows=0 loops=1)\n -> Hash Join (cost=110.26..291.43 rows=285 width=101) (actual time=5.919..10.961 rows=274 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on pictures (cost=0.00..68.33 rows=2933 width=97) (actual time=0.004..2.258 rows=2933 loops=1)\n -> Hash (cost=109.55..109.55 rows=285 width=4) (actual time=5.700..5.700 rows=0 loops=1)\n -> Seq Scan on keywords k (cost=0.00..109.55 rows=285 width=4) (actual time=0.029..5.471 rows=274 loops=1)\n Filter: ((value)::text = 'laurent'::text)\n -> Hash (cost=5.19..5.19 rows=12 width=8) (actual time=0.198..0.198 rows=0 loops=1)\n -> Seq Scan on groupsdef (cost=0.00..5.19 rows=12 width=8) (actual time=0.031..0.178 rows=11 loops=1)\n Filter: (userid = 2)\n Total runtime: 35.657 ms\n\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n",
"msg_date": "Sun, 06 Jun 2004 13:08:43 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query involving views"
},
{
"msg_contents": "Laurent Martelli <[email protected]> writes:\n> Now, if I use the following view to abstract access rights:\n\n> CREATE VIEW userpictures (\n> PictureID,RollID,FrameID,Description,Filename,\n> Owner,EntryDate,Date,\n> NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n> UserID) \n> AS SELECT DISTINCT ON (Permissions.PictureID,UserID) \n> \t Pictures.PictureID,RollID,FrameID,Description,Filename,Owner,\n> \t EntryDate,Date,NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n> \t UserID \n> \t FROM Permissions \n> \t JOIN Groupsdef using (GroupID) \n> \t JOIN pictures using (PictureID);\n\n> [ performance sucks ]\n\nFind a way to get rid of the DISTINCT ON. That's essentially an\noptimization fence. Worse, the way you are using it here, it doesn't\neven give well-defined results, since there's no ORDER BY constraining\nwhich row will be selected out of a set of duplicates. (I think it may\nnot matter to you, since you don't really care which groupsdef row is\nselected, but in general a view constructed like this is broken.)\n\nIt might work to do the view as\n\nSELECT ... all that stuff ...\nFROM pictures p, users u\nWHERE\n EXISTS (SELECT 1 FROM permissions prm, groupsdef g\n WHERE p.pictureid = prm.pictureid AND prm.groupid = g.groupid\n AND g.userid = u.userid);\n\nI'm not sure offhand about the performance properties of this either,\nbut it would be worth trying.\n\nA cruder answer is just to accept that the view may give you multiple\nhits, and put the DISTINCT in the top-level query.\n\nI think though that in the long run you're going to need to rethink this\nrepresentation of permissions. It's nice and simple but it's not going\nto scale well. Even your \"fast\" query is going to look like a dog once\nyou get to many thousands of permission entries.\n\nIt might work to maintain a derived table (basically a materialized\nview) of the form (userid, groupid, pictureid) signifying that a user\ncan access a picture through membership in a group. Put a nonunique\nindex on (userid, pictureid) on it. This could then drive the EXISTS\ntest efficiently.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 06 Jun 2004 12:22:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query involving views "
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <[email protected]> writes:\n\n Tom> Laurent Martelli <[email protected]> writes:\n >> Now, if I use the following view to abstract access rights:\n\n >> CREATE VIEW userpictures (\n >> PictureID,RollID,FrameID,Description,Filename,\n >> Owner,EntryDate,Date,\n >> NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates, UserID) AS\n >> SELECT DISTINCT ON (Permissions.PictureID,UserID)\n >> Pictures.PictureID,RollID,FrameID,Description,Filename,Owner,\n >> EntryDate,Date,NbClick,NbRates,MaxRate,MinRate,AverageRate,SumRates,\n >> UserID FROM Permissions JOIN Groupsdef using (GroupID) JOIN\n >> pictures using (PictureID);\n\n >> [ performance sucks ]\n\n Tom> Find a way to get rid of the DISTINCT ON. That's essentially\n Tom> an optimization fence. Worse, the way you are using it here,\n Tom> it doesn't even give well-defined results, since there's no\n Tom> ORDER BY constraining which row will be selected out of a set\n Tom> of duplicates. (I think it may not matter to you, since you\n Tom> don't really care which groupsdef row is selected, \n\nThat's true. I do not use columns from groupsdef in the end. \n\n Tom> but in general a view constructed like this is broken.)\n\n Tom> It might work to do the view as\n\n Tom> SELECT ... all that stuff ... FROM pictures p, users u WHERE\n Tom> EXISTS (SELECT 1 FROM permissions prm, groupsdef g WHERE\n Tom> p.pictureid = prm.pictureid AND prm.groupid = g.groupid AND\n Tom> g.userid = u.userid);\n\n Tom> I'm not sure offhand about the performance properties of this\n Tom> either, but it would be worth trying.\n\nThis one does not yield very good performance. In fact, the best\nperformances I have is when I use a where clause like this one:\n\n WHERE PictureID IN \n (SELECT PictureID FROM permissions JOIN groupsdef USING(GroupID) \n WHERE groupsdef.UserID=2)\n\nBut it's not as elegant to write as the initial view using \"distinct\non\". 
I could create a view like this:\n\n CREATE VIEW userpictures (PictureID,UserID) \n AS SELECT pictureid,userid \n FROM permissions JOIN groupsdef USING(GroupID)\n\nand then do queries like this:\n\n SELECT * FROM pictures \n WHERE PictureID IN (SELECT PictureID FROM userpictures WHERE UserID=2)\n\nbut it's stillnot as elegant as \n\n SELECT * FROM userpictures WHERE UserID=2 \n\nI think I'll try a function: \n\nCREATE FUNCTION picturesID(int) RETURNS SETOF int AS '\n SELECT PictureID FROM permissions JOIN groupsdef USING(GroupID)\n WHERE groupsdef.UserID=$1\n' LANGUAGE sql;\n\nSELECT * FROM pictures WHERE PictureID IN (select * from picturesID(2));\n\nHere's something funny: using a function seems gives slihtly better results\nthan inlining the query (I did a dozen of runs and the timings were consistent):\n\nSELECT * FROM pictures WHERE PictureID IN (select * from picturesID(2));\nQUERY PLAN \n Hash Join (cost=15.50..100.49 rows=200 width=97) (actual time=28.609..46.568 rows=2906 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".picturesid)\n -> Seq Scan on pictures (cost=0.00..68.33 rows=2933 width=97) (actual time=0.018..2.610 rows=2933 loops=1)\n -> Hash (cost=15.00..15.00 rows=200 width=4) (actual time=28.467..28.467 rows=0 loops=1)\n -> HashAggregate (cost=15.00..15.00 rows=200 width=4) (actual time=23.698..26.201 rows=2906 loops=1)\n -> Function Scan on picturesid (cost=0.00..12.50 rows=1000 width=4) (actual time=16.202..19.952 rows=5076 loops=1)\n Total runtime: 48.601 ms\n\nSELECT * FROM pictures WHERE PictureID IN (\n SELECT PictureID FROM permissions JOIN groupsdef USING(GroupID) \n WHERE groupsdef.UserID=2);\nQUERY PLAN \n\n Hash Join (cost=394.93..504.24 rows=2632 width=97) (actual time=35.770..53.574 rows=2906 loops=1)\n Hash Cond: (\"outer\".pictureid = \"inner\".pictureid)\n -> Seq Scan on pictures (cost=0.00..68.33 rows=2933 width=97) (actual time=0.014..2.543 rows=2933 loops=1)\n -> Hash (cost=388.35..388.35 rows=2632 width=4) (actual time=35.626..35.626 rows=0 loops=1)\n -> HashAggregate (cost=388.35..388.35 rows=2632 width=4) (actual time=30.988..33.502 rows=2906 loops=1)\n -> Merge Join (cost=5.40..376.72 rows=4652 width=4) (actual time=0.247..26.628 rows=5076 loops=1)\n Merge Cond: (\"outer\".groupid = \"inner\".groupid)\n -> Index Scan using permissions_groupid_key on permissions (cost=0.00..280.77 rows=8305 width=8) (actual time=0.031..11.629 rows=7633 loops=1)\n -> Sort (cost=5.40..5.43 rows=12 width=4) (actual time=0.207..1.720 rows=5078 loops=1)\n Sort Key: groupsdef.groupid\n -> Seq Scan on groupsdef (cost=0.00..5.19 rows=12 width=4) (actual time=0.030..0.182 rows=11 loops=1)\n Filter: (userid = 2)\n Total runtime: 54.748 ms\n\n Tom> A cruder answer is just to accept that the view may give you\n Tom> multiple hits, and put the DISTINCT in the top-level query.\n\nI thought of that. But it has the drawback that if you use an ORDER\nBY, you must have the same columns in the DISTINCT. \n\n Tom> I think though that in the long run you're going to need to\n Tom> rethink this representation of permissions. It's nice and\n Tom> simple but it's not going to scale well. Even your \"fast\"\n Tom> query is going to look like a dog once you get to many\n Tom> thousands of permission entries.\n\n Tom> It might work to maintain a derived table (basically a\n Tom> materialized view) of the form (userid, groupid, pictureid)\n Tom> signifying that a user can access a picture through membership\n Tom> in a group. Put a nonunique index on (userid, pictureid) on\n Tom> it. 
This could then drive the EXISTS test efficiently.\n\nI'll probably do that if perf goes down when the database grows\nbigger.\n\nThanks for all the advice.\n\nBest regards,\nLaurent\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n",
"msg_date": "Sun, 06 Jun 2004 23:12:07 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query involving views"
}
] |
[
{
"msg_contents": "\nWhy is my name on a mail from Tom Lane ? Really, he knows a *lot* more than I and should get due credit.\n\nSeriously, is this the peformance remailer mangling something ?\n\nGreg Williamson\n(the real one)\n\n-----Original Message-----\nFrom:\tGregory S. Williamson\nSent:\tSun 6/6/2004 10:46 PM\nTo:\tSean Shanny\nCc:\[email protected]\nSubject:\tRe: [PERFORM] General performance questions about postgres on Apple\nIn-reply-to: <[email protected]> \nReferences: <[email protected]>\n\t<[email protected]> <[email protected]>\n\t<[email protected]>\nComments: In-reply-to Sean Shanny <[email protected]>message\n\tdated \"Sun, 22 Feb 2004 21:48:54 -0500\"\nDate: Sun, 22 Feb 2004 22:24:29 -0500\nMessage-ID: <[email protected]>\nFrom: Tom Lane <[email protected]>\nX-Virus-Scanned: by amavisd-new at postgresql.org\nX-Mailing-List: pgsql-performance\nPrecedence: bulk\nSender: [email protected]\nX-imss-version: 2.5\nX-imss-result: Passed\nX-imss-scores: Clean:99.90000 C:21 M:2 S:5 R:5\nX-imss-settings: Baseline:2 C:2 M:2 S:2 R:2 (0.1500 0.3000)\nReturn-Path: [email protected]\nX-OriginalArrivalTime: 07 Jun 2004 05:27:21.0994 (UTC) FILETIME=[1BC0EEA0:01C44C50]\n\nSean Shanny <[email protected]> writes:\n> We have the following setting for random page cost:\n> random_page_cost = 1 # units are one sequential page fetch cost\n> Any suggestions on what to bump it up to?\n\nWell, the default setting is 4 ... what measurements prompted you to\nreduce it to 1? The particular example you showed suggested that the\ntrue value on your setup might be 10 or more.\n\nNow I would definitely not suggest that you settle on any particular\nvalue based on only one test case. You need to try to determine an\nappropriate average value, bearing in mind that there's likely to be\nlots of noise in any particular measurement.\n\nBut in general, setting random_page_cost to 1 is only reasonable when\nyou are dealing with a fully-cached-in-RAM database, which yours isn't.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n\n\n",
"msg_date": "Sun, 6 Jun 2004 22:54:00 -0700",
"msg_from": "\"Gregory S. Williamson\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: General performance questions about postgres on Apple"
}
] |
[
{
"msg_contents": "Dear Gurus,\n\nPlease feel free to show me to the archives if my question has already been\nanswered.\n\nLast week I fumbled with CPU_TUPLE_COST and revealed that 4 out of 4 tested\nqueries improved by 10-60% if I changed it from 0.01 (default) to 0.40\n(ugh). Setting it higher did not bring any improvement.\n\n%----------------------- cut here -----------------------%\nQUESTION1: is there a (theoretical or practical) relation between this one\nand the other cpu costs? Should I also increase those values by the same\nrate and find a balance that way?\n\nAs far as I can guess, there should be a linear relation, i.e.\ncpu_tuple_cost:cpu_index_tuple_cost:cpu_operator_cost should be a constant\nratio, but then again, I suspect there is a cause that they have separate\nentries in the config file ;)\n\n\n%----------------------- cut here -----------------------%\nThe queries were, or contained, something like:\n\n SELECT s.qty FROM a, s WHERE a.id = s.a_id AND a.b_id = 1234;\n\nwhere\n* \"a\" and \"s\" are in 1:N relation,\n* \"b\" and \"a\" are in 1:N relation,\n* a.id is pkey in \"a\" and b.id is pkey in \"b\".\n\nThese queries usually return up to 6-10% of the tuples in s (about 16k of\n220k) and the planner chose seq scans on s. Disabling seq scan and some\nother things finally brought up a plan containing index scans that improved\ntwo queries. (I tested the other two after I found out the solution of\nthese, to see if they improve or get worse)\n\nAlso noticed that the largest gain was from the smallest change on\ncpu_tuple_cost: the query with the largest improvement (to 32% of orig time)\nchose the better plan from 0.03, but the other one improved (to 79%) only if\nset cpu_tuple_cost to 0.40 or higher.\n\n\n%----------------------- cut here -----------------------%\nQUESTION2: am I right setting cpu_tuple_cost, or may there be another cause\nof poor plan selection? Also tried lowering random_page_cost, but even 1.0\ndidn't yield any improvement.\n\n\n%----------------------- cut here -----------------------%\nCONFIGURATION: PostgreSQL 7.3.4, IBM Xeon 2x2.4GHz HT, 5x36GB 10krpm HW\nRAID-5.\n\nWe found out quite early that random page cost is quite low (now we have it\nat 1.5-- maybe it's still to high) and it's true that tasks that require raw\ncpu power aren't very much faster than PIII-800. Unfortunately I can't test\nthe same hw on 7.4 yet, since it's a production server.\n\nTIA,\nG.\n%----------------------- cut here -----------------------%\n\\end\n\n",
"msg_date": "Mon, 7 Jun 2004 09:36:19 +0200",
"msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Relation of cpu_*_costs?"
},
{
"msg_contents": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]> writes:\n> Last week I fumbled with CPU_TUPLE_COST and revealed that 4 out of 4 tested\n> queries improved by 10-60% if I changed it from 0.01 (default) to 0.40\n> (ugh). Setting it higher did not bring any improvement.\n\nThat's pretty hard to believe; particularly on modern machines, I'd\nthink that moving it down would make more sense than moving it up.\nYou're essentially asserting that the CPU time to process one tuple\nis almost half of the time needed to bring a page in from disk.\n\nI suspect that your test cases were toy cases small enough to be\nfully cached and thus not incur any actual I/O ...\n\n> [ trying to get a nestloop indexscan plan to be generated ]\n\nI believe that the planner's cost model for nestloops with inner\nindexscan is wrong: it costs each inner iteration independently, when\nin fact there should be some savings, because at least the topmost\nlevels of the index will soon be fully cached. However, when I tried\nto work out a proper model of this effect, I ended up with equations\nthat gave higher indexscan costs than what's in there now :-(. So that\ndidn't seem like it would make anyone happy.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 2004 09:51:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Relation of cpu_*_costs? "
},
{
"msg_contents": "Dear Tom,\n\nThanks for your response.\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nSent: Monday, June 07, 2004 3:51 PM\n\n\n> That's pretty hard to believe; particularly on modern machines, I'd\n> think that moving it down would make more sense than moving it up.\n> You're essentially asserting that the CPU time to process one tuple\n> is almost half of the time needed to bring a page in from disk.\n\nThat is exactly what I had in mind. We found that 5x10krpm HW RAID 5 array\nblazing fast, while we were really disappointed about CPU. E.g.\n* tar'ing 600MB took seconds; gzip'ing it took minutes.\n* initdb ran so fast that I didn't have time to hit Ctrl+C because\n I forgot a switch ;)\n* dumping the DB in or out was far faster than adddepend between 7.2 and 7.3\n* iirc index scans returning ~26k rows of ~64k were faster than seq scan.\n (most suspicious case of disk cache)\n\nBut whatever is the case with my hardware -- could you tell me something\n(even a search keyword ;) ) about my theoretical question: i.e. relation of\ncpu_*_costs?\n\n> I suspect that your test cases were toy cases small enough to be\n> fully cached and thus not incur any actual I/O ...\n\nDunno. The server has 1GB RAM; full DB is ~100MB; largest query was ~7k\nwhich moved at least 2 tables of >200k rows and several smaller ones. If it\nis a \"toy case\" for such hw, I humbly accept your opinion.\n\nBTW its runtime improved from 53 to 48 sec -- all due to changing cpu tuple\ncost. I ran the query at different costs, in fast succession:\n\nrun cost sec\n #1 0.01 53\n #2 0.4 50\n #3 1.0 48\n #4 1.0 48\n #5 0.4 48\n #6 0.01 53\n\nFor the second result, I'd say disk cache, yes-- but what about the last\nresult? It's all the same as the first one. Must have been some plan change\n(I can send the exp-ana results if you wish)\n\nG.\n%----------------------- cut here -----------------------%\n\\end\n\n",
"msg_date": "Mon, 7 Jun 2004 19:19:06 +0200",
"msg_from": "\"=?iso-8859-2?B?U1rbQ1MgR+Fib3I=?=\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Relation of cpu_*_costs?"
}
] |
[
{
"msg_contents": "Hi all,\n\nI've been dealing with a problem for the past two days\nwhere a certain sql statement works 2 out of 5 times, and\nthe other 3 times, it causes the machine (quad Xeon 2.8GHz\n+ 792543232 bytes mem, linux kernel 2.4.26-custom, pg ver 7.3.4)\nto slow down, and finally grind to a halt. It looks like postgres\ngets itself into an insane loop, because no matter how much\nshared memory I give it, it uses it all, and then\nthe kernel starts swapping. \n\nI'm pretty sure it's not the kernel, because I've tried four different \n2.4.2* stable kernels, and the same happens.\n\nI've attached the query, and the functions used inside the query,\nas well as the table structure and an explain. (I haven't been\nable to get explain analyze)\n\nIt seems that when I replace the functions used in the query,\nwith the actual values returned by them (one date in each case),\nthe query runs in 10 seconds.\n\nI did vacuum analyze, and reindex seemed to work at one\nstage, but now it doesn't anymore. \n\nIs there some limitation in using functions that I do not know \nabout, or is it a bug? \n\n(It seems to be hanging on the max_fpp('''')\nfunction call from inside the fpp_max_ms() function.)\n\nPlease help.\n\nKind Regards\nStefan",
"msg_date": "Mon, 7 Jun 2004 13:34:15 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres function use makes machine crash."
},
{
"msg_contents": "Stef <[email protected]> writes:\n> I've been dealing with a problem for the past two days\n> where a certain sql statement works 2 out of 5 times, and\n> the other 3 times, it causes the machine (quad Xeon 2.8GHz\n> + 792543232 bytes mem, linux kernel 2.4.26-custom, pg ver 7.3.4)\n> to slow down, and finally grind to a halt.\n\nIIRC, PG prior to 7.4 had some problems with memory leaks in repeated\nexecution of SQL-language functions ... and your query sure looks like\nit's going to be doing a lot of repeated execution of those functions.\n\nPlease try it on 7.4.2 and see if you still have a problem.\n\nIt seems somewhat interesting that you see the problem only sometimes\nand not every time, but there's not much point in investigating further\nif it turns out the problem is already fixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Jun 2004 11:11:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres function use makes machine crash. "
},
{
"msg_contents": "Tom Lane mentioned :\n=> Please try it on 7.4.2 and see if you still have a problem.\n\nWill do, and I'll post the results\n\nThanks!\n",
"msg_date": "Mon, 7 Jun 2004 17:34:53 +0200",
"msg_from": "Stef <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres function use makes machine crash."
}
] |
[
{
"msg_contents": "Hi all,\n\nI am optimizing some code that does a lot of iterative selects and \ninserts within loops. Because of the exception handling limitations in \npostgres and with no ability to twiddle autocommit, just about every \noperation is standalone.\n\nover 5000 odd lines this gets very slow (5-10 minutes including processing).\n\nIn seeking to speed it up I am PREPARing the most common inserts and \nselects. I have a few operations already inside plpgsql functions. \nEXECUTE means something different within a plpgsql funtion, so I am \nwondering if there is a way to execute a pre-prepared query inside a \nfunction.\n\nOr is this even necessary - are queries within plpgsql functions \nautomatically prepared when the function is first compiled? On a similar \nnote, is there any benefit in PREPAREing a select from a plpgsql function?\n\nOr does anyone have any smart ways to turn off autocommit? (I have \nalready played with commit_delay and commit_siblings).\n\nMy empirical testing has proven inconclusive (other than turning off \nfsync which makes a huge difference, but not possible on the live \nsystem, or using a fat copmaq raid card).\n\nThanks for any help,\n\nMark.\n\n-- \nMark Aufflick\n e: [email protected]\n w: www.pumptheory.com (business)\n w: mark.aufflick.com (personal)\n p: +61 438 700 647\n\n",
"msg_date": "Tue, 08 Jun 2004 01:52:14 +1000",
"msg_from": "Mark Aufflick <[email protected]>",
"msg_from_op": true,
"msg_subject": "PREPAREing statements versus compiling PLPGSQL"
}
] |
[
{
"msg_contents": "Hello,\n\n I have an instance where I have a series of pl/pgsql calls, that report stat \nresults to a common table. When other queries try to hit the stat table \n(with DML commands; SELECT, INSERT, UPDATE, DELETE etc.) they are forced to \nwait in a queue until the pl/pgsql has finished executing. \n\nwill:\n\n SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; \n\nbefore these DML queries eliminate the locking?\n\n\n-- \nmarcus whitney\n\nchief architect : cold feet creative\n\nwww.coldfeetcreative.com\n\n800.595.4401\n \n\n\ncold feet presents emma\n\nemail marketing for discriminating\n\norganizations everywhere\n\nvisit www.myemma.com\n",
"msg_date": "Mon, 7 Jun 2004 11:31:46 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "pl/pgsql and Transaction Isolation"
},
{
"msg_contents": "Marcus Whitney <[email protected]> writes:\n> I have an instance where I have a series of pl/pgsql calls, that report stat \n> results to a common table. When other queries try to hit the stat table \n> (with DML commands; SELECT, INSERT, UPDATE, DELETE etc.) they are forced to \n> wait in a queue until the pl/pgsql has finished executing. \n\nThis is quite hard to believe, unless your pl/pgsql is doing something\nas unfriendly as LOCKing the table.\n\nDo you want to post a more complete description of your problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jun 2004 00:29:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pl/pgsql and Transaction Isolation "
},
{
"msg_contents": "You were right not to believe it. I had an update on a whole table that I \nhadn't noticed. Thanks for debunking.\n\nOn Monday 07 June 2004 23:29, you wrote:\n> Marcus Whitney <[email protected]> writes:\n> > I have an instance where I have a series of pl/pgsql calls, that report\n> > stat results to a common table. When other queries try to hit the stat\n> > table (with DML commands; SELECT, INSERT, UPDATE, DELETE etc.) they are\n> > forced to wait in a queue until the pl/pgsql has finished executing.\n>\n> This is quite hard to believe, unless your pl/pgsql is doing something\n> as unfriendly as LOCKing the table.\n>\n> Do you want to post a more complete description of your problem?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \nmarcus whitney\n",
"msg_date": "Tue, 8 Jun 2004 09:19:44 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pl/pgsql and Transaction Isolation"
}
] |
[
{
"msg_contents": "Right now, I am having trouble getting the planner to optimize queries\nin the form of \n\nselect t.key, t.field from t a\n where \n (\n select count(*) from t b\n where b.field > a.field\n ) = k\n\nThe subplan (either index or seq. scan) executes once for each row in t,\nwhich of course takes forever.\n\nThis query is a way of achieving LIMIT type results (substitute n-1\ndesired rows for k) using standard SQL, which is desirable in some\ncircumstances. Is it theoretically possible for this to be optimized?\n\nMerlin\n\n",
"msg_date": "Mon, 7 Jun 2004 13:30:29 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "is it possible to for the planner to optimize this form?"
},
{
"msg_contents": "Merlin,\n\n> select t.key, t.field from t a\n> where\n> (\n> select count(*) from t b\n> where b.field > a.field\n> ) = k\n>\n> The subplan (either index or seq. scan) executes once for each row in t,\n> which of course takes forever.\n>\n> This query is a way of achieving LIMIT type results (substitute n-1\n> desired rows for k) using standard SQL, which is desirable in some\n> circumstances. Is it theoretically possible for this to be optimized?\n\nI don't think so, no. PostgreSQL does have some issues using indexes for \ncount() queires which makes the situation worse. However, with the query \nyou presented, I don't see any way around the planner executing the subquery \nonce for every row in t.\n\nExcept, of course, for some kind of scheme involving materialized views, if \nyou don't need up-to-the minute data. In that case, you could store in a \ntable the count(*)s of t for each threshold value of b.field. But, \ndynamically, that would be even slower.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 7 Jun 2004 13:31:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: is it possible to for the planner to optimize this form?"
}
] |
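For comparison, a small sketch of what the correlated-subquery form computes next to the LIMIT form it emulates; the table and column names are the ones from Merlin's example, the constant 5 is only illustrative, and ties in t.field can make the two forms differ slightly.

-- Standard SQL: rows with fewer than 5 larger values, i.e. the top 5 by t.field;
-- the subquery runs once per row of t.
SELECT a.key, a.field
FROM t a
WHERE (SELECT count(*) FROM t b WHERE b.field > a.field) < 5
ORDER BY a.field DESC;

-- PostgreSQL-specific equivalent; with an index on t(field) this is a single
-- backward index scan instead of one subquery execution per row.
SELECT key, field
FROM t
ORDER BY field DESC
LIMIT 5;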
[
{
"msg_contents": "A production system has had a query recently degrade in performance. \nWhat once took < 1s now takes over 1s. I have tracked down the \nproblem to a working example.\n\nCompare http://rafb.net/paste/results/itZIx891.html\nwith http://rafb.net/paste/results/fbUTNF95.html\n\nThe first shows the query as is, without much change (actually, this \nquery is nested within a larger query, but it demonstrates the \nproblem). The query time is about 1 second.\n\nIn the second URL, a \"SET ENABLE_SEQSCAN TO OFF;\" is done, and the \ntime drops to 151ms, which is acceptable.\n\nWhat I don't understand is why the ports table is scanned in the \nfirst place. Clues please?\n-- \nDan Langille : http://www.langille.org/\nBSDCan - http://www.bsdcan.org/\n\n",
"msg_date": "Mon, 07 Jun 2004 15:45:42 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "seq scan woes"
},
{
"msg_contents": "On Mon, 2004-06-07 at 15:45, Dan Langille wrote:\n> A production system has had a query recently degrade in performance. \n> What once took < 1s now takes over 1s. I have tracked down the \n> problem to a working example.\n\nWhat changes have you made to postgresql.conf?\n\nCould you send explain analyse again with SEQ_SCAN enabled but with\nnested loops disabled?\n\nOff the cuff? I might hazard a guess that effective_cache is too low or\nrandom_page_cost is a touch too high. Probably the former.\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc\n\n",
"msg_date": "Mon, 07 Jun 2004 16:00:28 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan woes"
},
{
"msg_contents": "On 7 Jun 2004 at 16:00, Rod Taylor wrote:\n\n> On Mon, 2004-06-07 at 15:45, Dan Langille wrote:\n> > A production system has had a query recently degrade in performance. \n> > What once took < 1s now takes over 1s. I have tracked down the \n> > problem to a working example.\n> \n> What changes have you made to postgresql.conf?\n\nNothing recently (ie. past few months). Nothing at all really. \nPerhaps I need to start tuning that.\n\n> Could you send explain analyse again with SEQ_SCAN enabled but with\n> nested loops disabled?\n\nSee http://rafb.net/paste/results/zpJEvb28.html\n\n13s\n\n> Off the cuff? I might hazard a guess that effective_cache is too low or\n> random_page_cost is a touch too high. Probably the former.\n\nI grep'd postgresql.conf:\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n\nNOTE: both above are commented out.\n\nThank you\n-- \nDan Langille : http://www.langille.org/\nBSDCan - http://www.bsdcan.org/\n\n",
"msg_date": "Mon, 07 Jun 2004 16:12:40 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seq scan woes"
},
{
"msg_contents": "On Mon, 2004-06-07 at 16:12, Dan Langille wrote:\n> On 7 Jun 2004 at 16:00, Rod Taylor wrote:\n> \n> > On Mon, 2004-06-07 at 15:45, Dan Langille wrote:\n> > > A production system has had a query recently degrade in performance. \n> > > What once took < 1s now takes over 1s. I have tracked down the \n> > > problem to a working example.\n> > \n> > What changes have you made to postgresql.conf?\n> \n> Nothing recently (ie. past few months). Nothing at all really. \n> Perhaps I need to start tuning that.\n> \n> > Could you send explain analyse again with SEQ_SCAN enabled but with\n> > nested loops disabled?\n> \n> See http://rafb.net/paste/results/zpJEvb28.html\n\nThis doesn't appear to be the same query as we were shown earlier.\n\n> > Off the cuff? I might hazard a guess that effective_cache is too low or\n> > random_page_cost is a touch too high. Probably the former.\n> \n> I grep'd postgresql.conf:\n> \n> #effective_cache_size = 1000 # typically 8KB each\n> #random_page_cost = 4 # units are one sequential page fetch cost\n\nThis would be the issue. You haven't told PostgreSQL anything about your\nhardware. The defaults are somewhat modest.\n\nhttp://www.postgresql.org/docs/7.4/static/runtime-config.html\n\nSkim through the run-time configuration parameters that can be set in\npostgresql.conf.\n\nPay particular attention to:\n * shared_buffers (you may be best with 2000 or 4000)\n * effective_cache_size (set to 50% of ram size if dedicated db\n machine)\n * random_page_cost (good disks will bring this down to a 2 from a\n 4)\n\t\n\n-- \nRod Taylor <rbt [at] rbt [dot] ca>\n\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\nPGP Key: http://www.rbt.ca/signature.asc\n\n",
"msg_date": "Mon, 07 Jun 2004 16:38:18 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: seq scan woes"
},
{
"msg_contents": "On 7 Jun 2004 at 16:38, Rod Taylor wrote:\n\n> On Mon, 2004-06-07 at 16:12, Dan Langille wrote:\n> > On 7 Jun 2004 at 16:00, Rod Taylor wrote:\n> > \n> > > On Mon, 2004-06-07 at 15:45, Dan Langille wrote:\n> > > > A production system has had a query recently degrade in performance. \n> > > > What once took < 1s now takes over 1s. I have tracked down the \n> > > > problem to a working example.\n> > > \n> > > What changes have you made to postgresql.conf?\n> > \n> > Nothing recently (ie. past few months). Nothing at all really. \n> > Perhaps I need to start tuning that.\n> > \n> > > Could you send explain analyse again with SEQ_SCAN enabled but with\n> > > nested loops disabled?\n> > \n> > See http://rafb.net/paste/results/zpJEvb28.html\n> \n> This doesn't appear to be the same query as we were shown earlier.\n\nMy apologies. I should try to cook dinner and paste at the same time. \n ;)\n\nhttp://rafb.net/paste/results/rVr3To35.html is the right query.\n\n> > > Off the cuff? I might hazard a guess that effective_cache is too low or\n> > > random_page_cost is a touch too high. Probably the former.\n> > \n> > I grep'd postgresql.conf:\n> > \n> > #effective_cache_size = 1000 # typically 8KB each\n> > #random_page_cost = 4 # units are one sequential page fetch cost\n> \n> This would be the issue. You haven't told PostgreSQL anything about your\n> hardware. The defaults are somewhat modest.\n> \n> http://www.postgresql.org/docs/7.4/static/runtime-config.html\n> \n> Skim through the run-time configuration parameters that can be set in\n> postgresql.conf.\n> \n> Pay particular attention to:\n> * shared_buffers (you may be best with 2000 or 4000)\n> * effective_cache_size (set to 50% of ram size if dedicated db\n> machine)\n> * random_page_cost (good disks will bring this down to a 2 from a\n> 4)\n\nI'll have a play with that and report back.\n\nThanks.\n-- \nDan Langille : http://www.langille.org/\nBSDCan - http://www.bsdcan.org/\n\n",
"msg_date": "Mon, 07 Jun 2004 17:02:36 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seq scan woes"
},
{
"msg_contents": "On 7 Jun 2004 at 16:38, Rod Taylor wrote:\n\n> On Mon, 2004-06-07 at 16:12, Dan Langille wrote:\n> > I grep'd postgresql.conf:\n> > \n> > #effective_cache_size = 1000 # typically 8KB each\n> > #random_page_cost = 4 # units are one sequential page fetch cost\n> \n> This would be the issue. You haven't told PostgreSQL anything about your\n> hardware. The defaults are somewhat modest.\n> \n> http://www.postgresql.org/docs/7.4/static/runtime-config.html\n> \n> Skim through the run-time configuration parameters that can be set in\n> postgresql.conf.\n> \n> Pay particular attention to:\n> * shared_buffers (you may be best with 2000 or 4000)\n\nI do remember increasing this in the past. It was now at 1000 and is \nnow at 2000.\n\nsee http://rafb.net/paste/results/VbXQcZ87.html\n\n> * effective_cache_size (set to 50% of ram size if dedicated db\n> machine)\n\nThe machine has 512MB RAM. effective_cache_size was at 1000. So \nlet's try a 256MB cache. Does that the match a 32000 setting? I \ntried it. The query went to 1.5s. At 8000, the query was 1s. At \n2000, the query was about 950ms.\n\nThis machine is a webserver/database/mail server, but the FreshPorts \ndatabase is by far its biggest task. \n\n> * random_page_cost (good disks will bring this down to a 2 from a\n> 4)\n\nI've got mine set at 4. Increasing it to 6 gave me a 1971ms query. \nAt 3, it was a 995ms. Setting it to 2 gave me a 153ms query.\n\nHow interesting.\n\nFor camparison, I reset shared_buffers and effective_cache_size back \nto their original value (both at 1000). This gave me a 130-140ms \nquery.\n\nThe disks in question is:\n\nad0: 19623MB <IC35L020AVER07-0> [39870/16/63] at ata0-master UDMA100\n\nI guess that might be this disk: \nhttp://www.harddrives4less.com/ibmdes6020ua2.html\n\nI invite comments upon my findings.\n\nRod: thanks for the suggestions.\n\n\n\n\n> \t\n> \n> -- \n> Rod Taylor <rbt [at] rbt [dot] ca>\n> \n> Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n> PGP Key: http://www.rbt.ca/signature.asc\n> \n> \n\n\n-- \nDan Langille : http://www.langille.org/\nBSDCan - http://www.bsdcan.org/\n\n",
"msg_date": "Mon, 07 Jun 2004 18:49:46 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seq scan woes"
},
{
"msg_contents": "On 7 Jun 2004 at 18:49, Dan Langille wrote:\n\n> On 7 Jun 2004 at 16:38, Rod Taylor wrote:\n> > * random_page_cost (good disks will bring this down to a 2 from a\n> > 4)\n> \n> I've got mine set at 4. Increasing it to 6 gave me a 1971ms query. \n> At 3, it was a 995ms. Setting it to 2 gave me a 153ms query.\n> \n> How interesting.\n\nThe explain analyse: http://rafb.net/paste/results/pWhHsL86.html\n-- \nDan Langille : http://www.langille.org/\nBSDCan - http://www.bsdcan.org/\n\n",
"msg_date": "Mon, 07 Jun 2004 18:55:24 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: seq scan woes"
}
] |
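The values below are only the ones already being discussed in this thread, not general recommendations; trying them with SET first lets the planner settings be tested per session without touching the server, while shared_buffers itself still needs a postgresql.conf change and a restart.

-- Per-session experiment before editing postgresql.conf:
SET random_page_cost = 2;
SET effective_cache_size = 32000;   -- measured in 8 KB pages, roughly 256 MB
-- then re-run: EXPLAIN ANALYZE <the query from the paste above>;

-- postgresql.conf equivalents (shared_buffers requires a restart):
-- shared_buffers = 2000
-- effective_cache_size = 32000
-- random_page_cost = 2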
[
{
"msg_contents": "So I have a table with about 50 million entries in it, I have to do a keyword search.\n\n \n\nThe keyword search is done on the title of the entry. For example a entry could be \"This is a title string which could be searched\"\n\n \n\nI have tried a few ways to search but I get horrible search times. Some keywords will come up with matches as big as .25 million but most are around 1000-5000.\n\n \n\nI use an index which narrows the table down to about 1.5-2million entries.\n\n \n\nI used 2 tables which had a 1:1 correspondence.\n\nOne held a gist index which was on a int field which searched the for the keyword. Then I would join the table to another to retrieve the rest of the information about the items it matched.\n\n \n\nThis was slow even for returning 100 entries. About 10 seconds, sometimes 5. But when I start getting 1xxx entries its about 30-50 seconds. The rest is just horrible.\n\n \n\nHow should I set up my indexes and or tables.\n\nWe were thinking of putting the index inside one table then the join would not have to be done but this still returns rather slow results.\n\n \n\nI have not fully tested this method but it looks like when run for just the keyword search on the title and no joining it can return in about 10 seconds or less.\n\nThis is a great improvement but I am currently going to make the table all in one and see how long it will take. I believe it will not be much more as there will be no join needed only the returning of some attribute fields. \n\n \n\nThis is still not the kind of time I would like to see, I wanted something around 2 seconds or less. I know there is a lot of information especially if .25 million rows are to be returned but if there is only 1xxx-9xxx rows to be returned I believe 2 seconds seems about right.\n\n \n\nHow do search engines do it?\n\nAny suggestions are welcome,\n\n \n\nThanks\n\n\n\n\n\n\n\n\nSo I have a table with about 50 million entries in it, I have to do a \nkeyword search.\n \nThe keyword search is done on the title of the entry. For example a entry could be \"This is a \ntitle string which could be searched\"\n \nI have tried a few ways to search but I get horrible search times. Some keywords will come up with matches \nas big as .25 million but most are around 1000-5000.\n \nI use an index which narrows the table down to about 1.5-2million \nentries.\n \nI used 2 tables which had a 1:1 correspondence.\nOne held a gist index which was on a int field which searched the for the \nkeyword. Then I would join the \ntable to another to retrieve the rest of the information about the items it \nmatched.\n \nThis was slow even for returning 100 entries. About 10 seconds, sometimes 5. But when I start getting 1xxx entries \nits about 30-50 seconds. The rest \nis just horrible.\n \nHow should I set up my indexes and or tables.\nWe were thinking of putting the index inside one table then the join \nwould not have to be done but this still returns rather slow results.\n \nI have not fully tested this method but it looks like when run for just \nthe keyword search on the title and no joining it can return in about 10 seconds \nor less.\nThis is a great improvement but I am currently going to \nmake the table all in one and see how long it will take. I believe it will not be much more as \nthere will be no join needed only the returning of some attribute fields. \n \nThis is still not the kind of time I would like to see, I wanted \nsomething around 2 seconds or less. 
\nI know there is a lot of information especially if .25 million rows are \nto be returned but if there is only 1xxx-9xxx rows to be returned I believe 2 \nseconds seems about right.\n \nHow do search engines do it?\nAny suggestions are welcome,\n \nThanks",
"msg_date": "Mon, 7 Jun 2004 14:47:17 -0700",
"msg_from": "\"borajetta\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "50 000 000 Table entries and I have to do a keyword search HELP\n NEEDED"
},
{
"msg_contents": "Borajetta,\n\n> So I have a table with about 50 million entries in it, I have to do a \nkeyword search.\n\nAre you using OpenFTS/TSearch2? Is your database optimized?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 14 Jun 2004 17:11:43 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 50 000 000 Table entries and I have to do a keyword search HELP\n\tNEEDED"
},
{
"msg_contents": "One option that does not take advantage of any fancy indexing methods is\nto create a trigger on the table, on insert/update/delete, which\nextracts each individual word from the field you care about, and creates\nan entry in another 'keyword' table, id = 'word', value = pk of your\noriginal table. then index the keyword table on the 'keyword' field,\nand do your searches from there. this should improve performance\nsubstantially, even on very large return sets, because the keyword table\nrows are very small and thus a lot of them fit in a disk block.\n \n- Jeremy\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of borajetta\nSent: Monday, June 07, 2004 5:47 PM\nTo: [email protected]\nSubject: [PERFORM] 50 000 000 Table entries and I have to do a keyword\nsearch HELP NEEDED\n\n\nSo I have a table with about 50 million entries in it, I have to do a\nkeyword search.\n\n \n\nThe keyword search is done on the title of the entry. For example a\nentry could be \"This is a title string which could be searched\"\n\n \n\nI have tried a few ways to search but I get horrible search times. Some\nkeywords will come up with matches as big as .25 million but most are\naround 1000-5000.\n\n \n\nI use an index which narrows the table down to about 1.5-2million\nentries.\n\n \n\nI used 2 tables which had a 1:1 correspondence.\n\nOne held a gist index which was on a int field which searched the for\nthe keyword. Then I would join the table to another to retrieve the\nrest of the information about the items it matched.\n\n \n\nThis was slow even for returning 100 entries. About 10 seconds,\nsometimes 5. But when I start getting 1xxx entries its about 30-50\nseconds. The rest is just horrible.\n\n \n\nHow should I set up my indexes and or tables.\n\nWe were thinking of putting the index inside one table then the join\nwould not have to be done but this still returns rather slow results.\n\n \n\nI have not fully tested this method but it looks like when run for just\nthe keyword search on the title and no joining it can return in about 10\nseconds or less.\n\nThis is a great improvement but I am currently going to make the table\nall in one and see how long it will take. I believe it will not be much\nmore as there will be no join needed only the returning of some\nattribute fields. \n\n \n\nThis is still not the kind of time I would like to see, I wanted\nsomething around 2 seconds or less. I know there is a lot of\ninformation especially if .25 million rows are to be returned but if\nthere is only 1xxx-9xxx rows to be returned I believe 2 seconds seems\nabout right.\n\n \n\nHow do search engines do it?\n\nAny suggestions are welcome,\n\n \n\nThanks\n\n\n\n\nMessage\n\n\n\n\nOne \noption that does not take advantage of any fancy indexing methods is to create a \ntrigger on the table, on insert/update/delete, which extracts each individual \nword from the field you care about, and creates an entry in another 'keyword' \ntable, id = 'word', value = pk of your original table. then index the \nkeyword table on the 'keyword' field, and do your searches from there. 
\nthis should improve performance substantially, even on very large return sets, \nbecause the keyword table rows are very small and thus a lot of them fit in a \ndisk block.\n \n- \nJeremy\n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n borajettaSent: Monday, June 07, 2004 5:47 PMTo: \n [email protected]: [PERFORM] 50 000 000 Table \n entries and I have to do a keyword search HELP NEEDED\n\nSo I have a table with about 50 million entries in it, I have to do a \n keyword search.\n \nThe keyword search is done on the title of the entry. For example a entry could be \"This is \n a title string which could be searched\"\n \nI have tried a few ways to search but I get horrible search times. Some keywords will come up with \n matches as big as .25 million but most are around 1000-5000.\n \nI use an index which narrows the table down to about 1.5-2million \n entries.\n \nI used 2 tables which had a 1:1 correspondence.\nOne held a gist index which was on a int field which searched the for \n the keyword. Then I would join \n the table to another to retrieve the rest of the information about the items \n it matched.\n \nThis was slow even for returning 100 entries. About 10 seconds, sometimes 5. But when I start getting 1xxx entries \n its about 30-50 seconds. The rest \n is just horrible.\n \nHow should I set up my indexes and or tables.\nWe were thinking of putting the index inside one table then the join \n would not have to be done but this still returns rather slow \n results.\n \nI have not fully tested this method but it looks like when run for just \n the keyword search on the title and no joining it can return in about 10 \n seconds or less.\nThis is a great improvement but I am currently going to \n make the table all in one and see how long it will take. I believe it will not be much more as \n there will be no join needed only the returning of some attribute fields. \n \nThis is still not the kind of time I would like to see, I wanted \n something around 2 seconds or less. \n I know there is a lot of information especially if .25 million rows are \n to be returned but if there is only 1xxx-9xxx rows to be returned I believe 2 \n seconds seems about right.\n \nHow do search engines do it?\nAny suggestions are welcome,\n \nThanks",
"msg_date": "Tue, 15 Jun 2004 08:43:49 -0400",
"msg_from": "\"Jeremy Dunn\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 50 000 000 Table entries and I have to do a keyword search HELP\n\tNEEDED"
},
{
"msg_contents": "MessageThe words for the keyword can be made up of a sentace, ie 10 or more keywords to one entry.\nAlso incase I didnt answer before, we are using TSearch2 and all tables have been fully analyzed and indexed.\nAny other suggestions?\nHow long do searches take when 10 000 rows are returned?\nWe can not use a limit of 100 because we need to analyze the entire data set returned.\nThanks,\n ----- Original Message ----- \n From: Jeremy Dunn \n To: 'borajetta' \n Cc: Postgresql Performance \n Sent: Tuesday, June 15, 2004 5:43 AM\n Subject: RE: [PERFORM] 50 000 000 Table entries and I have to do a keyword search HELP NEEDED\n\n\n One option that does not take advantage of any fancy indexing methods is to create a trigger on the table, on insert/update/delete, which extracts each individual word from the field you care about, and creates an entry in another 'keyword' table, id = 'word', value = pk of your original table. then index the keyword table on the 'keyword' field, and do your searches from there. this should improve performance substantially, even on very large return sets, because the keyword table rows are very small and thus a lot of them fit in a disk block.\n\n - Jeremy\n -----Original Message-----\n From: [email protected] [mailto:[email protected]] On Behalf Of borajetta\n Sent: Monday, June 07, 2004 5:47 PM\n To: [email protected]\n Subject: [PERFORM] 50 000 000 Table entries and I have to do a keyword search HELP NEEDED\n\n\n So I have a table with about 50 million entries in it, I have to do a keyword search.\n\n \n\n The keyword search is done on the title of the entry. For example a entry could be \"This is a title string which could be searched\"\n\n \n\n I have tried a few ways to search but I get horrible search times. Some keywords will come up with matches as big as .25 million but most are around 1000-5000.\n\n \n\n I use an index which narrows the table down to about 1.5-2million entries.\n\n \n\n I used 2 tables which had a 1:1 correspondence.\n\n One held a gist index which was on a int field which searched the for the keyword. Then I would join the table to another to retrieve the rest of the information about the items it matched.\n\n \n\n This was slow even for returning 100 entries. About 10 seconds, sometimes 5. But when I start getting 1xxx entries its about 30-50 seconds. The rest is just horrible.\n\n \n\n How should I set up my indexes and or tables.\n\n We were thinking of putting the index inside one table then the join would not have to be done but this still returns rather slow results.\n\n \n\n I have not fully tested this method but it looks like when run for just the keyword search on the title and no joining it can return in about 10 seconds or less.\n\n This is a great improvement but I am currently going to make the table all in one and see how long it will take. I believe it will not be much more as there will be no join needed only the returning of some attribute fields. \n\n \n\n This is still not the kind of time I would like to see, I wanted something around 2 seconds or less. 
I know there is a lot of information especially if .25 million rows are to be returned but if there is only 1xxx-9xxx rows to be returned I believe 2 seconds seems about right.\n\n \n\n How do search engines do it?\n\n Any suggestions are welcome,\n\n \n\n Thanks\n\nMessage\n\n\n\n\n\nThe words for the keyword can be made up of a \nsentace, ie 10 or more keywords to one entry.\nAlso incase I didnt answer before, we are using \nTSearch2 and all tables have been fully analyzed and indexed.\nAny other suggestions?\nHow long do searches take when 10 000 rows are \nreturned?\nWe can not use a limit of 100 because we need to \nanalyze the entire data set returned.\nThanks,\n\n----- Original Message ----- \nFrom:\nJeremy \n Dunn \nTo: 'borajetta' \nCc: Postgresql Performance\n\nSent: Tuesday, June 15, 2004 5:43 \nAM\nSubject: RE: [PERFORM] 50 000 000 Table \n entries and I have to do a keyword search HELP NEEDED\n\nOne \n option that does not take advantage of any fancy indexing methods is to create \n a trigger on the table, on insert/update/delete, which extracts each \n individual word from the field you care about, and creates an entry in another \n 'keyword' table, id = 'word', value = pk of your original table. then \n index the keyword table on the 'keyword' field, and do your searches from \n there. this should improve performance substantially, even on very large \n return sets, because the keyword table rows are very small and thus a lot of \n them fit in a disk block.\n \n- \n Jeremy\n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of \n borajettaSent: Monday, June 07, 2004 5:47 PMTo: \n [email protected]: [PERFORM] 50 000 000 \n Table entries and I have to do a keyword search HELP \n NEEDED\n\nSo I have a table with about 50 million entries in it, I have to do a \n keyword search.\n \nThe keyword search is done on the title of the entry. For example a entry could be \"This \n is a title string which could be searched\"\n \nI have tried a few ways to search but I get horrible search \n times. Some keywords will come \n up with matches as big as .25 million but most are around \n 1000-5000.\n \nI use an index which narrows the table down to about 1.5-2million \n entries.\n \nI used 2 tables which had a 1:1 correspondence.\nOne held a gist index which was on a int field which searched the for \n the keyword. Then I would join \n the table to another to retrieve the rest of the information about the items \n it matched.\n \nThis was slow even for returning 100 entries. About 10 seconds, sometimes 5. But when I start getting 1xxx \n entries its about 30-50 seconds. \n The rest is just horrible.\n \nHow should I set up my indexes and or tables.\nWe were thinking of putting the index inside one table then the join \n would not have to be done but this still returns rather slow \n results.\n \nI have not fully tested this method but it looks like when run for \n just the keyword search on the title and no joining it can return in about \n 10 seconds or less.\nThis is a great improvement but I am currently going \n to make the table all in one and see how long it will take. I believe it will not be much more \n as there will be no join needed only the returning of some attribute \n fields. \n \nThis is still not the kind of time I would like to see, I wanted \n something around 2 seconds or less. 
\n I know there is a lot of information especially if .25 million rows \n are to be returned but if there is only 1xxx-9xxx rows to be returned I \n believe 2 seconds seems about right.\n \nHow do search engines do it?\nAny suggestions are welcome,\n \nThanks",
"msg_date": "Tue, 15 Jun 2004 08:16:06 -0700",
"msg_from": "\"Aaron\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 50 000 000 Table entries and I have to do a keyword search HELP\n\tNEEDED"
},
{
"msg_contents": "Borajetta,\n\nThe other thing you can do is take a look at your hardware. What are you \nrunning this on? How much RAM? Are other things running on this server as \nwell?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Wed, 16 Jun 2004 17:33:37 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 50 000 000 Table entries and I have to do a keyword search HELP\n\tNEEDED"
}
] |
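A rough sketch of the trigger-maintained keyword table Jeremy describes, written for the 7.4-era PL/pgSQL in use here; the items table, its id and title columns, and the assumption of single spaces between words are all illustrative, and tsearch2 (which the poster is already using) covers the same ground with more sophistication.

CREATE TABLE keyword (
    word    text    NOT NULL,
    item_id integer NOT NULL
);
CREATE INDEX keyword_word_idx ON keyword (word);

CREATE OR REPLACE FUNCTION index_title() RETURNS trigger AS '
DECLARE
    i integer := 1;
    w text;
BEGIN
    -- drop the old keywords on UPDATE/DELETE
    IF TG_OP <> ''INSERT'' THEN
        DELETE FROM keyword WHERE item_id = OLD.id;
    END IF;
    IF TG_OP = ''DELETE'' THEN
        RETURN OLD;
    END IF;
    -- split the title on single spaces and store one row per word
    LOOP
        w := split_part(NEW.title, '' '', i);
        EXIT WHEN w = '''';
        INSERT INTO keyword (word, item_id) VALUES (lower(w), NEW.id);
        i := i + 1;
    END LOOP;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER title_keywords
    AFTER INSERT OR UPDATE OR DELETE ON items
    FOR EACH ROW EXECUTE PROCEDURE index_title();

-- a keyword lookup then becomes an index scan on the small keyword table:
SELECT i.*
FROM keyword k JOIN items i ON i.id = k.item_id
WHERE k.word = 'searched';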
[
{
"msg_contents": "Hello list,\n\nServer is dual Xeon 2.4, 2GBRAM, Postgresql is running on partition:\n/dev/sda9 29G 8.9G 20G 31% /home2\n/dev/sda9 on /home2 type jfs (rw)\n\nVersion()\nPostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2 \n20020903 (Red Hat Linux 8.0 3.2-7)\n\nI have a view to join two tables inventory details (pkardex) and \ninventory documents header (pmdoc) this view usually runs pretty slow as \nindicated in the explain analyze, pkardex is 1943465 rows and its size \naprox 659MB, pmdoc is 1183520 rows and its size is aprox 314MB. The view \ndefinition is:\n\nSELECT pkd_pk AS kpk, (pkd_stamp)::date AS kfecha, pkd_docto AS kdocto,\n ((((pdc_custid)::text || ' '::text) || \n(pdc_custname)::text))::character\n varying(50) AS kclpv, pkd_saldo AS ksaldo, pkd_es AS kes, CASE WHEN \n(pkd_es\n = 'E'::bpchar) THEN pkd_qtyinv ELSE (0)::numeric END AS kentrada, \nCASE WHEN\n (pkd_es = 'S'::bpchar) THEN pkd_qtyinv ELSE (0)::numeric END AS \nksalida,\n pkd_pcode AS kprocode, pkd_price AS kvalor, pdc_tipdoc AS ktipdoc\nFROM (pkardex JOIN pmdoc ON ((pmdoc.pdc_pk = pkardex.doctofk)));\n\n\nShared memory is:\n/root: cat /proc/sys/kernel/shmmax\n1073741824\n\nand postgresql.conf have this settings:\ntcpip_socket = true\nsort_mem = 8190 # min 64, size in KB\nvacuum_mem = 262144 # min 1024, size in KB\ncheckpoint_segments = 10\nmax_connections = 256\nshared_buffers = 32000\neffective_cache_size = 160000 # typically 8KB each\nrandom_page_cost = 2 # units are one sequ\n\nThe explain analyze is:\ndbmund=# explain analyze select * from vkardex where kprocode='1017';\n Nested Loop (cost=0.00..32155.66 rows=5831 width=114) (actual \ntime=18.223..47983.157 rows=4553 loops=1)\n -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52 \nrows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)\n Index Cond: ((pkd_pcode)::text = '1017'::text)\n -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \nwidth=50) (actual time=1.659..1.661 rows=1 loops=5049)\n Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n Total runtime: 47988.067 ms\n(6 rows)\n\nDoes anyone can help me how to properly tune postgresql to gain some \nspeed in such queries, some people have mentioned a RAM increase is \nnecesary, about 8GB or more to have postgresql to run smooth, any \ncomment or suggestion. I really appreciate any help.\n\nRegards,\n\n\n-- \nSinceramente,\nJosu� Maldonado.\n\"Que se me den seis l�neas escritas de pu�o y letra del hombre m�s \nhonrado del mundo, y hallar� en ellas motivos para hacerle ahorcar.\" \n--cardenal Richelieu (Cardenal y pol�tico franc�s. 1.585 - 1.642)\n",
"msg_date": "Mon, 07 Jun 2004 16:19:22 -0600",
"msg_from": "=?ISO-8859-1?Q?Josu=E9_Maldonado?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Join slow on \"large\" tables"
},
{
"msg_contents": "Josue'\n\n> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52 \n> rows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)\n\nLooks to me like there's a problem with index bloat on pkd_pcode_idx. Try \nREINDEXing it, and if that doesn't help, VACUUM FULL on pkardex.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 7 Jun 2004 15:31:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "On Mon, 2004-06-07 at 16:19, Josué Maldonado wrote:\n> Hello list,\n> \n> Server is dual Xeon 2.4, 2GBRAM, Postgresql is running on partition:\n> /dev/sda9 29G 8.9G 20G 31% /home2\n> /dev/sda9 on /home2 type jfs (rw)\n> \n> Version()\n> PostgreSQL 7.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2 \n> 20020903 (Red Hat Linux 8.0 3.2-7)\n> \n> I have a view to join two tables inventory details (pkardex) and \n> inventory documents header (pmdoc) this view usually runs pretty slow as \n> indicated in the explain analyze, pkardex is 1943465 rows and its size \n> aprox 659MB, pmdoc is 1183520 rows and its size is aprox 314MB. The view \n> definition is:\n> \n> SELECT pkd_pk AS kpk, (pkd_stamp)::date AS kfecha, pkd_docto AS kdocto,\n> ((((pdc_custid)::text || ' '::text) || \n> (pdc_custname)::text))::character\n> varying(50) AS kclpv, pkd_saldo AS ksaldo, pkd_es AS kes, CASE WHEN \n> (pkd_es\n> = 'E'::bpchar) THEN pkd_qtyinv ELSE (0)::numeric END AS kentrada, \n> CASE WHEN\n> (pkd_es = 'S'::bpchar) THEN pkd_qtyinv ELSE (0)::numeric END AS \n> ksalida,\n> pkd_pcode AS kprocode, pkd_price AS kvalor, pdc_tipdoc AS ktipdoc\n> FROM (pkardex JOIN pmdoc ON ((pmdoc.pdc_pk = pkardex.doctofk)));\n> \n> \n> Shared memory is:\n> /root: cat /proc/sys/kernel/shmmax\n> 1073741824\n> \n> and postgresql.conf have this settings:\n> tcpip_socket = true\n> sort_mem = 8190 # min 64, size in KB\n> vacuum_mem = 262144 # min 1024, size in KB\n> checkpoint_segments = 10\n> max_connections = 256\n> shared_buffers = 32000\n> effective_cache_size = 160000 # typically 8KB each\n> random_page_cost = 2 # units are one sequ\n> \n> The explain analyze is:\n> dbmund=# explain analyze select * from vkardex where kprocode='1017';\n> Nested Loop (cost=0.00..32155.66 rows=5831 width=114) (actual \n> time=18.223..47983.157 rows=4553 loops=1)\n> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52 \n> rows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)\n> Index Cond: ((pkd_pcode)::text = '1017'::text)\n> -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \n> width=50) (actual time=1.659..1.661 rows=1 loops=5049)\n> Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n> Total runtime: 47988.067 ms\n> (6 rows)\n\nOK, you have to ask yourself a question here. Do I have enough memory\nto let both postgresql and the kernel to cache this data, or enough\nmemory for only one. Then, you pick one and try it out. But there's\nsome issues here. PostgreSQL's shared buffer are not, and should not\ngenerally be thought of as \"cache\". A cache's job it to hold the whole\nworking set, or as much as possible, ready for access. A buffer's job\nis to hold all the data we're tossing around right this second. Once\nwe're done with the data, the buffers can and do just drop whatever was\nin them. PostgreSQL does not have caching, in the classical sense. \nthat may or may not change.\n\nThe kernel, on the other hand, has both cache and buffer. Ever notice\nthat a Linux top shows the cache usually being much bigger than the\nbuffers? My 512 Meg home box right now has 252968k for cache, and\n43276k for buffers. \n\nNow, you're tossing around enough data to actually maybe have a use for\na huge set of buffers, but this means you'll need to starve your cache\nto get enough buffers. 
Which means that if one process does this kind\nof join, drops connection, and two seconds later, another process\nconnects and does nearly the same thing, it's likely to have to read it\nall from the hard drives again, as it's not in the postgresql buffer,\nand not in the kernel cache.\n\nStarting a seperate connection, doing a simple select * from table1;\nsekect * from table 2, dropping the result set returned, and staying\nconnected seems to be enough to get 7.4 to hold onto the data.\n\nPostgreSQL's current buffer management algo is dirt simple. The ones in\nthe kernel's cache are quite good. So you can quickly reach a point\nwhere PostgreSQL is chasing it's tail where the kernel would have done\nOK.\n\nYour numbers show that you are tossing 659M and 314M against each other,\nbut I don't know if you're harvesting the whole set at once, or just a\ncouple row of each. Indexing help, or is this always gonna be a big seq\nscan of 90% of both tables?\n\nIf you are getting the whole thing all the time, and want postgresql to\nbuffer the whole thing (I recommend against it, although a very few\ncircumstances seem to support it) you need to have 973M of buffer. That\nwould be 124544 or we'll just call it 130000. This high of a number\nmeans you will be getting more than 50% of the RAM for postgreSQL. At\nthat point, it seems you might as well go for broke and grab most of it,\n~200000 or so.\n\nIf you're not always mushing the two things against each other, and\nyou've got other datasets to interact with, index it.\n\nOh, in your reply you might to include an explain analyze of the query,\nand maybe an output of top while the query is running.\n\n",
"msg_date": "Mon, 07 Jun 2004 16:47:12 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "Hi Josh and thanks for your response,\n\nEl 07/06/2004 4:31 PM, Josh Berkus en su mensaje escribio:\n\n> Josue'\n> \n> \n>> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..11292.52 \n>>rows=5831 width=72) (actual time=18.152..39520.406 rows=5049 loops=1)\n> \n> \n> Looks to me like there's a problem with index bloat on pkd_pcode_idx. Try \n> REINDEXing it, and if that doesn't help, VACUUM FULL on pkardex.\n> \n\nRecreated the index (drop then create) and did the vacuum full pkardex \nand the behavior seems to be the same:\n\ndbmund=# explain analyze select * from vkardex where kprocode='1013';\n Nested Loop (cost=0.00..2248.19 rows=403 width=114) (actual \ntime=846.318..16030.633 rows=3145 loops=1)\n -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..806.27 \nrows=403 width=72) (actual time=0.054..87.393 rows=3544 loops=1)\n Index Cond: ((pkd_pcode)::text = '1013'::text)\n -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \nwidth=50) (actual time=4.482..4.484 rows=1 loops=3544)\n Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n Total runtime: 16033.807 ms\n(6 rows)\n\nAt the time the querie was running top returned:\n\n5:11pm up 1:28, 3 users, load average: 0.19, 0.97, 1.41\n69 processes: 66 sleeping, 1 running, 2 zombie, 0 stopped\nCPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU2 states: 0.1% user, 0.4% system, 0.0% nice, 98.4% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 2069596K av, 1477784K used, 591812K free, 0K shrd, 2336K \nbuff\nSwap: 2096440K av, 9028K used, 2087412K free 1388372K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 1225 postgres 17 0 257M 257M 255M S 0.6 12.7 7:14 postmaster\n 1978 postgres 11 0 1044 1044 860 R 0.2 0.0 0:00 top\n 1 root 9 0 472 444 428 S 0.0 0.0 0:04 init\n 2 root 8 0 0 0 0 SW 0.0 0.0 0:00 keventd\n\nand free returned:\n/root: free\n total used free shared buffers cached\nMem: 2069596 1477832 591764 0 2320 1388372\n-/+ buffers/cache: 87140 1982456\nSwap: 2096440 9028 2087412\n\nI'm not a Linux guru, it looks like a memory leak.\n\n\n-- \nSinceramente,\n\nJosu� Maldonado.\n\"Las palabras de aliento despu�s de la censura son como el sol tras el \naguacero.\"\n",
"msg_date": "Mon, 07 Jun 2004 17:12:44 -0600",
"msg_from": "=?ISO-8859-1?Q?Josu=E9_Maldonado?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "Josue'\n\n> dbmund=# explain analyze select * from vkardex where kprocode='1013';\n> Nested Loop (cost=0.00..2248.19 rows=403 width=114) (actual \n> time=846.318..16030.633 rows=3145 loops=1)\n> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..806.27 \n> rows=403 width=72) (actual time=0.054..87.393 rows=3544 loops=1)\n> Index Cond: ((pkd_pcode)::text = '1013'::text)\n> -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \n> width=50) (actual time=4.482..4.484 rows=1 loops=3544)\n> Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n> Total runtime: 16033.807 ms\n\nHuh? It is not at all the same. Your index scan is down to 87ms from \n27,000! And the total query is down to 16seconds from 47 seconds. Don't \nyou consider that an improvement?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 7 Jun 2004 16:21:10 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "Josh,\n\nEl 07/06/2004 5:21 PM, Josh Berkus en su mensaje escribio:\n> \n> Huh? It is not at all the same. Your index scan is down to 87ms from \n> 27,000! And the total query is down to 16seconds from 47 seconds. Don't \n> you consider that an improvement?\n\nYes there was an improvement with respect the previus query, but still \n16 seconds is too slow for that query. And usually the query takes more \nthan 10 seconds even with small data sets returned.\n\nThanks,\n\n\n-- \nSinceramente,\nJosu� Maldonado.\n\n\"La cultura es capaz de descifrar los enigmas en que nos envuelve la vida.\"\n",
"msg_date": "Mon, 07 Jun 2004 17:30:13 -0600",
"msg_from": "=?ISO-8859-1?Q?Josu=E9_Maldonado?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "=?ISO-8859-1?Q?Josu=E9_Maldonado?= <[email protected]> writes:\n> Recreated the index (drop then create) and did the vacuum full pkardex \n> and the behavior seems to be the same:\n\nWell, there was a pretty huge improvement in the pkardex scan time,\nwhether you noticed it or not: 39520.406 to 87.393 msec. This\ndefinitely suggests that you've been lax about vacuuming this table.\n\nI'm wondering whether pmdoc might not be overdue for vacuuming as\nwell.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jun 2004 01:18:22 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables "
},
{
"msg_contents": "Hello Scott,\n\nEl 07/06/2004 4:47 PM, Scott Marlowe en su mensaje escribio:\n\n> OK, you have to ask yourself a question here. Do I have enough memory\n> to let both postgresql and the kernel to cache this data, or enough\n> memory for only one. Then, you pick one and try it out. But there's\n> some issues here. PostgreSQL's shared buffer are not, and should not\n> generally be thought of as \"cache\". A cache's job it to hold the whole\n> working set, or as much as possible, ready for access. A buffer's job\n> is to hold all the data we're tossing around right this second. Once\n> we're done with the data, the buffers can and do just drop whatever was\n> in them. PostgreSQL does not have caching, in the classical sense. \n> that may or may not change.\n> \n> The kernel, on the other hand, has both cache and buffer. Ever notice\n> that a Linux top shows the cache usually being much bigger than the\n> buffers? My 512 Meg home box right now has 252968k for cache, and\n> 43276k for buffers. \n\nI noticed buffers are lower agains cache at least as top shows, dunno if \nI'm wrong:\n\n 8:28am up 1:00, 2 users, load average: 0.40, 0.97, 0.75\n65 processes: 64 sleeping, 1 running, 0 zombie, 0 stopped\nCPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU2 states: 0.0% user, 0.1% system, 0.0% nice, 99.4% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 2069596K av, 1882228K used, 187368K free, 0K shrd, 32924K \nbuff\nSwap: 2096440K av, 0K used, 2096440K free 1757220K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 1508 root 13 0 1040 1040 856 R 0.1 0.0 0:00 top\n 1 root 8 0 476 476 432 S 0.0 0.0 0:04 init\n\n\n\n> Now, you're tossing around enough data to actually maybe have a use for\n> a huge set of buffers, but this means you'll need to starve your cache\n> to get enough buffers. Which means that if one process does this kind\n> of join, drops connection, and two seconds later, another process\n> connects and does nearly the same thing, it's likely to have to read it\n> all from the hard drives again, as it's not in the postgresql buffer,\n> and not in the kernel cache.\n> \n> Starting a seperate connection, doing a simple select * from table1;\n> sekect * from table 2, dropping the result set returned, and staying\n> connected seems to be enough to get 7.4 to hold onto the data.\n> \n> PostgreSQL's current buffer management algo is dirt simple. The ones in\n> the kernel's cache are quite good. So you can quickly reach a point\n> where PostgreSQL is chasing it's tail where the kernel would have done\n> OK.\n> \n> Your numbers show that you are tossing 659M and 314M against each other,\n> but I don't know if you're harvesting the whole set at once, or just a\n> couple row of each. Indexing help, or is this always gonna be a big seq\n> scan of 90% of both tables?\n\nGenerally only a small set is queried, the bigest record set expected is \nabout 24,000 rows and does not exced the 10MB size, explain analyze \nshows the planner is using the index as expected but performance still poor.\n\n> If you are getting the whole thing all the time, and want postgresql to\n> buffer the whole thing (I recommend against it, although a very few\n> circumstances seem to support it) you need to have 973M of buffer. That\n> would be 124544 or we'll just call it 130000. This high of a number\n> means you will be getting more than 50% of the RAM for postgreSQL. 
At\n> that point, it seems you might as well go for broke and grab most of it,\n> ~200000 or so.\n> \n> If you're not always mushing the two things against each other, and\n> you've got other datasets to interact with, index it.\n> \n> Oh, in your reply you might to include an explain analyze of the query,\n> and maybe an output of top while the query is running.\n> \n\ndbmund=# explain analyze select * from vkardex where kprocode='1013';\n Nested Loop (cost=0.00..2248.19 rows=403 width=114) (actual \ntime=846.318..16030.633 rows=3145 loops=1)\n -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..806.27 \nrows=403 width=72) (actual time=0.054..87.393 rows=3544 loops=1)\n Index Cond: ((pkd_pcode)::text = '1013'::text)\n -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \nwidth=50) (actual time=4.482..4.484 rows=1 loops=3544)\n Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n Total runtime: 16033.807 ms\n(6 rows)\n\nAt the time the querie was running top returned:\n5:11pm up 1:28, 3 users, load average: 0.19, 0.97, 1.41\n69 processes: 66 sleeping, 1 running, 2 zombie, 0 stopped\nCPU0 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU1 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nCPU2 states: 0.1% user, 0.4% system, 0.0% nice, 98.4% idle\nCPU3 states: 0.0% user, 0.0% system, 0.0% nice, 100.0% idle\nMem: 2069596K av, 1477784K used, 591812K free, 0K shrd, 2336K \nbuff\nSwap: 2096440K av, 9028K used, 2087412K free 1388372K \ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND\n 1225 postgres 17 0 257M 257M 255M S 0.6 12.7 7:14 postmaster\n 1978 postgres 11 0 1044 1044 860 R 0.2 0.0 0:00 top\n 1 root 9 0 472 444 428 S 0.0 0.0 0:04 init\n 2 root 8 0 0 0 0 SW 0.0 0.0 0:00 keventd\n\nand free returned:\n/root: free\n total used free shared buffers cached\nMem: 2069596 1477832 591764 0 2320 1388372\n-/+ buffers/cache: 87140 1982456\nSwap: 2096440 9028 2087412\n\n\nThanks,\n\n\n-- \nSinceramente,\nJosué Maldonado.\n\"El verdadero placer está en la búsqueda, más que en la explicación.\" -- \nIsaac Asimov\n",
"msg_date": "Tue, 08 Jun 2004 08:36:02 -0600",
"msg_from": "=?UTF-8?B?Sm9zdcOpIE1hbGRvbmFkbw==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables"
},
{
"msg_contents": "On Tue, 2004-06-08 at 08:36, Josué Maldonado wrote:\n> Hello Scott,\n\nSNIP...\n\n> > Your numbers show that you are tossing 659M and 314M against each other,\n> > but I don't know if you're harvesting the whole set at once, or just a\n> > couple row of each. Indexing help, or is this always gonna be a big seq\n> > scan of 90% of both tables?\n> \n> Generally only a small set is queried, the bigest record set expected is \n> about 24,000 rows and does not exced the 10MB size, explain analyze \n> shows the planner is using the index as expected but performance still poor.\n\nIf that is the case, then shared_buffers should likely only be 1000 to\n10000. anything over 10000 is usually a bad idea, unless you've proven\nit to be faster than <10000.\n\n> dbmund=# explain analyze select * from vkardex where kprocode='1013';\n> Nested Loop (cost=0.00..2248.19 rows=403 width=114) (actual \n> time=846.318..16030.633 rows=3145 loops=1)\n> -> Index Scan using pkd_pcode_idx on pkardex (cost=0.00..806.27 \n> rows=403 width=72) (actual time=0.054..87.393 rows=3544 loops=1)\n> Index Cond: ((pkd_pcode)::text = '1013'::text)\n> -> Index Scan using pdc_pk_idx on pmdoc (cost=0.00..3.55 rows=1 \n> width=50) (actual time=4.482..4.484 rows=1 loops=3544)\n> Index Cond: (pmdoc.pdc_pk = \"outer\".doctofk)\n> Total runtime: 16033.807 ms\n> (6 rows)\n\nWell, it looks like your predicted versus actual rows are a bit off, and\nin the final bit, the planner things that it is going to be merging 403\nrows but is in fact merging 3145 rows. Try \n\nset enable_nestloop = off;\nand run the explain analyze again and see if that's faster. If so, try\nupping your target stats on kprocode (see \"\\h alter table\" in psql for\nthe syntax), rerun analyze, and try the query with set enable_nestloop =\non to see if the planner makes the right choice.\n\n\n\n",
"msg_date": "Tue, 08 Jun 2004 09:27:16 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Join slow on \"large\" tables"
},
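Spelled out, the two experiments Scott suggests might look like the sketch below; the statistics target of 100 is only an illustrative value (the 7.4 default is 10), and enable_nestloop should only be toggled per session while testing.

-- 1. See whether avoiding the nested loop is actually faster:
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT * FROM vkardex WHERE kprocode = '1013';
SET enable_nestloop = on;

-- 2. Give the planner better statistics for the skewed column, then re-test:
ALTER TABLE pkardex ALTER COLUMN pkd_pcode SET STATISTICS 100;
ANALYZE pkardex;
EXPLAIN ANALYZE SELECT * FROM vkardex WHERE kprocode = '1013';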
{
"msg_contents": "Hi,\n\nWe often experience with the problem that reindex \ncannot be finished in our production database. \nIt's typically done with 30 minutes. However,\nsometimes, when there is another \"COPY\" process,\nreindex will not finish. By monitoring the CPU \ntime reindex takes, it does not increase at all.\nThat seems a deadlock. But the following query shows\nonly reindex process (23127)is granted lock while \nCOPY process (3149) is not.\n\nLast time when we have this problem and kill \nreindex process and COPY process does not work.\nWe had to bounce the database server.\n\nAs you know, when reindex is running, nobody can\naccess the table.\nCan someone kindly help?\n\nThanks,\n\n\n\nHere is lock info from database:\n\n replace | database | transaction | pid\n | mode | granted\n-----------------------+----------+-------------+-------+---------------------+---------\n email | 17613 | | \n3149 | RowExclusiveLock | f\n email_cre_dom_idx | 17613 | |\n23127 | ExclusiveLock | t\n email_cid_cre_idx | 17613 | |\n23127 | ShareLock | t\n email_cid_cre_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email | 17613 | |\n23127 | ShareLock | t\n email | 17613 | |\n23127 | AccessExclusiveLock | t\n email_cid_cre_dom_idx | 17613 | |\n23127 | ShareLock | t\n email_cid_cre_dom_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email_did_cre_idx | 17613 | |\n23127 | ShareLock | t\n email_did_cre_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email_cre_dom_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n(11 rows)\n\n\nHere are the processes of 3149 and 23127 from OS:\n\npostgres 3149 1.3 6.4 154104 134444 ? S \nJun03 92:04 postgres: postgres db1 xx.xx.xx.xx COPY\nwaiting\n\npostgres 23127 3.2 9.3 228224 194512 ? S \n03:35 15:03 postgres: postgres db1 [local] REINDEX\n\nHere are queries from database:\n23127 | REINDEX table email\n\n 3149 | COPY email (...) FROM stdin\n\n\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nFriends. Fun. Try the all-new Yahoo! Messenger.\nhttp://messenger.yahoo.com/ \n",
"msg_date": "Tue, 8 Jun 2004 08:31:06 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "reindex and copy - deadlock?"
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> We often experience with the problem that reindex \n> cannot be finished in our production database. \n> It's typically done with 30 minutes. However,\n> sometimes, when there is another \"COPY\" process,\n> reindex will not finish. By monitoring the CPU \n> time reindex takes, it does not increase at all.\n> That seems a deadlock.\n\nThere is no deadlock visible in your report: the reindex process is not\nwaiting for a lock, according to either ps or pg_locks. You sure it's\nnot just slow? I'd expect reindex to be largely I/O bound, so the lack\nof CPU activity doesn't prove much.\n\nIf you think it's actually stuck waiting for something, try attaching to\nthe REINDEX backend process with gdb to get a stack trace. That would\nat least give some idea what it's waiting for.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jun 2004 12:47:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Thank you, Tom!\n\nWe vacuum and reindex every night and\nreindex typically took 30 minutes. Today,\nit ran since 3AM, and has not finished till 8:30AM.\n\nThe email and its indexe sizes are:\ntablename indexname size_kb reltuples\nemail 1292696 8.07905e+06\nemail email_cre_dom_idx 323112 \nemail email_cid_cre_dom_idx 357952 \nemail email_did_cre_idx 205712\nemail email_cid_cre_idx 205560\n\nI agree with you that deadlock is unlikely from\ndatabase and OS report.\n\nWe have bounced the server since it is a production\ndatabase and nobody can access email table because\nof this.\n\nI will use gdb next time. What's this right way to\nget info as postgres owner?\ngdb\nattach pid\n\nThanks again!\n\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > We often experience with the problem that reindex \n> > cannot be finished in our production database. \n> > It's typically done with 30 minutes. However,\n> > sometimes, when there is another \"COPY\" process,\n> > reindex will not finish. By monitoring the CPU \n> > time reindex takes, it does not increase at all.\n> > That seems a deadlock.\n> \n> There is no deadlock visible in your report: the\n> reindex process is not\n> waiting for a lock, according to either ps or\n> pg_locks. You sure it's\n> not just slow? I'd expect reindex to be largely I/O\n> bound, so the lack\n> of CPU activity doesn't prove much.\n> \n> If you think it's actually stuck waiting for\n> something, try attaching to\n> the REINDEX backend process with gdb to get a stack\n> trace. That would\n> at least give some idea what it's waiting for.\n> \n> \t\t\tregards, tom lane\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nFriends. Fun. Try the all-new Yahoo! Messenger.\nhttp://messenger.yahoo.com/ \n",
"msg_date": "Tue, 8 Jun 2004 10:25:42 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> I will use gdb next time. What's this right way to\n> get info as postgres owner?\n\n\t$ gdb /path/to/postgres\n\tgdb> attach PID-of-backend-process\n\tgdb> bt\n\tgdb> quit\n\nYou might try this for practice on any idle backend; it shouldn't affect\nthe state of the backend, except for freezing it while you issue the\ncommands.\n\nIf \"bt\" gives you just a list of numbers and no symbolic information,\nthen it won't be much help; you'll need to rebuild the backend with\ndebugging information so that we can make some sense of the trace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jun 2004 13:51:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
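Before attaching gdb as described above, the PID of the REINDEX backend has to be found. Besides ps, it can be looked up from another session with something along these lines (a sketch; it assumes stats_command_string is enabled so that current_query is populated):

    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE current_query LIKE 'REINDEX%';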
{
"msg_contents": "Hi Tom,\n\nHere is gdb info.\n\nThis happens in our production database\n3 times this week. It's totally unacceptable.\nI have written a drop/create script to \navoid reindex. However, drop/create \nseems to me take more time than reindex\nfor the whole database.\n\nYour help is greatly appreciated!\n\nVersion:\nPostgreSQL 7.3.2 on i686-pc-linux-gnu, compiled by GCC\n2.96\n\n\n===========================================\nsu - postgres\nps -auxww|grep REINDEX\npostgres 18903 1.2 8.1 195388 169512 ? S \n04:49 1:54 postgres:postgres db1 [local] REINDEX\npostgres 13329 0.0 0.0 1768 620 pts/0 S \n07:23 0:00 grep REINDEX\ngdb /bin/postgres\nGNU gdb Red Hat Linux (5.2-2)\nCopyright 2002 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General\nPublic License, and you are\nwelcome to change it and/or distribute copies of it\nunder certain \nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show\nwarranty\" for details.\nThis GDB was configured as \"i386-redhat-linux\"...(no\ndebugging symbols \nfound)...\n(gdb) attach 18903\nAttaching to program: /bin/postgres, process 18903\nReading symbols from /usr/lib/libz.so.1...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /usr/lib/libz.so.1\nReading symbols from /usr/lib/libreadline.so.4...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /usr/lib/libreadline.so.4\nReading symbols from /lib/libtermcap.so.2...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/libtermcap.so.2\nReading symbols from /lib/libcrypt.so.1...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/libcrypt.so.1\nReading symbols from /lib/libresolv.so.2...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/libresolv.so.2\nReading symbols from /lib/libnsl.so.1...(no debugging\nsymbols found)...done.\nLoaded symbols for /lib/libnsl.so.1\nReading symbols from /lib/libdl.so.2...(no debugging\nsymbols found)...done.\nLoaded symbols for /lib/libdl.so.2\nReading symbols from /lib/i686/libm.so.6...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/i686/libm.so.6\nReading symbols from /lib/i686/libc.so.6...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/i686/libc.so.6\nReading symbols from /lib/ld-linux.so.2...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/ld-linux.so.2\nReading symbols from /lib/libnss_files.so.2...(no\ndebugging symbols \nfound)...done.\nLoaded symbols for /lib/libnss_files.so.2\n0x420e8bb2 in semop () from /lib/i686/libc.so.6\n(gdb) bt\n#0 0x420e8bb2 in semop () from /lib/i686/libc.so.6\n#1 0x080ffa28 in PGSemaphoreLock ()\n#2 0x08116432 in LWLockAcquire ()\n#3 0x0810f572 in LockBuffer ()\n#4 0x0807dea3 in _bt_getbuf ()\n#5 0x080813ec in _bt_leafbuild ()\n#6 0x080816a6 in _bt_leafbuild ()\n#7 0x08081b8b in _bt_leafbuild ()\n#8 0x080813cc in _bt_leafbuild ()\n#9 0x0807e1d0 in btbuild ()\n#10 0x081631c3 in OidFunctionCall3 ()\n#11 0x080920a7 in index_build ()\n#12 0x08092593 in reindex_index ()\n#13 0x08092473 in IndexBuildHeapScan ()\n#14 0x0809275d in reindex_relation ()\n#15 0x080b9164 in ReindexTable ()\n#16 0x08118ece in pg_exec_query_string ()\n#17 0x08119fe5 in PostgresMain ()\n#18 0x0810214c in ClosePostmasterPorts ()\n#19 0x08101a9e in ClosePostmasterPorts ()\n#20 0x08100ca1 in PostmasterMain ()\n#21 0x08100862 in PostmasterMain ()\n#22 0x080deed7 in main ()\n#23 0x42017589 in __libc_start_main () from\n/lib/i686/libc.so.6\n(gdb) quit\nThe program is running. 
Quit anyway (and detach it)?\n(y or n) y\nDetaching from program: /bin/postgres, process 18903\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > I will use gdb next time. What's this right way to\n> > get info as postgres owner?\n> \n> \t$ gdb /path/to/postgres\n> \tgdb> attach PID-of-backend-process\n> \tgdb> bt\n> \tgdb> quit\n> \n> You might try this for practice on any idle backend;\n> it shouldn't affect\n> the state of the backend, except for freezing it\n> while you issue the\n> commands.\n> \n> If \"bt\" gives you just a list of numbers and no\n> symbolic information,\n> then it won't be much help; you'll need to rebuild\n> the backend with\n> debugging information so that we can make some sense\n> of the trace.\n> \n> \t\t\tregards, tom lane\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nFriends. Fun. Try the all-new Yahoo! Messenger.\nhttp://messenger.yahoo.com/ \n",
"msg_date": "Fri, 11 Jun 2004 08:11:48 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> (gdb) bt\n> #0 0x420e8bb2 in semop () from /lib/i686/libc.so.6\n> #1 0x080ffa28 in PGSemaphoreLock ()\n> #2 0x08116432 in LWLockAcquire ()\n> #3 0x0810f572 in LockBuffer ()\n> #4 0x0807dea3 in _bt_getbuf ()\n> #5 0x080813ec in _bt_leafbuild ()\n> #6 0x080816a6 in _bt_leafbuild ()\n> #7 0x08081b8b in _bt_leafbuild ()\n> #8 0x080813cc in _bt_leafbuild ()\n> #9 0x0807e1d0 in btbuild ()\n> #10 0x081631c3 in OidFunctionCall3 ()\n> #11 0x080920a7 in index_build ()\n> #12 0x08092593 in reindex_index ()\n\nHmm. I don't think I believe this backtrace. It's obviously wrong at\nlines 5-7 - _bt_leafbuild doesn't call itself nor call _bt_getbuf.\nIt's possible that you don't have any local symbols in this executable\nand what we're seeing is the nearest global symbol, so let's ignore\nthat; but if we take lines 0-4 at face value, what it says is that the\nREINDEX is stuck waiting for buffer lock on a buffer for a new empty\npage it has just added to the new index. This is flatly impossible.\nThere is no other process that could possibly be interested in that\nbuffer, or for that matter even be able to name it (since the new index\nhas a new relfilenode value that isn't even visible to any other process\nyet). I thought for a little bit that a background CHECKPOINT might be\ntrying to write out the new buffer, but that theory holds no water\neither, because at this point in the _bt_getbuf sequence, the buffer is\nnot marked dirty (I just verified this by stepping through it in 7.4.2).\n\nI can think of lots of reasons why the REINDEX might block at the\nprevious step of the sequence, namely acquiring a fresh buffer ... but\nonce it's got the buffer there is surely no reason to block.\n\nWhat I'm inclined to think is that the backtrace isn't right at all.\nWould it be possible for you to install a backend built with\n--enable-debug and get a more reliable backtrace?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jun 2004 12:05:20 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "We have another production database,\nwhich is similar with this one. \nIt has never had REINDEX block problem yet. \nOne difference between these two databases\nis the one having REINDEX problem is using\nNTFS file system. \n\nIs it possible the root of problem?\n\nThanks,\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > (gdb) bt\n> > #0 0x420e8bb2 in semop () from\n> /lib/i686/libc.so.6\n> > #1 0x080ffa28 in PGSemaphoreLock ()\n> > #2 0x08116432 in LWLockAcquire ()\n> > #3 0x0810f572 in LockBuffer ()\n> > #4 0x0807dea3 in _bt_getbuf ()\n> > #5 0x080813ec in _bt_leafbuild ()\n> > #6 0x080816a6 in _bt_leafbuild ()\n> > #7 0x08081b8b in _bt_leafbuild ()\n> > #8 0x080813cc in _bt_leafbuild ()\n> > #9 0x0807e1d0 in btbuild ()\n> > #10 0x081631c3 in OidFunctionCall3 ()\n> > #11 0x080920a7 in index_build ()\n> > #12 0x08092593 in reindex_index ()\n> \n> Hmm. I don't think I believe this backtrace. It's\n> obviously wrong at\n> lines 5-7 - _bt_leafbuild doesn't call itself nor\n> call _bt_getbuf.\n> It's possible that you don't have any local symbols\n> in this executable\n> and what we're seeing is the nearest global symbol,\n> so let's ignore\n> that; but if we take lines 0-4 at face value, what\n> it says is that the\n> REINDEX is stuck waiting for buffer lock on a buffer\n> for a new empty\n> page it has just added to the new index. This is\n> flatly impossible.\n> There is no other process that could possibly be\n> interested in that\n> buffer, or for that matter even be able to name it\n> (since the new index\n> has a new relfilenode value that isn't even visible\n> to any other process\n> yet). I thought for a little bit that a background\n> CHECKPOINT might be\n> trying to write out the new buffer, but that theory\n> holds no water\n> either, because at this point in the _bt_getbuf\n> sequence, the buffer is\n> not marked dirty (I just verified this by stepping\n> through it in 7.4.2).\n> \n> I can think of lots of reasons why the REINDEX might\n> block at the\n> previous step of the sequence, namely acquiring a\n> fresh buffer ... but\n> once it's got the buffer there is surely no reason\n> to block.\n> \n> What I'm inclined to think is that the backtrace\n> isn't right at all.\n> Would it be possible for you to install a backend\n> built with\n> --enable-debug and get a more reliable backtrace?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\[email protected]\n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nFriends. Fun. Try the all-new Yahoo! Messenger.\nhttp://messenger.yahoo.com/ \n",
"msg_date": "Fri, 11 Jun 2004 12:35:23 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> One difference between these two databases\n> is the one having REINDEX problem is using\n> NTFS file system. \n\nOh? That's interesting.\n\n> Is it possible the root of problem?\n\nI would not expect it to show this particular symptom --- if the\nbacktrace is accurate. But there are nearby places that might have\nFS-dependent behavior. Can you do anything about my request for\na stack trace from a debug-enabled build?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jun 2004 16:27:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Hi,\n\nI have changed \"reindex table my_table\" to:\npsql ...\n -c \"drop index my_index; create index my_index;\"\n\nWe still experience the same \"hang\" problem.\n\nI was told that this time, the process is\n\"create index my_index;\" before the PG server is\nbounced.\n\nWhen I login the database, I found the \nmy_index is still there.\n\nI do not know what caused this happen, and I \nam also confused. If create index my_index is killed\nby \"-9\", then my_index should not present in the\ndatabase because it has been dropped before creating.\n\nOn the other hand, if \"drop index my_index;\" is\nkilled, then how drop index (which is DDL, right?)\ncan be blocked? There must be other process(es)\nhas/have execlusive lock on my_index, which\nis not our case from pg_locks.\n\nTom, we are in the process of installing \nthe backend with --enable-debug.\n\nThanks,\n\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > One difference between these two databases\n> > is the one having REINDEX problem is using\n> > NTFS file system. \n> \n> Oh? That's interesting.\n> \n> > Is it possible the root of problem?\n> \n> I would not expect it to show this particular\n> symptom --- if the\n> backtrace is accurate. But there are nearby places\n> that might have\n> FS-dependent behavior. Can you do anything about my\n> request for\n> a stack trace from a debug-enabled build?\n> \n> \t\t\tregards, tom lane\n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail - Helps protect you from nasty viruses.\nhttp://promotions.yahoo.com/new_mail\n",
"msg_date": "Tue, 22 Jun 2004 08:00:42 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> I have changed \"reindex table my_table\" to:\n> psql ...\n> -c \"drop index my_index; create index my_index;\"\n\n> I do not know what caused this happen, and I \n> am also confused. If create index my_index is killed\n> by \"-9\", then my_index should not present in the\n> database because it has been dropped before creating.\n\nI believe that the above executes the two commands in a single\ntransaction. So if you kill it midway through the CREATE, everything\nrolls back and the index is still there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jun 2004 12:26:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
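Tom's point can be made explicit with a sketch of the two variants, using placeholder table and column names:

    -- Variant 1: both statements in one transaction (what Tom believes the
    -- psql -c invocation does). If the CREATE INDEX is killed, the DROP
    -- rolls back with it, which is why my_index was still present afterwards.
    BEGIN;
    DROP INDEX my_index;
    CREATE INDEX my_index ON my_table (my_column);
    COMMIT;

    -- Variant 2: separate transactions. Each statement commits on its own,
    -- so a killed CREATE INDEX would leave the table with no index at all.
    DROP INDEX my_index;
    CREATE INDEX my_index ON my_table (my_column);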
{
"msg_contents": "Hi All, \n\nIt happened again. \nThis time it hangs when we drop/create index.\n\nHere is gdb info with --enable-debug postgres.\n\nThank you for your help!\n\npostgres 24533 24327 2 Jun28 ? 00:39:11\npostgres: postgres\nxxx xxx.xxx.x.xxx COPY waiting\npostgres 23508 24327 0 03:23 ? 00:00:00\npostgres: postgres\nxxx xxx.xxx.x.xx SELECT waiting\nroot 23662 22727 0 03:24 ? 00:00:00\n/xxx/bin/psql -t -A -q xxx -U postgres -c set\nsort_mem=131072; DROP INDEX xxx_mod_ac_did_cre_idx;\nCREATE INDEX xxx_mod_ac_did_cre_idx ON\nxxx_module_action USING btree (domain_id, created);\npostgres 23663 24327 2 03:24 ? 00:04:40\npostgres: postgres\nxxx [local] CREATE INDEX\npostgres 24252 24327 0 03:26 ? 00:00:00\npostgres: postgres\nxxx xxx.xxx.x.xx SELECT waiting\n\nbash-2.05a$ gdb /xxx/bin/postgres\nGNU gdb Red Hat Linux (5.2-2)\nCopyright 2002 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General\nPublic License, and you\nare\nwelcome to change it and/or distribute copies of it\nunder certain\nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show\nwarranty\" for\ndetails.\nThis GDB was configured as \"i386-redhat-linux\"...\n(gdb) attach 23663\nAttaching to program: /xxx/bin.Linux/postgres, process\n23663\nReading symbols from /usr/lib/libz.so.1...done.\nLoaded symbols for /usr/lib/libz.so.1\nReading symbols from /usr/lib/libreadline.so.4...done.\nLoaded symbols for /usr/lib/libreadline.so.4\nReading symbols from /lib/libtermcap.so.2...done.\nLoaded symbols for /lib/libtermcap.so.2\nReading symbols from /lib/libcrypt.so.1...done.\nLoaded symbols for /lib/libcrypt.so.1\nReading symbols from /lib/libresolv.so.2...done.\nLoaded symbols for /lib/libresolv.so.2\nReading symbols from /lib/libnsl.so.1...done.\nLoaded symbols for /lib/libnsl.so.1\nReading symbols from /lib/libdl.so.2...done.\nLoaded symbols for /lib/libdl.so.2\nReading symbols from /lib/i686/libm.so.6...done.\nLoaded symbols for /lib/i686/libm.so.6\nReading symbols from /lib/i686/libc.so.6...done.\nLoaded symbols for /lib/i686/libc.so.6\nReading symbols from /lib/ld-linux.so.2...done.\nLoaded symbols for /lib/ld-linux.so.2\nReading symbols from /lib/libnss_files.so.2...done.\nLoaded symbols for /lib/libnss_files.so.2\n0x420e8bb2 in semop () from /lib/i686/libc.so.6\n(gdb) bt\n#0 0x420e8bb2 in semop () from /lib/i686/libc.so.6\n#1 0x080ff954 in PGSemaphoreLock (sema=0x4a2d83e8,\ninterruptOK=0 '\\0')\nat pg_sema.c:434\n#2 0x0811635e in LWLockAcquire (lockid=21335,\nmode=LW_EXCLUSIVE) at\nlwlock.c:312\n#3 0x0810f49e in LockBuffer (buffer=10657, mode=2) at\nbufmgr.c:1848\n#4 0x0807dea3 in _bt_getbuf (rel=0x40141b10,\nblkno=4294967295,\naccess=2) at nbtpage.c:337\n#5 0x080813d8 in _bt_blnewpage (index=0x40141b10,\nbuf=0xbfffe724,\npage=0xbfffe728, flags=1) at nbtsort.c:188\n#6 0x08081692 in _bt_buildadd (index=0x40141b10,\nstate=0x4e0b3e30,\nbti=0x4fe20cb8) at nbtsort.c:373\n#7 0x08081b77 in _bt_load (index=0x40141b10,\nbtspool=0x82bf7b8,\nbtspool2=0x0) at nbtsort.c:638\n#8 0x080813b8 in _bt_leafbuild (btspool=0x82bf7b8,\nbtspool2=0x0) at\nnbtsort.c:171\n#9 0x0807e1d0 in btbuild (fcinfo=0xbfffe820) at\nnbtree.c:165\n#10 0x081630d7 in OidFunctionCall3 (functionId=338,\narg1=1075019120,\narg2=1075059472, arg3=137095072)\n at fmgr.c:1275\n#11 0x08092093 in index_build\n(heapRelation=0x40137d70,\nindexRelation=0x40141b10, indexInfo=0x82be7a0)\n at index.c:1447\n#12 0x080913d7 in index_create (heapRelationId=17618,\nindexRelationName=0x82b9648 
\"xxx_mod_ac_did_cre_idx\", \n indexInfo=0x82be7a0, accessMethodObjectId=403,\nclassObjectId=0x82be578, primary=0 '\\0',\nisconstraint=0 '\\0', \n allow_system_table_mods=0 '\\0') at index.c:765\n#13 0x080b88ae in DefineIndex (heapRelation=0x82b9698,\nindexRelationName=0x82b9648 \"xxx_mod_ac_did_cre_idx\", \n accessMethodName=0x82b96c0 \"btree\",\nattributeList=0x82b9718,\nunique=0 '\\0', primary=0 '\\0', \n isconstraint=0 '\\0', predicate=0x0,\nrangetable=0x0) at\nindexcmds.c:211\n#14 0x0811b250 in ProcessUtility (parsetree=0x82b9788,\ndest=Remote,\ncompletionTag=0xbfffea80 \"\") at utility.c:620\n#15 0x08118df6 in pg_exec_query_string\n(query_string=0x82b91e0,\ndest=Remote, parse_context=0x82ade58)\n at postgres.c:789\n#16 0x08119f0d in PostgresMain (argc=4,\nargv=0xbfffecb0,\nusername=0x8240679 \"postgres\") at postgres.c:2013\n#17 0x08102078 in DoBackend (port=0x8240548) at\npostmaster.c:2302\n#18 0x081019ca in BackendStartup (port=0x8240548) at\npostmaster.c:1924\n#19 0x08100bcd in ServerLoop () at postmaster.c:1009\n#20 0x0810078e in PostmasterMain (argc=1,\nargv=0x8227468) at\npostmaster.c:788\n#21 0x080dee2b in main (argc=1, argv=0xbffff644) at\nmain.c:210\n#22 0x42017589 in __libc_start_main () from\n/lib/i686/libc.so.6\n(gdb) quit\nThe program is running. Quit anyway (and detach it)?\n(y or n) y\nDetaching from program: /xxx/bin.Linux/postgres,\nprocess 23663\n\n\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > One difference between these two databases\n> > is the one having REINDEX problem is using\n> > NTFS file system. \n> \n> Oh? That's interesting.\n> \n> > Is it possible the root of problem?\n> \n> I would not expect it to show this particular\n> symptom --- if the\n> backtrace is accurate. But there are nearby places\n> that might have\n> FS-dependent behavior. Can you do anything about my\n> request for\n> a stack trace from a debug-enabled build?\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\[email protected]\n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nYahoo! Mail Address AutoComplete - You start. We finish.\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Wed, 30 Jun 2004 08:08:16 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> It happened again. \n> This time it hangs when we drop/create index.\n> Here is gdb info with --enable-debug postgres.\n\nWell, that pretty much removes all doubt: something has left the buffer\ncontext lock (cntx_lock) set on a buffer that certainly ought to be free.\n\nThe problem here is that REINDEX (or CREATE INDEX in this case) is the\nvictim, not the perpetrator, so we still don't know exactly what's\ncausing the error. We need to go backwards in time, so to speak, to\nidentify the code that's leaving the buffer locked when it shouldn't.\nI don't offhand have a good idea about how to do that. Is there another\nprocess that is also getting stuck when REINDEX does (if so please get\na backtrace from it too)?\n\nBTW, what Postgres version are you using again? The line numbers in\nyour trace don't square with any current version of bufmgr.c ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 2004 11:43:38 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Hi Tom,\n\nOur PG version is 7.3.2.\n\nThe copy process is always there. Besides copy\nprocess, there are many select processes wait also\n(it is understandable only when reindex,\nbut how come selects wait when drop/create index?\n>From Postgres doc:\nNote: Another approach to dealing with a corrupted\nuser-table index is just to drop and recreate it. This\nmay in fact be preferable if you would like to\nmaintain some semblance of normal operation on the\ntable meanwhile. REINDEX acquires exclusive lock on\nthe table, while CREATE INDEX only locks out writes\nnot reads of the table. \n)\n\nEach time, whan this happened, it might hang\non the different index. \n\nBut one thing is sure:\nreindex or create index is granted lock while\nothers wait. If reindex/create index is not \nthe perpetrator, how can PG grants it lock\nbut not others, like COPY?\n\nForgive me I had not provided the full table and\nindex names, IP address, etc. for security reason.\n\nHere is the copy of my the first post on June 8:\nHi,\n\nWe often experience with the problem that reindex \ncannot be finished in our production database. \nIt's typically done with 30 minutes. However,\nsometimes, when there is another \"COPY\" process,\nreindex will not finish. By monitoring the CPU \ntime reindex takes, it does not increase at all.\nThat seems a deadlock. But the following query shows\nonly reindex process (23127)is granted lock while \nCOPY process (3149) is not.\n\nLast time when we have this problem and kill \nreindex process and COPY process does not work.\nWe had to bounce the database server.\n\nAs you know, when reindex is running, nobody can\naccess the table.\nCan someone kindly help?\n\nThanks,\n\n\n\nHere is lock info from database:\n\n replace | database | transaction | pid\n | mode | granted\n-----------------------+----------+-------------+-------+---------------------+---------\n email | 17613 | | \n3149 | RowExclusiveLock | f\n email_cre_dom_idx | 17613 | |\n23127 | ExclusiveLock | t\n email_cid_cre_idx | 17613 | |\n23127 | ShareLock | t\n email_cid_cre_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email | 17613 | |\n23127 | ShareLock | t\n email | 17613 | |\n23127 | AccessExclusiveLock | t\n email_cid_cre_dom_idx | 17613 | |\n23127 | ShareLock | t\n email_cid_cre_dom_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email_did_cre_idx | 17613 | |\n23127 | ShareLock | t\n email_did_cre_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n email_cre_dom_idx | 17613 | |\n23127 | AccessExclusiveLock | t\n(11 rows)\n\n\nHere are the processes of 3149 and 23127 from OS:\n\npostgres 3149 1.3 6.4 154104 134444 ? S \nJun03 92:04 postgres: postgres db1 xx.xx.xx.xx COPY\nwaiting\n\npostgres 23127 3.2 9.3 228224 194512 ? S \n03:35 15:03 postgres: postgres db1 [local] REINDEX\n\nHere are queries from database:\n23127 | REINDEX table email\n\n 3149 | COPY email (...) FROM stdin\n\n\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > It happened again. \n> > This time it hangs when we drop/create index.\n> > Here is gdb info with --enable-debug postgres.\n> \n> Well, that pretty much removes all doubt: something\n> has left the buffer\n> context lock (cntx_lock) set on a buffer that\n> certainly ought to be free.\n> \n> The problem here is that REINDEX (or CREATE INDEX in\n> this case) is the\n> victim, not the perpetrator, so we still don't know\n> exactly what's\n> causing the error. 
We need to go backwards in time,\n> so to speak, to\n> identify the code that's leaving the buffer locked\n> when it shouldn't.\n> I don't offhand have a good idea about how to do\n> that. Is there another\n> process that is also getting stuck when REINDEX does\n> (if so please get\n> a backtrace from it too)?\n> \n> BTW, what Postgres version are you using again? The\n> line numbers in\n> your trace don't square with any current version of\n> bufmgr.c ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose\n> an index scan if your\n> joining column's datatypes do not match\n> \n\n\n\n\t\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - 100MB free storage!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Wed, 30 Jun 2004 09:17:46 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> Our PG version is 7.3.2.\n\nHmm. On general principles you should be using 7.3.6, but I do not see\nanything in the 7.3.* change logs that looks very likely to cure this.\n\n> The copy process is always there. Besides copy\n> process, there are many select processes wait also\n> (it is understandable only when reindex,\n> but how come selects wait when drop/create index?\n\nDROP INDEX would lock out selects (it has no other way to be sure no\nselect is trying to *use* the index). Once you're past that, selects\nwould work, but if you try something like\n\tbegin; drop index; create index; commit;\nthen the drop's lock will be held till commit.\n\nI'm not sure about whether COPY is related. In your original post, the\nCOPY was waiting to acquire RowExclusiveLock on the table, so it hadn't\nactually done anything yet and really couldn't be holding a buffer lock\nAFAICS.\n\n> But one thing is sure:\n> reindex or create index is granted lock while\n> others wait. If reindex/create index is not \n> the perpetrator, how can PG grants it lock\n> but not others, like COPY?\n\nThe point is that it's waiting for a lower-level lock (namely a buffer\nLWLock). There's no deadlock detection for LWLocks, because they're not\nsupposed to be used in ways that could cause a deadlock.\n\nAssuming for the moment that indeed this is a deadlock, you could learn\nsomething the next time it happens with some manual investigation.\nYou'll need to keep using the debug-enabled build. When you next get a\nlockup, proceed as follows:\n\n1. Attach to the REINDEX or CREATE INDEX process and find out which\nLWLock number it is blocked on. (This is the lockid argument of\nLWLockAcquire, 21335 in your trace of today.)\n\n2. For *each* live backend process (including the REINDEX itself),\nattach with gdb and look at the held-locks status of lwlock.c.\nThis would go something like\n\n\tgdb> p num_held_lwlocks\nif greater than zero:\n\tgdb> x/10d held_lwlocks\n(replace \"10\" by the value of num_held_lwlocks)\n\nIf you find a backend that is holding the lock number that REINDEX\nwants, print out its call stack with \"bt\", and look in pg_locks to see\nwhat lockmanager locks it is holding or waiting for. If you do not find\none, then the deadlock theory is disproved, and we're back to square\none.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 2004 13:06:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
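For the last step of the procedure above — checking which lockmanager locks a suspect backend holds or is waiting for — a query along these lines can be run from another session (a sketch only; 23663 stands in for whatever PID the gdb session points at):

    SELECT c.relname, l.pid, l.mode, l.granted
    FROM pg_locks l
    LEFT JOIN pg_class c ON c.oid = l.relation
    WHERE l.pid = 23663;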
{
"msg_contents": "Thanks!\n\nOK, we will do this exceise next time.\n\nTSince there are multiple databases and\nthere are 170 postgres processes this morning,\n60 of them are access the problem database,\nand 57 of 60 are non-idle. \n\nWe only need to gdb those 57 processes, or\nwe need gdb 60 or 170?\n\nThanks again!\n\n--- Tom Lane <[email protected]> wrote:\n> Litao Wu <[email protected]> writes:\n> > Our PG version is 7.3.2.\n> \n> Hmm. On general principles you should be using\n> 7.3.6, but I do not see\n> anything in the 7.3.* change logs that looks very\n> likely to cure this.\n> \n> > The copy process is always there. Besides copy\n> > process, there are many select processes wait also\n> > (it is understandable only when reindex,\n> > but how come selects wait when drop/create index?\n> \n> DROP INDEX would lock out selects (it has no other\n> way to be sure no\n> select is trying to *use* the index). Once you're\n> past that, selects\n> would work, but if you try something like\n> \tbegin; drop index; create index; commit;\n> then the drop's lock will be held till commit.\n> \n> I'm not sure about whether COPY is related. In your\n> original post, the\n> COPY was waiting to acquire RowExclusiveLock on the\n> table, so it hadn't\n> actually done anything yet and really couldn't be\n> holding a buffer lock\n> AFAICS.\n> \n> > But one thing is sure:\n> > reindex or create index is granted lock while\n> > others wait. If reindex/create index is not \n> > the perpetrator, how can PG grants it lock\n> > but not others, like COPY?\n> \n> The point is that it's waiting for a lower-level\n> lock (namely a buffer\n> LWLock). There's no deadlock detection for LWLocks,\n> because they're not\n> supposed to be used in ways that could cause a\n> deadlock.\n> \n> Assuming for the moment that indeed this is a\n> deadlock, you could learn\n> something the next time it happens with some manual\n> investigation.\n> You'll need to keep using the debug-enabled build. \n> When you next get a\n> lockup, proceed as follows:\n> \n> 1. Attach to the REINDEX or CREATE INDEX process and\n> find out which\n> LWLock number it is blocked on. (This is the lockid\n> argument of\n> LWLockAcquire, 21335 in your trace of today.)\n> \n> 2. For *each* live backend process (including the\n> REINDEX itself),\n> attach with gdb and look at the held-locks status of\n> lwlock.c.\n> This would go something like\n> \n> \tgdb> p num_held_lwlocks\n> if greater than zero:\n> \tgdb> x/10d held_lwlocks\n> (replace \"10\" by the value of num_held_lwlocks)\n> \n> If you find a backend that is holding the lock\n> number that REINDEX\n> wants, print out its call stack with \"bt\", and look\n> in pg_locks to see\n> what lockmanager locks it is holding or waiting for.\n> If you do not find\n> one, then the deadlock theory is disproved, and\n> we're back to square\n> one.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Wed, 30 Jun 2004 11:45:18 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Litao Wu <[email protected]> writes:\n> Since there are multiple databases and\n> there are 170 postgres processes this morning,\n> 60 of them are access the problem database,\n> and 57 of 60 are non-idle. \n\n> We only need to gdb those 57 processes, or\n> we need gdb 60 or 170?\n\nPotentially the deadlock could be anywhere :-(. You should definitely\nnot assume it must be one of the processes connected to the problem\ndatabase, because the buffer pool is cluster-wide.\n\nMight be worth setting up a shell script to help.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 30 Jun 2004 15:33:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: reindex and copy - deadlock? "
},
{
"msg_contents": "Hi,\n\nI have query:\nexplain\nSELECT *\nFROM ip_tracking T, ip_map C\nWHERE\n T.source_ip::inet >>= C.net;\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..3894833367750.16\nrows=51709297065144 width=111)\n Join Filter: (\"outer\".source_ip >>=\n(\"inner\".net)::inet)\n -> Seq Scan on ip_tracking t \n(cost=0.00..825050.68 rows=31093368 width=34)\n -> Seq Scan on ip_map c (cost=0.00..83686.66\nrows=3326066 width=77)\n(4 rows)\n\nip_tracking (\n pk_col int,\n source_ip inet,\n .. the rest...\n)\nThere is one index \n ip_tracking_ip_idx btree (source_ip)\n\nip_map (\nnet cidr,\n... the rest...)\nIndexes: map_net_idx hash (net)\n\nIf I change \">>=\" to \"=\", the query plan is:\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..10798882243.63 rows=31093368\nwidth=111)\n -> Seq Scan on ip_map c (cost=0.00..83686.66\nrows=3326066 width=77)\n -> Index Scan using ip_tracking_ip_idx on\nip_tracking t (cost=0.00..3236.72 rows=800 width=34)\n Index Cond: (t.source_ip =\n(\"outer\".net)::inet)\n(4 rows)\n\nThis is my first time to deal network address type.\n\nIs it possible to make a query use index with\noperator of \">>=\" like the above?\n\nThanks,\n\n\n\n\n\t\t\n__________________________________\nDo you Yahoo!?\nNew and Improved Yahoo! Mail - Send 10MB messages!\nhttp://promotions.yahoo.com/new_mail \n",
"msg_date": "Thu, 1 Jul 2004 09:12:47 -0700 (PDT)",
"msg_from": "Litao Wu <[email protected]>",
"msg_from_op": false,
"msg_subject": "network address query "
}
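As far as I know, this PostgreSQL release has no index support for the inet containment operator itself, which is why the ">>=" form falls back to a sequential filter. One workaround sometimes suggested — shown here only as an untested sketch, and valid only when ip_tracking.source_ip holds plain host addresses — is to express containment as a range test so that the existing btree index on source_ip can drive the inner scan, much as it does in the "=" plan above:

    SELECT *
    FROM ip_map C
    JOIN ip_tracking T
      ON T.source_ip >= host(network(C.net))::inet
     AND T.source_ip <= host(broadcast(C.net))::inet;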
] |
[
{
"msg_contents": "I didn't really look that closely at the problem but have you thought of\ntrying:\n\nselect t.key, t.field from t a\n , (select count(*) as cntb from t b\n where b.field > a.field) as dmytbl\nwhere\ncntb = k\n\nThis is called an inline view or sometimes a nested table. You would be\njoining table t to this inline view with the join criteria being \"cntb = k\"\nwhere k is in t.\n\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Monday, June 07, 2004 1:32 PM\nTo: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] is it possible to for the planner to optimize\nthis form?\n\n\nMerlin,\n\n> select t.key, t.field from t a\n> where\n> (\n> select count(*) from t b\n> where b.field > a.field\n> ) = k\n>\n> The subplan (either index or seq. scan) executes once for each row in t,\n> which of course takes forever.\n>\n> This query is a way of achieving LIMIT type results (substitute n-1\n> desired rows for k) using standard SQL, which is desirable in some\n> circumstances. Is it theoretically possible for this to be optimized?\n\nI don't think so, no. PostgreSQL does have some issues using indexes for \ncount() queires which makes the situation worse. However, with the query \nyou presented, I don't see any way around the planner executing the subquery\n\nonce for every row in t.\n\nExcept, of course, for some kind of scheme involving materialized views, if \nyou don't need up-to-the minute data. In that case, you could store in a \ntable the count(*)s of t for each threshold value of b.field. But, \ndynamically, that would be even slower.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n\n\nRE: [PERFORM] is it possible to for the planner to optimize this form?\n\n\nI didn't really look that closely at the problem but have you thought of trying:\n\nselect t.key, t.field from t a\n , (select count(*) as cntb from t b\n where b.field > a.field) as dmytbl\nwhere\ncntb = k\n\nThis is called an inline view or sometimes a nested table. You would be joining table t to this inline view with the join criteria being \"cntb = k\" where k is in t.\n\n-----Original Message-----\nFrom: Josh Berkus [mailto:[email protected]]\nSent: Monday, June 07, 2004 1:32 PM\nTo: Merlin Moncure; [email protected]\nSubject: Re: [PERFORM] is it possible to for the planner to optimize\nthis form?\n\n\nMerlin,\n\n> select t.key, t.field from t a\n> where\n> (\n> select count(*) from t b\n> where b.field > a.field\n> ) = k\n>\n> The subplan (either index or seq. scan) executes once for each row in t,\n> which of course takes forever.\n>\n> This query is a way of achieving LIMIT type results (substitute n-1\n> desired rows for k) using standard SQL, which is desirable in some\n> circumstances. Is it theoretically possible for this to be optimized?\n\nI don't think so, no. PostgreSQL does have some issues using indexes for \ncount() queires which makes the situation worse. However, with the query \nyou presented, I don't see any way around the planner executing the subquery \nonce for every row in t.\n\nExcept, of course, for some kind of scheme involving materialized views, if \nyou don't need up-to-the minute data. 
In that case, you could store in a \ntable the count(*)s of t for each threshold value of b.field. But, \ndynamically, that would be even slower.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly",
"msg_date": "Mon, 7 Jun 2004 16:31:44 -0700 ",
"msg_from": "Duane Lee - EGOVX <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: is it possible to for the planner to optimize this "
}
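The thread above is about emulating LIMIT-style results in portable SQL. Where portability is not a requirement, the result the correlated count() construction is after can usually be obtained directly with the non-standard form PostgreSQL already optimizes well (a sketch; the value 5 and the descending order are illustrative, and tie handling differs slightly from the count() formulation):

    SELECT key, field
    FROM t
    ORDER BY field DESC
    LIMIT 5;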
] |
[
{
"msg_contents": "Dear All,\n\nI have a table with approximately 570k Rows.\n\n\n Table \"filter.rules\"\n Column | Type | Modifiers\n----------+------------------------+----------------------------------------\n rulename | character varying(16) | not null default ''::character varying\n uri | character varying(200) | not null default ''::character varying\n redirect | character varying(200) | not null default ''::character varying\n moddate | date | not null default ('now'::text)::date\n active | boolean | not null default true\n comment | character varying(255) | not null default ''::character varying\nIndexes:\n \"rules_pkey\" primary key, btree (rulename, uri)\n \"moddate_idx\" btree (moddate)\n \"rules_idx\" btree (lower((uri)::text))\n\nStatistic on the uri column have been set 1000\nVacuum full and analyze was run before tests, no alteration to tables since then.\n\n# analyze verbose filter.rules;\nINFO: analyzing \"filter.rules\"\nINFO: \"rules\": 5228 pages, 300000 rows sampled, 570533 estimated total rows\nANALYZE\n\n# explain analyze SELECT rulename, redirect from filter.rules WHERE lower(uri) IN(lower('land.com'),lower('com'),lower('land.com/'),lower('com/')) GROUP BY rulename,redirect;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=22352.79..22352.79 rows=1 width=12) (actual time=2047.331..2047.332 rows=1 loops=1)\n -> Seq Scan on rules (cost=0.00..22296.32 rows=11294 width=12) (actual time=540.149..2047.308 rows=1 loops=1)\n Filter: ((lower((uri)::text) = 'land.com'::text) OR (lower((uri)::text) = 'com'::text) OR (lower((uri)::text) = 'land.com/'::text) OR (lower((uri)::text) = 'com/'::text))\n Total runtime: 2047.420 ms\n(4 rows)\n\n# SET enable_seqscan=off;\n\n# explain analyze SELECT rulename, redirect from filter.rules WHERE lower(uri) IN(lower('land.com'),lower('com'),lower('land.com/'),lower('com/')) GROUP BY rulename,redirect;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=38970.68..38970.68 rows=1 width=12) (actual time=0.328..0.328 rows=1 loops=1)\n -> Index Scan using rules_idx, rules_idx, rules_idx, rules_idx on rules (cost=0.00..38914.21 rows=11294 width=12) (actual time=0.210..0.312 rows=1 loops=1)\n Index Cond: ((lower((uri)::text) = 'land.com'::text) OR (lower((uri)::text) = 'com'::text) OR (lower((uri)::text) = 'land.com/'::text) OR (lower((uri)::text) = 'com/'::text))\n Total runtime: 0.700 ms\n(4 rows)\n\nCould anybody offer explanations of why the planner does such a terrible job of estimated the number of rows for this query, with the stats set so high.\nTests were also done with stats set to 100, and 1. The results are exactly the same. Which I would have assumed.\n\nAlso I am interested in how functional indexes have statistics collected for them, if they do. As to possibly minimize or avoid this problem in the future.\n\nThanks for your considersation of this matter.\n\nRegards\n\nRussell Smith.\n",
"msg_date": "Tue, 8 Jun 2004 17:24:36 +1000",
"msg_from": "Russell Smith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Use of Functional Indexs and Planner estimates"
},
{
"msg_contents": "On Tue, 8 Jun 2004 17:24:36 +1000, Russell Smith <[email protected]>\nwrote:\n>Also I am interested in how functional indexes have statistics collected for them, if they do.\n\nNot in any released version.\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/src/backend/commands/analyze.c\n\n| Revision 1.70 / Sun Feb 15 21:01:39 2004 UTC (3 months, 3 weeks ago) by tgl\n| Changes since 1.69: +323 -16 lines\n|\n| First steps towards statistics on expressional (nee functional) indexes.\n| This commit teaches ANALYZE to store such stats in pg_statistic, but\n| nothing is done yet about teaching the planner to use 'em.\n\nSo statistics gathering for expressional indexes will be in 7.5, but I\ndon't know about the state of the planner ...\n\nServus\n Manfred\n",
"msg_date": "Tue, 08 Jun 2004 10:57:49 +0200",
"msg_from": "Manfred Koizar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of Functional Indexs and Planner estimates"
},
{
"msg_contents": "Manfred Koizar <[email protected]> writes:\n> So statistics gathering for expressional indexes will be in 7.5, but I\n> don't know about the state of the planner ...\n\nPlanner support is there too:\n\n2004-02-16 19:52 tgl\n\n\t* src/: backend/optimizer/path/costsize.c,\n\tbackend/optimizer/util/relnode.c, backend/utils/adt/selfuncs.c,\n\tinclude/optimizer/pathnode.h, include/utils/selfuncs.h: Make use of\n\tstatistics on index expressions. There are still some corner cases\n\tthat could stand improvement, but it does all the basic stuff.\tA\n\tbyproduct is that the selectivity routines are no longer\n\tconstrained to working on simple Vars; we might in future be able\n\tto improve the behavior for subexpressions that don't match\n\tindexes.\n\nI don't recall anymore what \"corner cases\" I had in mind for future\nimprovement.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Jun 2004 10:33:30 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of Functional Indexs and Planner estimates "
},
{
"msg_contents": "On Tue, 2004-06-08 at 01:24, Russell Smith wrote:\n> Dear All,\n> \n> I have a table with approximately 570k Rows.\n> \n> \n> Table \"filter.rules\"\n> Column | Type | Modifiers\n> ----------+------------------------+----------------------------------------\n> rulename | character varying(16) | not null default ''::character varying\n> uri | character varying(200) | not null default ''::character varying\n> redirect | character varying(200) | not null default ''::character varying\n> moddate | date | not null default ('now'::text)::date\n> active | boolean | not null default true\n> comment | character varying(255) | not null default ''::character varying\n> Indexes:\n> \"rules_pkey\" primary key, btree (rulename, uri)\n> \"moddate_idx\" btree (moddate)\n> \"rules_idx\" btree (lower((uri)::text))\n> \n> Statistic on the uri column have been set 1000\n> Vacuum full and analyze was run before tests, no alteration to tables since then.\n> \n> # analyze verbose filter.rules;\n> INFO: analyzing \"filter.rules\"\n> INFO: \"rules\": 5228 pages, 300000 rows sampled, 570533 estimated total rows\n> ANALYZE\n> \n> # explain analyze SELECT rulename, redirect from filter.rules WHERE lower(uri) IN(lower('land.com'),lower('com'),lower('land.com/'),lower('com/')) GROUP BY rulename,redirect;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=22352.79..22352.79 rows=1 width=12) (actual time=2047.331..2047.332 rows=1 loops=1)\n> -> Seq Scan on rules (cost=0.00..22296.32 rows=11294 width=12) (actual time=540.149..2047.308 rows=1 loops=1)\n> Filter: ((lower((uri)::text) = 'land.com'::text) OR (lower((uri)::text) = 'com'::text) OR (lower((uri)::text) = 'land.com/'::text) OR (lower((uri)::text) = 'com/'::text))\n> Total runtime: 2047.420 ms\n> (4 rows)\n> \n> # SET enable_seqscan=off;\n> \n> # explain analyze SELECT rulename, redirect from filter.rules WHERE lower(uri) IN(lower('land.com'),lower('com'),lower('land.com/'),lower('com/')) GROUP BY rulename,redirect;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=38970.68..38970.68 rows=1 width=12) (actual time=0.328..0.328 rows=1 loops=1)\n> -> Index Scan using rules_idx, rules_idx, rules_idx, rules_idx on rules (cost=0.00..38914.21 rows=11294 width=12) (actual time=0.210..0.312 rows=1 loops=1)\n> Index Cond: ((lower((uri)::text) = 'land.com'::text) OR (lower((uri)::text) = 'com'::text) OR (lower((uri)::text) = 'land.com/'::text) OR (lower((uri)::text) = 'com/'::text))\n> Total runtime: 0.700 ms\n> (4 rows)\n> \n> Could anybody offer explanations of why the planner does such a terrible job of estimated the number of rows for this query, with the stats set so high.\n> Tests were also done with stats set to 100, and 1. The results are exactly the same. Which I would have assumed.\n\nSimple, the planner is choosing a sequential scan when it should be\nchoosing an index scan. This is usually because random_page_cost is set\ntoo high, at the default of 4. Try settings between 1.2 and 2.x or so\nto see how that helps. Be sure and test with various queries of your\nown to be sure you've got about the right setting.\n\n",
"msg_date": "Tue, 08 Jun 2004 08:54:11 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of Functional Indexs and Planner estimates"
},
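A quick way to try Scott's suggestion without editing postgresql.conf is to change the setting for the current session only and re-run the query from the original post (1.5 is just one value from the suggested 1.2-2.x range):

    SET random_page_cost = 1.5;
    EXPLAIN ANALYZE
    SELECT rulename, redirect
    FROM filter.rules
    WHERE lower(uri) IN (lower('land.com'), lower('com'), lower('land.com/'), lower('com/'))
    GROUP BY rulename, redirect;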
{
"msg_contents": "\n\"Scott Marlowe\" <[email protected]> writes:\n\n> > -> Seq Scan on rules\n> > (cost=0.00..22296.32 rows=11294 width=12) \n> > (actual time=540.149..2047.308 rows=1 loops=1)\n\n\n> Simple, the planner is choosing a sequential scan when it should be\n> choosing an index scan. This is usually because random_page_cost is set\n> too high, at the default of 4. Try settings between 1.2 and 2.x or so\n> to see how that helps. Be sure and test with various queries of your\n> own to be sure you've got about the right setting.\n\nUnless you make random_page_cost about .0004 (4/11294) it isn't going to be\ncosting this query right (That's a joke, don't do it:). It's thinking there\nare 11,000 records matching the where clause when in fact there is only 1.\n\nIf you know how an upper bound on how many records the query should be finding\nyou might try a kludge involving putting a LIMIT inside the group by. ie,\nsomething like \n\nselect rulename,redirect \n from (select rulename,redirect \n from ...\n where ... \n limit 100) as kludge\n group by rulename,redirect\n\nThis would at least tell the planner not to expect more than 100 rows and to\ntake the plan likely to produce the first 100 rows fastest.\n\nBut this has the disadvantage of uglifying your code and introducing an\narbitrary limit. When 7.5 comes out it you'll want to rip this out.\n\n-- \ngreg\n\n",
"msg_date": "09 Jun 2004 11:59:41 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Use of Functional Indexs and Planner estimates"
}
] |
[
{
"msg_contents": "I am putting together new server to deal with huge burst loads of\ntraffic. I have been reading up on performance recommendations on the\nsite and am interested to try a battery backed up ram disks for the wal\nbuffer. I would like to hear about types and brands of ram disk you have\ntried out and if anyone has a make or type they love or hate. What kind\nof pgbench improvements have you seen. ) Did anyone experience any linux\ndriver issues with their ram disk? We will probably run it with Redhat\n(2.4.20), on an Opteron 8GB (2,4 or 8) cpu box (clients choice) with the\nfastest raid system we can afford. Thanks for the info.\n\n \n\n \n\n\n\n\n\n\n\n\n\n\n \nI am putting together new server to deal\nwith huge burst loads of traffic. I have been reading up on performance\nrecommendations on the site and am interested to try a battery backed up ram\ndisks for the wal buffer. I would like to hear about types and brands of ram\ndisk you have tried out and if anyone has a make or type they love or hate. What\nkind of pgbench improvements have you seen. ) Did anyone experience any linux driver\nissues with their ram disk? We will probably run it with Redhat (2.4.20), on an\nOpteron 8GB (2,4 or 8) cpu box (clients choice) with the fastest raid system we\ncan afford. Thanks for the info.",
"msg_date": "Tue, 8 Jun 2004 12:45:26 -0600",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "RamDisk"
}
] |
[
{
"msg_contents": "I'm having a performance issue that I just can't resolve and its very,\nvery curious. Thought someone here might be able to shed some light on\nthe subject.\n\nI'm using Postgres 7.4.2 on Red Hat 9. I have a table with 763,809 rows\nin it defined as follows ...\n\nksedb=# \\d nrgfeature\n Table \"public.nrgfeature\"\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n fid1 | numeric(64,0) | not null\n fid2 | numeric(64,0) | not null\n created | timestamp without time zone | not null\n createdby | character varying(30) | not null\n modified | timestamp without time zone |\n modifiedby | character varying(30) |\n geommodified | timestamp without time zone |\n geommodifiedby | character varying(30) |\n deleted | timestamp without time zone |\n deletedby | character varying(30) |\n featuretypeid | smallint | not null\n description | text |\n datasourceid | smallint | not null\n lowerleftx | double precision | not null\n lowerlefty | double precision | not null\n upperrightx | double precision | not null\n upperrighty | double precision | not null\n diagonalsize | double precision |\n login | character varying(25) |\nIndexes:\n \"nrgfeature_pkey\" primary key, btree (fid1, fid2)\n \"nrgfeature_ft_index\" btree (featuretypeid)\n \"nrgfeature_xys_index\" btree (upperrightx, lowerleftx, upperrighty,\nlowerlefty, diagonalsize)\nInherits: commonfidattrs,\n commonrevisionattrs\n\n\n... If I write a query as follows ...\n\nSELECT *\nFROM nrgfeature f\nWHERE \n upperRightX > 321264.23697721504\n\tAND lowerLeftX < 324046.79981208267\n\tAND upperRightY > 123286.26189863647\n\tAND lowerLeftY < 124985.92745047594\n\tAND diagonalSize > 50.000\n;\n\n... (or any value for diagonalsize over 50) then my query runs in 50-100\nmilliseconds. However, if the diagonalSize value is changed to 49.999\nor any value below 50, then the query takes over a second for a factor\nof 10 degradation in speed, even though the exact same number of rows is\nreturned.\n\nThe query plan for diagonalSize > 50.000 is ...\n\nIndex Scan using nrgfeature_xys_index on nrgfeature f \n(cost=0.00..17395.79 rows=4618 width=220)\n Index Cond: ((upperrightx > 321264.236977215::double precision) AND\n(lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n123286.261898636::double precision) AND (lowerlefty <\n124985.927450476::double precision) AND (diagonalsize > 50::double\nprecision))\n\n... while for diagonalSize > 49.999 is ...\n\n Seq Scan on nrgfeature f (cost=0.00..31954.70 rows=18732 width=220)\n Filter: ((upperrightx > 321264.236977215::double precision) AND\n(lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n123286.261898636::double precision) AND (lowerlefty <\n124985.927450476::double precision) AND (diagonalsize > 49.999::double\nprecision))\n\n... and yes, if I set enable_seqscan=false then the index is forced to\nbe used. However, despite this being an undesirable solution for this\nsimple case it doesn't solve the problem for the general case. As soon\nas I add in joins with a couple of tables to perform the actual query I\nwant to perform, the seq scan setting doesn't force the index to be used\nanymore. 
Instead, the primary key index is used at this same\ndiagonalSize cutoff and the 5-part double precision clause is used as a\nfilter to the index scan and the result is again a very slow query.\n\nI can provide those queries and results but that would only complicate\nthis already lengthy email and the above seems to be the crux of the\nproblem anyway.\n\nAny help or thoughts would be greatly appreciated of course.\n\nThanks,\n\nKen Southerland\n\n\n-- \n------s----a----m----s----i----x----e----d----d------\n--\n\nKen Southerland\nSenior Consultant\nSam Six EDD\nhttp://www.samsixedd.com\n\n503-236-4288 (office)\n503-358-6542 (cell)\n\n\n",
"msg_date": "Wed, 09 Jun 2004 12:31:00 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index oddity"
},
{
"msg_contents": "It seems to believe that the number of rows returned for the >49.999\ncase will be 4 times the number for the >50 case. If that was true, then\nthe sequential scan would be correct.\n\nALTER TABLE <table> ALTER COLUMN diagonalsize SET STATISTICS 1000;\nANALZYE <table>;\n\nSend back EXPLAIN ANALYZE output for the >49.999 case.\n\n> The query plan for diagonalSize > 50.000 is ...\n> \n> Index Scan using nrgfeature_xys_index on nrgfeature f \n> (cost=0.00..17395.79 rows=4618 width=220)\n> Index Cond: ((upperrightx > 321264.236977215::double precision) AND\n> (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> 123286.261898636::double precision) AND (lowerlefty <\n> 124985.927450476::double precision) AND (diagonalsize > 50::double\n> precision))\n> \n> ... while for diagonalSize > 49.999 is ...\n> \n> Seq Scan on nrgfeature f (cost=0.00..31954.70 rows=18732 width=220)\n> Filter: ((upperrightx > 321264.236977215::double precision) AND\n> (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> 123286.261898636::double precision) AND (lowerlefty <\n> 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> precision))\n\n",
"msg_date": "Wed, 09 Jun 2004 16:12:11 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "Thanks Rod,\n\nThis setting has no effect however. If I set statistics to 1000, or\neven 0, (and then reanalyze the table) I see no change in the behaviour\nof the query plans. i.e. there is still the odd transtion in the plans\nat diagonalSize = 50.\n\nKen\n\n\n\nOn Wed, 2004-06-09 at 13:12, Rod Taylor wrote:\n> It seems to believe that the number of rows returned for the >49.999\n> case will be 4 times the number for the >50 case. If that was true, then\n> the sequential scan would be correct.\n> \n> ALTER TABLE <table> ALTER COLUMN diagonalsize SET STATISTICS 1000;\n> ANALZYE <table>;\n> \n> Send back EXPLAIN ANALYZE output for the >49.999 case.\n> \n> > The query plan for diagonalSize > 50.000 is ...\n> > \n> > Index Scan using nrgfeature_xys_index on nrgfeature f \n> > (cost=0.00..17395.79 rows=4618 width=220)\n> > Index Cond: ((upperrightx > 321264.236977215::double precision) AND\n> > (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> > 123286.261898636::double precision) AND (lowerlefty <\n> > 124985.927450476::double precision) AND (diagonalsize > 50::double\n> > precision))\n> > \n> > ... while for diagonalSize > 49.999 is ...\n> > \n> > Seq Scan on nrgfeature f (cost=0.00..31954.70 rows=18732 width=220)\n> > Filter: ((upperrightx > 321264.236977215::double precision) AND\n> > (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> > 123286.261898636::double precision) AND (lowerlefty <\n> > 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> > precision))\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n\n",
"msg_date": "Wed, 09 Jun 2004 13:50:09 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "On Wed, 2004-06-09 at 16:50, ken wrote:\n> Thanks Rod,\n> \n> This setting has no effect however. If I set statistics to 1000, or\n\nOkay.. but you never did send EXPLAIN ANALYZE output. I want to know\nwhat it is really finding.\n\n> On Wed, 2004-06-09 at 13:12, Rod Taylor wrote:\n> > It seems to believe that the number of rows returned for the >49.999\n> > case will be 4 times the number for the >50 case. If that was true, then\n> > the sequential scan would be correct.\n> > \n> > ALTER TABLE <table> ALTER COLUMN diagonalsize SET STATISTICS 1000;\n> > ANALZYE <table>;\n> > \n> > Send back EXPLAIN ANALYZE output for the >49.999 case.\n> > \n> > > The query plan for diagonalSize > 50.000 is ...\n> > > \n> > > Index Scan using nrgfeature_xys_index on nrgfeature f \n> > > (cost=0.00..17395.79 rows=4618 width=220)\n> > > Index Cond: ((upperrightx > 321264.236977215::double precision) AND\n> > > (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> > > 123286.261898636::double precision) AND (lowerlefty <\n> > > 124985.927450476::double precision) AND (diagonalsize > 50::double\n> > > precision))\n> > > \n> > > ... while for diagonalSize > 49.999 is ...\n> > > \n> > > Seq Scan on nrgfeature f (cost=0.00..31954.70 rows=18732 width=220)\n> > > Filter: ((upperrightx > 321264.236977215::double precision) AND\n> > > (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> > > 123286.261898636::double precision) AND (lowerlefty <\n> > > 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> > > precision))\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: the planner will ignore your desire to choose an index scan if your\n> > joining column's datatypes do not match\n> > \n\n",
"msg_date": "Wed, 09 Jun 2004 16:56:38 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "On Wed, 2004-06-09 at 13:56, Rod Taylor wrote:\n> On Wed, 2004-06-09 at 16:50, ken wrote:\n> > Thanks Rod,\n> > \n> > This setting has no effect however. If I set statistics to 1000, or\n> \n> Okay.. but you never did send EXPLAIN ANALYZE output. I want to know\n> what it is really finding.\n\nAh, sorry, missed the ANALYZE portion of your request (thought you only\nwanted the result of explain if it changed due to the setting).\n\nHere is the query plan with statistics on diagonalsize set to the\ndefault (-1) ...\n\n Seq Scan on nrgfeature f (cost=0.00..32176.98 rows=19134 width=218)\n(actual time=61.640..1009.414 rows=225 loops=1)\n Filter: ((upperrightx > 321264.236977215::double precision) AND\n(lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n123286.261898636::double precision) AND (lowerlefty <\n124985.927450476::double precision) AND (diagonalsize > 49.999::double\nprecision))\n\n... and here is the plan with statistics set to 1000 ...\n\n Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)\n(actual time=63.544..1002.701 rows=225 loops=1)\n Filter: ((upperrightx > 321264.236977215::double precision) AND\n(lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n123286.261898636::double precision) AND (lowerlefty <\n124985.927450476::double precision) AND (diagonalsize > 49.999::double\nprecision))\n\n... so yeah, its obviously finding way, way less rows than it thinks it\nwill.\n\nthanks,\n\nken\n\n\n> \n> > On Wed, 2004-06-09 at 13:12, Rod Taylor wrote:\n> > > It seems to believe that the number of rows returned for the >49.999\n> > > case will be 4 times the number for the >50 case. If that was true, then\n> > > the sequential scan would be correct.\n> > > \n> > > ALTER TABLE <table> ALTER COLUMN diagonalsize SET STATISTICS 1000;\n> > > ANALZYE <table>;\n> > > \n> > > Send back EXPLAIN ANALYZE output for the >49.999 case.\n> > > \n\n\n",
"msg_date": "Wed, 09 Jun 2004 14:02:09 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "> ... and here is the plan with statistics set to 1000 ...\n> \n> Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)\n> (actual time=63.544..1002.701 rows=225 loops=1)\n> Filter: ((upperrightx > 321264.236977215::double precision) AND\n> (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> 123286.261898636::double precision) AND (lowerlefty <\n> 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> precision))\n\nIt's better like this, but still way off the mark. Even your good query\nwhich uses the index was out by more than an order of magnitude.\n\nTry raising the statistics levels for upperrightx, lowerleftx,\nupperrighty and lowerlefty.\n\nFailing that, you might be able to push it back down again by giving\ndiagonalsize an upper limit. Perhaps 500 is a value that would never\noccur.\n\n\tAND (diagonalsize BETWEEN 49.999::double precision AND 500)\n\n\n",
"msg_date": "Wed, 09 Jun 2004 17:29:03 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "I had already tried setting the statistics to 1000 for all five of these\ndouble precision fields with effectively no improvement. Should have\nmentioned that.\n\nAlso the between makes all values for diagonalSize bad since it is\neffectively doing digonalSize > X and diagonalSize < Y. If I do a query\nwith the same values for the four x,y values and diagonalSize < Y, then\nfor Y=49.999 the query is fast but for anything 50.000 and greater the\nquery is slow. The exact opposite of the greater than queries not\nsurprisingly.\n\nI also think I originally reported that the two queries gave the same\nnumber of rows. That is not true. It was true when I had other joins\nin, but when I stripped the query down to this core problem I should\nhave noticed that the number of results now differs between the two,\nwhich I didn't at first.\n\nIf I take away the diagonalSize condition in my query I find that there\nare 225 rows that satisfy the other conditions. 155 of these have a\ndiagonalSize value of exactly 50.000, while the remaining 70 rows all\nhave values larger than 50. Thus there is a big discrete jump in the\nnumber of rows at a diagonalSize of 50. However, the statistics are off\nby two orders of magnitude in guessing how many rows there are going to\nbe in this case and thus is not using my index. How can I fix that?\n\nKen\n\n\n\nOn Wed, 2004-06-09 at 14:29, Rod Taylor wrote:\n> > ... and here is the plan with statistics set to 1000 ...\n> > \n> > Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)\n> > (actual time=63.544..1002.701 rows=225 loops=1)\n> > Filter: ((upperrightx > 321264.236977215::double precision) AND\n> > (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> > 123286.261898636::double precision) AND (lowerlefty <\n> > 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> > precision))\n> \n> It's better like this, but still way off the mark. Even your good query\n> which uses the index was out by more than an order of magnitude.\n> \n> Try raising the statistics levels for upperrightx, lowerleftx,\n> upperrighty and lowerlefty.\n> \n> Failing that, you might be able to push it back down again by giving\n> diagonalsize an upper limit. Perhaps 500 is a value that would never\n> occur.\n> \n> \tAND (diagonalsize BETWEEN 49.999::double precision AND 500)\n> \n> \n> \n\n",
"msg_date": "Wed, 09 Jun 2004 16:11:03 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "> If I take away the diagonalSize condition in my query I find that there\n> are 225 rows that satisfy the other conditions. 155 of these have a\n> diagonalSize value of exactly 50.000, while the remaining 70 rows all\n> have values larger than 50. Thus there is a big discrete jump in the\n> number of rows at a diagonalSize of 50. However, the statistics are off\n> by two orders of magnitude in guessing how many rows there are going to\n> be in this case and thus is not using my index. How can I fix that?\n\nMaybe you should drop your random_page_cost to something less than 4, \neg. 3 or even 2...\n\nChris\n",
"msg_date": "Thu, 10 Jun 2004 09:45:14 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "On Wed, 2004-06-09 at 21:45, Christopher Kings-Lynne wrote:\n> > If I take away the diagonalSize condition in my query I find that there\n> > are 225 rows that satisfy the other conditions. 155 of these have a\n\n> Maybe you should drop your random_page_cost to something less than 4, \n> eg. 3 or even 2...\n\nThe big problem is a very poor estimate (off by a couple orders of\nmagnitude). I was hoping someone with more knowledge in fixing those\nwould jump in.\n\n\n",
"msg_date": "Wed, 09 Jun 2004 22:17:57 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "\n\nRod Taylor wrote:\n\n>\n>The big problem is a very poor estimate (off by a couple orders of\n>magnitude). I was hoping someone with more knowledge in fixing those\n>would jump in.\n>\n>\n> \n>\nANALYZE might be producing poor stats due to :\n\ni) many dead tuples or\nii) high proportion of dead tuples in the first few pages of the table\n\nDoes a VACUUM FULL followed by ANALYZE change the estimates (or have you \ntried this already)?\n\n(p.s. - I probably don't qualify for the 'more knowledge' bit either...)\n\nregards\n\nMark\n",
"msg_date": "Thu, 10 Jun 2004 15:08:49 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
{
"msg_contents": "\n>>\n> ANALYZE might be producing poor stats due to :\n>\n> i) many dead tuples or\n> ii) high proportion of dead tuples in the first few pages of the table\n>\n> Does a VACUUM FULL followed by ANALYZE change the estimates (or have \n> you tried this already)?\n>\n> (p.s. - I probably don't qualify for the 'more knowledge' bit either...)\n\nYou can also increase your statistics_target which will make ANALYZE \ntake longer but can help a great deal\nwith larger data sets.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n> regards\n>\n> Mark\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n",
"msg_date": "Wed, 09 Jun 2004 20:32:10 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity"
},
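A session-level way to try the higher-statistics suggestion above, without issuing one ALTER TABLE per column, might look like this (the value 100 is only an illustration, not a figure from the thread):

    SET default_statistics_target = 100;   -- the default in 7.4 is 10
    ANALYZE nrgfeature;                    -- re-gather stats at the higher target

Columns that already have an explicit per-column target from ALTER TABLE ... SET STATISTICS keep that setting; the session default only applies to the remaining columns.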
{
"msg_contents": "ken <[email protected]> writes:\n> ... and here is the plan with statistics set to 1000 ...\n\n> Seq Scan on nrgfeature f (cost=0.00..31675.57 rows=18608 width=218)\n> (actual time=63.544..1002.701 rows=225 loops=1)\n> Filter: ((upperrightx > 321264.236977215::double precision) AND\n> (lowerleftx < 324046.799812083::double precision) AND (upperrighty >\n> 123286.261898636::double precision) AND (lowerlefty <\n> 124985.927450476::double precision) AND (diagonalsize > 49.999::double\n> precision))\n\n> ... so yeah, its obviously finding way, way less rows than it thinks it\n> will.\n\nYup. I think your problem here is that the conditions on the different\ncolumns are highly correlated, but the planner doesn't know anything\nabout that, because it has no cross-column statistics.\n\nYou could check that the individual conditions are accurately estimated\nby looking at the estimated and actual row counts in\n\nexplain analyze\nSELECT * FROM nrgfeature f WHERE upperRightX > 321264.23697721504;\n\nexplain analyze\nSELECT * FROM nrgfeature f WHERE lowerLeftX < 324046.79981208267;\n\netc --- but I'll bet lunch that they are right on the money with the\nhigher statistics targets, and probably not too far off even at the\ndefault. The trouble is that the planner is simply multiplying these\nprobabilities together to get its estimate for the combined query,\nand when the columns are not independent that leads to a very bad\nestimate.\n\nIn particular it sounds like you have a *whole lot* of rows that have\ndiagonalSize just under 50, and so changing the diagonalSize condition\nto include those makes for a big change in the predicted number of rows,\neven though for the specific values of upperRightX and friends in your\ntest query there isn't any change in the actual number of matching rows.\n\nI don't have any advice to magically solve this problem. I would\nsuggest experimenting with alternative data representations -- for\nexample, maybe it would help to store \"leftX\" and \"width\" in place\nof \"leftX\" and \"rightX\", etc. What you want to do is try to decorrelate\nthe column values. leftX and rightX are likely to have a strong\ncorrelation, but maybe leftX and width don't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Jun 2004 01:01:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity "
},
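A rough sketch of the decorrelation idea Tom describes (the width/height columns are invented here, and the overlap conditions in the WHERE clause would need rewriting in terms of the new columns; this is only an illustration, not the fix the thread eventually settled on):

    ALTER TABLE nrgfeature ADD COLUMN width  double precision;
    ALTER TABLE nrgfeature ADD COLUMN height double precision;

    UPDATE nrgfeature
       SET width  = upperrightx - lowerleftx,
           height = upperrighty - lowerlefty;

    -- diagonalsize is then derivable as sqrt(width*width + height*height), and
    -- width/height should correlate far less with the corner coordinates than
    -- upperrightx/upperrighty do, which is what the planner's independence
    -- assumption needs.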
{
"msg_contents": "Apologies in advance for the length of this post but I want to be as\nthorough as possible in describing my problem to avoid too much banter\nback and forth.\n\nFirst off, thanks to all for your help with my index problem on my\nmulti-column index made up of 5 double precision columns.\n\nUnfortunately, I was unable to make it work with any of the suggestions\nprovided. However, Tom's suggestion of experimenting with alternative\ndata representations led me to explore using the built-in geometric data\nobject box to store this information since that is exactly what I am\nstoring.\n\nBy adding an rtree index on this box column I was able to make this new\nmethod work beautifully for most cases! The query that was taking over\na second went down under 20 milliseconds! Wow!\n\nHowever, for *some* cases the query now behaves very badly. My new\ntable is now essentially the following (with the unnecessary bits\nremoved for your ease of viewing) ...\n\n Column | Type | Modifiers\n----------------+-----------------------------+-----------\n fid1 | numeric(64,0) | not null\n fid2 | numeric(64,0) | not null\n boundingbox | box |\n\nIndexes:\n \"nrgfeature_pkey\" primary key, btree (fid2, fid1)\n \"nrgfeature_bb_idx\" rtree (boundingbox)\n \"nrgfeature_bblength_idx\" btree (length(lseg(boundingbox)))\n\n\n... with 763,809 rows.\n\nThe query I run is of the form ...\n\n\nSELECT * FROM nrgFeature WHERE\nboundingbox && '((213854.57920364887, 91147.4541420119),\n(212687.30997287965, 90434.4541420119))'\nAND length(lseg(boundingbox)) > 12.916666666666668;\n\n\n... If the bounding box is relatively small (like the one defined above)\nthen the query is very fast as relatively few rows are investigated by\nthe index. The explain analyze for the above is ...\n\n\n Index Scan using nrgfeature_bb_idx on nrgfeature (cost=0.00..14987.30\nrows=1274 width=256) (actual time=0.046..0.730 rows=89 loops=1)\n Index Cond: (boundingbox &&\n'(213854.579203649,91147.4541420119),(212687.30997288,90434.4541420119)'::box)\n Filter: (length(lseg(boundingbox)) > 12.9166666666667::double\nprecision)\n Total runtime: 0.830 ms\n\n\n... Notice the statistics aren't great at guessing the number of rows,\nhowever, since the number is sufficient to tell the optimizer to use the\nindex, it does and the query is blindingly fast. However, now let's try\nand retrieve all the rows that overlap a much, much bigger bounding box\nbut limit the results to rows with very large bounding boxes (we are\ndisplaying these things on the screen and if they are too small to see\nat this scale there is no reason to return them in the query)...\n\n\nSELECT * FROM nrgFeature WHERE\nboundingbox && '((793846.1538461539, 423000.0), (-109846.15384615387,\n-129000.0))'\nAND length(lseg(boundingbox)) > 10000.0;\n\n\n... and its explain analyze is ...\n\n\n Index Scan using nrgfeature_bb_idx on nrgfeature (cost=0.00..14987.30\nrows=1274 width=256) (actual time=1.861..6427.876 rows=686 loops=1)\n Index Cond: (boundingbox &&\n'(793846.153846154,423000),(-109846.153846154,-129000)'::box)\n Filter: (length(lseg(boundingbox)) > 10000::double precision)\n Total runtime: 6428.838 ms\n\n\n... notice that the query now takes 6.4 seconds even though the\nstatistics look to be pretty close and only 686 rows are returned. The\nreason is due to the condition on the length(lseg()) functions. 
Without\nthis condition the explain analyze is the following ...\n\n\n Index Scan using nrgfeature_bb_idx on nrgfeature (cost=0.00..14958.66\nrows=3820 width=256) (actual time=21.356..7750.360 rows=763768 loops=1)\n Index Cond: (boundingbox &&\n'(793846.153846154,423000),(-109846.153846154,-129000)'::box)\n Total runtime: 8244.213 ms\n\n\n... in which it can be seen that the statistics are way, way off. It\nthinks its only going to get back 3820 rows but instead almost every row\nin the table is returned! It should *not* be using this index in this\ncase. Is there something wrong with rtree indexes on box data types?\n\nSo that is problem number one.\n\nProblem number two, and this is possibly a bug, is that postgres doesn't\nseem to use functional indexes on geometric data. In the case\nimmediately above, the optimizer should choose the\nnrgfeature_bblength_idx instead as it would immediately give the 686\nrows that satisfy the length(lseg()) condition and voila it would be\ndone. However, even if I *drop* the rtree index on the boundingbox\ncolumn, so that it can't use that index, the optimizer does not choose\nthe other index. Instead it reverts to doing a sequential scan of the\nentire table and its really slow.\n\nAgain, sorry for the long post. Hope someone has experience with either\nof these problems.\n\nKen\n\n\n\n\n> I don't have any advice to magically solve this problem. I would\n> suggest experimenting with alternative data representations -- for\n> example, maybe it would help to store \"leftX\" and \"width\" in place\n> of \"leftX\" and \"rightX\", etc. What you want to do is try to decorrelate\n> the column values. leftX and rightX are likely to have a strong\n> correlation, but maybe leftX and width don't.\n> \n> \t\t\tregards, tom lane\n> \n\n",
"msg_date": "Mon, 14 Jun 2004 16:47:08 -0700",
"msg_from": "ken <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index oddity (still)"
},
{
"msg_contents": "ken <[email protected]> writes:\n> Is there something wrong with rtree indexes on box data types?\n\nNot per se, but the selectivity estimator for && is just a stub :-(.\nFeel free to take a stab at improving this, or take a look at PostGIS\n--- I think those guys are working the same problem even now.\n\n> However, even if I *drop* the rtree index on the boundingbox\n> column, so that it can't use that index, the optimizer does not choose\n> the other index. Instead it reverts to doing a sequential scan of the\n> entire table and its really slow.\n\nThis is also a lack-of-statistical-knowledge problem. The optimizer has\nno stats about the distribution of length(lseg(boundingbox)) and so it\ndoes not realize that it'd be reasonable to use that index for a query\nlike \"length(lseg(boundingbox)) > largevalue\". As of 7.5 this issue\nwill be addressed, but in existing releases I think you could only get\nthat index to be used by forcing it with enable_seqscan = off :-(\n\nAs a short-term workaround it might be reasonable to store that length\nas an independent table column, which you could put an index on. You\ncould use triggers to ensure that it always matches the boundingbox.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jun 2004 09:52:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index oddity (still) "
}
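A minimal sketch of the short-term workaround Tom suggests, assuming plpgsql is installed in the database (the column, function, trigger and index names below are made up, not taken from the thread):

    ALTER TABLE nrgfeature ADD COLUMN bblength double precision;

    UPDATE nrgfeature SET bblength = length(lseg(boundingbox));

    CREATE INDEX nrgfeature_bblen_idx ON nrgfeature (bblength);

    CREATE OR REPLACE FUNCTION set_bblength() RETURNS trigger AS '
    BEGIN
        -- keep the stored length in sync with the box on every write
        NEW.bblength := length(lseg(NEW.boundingbox));
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER nrgfeature_bblength_trg
        BEFORE INSERT OR UPDATE ON nrgfeature
        FOR EACH ROW EXECUTE PROCEDURE set_bblength();

Queries can then filter on the plain bblength column (e.g. AND bblength > 10000.0), which has ordinary per-column statistics the planner can use.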
] |
[
{
"msg_contents": "L.S.\n\nCould anybody explain why the planner is doing what it is doing?\n\nWhat could I do to make it easier to choose a better plan?\n\n\n\n*********\nSummary\n*********\nOn a freshly vacuum/analysed pair of tables with 7389 and 64333 records, this:\n\nselect id from location where id not in (select location_id from \nlocation_carrier);\n\ntakes 581546,497 ms\n\n\nWhile a variant like:\n\nselect id from location where not exists (select 1 from location_carrier where \nlocation_id = location.id);\n\ntakes only 124,625 ms\n\n\n*********\nDetails\n*********\n=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n\n=# \\d location\n Table \"public.location\"\n Column | Type | Modifiers\n------------+-----------------------------+-----------\n id | integer | not null\nIndexes:\n \"location_pkey\" primary key, btree (id)\n\n\n=# select count(*) from location;\n count\n-------\n 7389\n(1 row)\n\n\n=# \\d location_carrier\n Table \"public.location_carrier\"\n Column | Type | Modifiers\n---------------------+-----------------------------+-----------\n location_id | integer | not null\n carrier_id | integer | not null\nIndexes:\n \"location_carrier_pkey\" primary key, btree (location_id, carrier_id)\n\n\n=# select count(*) from location_carrier;\n count\n-------\n 64333\n(1 row)\n\n\n=# explain select id from location where id not in (select location_id from \nlocation_carrier);\n QUERY PLAN\n-------------------------------------------------------------------------------\n Seq Scan on \"location\" (cost=0.00..5077093.72 rows=3695 width=4)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on location_carrier (cost=0.00..1213.33 rows=64333 width=4)\n(4 rows)\n\n\n=# explain analyse select id from location where id not in (select location_id \nfrom location_carrier);\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"location\" (cost=0.00..5077093.72 rows=3695 width=4) (actual \ntime=248.310..581541.483 rows=240 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on location_carrier (cost=0.00..1213.33 rows=64333 width=4) \n(actual time=0.007..48.517 rows=19364 loops=7389)\n Total runtime: 581542.560 ms\n(5 rows)\n\nTime: 581546,497 ms\n\n\n=# explain analyse select id from location l left outer join location_carrier \nlc on l.id = lc.location_id where lc.location_id is null;\n QUERY \nPLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..3022.51 rows=7389 width=4) (actual \ntime=0.083..435.841 rows=240 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".location_id)\n Filter: (\"inner\".location_id IS NULL)\n -> Index Scan using location_pkey on \"location\" l (cost=0.00..258.85 \nrows=7389 width=4) (actual time=0.041..26.211 rows=7389 loops=1)\n -> Index Scan using location_carrier_pkey on location_carrier lc \n(cost=0.00..1941.22 rows=64333 width=4) (actual time=0.015..238.305 \nrows=64333 loops=1)\n Total runtime: 436.213 ms\n(6 rows)\n\nTime: 440,787 ms\n\n\nmegafox=# explain analyse select id from location where not exists (select 1 \nfrom location_carrier where location_id = location.id);\n QUERY 
\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on \"location\" (cost=0.00..13242.14 rows=3695 width=4) (actual \ntime=0.078..120.785 rows=240 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Index Scan using location_carrier_pkey on location_carrier \n(cost=0.00..17.61 rows=10 width=0) (actual time=0.011..0.011 rows=1 \nloops=7389)\n Index Cond: (location_id = $0)\n Total runtime: 121.165 ms\n(6 rows)\n\nTime: 124,625 ms\n\n\n\n\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Thu, 10 Jun 2004 16:24:21 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "*very* inefficient choice made by the planner (regarding IN(...))"
},
{
"msg_contents": "\nOn Thu, 10 Jun 2004, Frank van Vugt wrote:\n\n> Could anybody explain why the planner is doing what it is doing?\n>\n> What could I do to make it easier to choose a better plan?\n\nYou might try raising sort_mem to see if it chooses a better plan. I\nthink it may be guessing that the hash won't fit and falling back to the\nplan you were getting.\n\n",
"msg_date": "Thu, 10 Jun 2004 08:00:28 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
{
"msg_contents": "Frank van Vugt <[email protected]> writes:\n> What could I do to make it easier to choose a better plan?\n\nIncrease sort_mem. You want it to pick a \"hashed subplan\", but\nit's not doing so because 64000 rows won't fit in the default\nsort_mem.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Jun 2004 11:19:35 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding IN(...))"
},
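To check Tom's suggestion for a single session before touching postgresql.conf, something along these lines can be tried (16384 kB is only an illustrative figure, not a value from the thread):

    SET sort_mem = 16384;    -- in kB, i.e. 16 MB, for this session only

    EXPLAIN ANALYZE
    SELECT id FROM location
    WHERE id NOT IN (SELECT location_id FROM location_carrier);

With enough sort_mem the subplan should show up as a hashed subplan instead of a seq scan repeated for every outer row; if that works, raise sort_mem in postgresql.conf, keeping in mind it is allocated per sort, per backend.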
{
"msg_contents": "Wow,\n\nThe effectiveness of the pgsql mailinglists never ceases to amaze me.\n\nDefault sort mem it was, I guess I'd simply been to cautious with this \nper-client setting.\n\n\n\n\nStephan & Tom : thanks!\n\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Thu, 10 Jun 2004 17:32:08 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding IN(...))"
},
{
"msg_contents": "The real question is:\n\nIf the two statments are functionally equivalent, why can't PG rewrite \nthe \"NOT IN\" version into the more efficient \"NOT EXISTS\"?\n\n\n\nFrank van Vugt wrote:\n\n> L.S.\n> \n> Could anybody explain why the planner is doing what it is doing?\n> \n> What could I do to make it easier to choose a better plan?\n> \n> \n> \n> *********\n> Summary\n> *********\n> On a freshly vacuum/analysed pair of tables with 7389 and 64333 records, this:\n> \n> select id from location where id not in (select location_id from \n> location_carrier);\n> \n> takes 581546,497 ms\n> \n> \n> While a variant like:\n> \n> select id from location where not exists (select 1 from location_carrier where \n> location_id = location.id);\n> \n> takes only 124,625 ms\n> \n> \n> *********\n> Details\n> *********\n> =# select version();\n> version\n> ---------------------------------------------------------------------\n> PostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n> (1 row)\n> \n> \n> =# \\d location\n> Table \"public.location\"\n> Column | Type | Modifiers\n> ------------+-----------------------------+-----------\n> id | integer | not null\n> Indexes:\n> \"location_pkey\" primary key, btree (id)\n> \n> \n> =# select count(*) from location;\n> count\n> -------\n> 7389\n> (1 row)\n> \n> \n> =# \\d location_carrier\n> Table \"public.location_carrier\"\n> Column | Type | Modifiers\n> ---------------------+-----------------------------+-----------\n> location_id | integer | not null\n> carrier_id | integer | not null\n> Indexes:\n> \"location_carrier_pkey\" primary key, btree (location_id, carrier_id)\n> \n> \n> =# select count(*) from location_carrier;\n> count\n> -------\n> 64333\n> (1 row)\n> \n> \n> =# explain select id from location where id not in (select location_id from \n> location_carrier);\n> QUERY PLAN\n> -------------------------------------------------------------------------------\n> Seq Scan on \"location\" (cost=0.00..5077093.72 rows=3695 width=4)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on location_carrier (cost=0.00..1213.33 rows=64333 width=4)\n> (4 rows)\n> \n> \n> =# explain analyse select id from location where id not in (select location_id \n> from location_carrier);\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on \"location\" (cost=0.00..5077093.72 rows=3695 width=4) (actual \n> time=248.310..581541.483 rows=240 loops=1)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on location_carrier (cost=0.00..1213.33 rows=64333 width=4) \n> (actual time=0.007..48.517 rows=19364 loops=7389)\n> Total runtime: 581542.560 ms\n> (5 rows)\n> \n> Time: 581546,497 ms\n> \n> \n> =# explain analyse select id from location l left outer join location_carrier \n> lc on l.id = lc.location_id where lc.location_id is null;\n> QUERY \n> PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Merge Left Join (cost=0.00..3022.51 rows=7389 width=4) (actual \n> time=0.083..435.841 rows=240 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".location_id)\n> Filter: (\"inner\".location_id IS NULL)\n> -> Index Scan using location_pkey on \"location\" l (cost=0.00..258.85 \n> rows=7389 width=4) (actual time=0.041..26.211 rows=7389 loops=1)\n> -> Index Scan using location_carrier_pkey on location_carrier lc \n> (cost=0.00..1941.22 
rows=64333 width=4) (actual time=0.015..238.305 \n> rows=64333 loops=1)\n> Total runtime: 436.213 ms\n> (6 rows)\n> \n> Time: 440,787 ms\n> \n> \n> megafox=# explain analyse select id from location where not exists (select 1 \n> from location_carrier where location_id = location.id);\n> QUERY \n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on \"location\" (cost=0.00..13242.14 rows=3695 width=4) (actual \n> time=0.078..120.785 rows=240 loops=1)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Index Scan using location_carrier_pkey on location_carrier \n> (cost=0.00..17.61 rows=10 width=0) (actual time=0.011..0.011 rows=1 \n> loops=7389)\n> Index Cond: (location_id = $0)\n> Total runtime: 121.165 ms\n> (6 rows)\n> \n> Time: 124,625 ms\n> \n> \n> \n> \n> \n> \n> \n\n",
"msg_date": "Thu, 10 Jun 2004 11:45:02 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
{
"msg_contents": "Jean-Luc Lachance <[email protected]> writes:\n> If the two statments are functionally equivalent, why can't PG rewrite \n> the \"NOT IN\" version into the more efficient \"NOT EXISTS\"?\n\nThey're not equivalent. In particular, the behavior in the presence of\nNULLs is quite different.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 10 Jun 2004 11:59:05 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding "
},
{
"msg_contents": "I agree, but it should be a simple rewrite. No?\n\nx IS NULL/IS NOT NULL AND/OR NOT EXISTS\n\n\nTom Lane wrote:\n\n> Jean-Luc Lachance <[email protected]> writes:\n> \n>>If the two statments are functionally equivalent, why can't PG rewrite \n>>the \"NOT IN\" version into the more efficient \"NOT EXISTS\"?\n> \n> \n> They're not equivalent. In particular, the behavior in the presence of\n> NULLs is quite different.\n> \n> \t\t\tregards, tom lane\n> \n\n",
"msg_date": "Thu, 10 Jun 2004 12:56:26 -0400",
"msg_from": "Jean-Luc Lachance <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
{
"msg_contents": "\nOn Thu, 10 Jun 2004, Jean-Luc Lachance wrote:\n\n> I agree, but it should be a simple rewrite. No?\n\nIt's NULLs inside the subselect that are the issue.\n\nselect 1 in (select a from foo)\nselect exists ( select 1 from foo where a=1)\n\nIf foo.a contains a row with NULL but no rows containing a 1, the above\ngive different results (unknown and exists) and IIRC, exists cannot\nreturn unknown, so there's no simple rewrite of the subselect that gives\nequivalent behavior.\n\n",
"msg_date": "Thu, 10 Jun 2004 10:09:07 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
{
"msg_contents": "\nOn Thu, 10 Jun 2004, Stephan Szabo wrote:\n\n>\n> On Thu, 10 Jun 2004, Jean-Luc Lachance wrote:\n>\n> > I agree, but it should be a simple rewrite. No?\n>\n> It's NULLs inside the subselect that are the issue.\n>\n> select 1 in (select a from foo)\n> select exists ( select 1 from foo where a=1)\n>\n> If foo.a contains a row with NULL but no rows containing a 1, the above\n> give different results (unknown and exists) and IIRC, exists cannot\n\nErm that exists should have been false\n",
"msg_date": "Thu, 10 Jun 2004 10:14:36 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
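A tiny demo of the behaviour Stephan describes (foo is just the made-up table from his example):

    CREATE TEMP TABLE foo (a int);
    INSERT INTO foo VALUES (2);
    INSERT INTO foo VALUES (NULL);

    SELECT (1 IN (SELECT a FROM foo));              -- NULL: 1 <> 2, and 1 = NULL is unknown
    SELECT EXISTS (SELECT 1 FROM foo WHERE a = 1);  -- false: no row with a = 1

which is why the planner cannot blindly rewrite NOT IN as NOT EXISTS.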
{
"msg_contents": "Dear Gurus,\n\n----- Original Message ----- \nFrom: \"Stephan Szabo\" <[email protected]>\nSent: Thursday, June 10, 2004 7:14 PM\n\n\n>\n> On Thu, 10 Jun 2004, Stephan Szabo wrote:\n>\n> >\n> > On Thu, 10 Jun 2004, Jean-Luc Lachance wrote:\n> >\n> > > I agree, but it should be a simple rewrite. No?\n> >\n> > It's NULLs inside the subselect that are the issue.\n> >\n> > select 1 in (select a from foo)\n> > select exists ( select 1 from foo where a=1)\n\nJust a dumb try :)\n\n SELECT (exists(select 1 from foo where a isnull) AND NULL)\n OR exists(select 1 from foo where a=1)\n\nAFAIK this returns\n* NULL if (NULL in foo.a) and (1 not in foo.a)\n* (1 in foo.a) otherwise.\n\nThe weakness is the doubled exists clause. I'm sure it makes most cases at\nleast doubtful...\n\nG.\n%----------------------- cut here -----------------------%\n\\end\n\n",
"msg_date": "Fri, 18 Jun 2004 10:37:40 +0200",
"msg_from": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
},
{
"msg_contents": "=?iso-8859-1?Q?SZUCS_G=E1bor?= <[email protected]> writes:\n>> It's NULLs inside the subselect that are the issue.\n> \n> select 1 in (select a from foo)\n> select exists ( select 1 from foo where a=1)\n\n> Just a dumb try :)\n\n> SELECT (exists(select 1 from foo where a isnull) AND NULL)\n> OR exists(select 1 from foo where a=1)\n\nThe more general case is where you have a variable (or expression)\non the left of IN. That could be NULL too, and this still doesn't\ngive the right result in that case :-(. With NULL on the left the\ncorrect answer would be FALSE if the subselect has zero rows,\nNULL otherwise.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 2004 10:29:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding "
},
{
"msg_contents": "On Fri, 18 Jun 2004, [iso-8859-1] SZUCS Gábor wrote:\n\n> Dear Gurus,\n>\n> ----- Original Message -----\n> From: \"Stephan Szabo\" <[email protected]>\n> Sent: Thursday, June 10, 2004 7:14 PM\n>\n>\n> >\n> > On Thu, 10 Jun 2004, Stephan Szabo wrote:\n> >\n> > >\n> > > On Thu, 10 Jun 2004, Jean-Luc Lachance wrote:\n> > >\n> > > > I agree, but it should be a simple rewrite. No?\n> > >\n> > > It's NULLs inside the subselect that are the issue.\n> > >\n> > > select 1 in (select a from foo)\n> > > select exists ( select 1 from foo where a=1)\n>\n> Just a dumb try :)\n>\n> SELECT (exists(select 1 from foo where a isnull) AND NULL)\n> OR exists(select 1 from foo where a=1)\n>\n> AFAIK this returns\n> * NULL if (NULL in foo.a) and (1 not in foo.a)\n> * (1 in foo.a) otherwise.\n>\n> The weakness is the doubled exists clause. I'm sure it makes most cases at\n> least doubtful...\n\nWell, once you take into account the lhs being potentially null\n lhe in (select rhe from foo) is something like:\n\ncase when lhe is null then\n not exists(select 1 from foo limit 1) or null\nelse\n (exists(select 1 from foo where rhe is null) and null)\n or exists(select 1 from foo where rhe=lhe)\nend\n\nI think the real win occurs for where clause cases if it can pull up the\nexists that references lhe so that it doesn't try to evaluate it on every\nrow and that's unlikely to occur in something like the above.\n",
"msg_date": "Fri, 18 Jun 2004 08:32:00 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: *very* inefficient choice made by the planner (regarding"
}
] |
[
{
"msg_contents": "Vivek,\r\n \r\nWas there anything specific that helped you decide on a RAID-5 and not a RAID-10?\r\n \r\nI have my DBs on RAID10, and would soon be moving them on FC drives, and i am considering RAID-10.\r\n \r\nThanks,\r\nAnjan\r\n\r\n\t-----Original Message----- \r\n\tFrom: Josh Berkus [mailto:[email protected]] \r\n\tSent: Tue 3/2/2004 4:27 PM \r\n\tTo: Vivek Khera; [email protected] \r\n\tCc: \r\n\tSubject: Re: [PERFORM] Database Server Tuning\r\n\t\r\n\t\r\n\r\n\tVivek,\r\n\t\r\n\t> I did a bunch of testing with different RAID levels on a 14 disk\r\n\t> array. I finally settled on this: RAID5 across 14 disks for the\r\n\t> data, the OS (including syslog directory) and WAL on a RAID1 pair on\r\n\t> the other channel of the same controller (I didn't want to spring for\r\n\t> dual RAID controllers). The biggest bumps in performance came from\r\n\t> increasing the checkpoint_buffers since my DB is heavily written to,\r\n\t> and increasing sort_mem.\r\n\t\r\n\tWith large RAID, have you found that having WAL on a seperate array actually\r\n\tboosts performance? The empirical tests we've seen so far don't seem to\r\n\tsupport this.\r\n\t\r\n\t--\r\n\t-Josh Berkus\r\n\t Aglio Database Solutions\r\n\t San Francisco\r\n\t\r\n\t\r\n\t---------------------------(end of broadcast)---------------------------\r\n\tTIP 1: subscribe and unsubscribe commands go to [email protected]\r\n\t\r\n\r\n",
"msg_date": "Thu, 10 Jun 2004 12:02:46 -0400",
"msg_from": "\"Anjan Dave\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Database Server Tuning"
},
{
"msg_contents": "On Jun 10, 2004, at 12:02 PM, Anjan Dave wrote:\n\n> Vivek,\n>\n> Was there anything specific that helped you decide on a RAID-5 and not \n> a RAID-10?\n\nperformance testing on restore times. My DB is more than 50% write, so \nI needed to optimize for writes.\n\n> I have my DBs on RAID10, and would soon be moving them on FC drives, \n> and i am considering RAID-10.\n\nIf I had to do it over again, I'd most likely go with RAID-50, and take \nthe hit on restore time for the advantage on reads. I have to dig \nthrough my records again to see the details... but then I have to do \nall that for my OSCON presentation on this topic at the end of July in \nPortland, OR. ;-)",
"msg_date": "Thu, 10 Jun 2004 12:51:24 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Database Server Tuning"
}
] |
[
{
"msg_contents": "I have an application which logs interactions on a regular basis. The interaction details (their types, etc) are held in one table (tblitem) and the 'hits' are held in tbltotal.\n\nI have written a function to get the total 'hits' during a period and need to collect together the information from tblitem with it.\n\nThe query works OK returning results in under a second:\n\nEXPLAIN ANALYSE SELECT t1.value1,t1.value2,getday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0') FROM tblitem t1 WHERE t1.type_id=23::int2 and (t1.id >= 1::int8 and t1.id<=9223372036854775807::int8)\nOFFSET 0 LIMIT 20;\ntracker-# QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..7.70 rows=20 width=56) (actual time=19.50..846.89 rows=20 loops=1)\n -> Index Scan using tblitemtype_id on tblitem t1 (cost=0.00..230.10 rows=598 width=56) (actual time=19.49..846.81 rows=21 loops=1)\n Index Cond: (type_id = 23::smallint)\n Filter: ((id >= 1::bigint) AND (id <= 9223372036854775807::bigint))\n Total runtime: 847.04 msec\n\n----\nI realised that Postgresql did not like passing t1.id to the function without some form of constraints - hence the (t1.id >= 1::int8 and t1.id<=9223372036854775807::int8) dummy constraints.\n----\n\n\nHowever, when I seek to ORDER the results, then it takes 'forever':\n\nEXPLAIN ANALYSE SELECT t1.value1,t1.value2,\ngetday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0') FROM tblitem t1 WHERE t1.type_id=23::int2 and (t1.id >= 1::int8 and t1.id<=9223372036854775807::int8)\nORDER BY getday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0') DESC\nOFFSET 0 LIMIT 20;\n\n\ntracker-# tracker-#\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=257.66..257.71 rows=20 width=56) (actual time=25930.90..25930.95 rows=20 loops=1)\n -> Sort (cost=257.66..259.15 rows=598 width=56) (actual time=25930.90..25930.91 rows=21 loops=1)\n Sort Key: getday_total(1::smallint, 23::smallint, (id)::integer, 31::smallint, 59::smallint, 2::smallint, 2004::smallint, 182::smallint, 153::smallint, 6::smallint, 2004::smallint, 0)\n -> Index Scan using tblitemtype_id on tblitem t1 (cost=0.00..230.10 rows=598 width=56) (actual time=19.60..25927.68 rows=693 loops=1)\n Index Cond: (type_id = 23::smallint)\n Filter: ((id >= 1::bigint) AND (id <= 9223372036854775807::bigint))\n Total runtime: 25931.15 msec\n\n\nAnd this is a database of only a few thousand rows, we are anticipating that this database is going to get huge.\n\nWhat am I missing here? How can I get it to order by the total of interactions without hitting the performance problem?\n\nAny help would be much appreciated.\n\nNick\n\nnick A-T trainorthornton d-o-t co d-o-t uk\n\n\n\nVersion:\nPostgreSQL 7.3.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 (Mandrake Linux 9.1 3.2.2-3mdk)\n",
"msg_date": "Fri, 11 Jun 2004 12:14:55 +0100",
"msg_from": "Nick Trainor <[email protected]>",
"msg_from_op": true,
"msg_subject": "ORDER BY user defined function performance issues"
},
{
"msg_contents": "\nOn 11/06/2004 12:14 Nick Trainor wrote:\n> [snip]\n> However, when I seek to ORDER the results, then it takes 'forever':\n> \n> EXPLAIN ANALYSE SELECT t1.value1,t1.value2,\n> getday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0')\n> FROM tblitem t1 WHERE t1.type_id=23::int2 and (t1.id >= 1::int8 and\n> t1.id<=9223372036854775807::int8)\n> ORDER BY \n> getday_total('1','23',t1.id::integer,'31','59','2','2004','182','153','6','2004','0')\n> DESC\n> OFFSET 0 LIMIT 20;\n\nI expect that pg is having to evaluate your function every time it does a \ncompare within its sort. Something like \nSELECT t1.value1,t1.value2,\n getday_total(..) AS foo\nFROM tblitem t1 WHERE t1.type_id=23::int2 and (t1.id >= 1::int8 and \nt1.id<=9223372036854775807::int8)\nORDER BY foo\n\nmight work. Otherwise try selecting into a temp table then doing the order \nby on that table.\n\nHTH\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Fri, 11 Jun 2004 14:41:08 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY user defined function performance issues"
},
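A concrete sketch of the temp-table variant suggested above (tmp_totals is an invented name; the function still has to be evaluated for every qualifying row, as Tom points out below, but the expensive totals are materialised once, so sorting or re-paging through them does not re-run it):

    CREATE TEMP TABLE tmp_totals AS
    SELECT t1.value1, t1.value2,
           getday_total('1','23',t1.id::integer,'31','59','2','2004',
                        '182','153','6','2004','0') AS total
    FROM tblitem t1
    WHERE t1.type_id = 23::int2;

    SELECT value1, value2, total
    FROM tmp_totals
    ORDER BY total DESC
    LIMIT 20 OFFSET 0;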
{
"msg_contents": "Nick Trainor <[email protected]> writes:\n> What am I missing here?\n\nThe ORDER BY query has to evaluate the function at *every* row of the\ntable before it can sort. The other query was only evaluating the\nfunction at twenty rows.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jun 2004 13:59:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ORDER BY user defined function performance issues "
}
] |
[
{
"msg_contents": "I'm running postgrSQL 7.2 on a linux Red Hat 8.0 box with 2GB of RAM \nWhen I boot-up the system , this is the TOP situation:\n\n11:59am up 4 min, 1 user, load average: 0.37, 0.26, 0.11\n\n77 processes: 74 sleeping, 3 running, 0 zombie, 0 stopped\n\nCPU states: 0.3% user, 0.7% system, 0.0% nice, 98.8% idle\n\nMem: 1031020K av, 177808K used, 853212K free, 0K shrd, 14744K buff\n\nSwap: 2096472K av, 0K used, 2096472K free 67828K cached\n\nAfter I've done a vacuum , the situation is:\n\n \n 12:04pm up 8 min, 1 user, load average: 0.22, 0.23, 0.12\n\n78 processes: 76 sleeping, 2 running, 0 zombie, 0 stopped\n\nCPU states: 2.5% user, 1.9% system, 0.0% nice, 95.4% idle\n\nMem: 1031020K av, 1016580K used, 14440K free, 0K shrd, 18624K buff\n\nSwap: 2096472K av, 0K used, 2096472K free 833896K cached\n\nAs you see the memory used by vacuum isn't released anymore? Anyone know why?\n\nMy statement is: vacuumdb --analyze dbname\nMy pg paramaters on postgresql.conf are:\n\n#\n# Shared Memory Size\n#\nshared_buffers = 2048 # 2*max_connections, min 16\nmax_fsm_relations = 100 # min 10, fsm is free space map\nmax_fsm_pages = 10000 # min 1000, fsm is free space map\nmax_locks_per_transaction = 64 # min 10\nwal_buffers = 8 # min 4\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 512 # min 32\nvacuum_mem = 8192 # min 1024\n\nDid anyone know why this happend? \n\nDistinti Saluti\n\nSgarbossa Domenico\nX Tecnica S.R.L.\nwww.xtecnica.com\nTel: 049/9409154 - 049/5970297\nFax: 049/9400288\n\n\n\n\n\n\n\n\nI'm running postgrSQL 7.2 on a linux Red Hat 8.0 \nbox with 2GB of RAM \nWhen I boot-up the system , this is the TOP \nsituation:\n\n11:59am up 4 min, 1 user, load average: 0.37, 0.26, \n0.11\n77 processes: 74 \nsleeping, 3 running, 0 zombie, 0 stopped\nCPU states: 0.3% user, 0.7% system, 0.0% nice, 98.8% \nidle\nMem: 1031020K av, 177808K used, 853212K free, 0K \nshrd, 14744K \nbuff\nSwap: 2096472K \nav, \n0K used, 2096472K free \n67828K cached\nAfter I've done a vacuum , the situation \nis:\n \n \n 12:04pm up 8 min, 1 user, load average: 0.22, 0.23, \n0.12\n78 processes: 76 \nsleeping, 2 running, 0 zombie, 0 stopped\nCPU states: 2.5% user, 1.9% system, 0.0% nice, 95.4% \nidle\nMem: 1031020K av, 1016580K used, 14440K free, 0K \nshrd, 18624K \nbuff\nSwap: 2096472K \nav, \n0K used, 2096472K free \n833896K cached\nAs you see the memory used by vacuum isn't released anymore? Anyone know \nwhy?\n \nMy statement is: vacuumdb --analyze dbname\nMy pg paramaters on postgresql.conf are:\n \n## Shared Memory \nSize#shared_buffers = 2048 # \n2*max_connections, min 16max_fsm_relations = 100 # min 10, \nfsm is free space mapmax_fsm_pages = 10000 # \nmin 1000, fsm is free space mapmax_locks_per_transaction = 64 # min \n10wal_buffers = \n8 # min \n4\n \n## Non-shared Memory \nSizes#sort_mem = \n512 # \nmin 32vacuum_mem = \n8192 # min 1024\nDid anyone know why this happend? \nDistinti Saluti\n \nSgarbossa DomenicoX Tecnica S.R.L.www.xtecnica.comTel: 049/9409154 - \n049/5970297Fax: 049/9400288",
"msg_date": "Fri, 11 Jun 2004 15:54:44 +0200",
"msg_from": "\"Domenico Sgarbossa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Problems with vacuum!"
},
{
"msg_contents": "Domenico Sgarbossa wrote:\n\n> I'm running postgrSQL 7.2 on a linux Red Hat 8.0 box with 2GB of RAM\n> When I boot-up the system , this is the TOP situation:\n>\n> 11:59am up 4 min, 1 user, load average: 0.37, 0.26, 0.11\n>\n> 77 processes: 74 sleeping, 3 running, 0 zombie, 0 stopped\n>\n> CPU states: 0.3% user, 0.7% system, 0.0% nice, 98.8% idle\n>\n> Mem: 1031020K av, 177808K used, 853212K free, 0K shrd, \n> 14744K buff\n>\n> Swap: 2096472K av, 0K used, 2096472K free \n> 67828K cached\n>\n> After I've done a vacuum , the situation is:\n> \n> \n>\n> 12:04pm up 8 min, 1 user, load average: 0.22, 0.23, 0.12\n>\n> 78 processes: 76 sleeping, 2 running, 0 zombie, 0 stopped\n>\n> CPU states: 2.5% user, 1.9% system, 0.0% nice, 95.4% idle\n>\n> Mem: 1031020K av, 1016580K used, 14440K free, 0K shrd, \n> 18624K buff\n>\n> Swap: 2096472K av, 0K used, 2096472K free \n> 833896K cached\n>\n> As you see the memory used by vacuum isn't released anymore? Anyone \n> know why?\n> \n> My statement is: vacuumdb --analyze dbname\n> My pg paramaters on postgresql.conf are:\n> \n> #\n> # Shared Memory Size\n> #\n> shared_buffers = 2048 # 2*max_connections, min 16\n> max_fsm_relations = 100 # min 10, fsm is free space map\n> max_fsm_pages = 10000 # min 1000, fsm is free space map\n> max_locks_per_transaction = 64 # min 10\n> wal_buffers = 8 # min 4\n> \n> #\n> # Non-shared Memory Sizes\n> #\n> sort_mem = 512 # min 32\n> vacuum_mem = 8192 # min 1024\n> Did anyone know why this happend?\n>\n> Distinti Saluti\n> \n> Sgarbossa Domenico\n> X Tecnica S.R.L.\n> www.xtecnica.com <http://www.xtecnica.com>\n> Tel: 049/9409154 - 049/5970297\n> Fax: 049/9400288\n\nYour vacuum memory was released. However, the linux kernel likes to \nkeep most memory in the cached buffer instead of totally freeing it. \nThe guru's behind the kernel figured out they get better performance \nthis way.\n\nHTH,\n\nChris\n\n",
"msg_date": "Fri, 11 Jun 2004 10:39:59 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Problems with vacuum!"
}
] |
[
{
"msg_contents": "Thanks for your advice!\n\nAs I immagine the system released dinamically the shared memory .... the\nproblem is that\nwhen , after vacuum, users runs my application (ERP) and the system swap on\ndisk so the global performance\ndecrease very fast until the system is so slow that I need to reboot the\nserver.\n\nIt seems that the cached memory isn't released by the system... so the\nsystem begin to swap to disk!\nThe swap problem is very important, it makes useless run vacuum 'cause after\nvacuum the system\nbegin to swap.... so my application runs very slow!\n\nCould you explain me better your final advice please?\n\n> Watching this \"cached\" value over normal usage of your machine to get\n> a running average is how you come up with your effective_cache_size\n> configuration directive.\n\nwhat does this mean pratically?\n\nthanks!\n\nDistinti Saluti\n\nSgarbossa Domenico\nX Tecnica S.R.L.\nwww.xtecnica.com\nTel: 049/9409154 - 049/5970297\nFax: 049/9400288\n\n----- Original Message -----\nFrom: \"Rosser Schwarz\" <[email protected]>\nTo: \"'Domenico Sgarbossa'\" <[email protected]>;\n<[email protected]>\nSent: Friday, June 11, 2004 4:09 PM\nSubject: RE: [BULK] [PERFORM] Problems with vacuum!\n\n\n> while you weren't looking, Domenico Sgarbossa wrote:\n>\n> Compare the \"cached\" values in each run of top.\n>\n> Before: 67828K cached\n>\n> After: 833896K cached\n>\n> The kernel is caching your tables for you. That memory isn't actually\n> being -used-, in the active, \"a process has claimed this memory\" sense.\n> If a process comes along that needs more memory than free, the kernel\n> will flush some of those cached pages (to swap?) to make room for\n> whatever the memory requirements of the newcomer are.\n>\n> Watching this \"cached\" value over normal usage of your machine to get\n> a running average is how you come up with your effective_cache_size\n> configuration directive.\n>\n> /rls\n>\n> --\n> Rosser Schwarz\n> Total Card, Inc.\n>\n>\n>\n\n\n\n",
"msg_date": "Fri, 11 Jun 2004 16:53:22 +0200",
"msg_from": "\"Domenico Sgarbossa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
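Practically, the advice quoted above amounts to telling the planner how much of the kernel's disk cache it can count on. A rough sketch (100000 is simply about 800 MB of observed cache divided by the 8 kB page size, not a recommendation from the thread):

    SET effective_cache_size = 100000;   -- counted in 8 kB pages, so roughly 800 MB
    SHOW effective_cache_size;

    -- to make it permanent, put the equivalent line in postgresql.conf:
    --   effective_cache_size = 100000

effective_cache_size does not allocate any memory; it only influences the planner's choice between index and sequential scans.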
{
"msg_contents": "\"Domenico Sgarbossa\" <[email protected]> writes:\n\n> Thanks for your advice!\n>\n> It seems that the cached memory isn't released by the system... so the\n> system begin to swap to disk!\n\nIf you really think this is true, there should be a process that is\nholding on to the memory. Use 'ps' to find that process.\n\nI also noticed that your shared_buffers setting seemed a bit low--you\nmight want to increase it.\n\n-Doug\n\n",
"msg_date": "Fri, 11 Jun 2004 11:03:48 -0400",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "\nWe actually found that in the 2.4-20 kernel for RedHat there was a known\nissue that was causing cached memory to not be reused and our box\nstarted to swap also. \n\nhttp://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=89226\n\nThis may be what you are experiencing if your using the same kernel.\n\nDawn Hollingsworth\nSenior Director R&D\nAirDefense, Inc.\n\n\n\nOn Fri, 2004-06-11 at 11:03, Doug McNaught wrote:\n> \"Domenico Sgarbossa\" <[email protected]> writes:\n> \n> > Thanks for your advice!\n> >\n> > It seems that the cached memory isn't released by the system... so the\n> > system begin to swap to disk!\n> \n\n\n",
"msg_date": "14 Jun 2004 11:24:07 -0400",
"msg_from": "Dawn Hollingsworth <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "The problems still remains...\nI've tried to change shmax to 128 mb (i've got 2 GB of ram),\nthen the others parameter are set as follow:\n\nshared_buffers = 8096 # 2*max_connections, min 16\nmax_fsm_relations = 500 # min 10, fsm is free space map\nmax_fsm_pages = 15000 # min 1000, fsm is free space map\nmax_locks_per_transaction = 64 # min 10\nwal_buffers = 8 # min 4\n\n#\n# Non-shared Memory Sizes\n#\nsort_mem = 1024 # min 32\nvacuum_mem = 16384 # min 1024\n\nI've scheduled 2 tasks nightly:\n\nvacuumdb --analyze dbname\npg_dump -x -f dbname.dmp dbname\n\nso when the users go home, i've got something like 15/20000kb free ram, the\nrest is cached and 0kb of swap...\nIt seems that when pg_dump starts the cached memory isn't released so the\nsystem begin to swap, then the system\ndoes the same with vacuum..... even if there 1.8 gb of cached ram (that\nis'nt released anymore....) very strange!\n\nanyone could explain mw why this happend?\n\n\nDistinti Saluti\n\nSgarbossa Domenico\nX Tecnica S.R.L.\nwww.xtecnica.com\nTel: 049/9409154 - 049/5970297\nFax: 049/9400288\n\n\n\n\n",
"msg_date": "Fri, 18 Jun 2004 15:54:20 +0200",
"msg_from": "\"Domenico Sgarbossa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "\"Domenico Sgarbossa\" <[email protected]> writes:\n> so when the users go home, i've got something like 15/20000kb free ram, the\n> rest is cached and 0kb of swap...\n> It seems that when pg_dump starts the cached memory isn't released so the\n> system begin to swap,\n\nA sane kernel should drop disk buffers rather than swapping. We heard\nrecently about a bug in some versions of the Linux kernel that cause it\nto prefer swapping to discarding disk cache, though. It sounds like\nthat's what you're hitting. Look into newer kernels ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 2004 11:11:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum! "
},
{
"msg_contents": "On Fri, 2004-06-18 at 09:11, Tom Lane wrote:\n> \"Domenico Sgarbossa\" <[email protected]> writes:\n> > so when the users go home, i've got something like 15/20000kb free ram, the\n> > rest is cached and 0kb of swap...\n> > It seems that when pg_dump starts the cached memory isn't released so the\n> > system begin to swap,\n> \n> A sane kernel should drop disk buffers rather than swapping. We heard\n> recently about a bug in some versions of the Linux kernel that cause it\n> to prefer swapping to discarding disk cache, though. It sounds like\n> that's what you're hitting. Look into newer kernels ...\n\nThis was a common problem in the linux 2.4 series kernels, but has\nsupposedly been fixed in the 2.6 kernels. Having lots of memory and\nturning off swap will \"fix\" the problem in 2.4, but if you run out of\nreal mem, you're hosed.\n\n",
"msg_date": "Fri, 18 Jun 2004 12:07:20 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "Hello,\n\nI would have to double check BUT I believe this is fixed in later 2.4.x \nkernels as well. If you don't want to go through the hassle of 2.6 \n(although it really is a nice kernel) then upgrade to 2.4.26.\n\nSincerely,\n\nJoshau D. Drake\n\nScott Marlowe wrote:\n> On Fri, 2004-06-18 at 09:11, Tom Lane wrote:\n> \n>>\"Domenico Sgarbossa\" <[email protected]> writes:\n>>\n>>>so when the users go home, i've got something like 15/20000kb free ram, the\n>>>rest is cached and 0kb of swap...\n>>>It seems that when pg_dump starts the cached memory isn't released so the\n>>>system begin to swap,\n>>\n>>A sane kernel should drop disk buffers rather than swapping. We heard\n>>recently about a bug in some versions of the Linux kernel that cause it\n>>to prefer swapping to discarding disk cache, though. It sounds like\n>>that's what you're hitting. Look into newer kernels ...\n> \n> \n> This was a common problem in the linux 2.4 series kernels, but has\n> supposedly been fixed in the 2.6 kernels. Having lots of memory and\n> turning off swap will \"fix\" the problem in 2.4, but if you run out of\n> real mem, you're hosed.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL",
"msg_date": "Fri, 18 Jun 2004 11:47:24 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "I believe it was more like the kernel was tuned to make it less common,\nbut certain things can still trigger it. I know the problem was still\nthere in the 2.4.24 on the last server I was playing with, but it was a\nlot less of a problem than it had been under 2.4.9 on an earlier machine\nwith the same basic amount of memory.\n\nOn Fri, 2004-06-18 at 12:47, Joshua D. Drake wrote:\n> Hello,\n> \n> I would have to double check BUT I believe this is fixed in later 2.4.x \n> kernels as well. If you don't want to go through the hassle of 2.6 \n> (although it really is a nice kernel) then upgrade to 2.4.26.\n> \n> Sincerely,\n> \n> Joshau D. Drake\n> \n> Scott Marlowe wrote:\n> > On Fri, 2004-06-18 at 09:11, Tom Lane wrote:\n> > \n> >>\"Domenico Sgarbossa\" <[email protected]> writes:\n> >>\n> >>>so when the users go home, i've got something like 15/20000kb free ram, the\n> >>>rest is cached and 0kb of swap...\n> >>>It seems that when pg_dump starts the cached memory isn't released so the\n> >>>system begin to swap,\n> >>\n> >>A sane kernel should drop disk buffers rather than swapping. We heard\n> >>recently about a bug in some versions of the Linux kernel that cause it\n> >>to prefer swapping to discarding disk cache, though. It sounds like\n> >>that's what you're hitting. Look into newer kernels ...\n> > \n> > \n> > This was a common problem in the linux 2.4 series kernels, but has\n> > supposedly been fixed in the 2.6 kernels. Having lots of memory and\n> > turning off swap will \"fix\" the problem in 2.4, but if you run out of\n> > real mem, you're hosed.\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> \n\n",
"msg_date": "Fri, 18 Jun 2004 12:58:27 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BULK] Problems with vacuum!"
},
{
"msg_contents": "As you suggets I've tried to upgrade to the kernel 2.4.28, but it seems that\nnothing change!\nI built a new machine with Red Hat 8 (kernel 2.4.28) a 1GB RAM using the\nsame parameters i've been used before.\nAfter the boot, i've got 800Mb of free memory, if a launch a pg_dump then\nthe system swap (only 1 or 2 mb.....) and the\nfree memory become 20mb.\nNow, if I tried a vacuumdb the system use partially the cached memory then\nbegin to swap again.... with 700mb of cached\nmemory it's very strange that the system swap again....\n\nanyone know why this happend?\n\nDistinti Saluti\n\nSgarbossa Domenico\nX Tecnica S.R.L.\nwww.xtecnica.com\nTel: 049/9409154 - 049/5970297\nFax: 049/9400288\n\n----- Original Message -----\nFrom: \"Scott Marlowe\" <[email protected]>\nTo: \"Joshua D. Drake\" <[email protected]>\nCc: \"Tom Lane\" <[email protected]>; \"Domenico Sgarbossa\"\n<[email protected]>; <[email protected]>\nSent: Friday, June 18, 2004 8:58 PM\nSubject: Re: [PERFORM] [BULK] Problems with vacuum!\n\n\n> I believe it was more like the kernel was tuned to make it less common,\n> but certain things can still trigger it. I know the problem was still\n> there in the 2.4.24 on the last server I was playing with, but it was a\n> lot less of a problem than it had been under 2.4.9 on an earlier machine\n> with the same basic amount of memory.\n>\n> On Fri, 2004-06-18 at 12:47, Joshua D. Drake wrote:\n> > Hello,\n> >\n> > I would have to double check BUT I believe this is fixed in later 2.4.x\n> > kernels as well. If you don't want to go through the hassle of 2.6\n> > (although it really is a nice kernel) then upgrade to 2.4.26.\n> >\n> > Sincerely,\n> >\n> > Joshau D. Drake\n> >\n> > Scott Marlowe wrote:\n> > > On Fri, 2004-06-18 at 09:11, Tom Lane wrote:\n> > >\n> > >>\"Domenico Sgarbossa\" <[email protected]> writes:\n> > >>\n> > >>>so when the users go home, i've got something like 15/20000kb free\nram, the\n> > >>>rest is cached and 0kb of swap...\n> > >>>It seems that when pg_dump starts the cached memory isn't released so\nthe\n> > >>>system begin to swap,\n> > >>\n> > >>A sane kernel should drop disk buffers rather than swapping. We heard\n> > >>recently about a bug in some versions of the Linux kernel that cause\nit\n> > >>to prefer swapping to discarding disk cache, though. It sounds like\n> > >>that's what you're hitting. Look into newer kernels ...\n> > >\n> > >\n> > > This was a common problem in the linux 2.4 series kernels, but has\n> > > supposedly been fixed in the 2.6 kernels. Having lots of memory and\n> > > turning off swap will \"fix\" the problem in 2.4, but if you run out of\n> > > real mem, you're hosed.\n> > >\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n\n\n",
"msg_date": "Tue, 22 Jun 2004 15:33:33 +0200",
"msg_from": "\"Domenico Sgarbossa\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BULK] Problems with vacuum!"
}
] |
[
{
"msg_contents": "I am batch inserting insert statements into a database with fsync = on.\nMy single disk system is on a 10k drive...even though I am inside a\ntransaction there is at least 1 file sync per row insert. \n\nIs this normal? On my hardware, which is pretty quick, I am limited to\nabout 50 inserts/sec. If I operate outside of a transaction, the number\nis closer to 30. With fsync off, I can hit over 1000.\n\nIIRC in previous versions insert performance was a lot faster when in\ntransaction, is that the case?\n\nMerlin\n",
"msg_date": "Fri, 11 Jun 2004 14:40:00 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "in-transaction insert performance in 7.5devel"
},
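The behaviour being measured in the post above can be reproduced with a short SQL script. This is only a minimal sketch of the test pattern, not code from the thread: the table and column names are invented for illustration, and absolute rates depend entirely on how the drive honours fsync.

    -- hypothetical test table
    CREATE TABLE insert_test (id serial PRIMARY KEY, payload text);

    -- outside a transaction block: every INSERT is its own commit,
    -- so with fsync = on each row costs at least one WAL flush
    INSERT INTO insert_test (payload) VALUES ('row 1');
    INSERT INTO insert_test (payload) VALUES ('row 2');

    -- inside a transaction block: the rows share a single commit,
    -- so the WAL flush should happen once at COMMIT rather than per row
    BEGIN;
    INSERT INTO insert_test (payload) VALUES ('row 1');
    INSERT INTO insert_test (payload) VALUES ('row 2');
    COMMIT;

The second form is the one the original post expected to be bounded by one sync per transaction rather than one per row.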
{
"msg_contents": "On Fri, 2004-06-11 at 14:40, Merlin Moncure wrote:\n> I am batch inserting insert statements into a database with fsync = on.\n> My single disk system is on a 10k drive...even though I am inside a\n> transaction there is at least 1 file sync per row insert. \n\nWhich filesystem?\n\nPostgreSQL isn't issuing the sync except at commit of a transaction, but\nsome filesystems do wacky things if you ask them too.\n\n\n",
"msg_date": "Fri, 11 Jun 2004 14:55:01 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: in-transaction insert performance in 7.5devel"
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I am batch inserting insert statements into a database with fsync = on.\n> My single disk system is on a 10k drive...even though I am inside a\n> transaction there is at least 1 file sync per row insert. \n\nAre you certain you're inside a transaction?\n\nTracing a process doing simple inserts within a transaction block,\nI don't see the process doing any I/O at all, just send/recv. The\nbackground writer process is doing the work, but it shouldn't block\nthe inserter.\n\n[ thinks for a bit... ] Hmm. I see that XLogFlush for a buffer's LSN\nis done while holding share lock on the buffer (see FlushBuffer in\nbufmgr.c). This would mean that anyone trying to acquire exclusive lock\non the buffer would have to wait for WAL fsync. In a situation where\nyou were repeatedly inserting into the same table, it's somewhat likely\nthat the inserter would block this way while the bgwriter is trying to\nflush a previous update of the same page. But that shouldn't happen for\n*every* insert; it could happen at most once every bgwriter_delay msec.\n\nDoes it help if you change FlushBuffer to release buffer lock while\nflushing xlog?\n\n\n\t/*\n\t * Protect buffer content against concurrent update. (Note that\n\t * hint-bit updates can still occur while the write is in progress,\n\t * but we assume that that will not invalidate the data written.)\n\t */\n\tLockBuffer(buffer, BUFFER_LOCK_SHARE);\n\n\t/*\n\t * Force XLOG flush for buffer' LSN. This implements the basic WAL\n\t * rule that log updates must hit disk before any of the data-file\n\t * changes they describe do.\n\t */\n\trecptr = BufferGetLSN(buf);\n\n+\tLockBuffer(buffer, BUFFER_LOCK_UNLOCK);\n\n\tXLogFlush(recptr);\n\n+\tLockBuffer(buffer, BUFFER_LOCK_SHARE);\n\n\n(This is not a committable change because it breaks the WAL guarantee;\nto do this we'd have to loop until the LSN doesn't change during flush,\nand I'm not sure that's a good idea. But you can do it for testing\npurposes just to see if this is where the performance issue is or not.)\n\nPrior versions hold this lock during flush as well, but it's less likely\nthat the same page an active process is interested in is being written\nout, since before the bgwriter only the least-recently-used page would\nbe a candidate for writing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jun 2004 15:52:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: in-transaction insert performance in 7.5devel "
}
] |
[
{
"msg_contents": "Rod Taylor wrote:\n> On Fri, 2004-06-11 at 14:40, Merlin Moncure wrote:\n> > I am batch inserting insert statements into a database with fsync =\non.\n> > My single disk system is on a 10k drive...even though I am inside a\n> > transaction there is at least 1 file sync per row insert.\n> \n> Which filesystem?\n> \n> PostgreSQL isn't issuing the sync except at commit of a transaction,\nbut\n> some filesystems do wacky things if you ask them too.\n\nUm, NTFS. :) I'm playing with the new 'syncer' to get a feel for the\nnew performance considerations.\n\nAs I understand it, sync() is never called anymore. mdsync() hits the\nall the files 1 by 1 with an fsync. My understanding of the commit\nprocess is that 30 tps is quite reasonable for my hardware. \n\nWhat surprised was that I am getting at least 1 seek/insert inside a\ntransaction. \n\nMerlin \n",
"msg_date": "Fri, 11 Jun 2004 15:04:03 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: in-transaction insert performance in 7.5devel"
},
{
"msg_contents": "> As I understand it, sync() is never called anymore. mdsync() hits the\n> all the files 1 by 1 with an fsync. My understanding of the commit\n> process is that 30 tps is quite reasonable for my hardware. \n\nSorry. I didn't see the version in the subject and assumed 7.4 on a\nLinux machine with excessive journaling enabled.\n\n",
"msg_date": "Fri, 11 Jun 2004 15:27:01 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: in-transaction insert performance in 7.5devel"
}
] |
[
{
"msg_contents": "\nTom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I am batch inserting insert statements into a database with fsync =\non.\n> > My single disk system is on a 10k drive...even though I am inside a\n> > transaction there is at least 1 file sync per row insert.\n> \n> Are you certain you're inside a transaction?\n> \n> Tracing a process doing simple inserts within a transaction block,\n> I don't see the process doing any I/O at all, just send/recv. The\n> background writer process is doing the work, but it shouldn't block\n> the inserter.\n> \n> [ thinks for a bit... ] Hmm. I see that XLogFlush for a buffer's LSN\n> is done while holding share lock on the buffer (see FlushBuffer in\n> bufmgr.c). This would mean that anyone trying to acquire exclusive\nlock\n> on the buffer would have to wait for WAL fsync. In a situation where\n> you were repeatedly inserting into the same table, it's somewhat\nlikely\n> that the inserter would block this way while the bgwriter is trying to\n> flush a previous update of the same page. But that shouldn't happen\nfor\n> *every* insert; it could happen at most once every bgwriter_delay\nmsec.\n> \n> Does it help if you change FlushBuffer to release buffer lock while\n> flushing xlog?\n\nPutting your change in resulted in about a 15% increase in insert\nperformance. There may be some quirky things going on here with NTFS...\n\nI did an update clean from cvs and I noticed big speedup across the\nboard. Right now sync performance is right in line with my\nexpectations. In any case, I checked and confirm that there are no\nspurious fsyncs running when they are not supposed to be.\n\nMerlin\n\n\n",
"msg_date": "Fri, 11 Jun 2004 17:04:44 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: in-transaction insert performance in 7.5devel "
},
{
"msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Does it help if you change FlushBuffer to release buffer lock while\n>> flushing xlog?\n\n> Putting your change in resulted in about a 15% increase in insert\n> performance. There may be some quirky things going on here with NTFS...\n\n> I did an update clean from cvs and I noticed big speedup across the\n> board. Right now sync performance is right in line with my\n> expectations. In any case, I checked and confirm that there are no\n> spurious fsyncs running when they are not supposed to be.\n\nWas that 15% before or after updating from CVS?\n\nThe more I think about the looping aspect the less I like it, so I'd\nprefer not to pursue making the unlock change for real. But if it's\nreally a 15% win then maybe we need to...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 11 Jun 2004 17:30:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: in-transaction insert performance in 7.5devel "
}
] |
[
{
"msg_contents": "Hello,\n\nConsider the following query:\n\nselect t1field1, avg(t2fieild2)\nfrom t1, t2\nwhere t1.field1 = t2.field2\ngroup by t1field1\n\nThat works fine. But I'd really like to see more fields of t1 in this\nquery, however I can't add them into the select because they're not\npart of the GROUP BY, thus I have to add them to there too:\n\nselect t1field1, t1field2, t1field3, avg(t2fieild2)\nfrom t1, t2\nwhere t1.field1 = t2.field2\ngroup by t1field1, t1field2, t1field3\n\nThe problem is that addind them all to GROUP BY causes a performance\nloss.. The only solution I found is using a subquery like this:\n\nselect * from\nt1, (select t1field1, avg(t2fieild2)\nfrom t1, t2\nwhere t1.field1 = t2.field2\ngroup by t1field1) t1inner\nwhere t1.field1 = t1inner.field1\n\nIt works just fine.. But I prefer not to use subqueries unless I am\nreally forced to due to the design of my application.\n\nAnother solution I considered is using aggreate function like that:\n\nselect t1field1, max(t1field2), max(t1field3), avg(t2fieild2)\nfrom t1, t2\nwhere t1.field1 = t2.field2\ngroup by t1field1\n\nSadly, this caused the same performance... I wonder though, is it\npossible to make an aggregate function like first(), last() in Oracle\n(IIRC)? I believe that in such cases MySQL does first() by itself.\n\nOther ideas are welcome too.\n \n\nRegards,\n Vitaly Belman\n \n ICQ: 1912453\n AIM: VitalyB1984\n MSN: [email protected]\n Yahoo!: VitalyBe\n\n",
"msg_date": "Sun, 13 Jun 2004 06:21:17 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Additional select fields in a GROUP BY"
},
{
"msg_contents": "Vitaly Belman <[email protected]> writes:\n> The problem is that addind them all to GROUP BY causes a performance\n> loss.\n\nReally? I'd think that there'd be no visible loss if the earlier\nfields of the GROUP BY are already unique. The sort comparison\nshould stop at the first field that determines the sort order.\nCan you provide a self-contained example?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 12 Jun 2004 23:39:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Additional select fields in a GROUP BY "
},
{
"msg_contents": "On Sun, Jun 13, 2004 at 06:21:17 +0300,\n Vitaly Belman <[email protected]> wrote:\n> \n> Consider the following query:\n> \n> select t1field1, avg(t2fieild2)\n> from t1, t2\n> where t1.field1 = t2.field2\n> group by t1field1\n> \n> That works fine. But I'd really like to see more fields of t1 in this\n> query, however I can't add them into the select because they're not\n> part of the GROUP BY, thus I have to add them to there too:\n\nIf t1.field1 is a candiate key for t1, then the normal thing to do is\nto group t2 by t2.field1 (assuming you really meant to join on t2.field1,\nnot t2.field2) and THEN join to t1. That may even be faster than the way you\nare doing things now.\n\nSo the query would look like:\n\nSELECT t1.field1, t1.field2, t1.field3, a.t2avg FROM t1,\n (SELECT field1, avg(field2) as t2avg FROM t2 GROUP BY field1) as a\n WHERE t1.field1 = a.field1\n",
"msg_date": "Sun, 13 Jun 2004 08:52:12 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Additional select fields in a GROUP BY"
},
{
"msg_contents": "Bruno:\n\nIt wasn't exactly my case but you did give me an idea by this tip,\nchanging a perspective did quite good to the timing of this query.\n\nTom:\n\nHmm.. I am not sure how I can demonstrate this to you... To see the\ntime differences you'd need the whole table.. That's quite a lot of\ndata to be posted on a mailing list, if you wish to test it on your\nside, I'll dump this table partly and send them to you somehow.\n\nI do stand by what I said though, here's the real query example:\n\nOriginal query (execution time, 800ms):\n\nselect s.series_id, avg(vote_avg), sum(vote_count) \nfrom v_bookseries s, bv_seriesgenres sg\nwhere s.series_id = sg.series_id and sg.genre_id = 1\ngroup by s.series_id\norder by sum(vote_count) desc\nlimit 10\n\nQUERY PLAN:\n\nLimit (cost=6523.51..6523.53 rows=10 width=12)\n -> Sort (cost=6523.51..6566.27 rows=17104 width=12)\n Sort Key: sum(b.vote_count)\n -> GroupAggregate (cost=1368.54..5320.92 rows=17104 width=12)\n -> Merge Join (cost=1368.54..4796.91 rows=58466 width=12)\n Merge Cond: (\"outer\".series_id = \"inner\".series_id)\n -> Merge Join (cost=0.00..6676.41 rows=65902 width=16)\n Merge Cond: (\"outer\".series_id = \"inner\".series_id)\n -> Index Scan using bv_series_pkey on\nbv_series s (cost=0.00..386.83 rows=17104 width=4)\n -> Index Scan using i_books_series_id on\nbv_books b (cost=0.00..14148.38 rows=171918 width=12)\n -> Sort (cost=1368.54..1406.47 rows=15173 width=4)\n Sort Key: sg.series_id\n -> Index Scan using i_seriesgenres_genre_id\non bv_seriesgenres sg (cost=0.00..314.83 rows=15173 width=4)\n Index Cond: (genre_id = 1)\n\n\nQuery with added GROUP BY members (execution time, 1400ms):\n\nselect s.series_id, s.series_name, s.series_picture, avg(vote_avg),\nsum(vote_count)\nfrom v_bookseries s, bv_seriesgenres sg\nwhere s.series_id = sg.series_id and sg.genre_id = 1\ngroup by s.series_id, s.series_name, s.series_picture\norder by sum(vote_count) desc\nlimit 10\n\nQUERY PLAN:\n\nLimit (cost=12619.76..12619.79 rows=10 width=47)\n -> Sort (cost=12619.76..12662.52 rows=17104 width=47)\n Sort Key: sum(b.vote_count)\n -> GroupAggregate (cost=10454.67..11417.18 rows=17104 width=47)\n -> Sort (cost=10454.67..10600.83 rows=58466 width=47)\n Sort Key: s.series_id, s.series_name, s.series_picture\n -> Merge Join (cost=1368.54..4796.91 rows=58466 width=47)\n Merge Cond: (\"outer\".series_id = \"inner\".series_id)\n -> Merge Join (cost=0.00..6676.41\nrows=65902 width=51)\n Merge Cond: (\"outer\".series_id =\n\"inner\".series_id)\n -> Index Scan using bv_series_pkey on\nbv_series s (cost=0.00..386.83 rows=17104 width=39)\n -> Index Scan using i_books_series_id\non bv_books b (cost=0.00..14148.38 rows=171918 width=12)\n -> Sort (cost=1368.54..1406.47 rows=15173 width=4)\n Sort Key: sg.series_id\n -> Index Scan using\ni_seriesgenres_genre_id on bv_seriesgenres sg (cost=0.00..314.83\nrows=15173 width=4)\n Index Cond: (genre_id = 1)\n\nNotice that the GROUP BY items added the following to the plan:\n\n -> GroupAggregate (cost=10454.67..11417.18 rows=17104 width=47)\n -> Sort (cost=10454.67..10600.83 rows=58466 width=47)\n Sort Key: s.series_id, s.series_name, s.series_picture\n\nWhich eventually almost doubles the execution time.\n\n\nOn Sun, 13 Jun 2004 08:52:12 -0500, Bruno Wolff III <[email protected]> wrote:\n> \n> On Sun, Jun 13, 2004 at 06:21:17 +0300,\n> Vitaly Belman <[email protected]> wrote:\n> >\n> > Consider the following query:\n> >\n> > select t1field1, avg(t2fieild2)\n> > from t1, t2\n> > where t1.field1 = t2.field2\n> > 
group by t1field1\n> >\n> > That works fine. But I'd really like to see more fields of t1 in this\n> > query, however I can't add them into the select because they're not\n> > part of the GROUP BY, thus I have to add them to there too:\n> \n> If t1.field1 is a candiate key for t1, then the normal thing to do is\n> to group t2 by t2.field1 (assuming you really meant to join on t2.field1,\n> not t2.field2) and THEN join to t1. That may even be faster than the way you\n> are doing things now.\n> \n> So the query would look like:\n> \n> SELECT t1.field1, t1.field2, t1.field3, a.t2avg FROM t1,\n> (SELECT field1, avg(field2) as t2avg FROM t2 GROUP BY field1) as a\n> WHERE t1.field1 = a.field1\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n",
"msg_date": "Sun, 13 Jun 2004 20:35:36 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Additional select fields in a GROUP BY"
},
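Bruno's aggregate-first rewrite, applied to the real series query above, would look roughly like the sketch below. The table and column names are read off the posted EXPLAIN plans (and v_bookseries is assumed to be essentially bv_series joined to bv_books), so treat this as an illustration of the pattern rather than a tested drop-in replacement:

    select s.series_id, s.series_name, s.series_picture, foo.avg_vote, foo.sum_votes
    from bv_series s
         join bv_seriesgenres sg on sg.series_id = s.series_id and sg.genre_id = 1
         join (select series_id,
                      avg(vote_avg) as avg_vote,
                      sum(vote_count) as sum_votes
               from bv_books
               group by series_id) as foo on foo.series_id = s.series_id
    order by foo.sum_votes desc
    limit 10;

Because the aggregation happens before the join, the extra display columns never have to enter a GROUP BY, which is the point of the rewrite.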
{
"msg_contents": "Vitaly Belman <[email protected]> writes:\n> Notice that the GROUP BY items added the following to the plan:\n\n> -> Sort (cost=10454.67..10600.83 rows=58466 width=47)\n> Sort Key: s.series_id, s.series_name, s.series_picture\n\nOh, I see: in the first case you need no sort at all because the output\nof the indexscan is already known to be sorted by s.series_id. I was\nthinking of a sort with more or fewer sort columns, but that's not the\nissue here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 13 Jun 2004 13:59:24 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Additional select fields in a GROUP BY "
}
] |
[
{
"msg_contents": "Hello....\n\nI would like to know the performance of pg_fetch_array. Cosider the code:\n\n$query = \"select * from foo\";\n$result = pg_query( $db, $query );\n\nwhile ($row = pg_fetch_array($result))\n{\n $a = $row[\"a\"];\n $b = $row[\"b\"];\n $c = $row[\"c\"];\n $d = $row[\"d\"];\n}\n\nDoes php need to read database everytime when pg_fetch_array is executed in\nthe while loop or all the rows have been in the memory after pg_query?\n\nIf read database is needed, is there any method to copy all the rows into\nmemory by using other command? (because I have a application which needs\nlarge amount database update/retrieval and I wish the performance of the\noverall applications run faster.)\n\nor other method you would like to recommend in order to make the faster\nresponse time?\n\nThank you in advance.\n\n\n",
"msg_date": "Mon, 14 Jun 2004 17:12:25 +0800",
"msg_from": "\"Thanks\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_fetch_array"
},
{
"msg_contents": "> Does php need to read database everytime when pg_fetch_array is executed in\n> the while loop or all the rows have been in the memory after pg_query?\n\nYou may need to ask the php people about this one. The PostgreSQL\nprotocol would allow data to continue streaming in at the same time as\nyou are processing other rows (asynchronous retrieval). So, optionally\nthey may fetch and cache all rows in local memory at pg_query OR grab\nthem in sets of 1000 rows and cache that (fetching the next set when the\nfirst set runs out) OR grab one row for each fetch.\n\nYou could run a simple select that will fetch 100M rows from a table\nwith no WHERE clause. See how quickly the first row come in, and how\nmuch memory is used by the process.\n\nI suspect they call all of the rows at pg_query execution. Otherwise\nthey wouldn't know how to respond to a pg_num_rows() call.\n\n\nOn a side note, that is a rather unique email address.\n\n",
"msg_date": "Sun, 20 Jun 2004 16:02:17 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_fetch_array"
},
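If the worry is the client holding the entire result set in memory, one option worth noting (it is not raised in the thread, and whether it fits depends on the application) is to page through the result with a server-side cursor and fetch it in batches; a plain SQL sketch with an arbitrary batch size:

    BEGIN;
    DECLARE big_rows CURSOR FOR SELECT * FROM foo;
    FETCH 1000 FROM big_rows;   -- repeat until a FETCH returns no rows
    CLOSE big_rows;
    COMMIT;

Each FETCH hands the client a bounded slice, so client-side memory use stays roughly constant no matter how large the table is.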
{
"msg_contents": "Thanks wrote:\n> Hello....\n> \n> I would like to know the performance of pg_fetch_array. Cosider the code:\n> \n> $query = \"select * from foo\";\n> $result = pg_query( $db, $query );\n> \n> while ($row = pg_fetch_array($result))\n> {\n> $a = $row[\"a\"];\n> $b = $row[\"b\"];\n> $c = $row[\"c\"];\n> $d = $row[\"d\"];\n> }\n> \n> Does php need to read database everytime when pg_fetch_array is executed in\n> the while loop or all the rows have been in the memory after pg_query?\n\nNot knowing anything in detail about the PHP drivers, I'm almost certain \nthat all rows are returned to PHP and then pg_fetch_array() reads that \nfrom memory.\n\n> If read database is needed, is there any method to copy all the rows into\n> memory by using other command? (because I have a application which needs\n> large amount database update/retrieval and I wish the performance of the\n> overall applications run faster.)\n> \n> or other method you would like to recommend in order to make the faster\n> response time?\n\nAre you sure that pg_fetch_array() is the problem? Can you give an \nexample of what you're trying to do with the data?\n\nPS - there's a PHP list too.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 21 Jun 2004 11:34:28 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_fetch_array"
}
] |
[
{
"msg_contents": "Hello all,\n\n I have a PostgreSQL instance running on a Dell Poweredge 6450. 4 700Mhz \nprocessors, 6 GB of Ram, 2GB Backside, 60GB Raid 10 Server. Today, it \ncrashed hard. At the command line (BASH), all I could get was 'bus fault' . \nAfter rebooting, we were unable to mount the partition that PostgreSQL was \non. When trying to load one of our remote backup archives (pg_dump) from \nthe last week, they all die at a point, saying:\n\npg_restore: [custom archiver] could not read data block -- expected 4096, got \n3870\n\n So, basically, we are running e2fsck very carefully as to not destroy any \nfiles, but still try and clean up as many issues as possible. Is there \nanyway to extract the pure data from those archives in spite of the bad \nblocks? Also, any other suggestions for the situation are welcome. Thanks.\n\n- Marcus\n",
"msg_date": "Mon, 14 Jun 2004 22:33:26 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failed System. Help."
}
] |
[
{
"msg_contents": "Just as a point of reference, my situation is very much like the one in this \nthread:\n\nhttp://archive.netbsd.se/?list=pgsql-admin&a=2004-02&mid=50891\n\nThanks again.\n\n- Marcus\n",
"msg_date": "Mon, 14 Jun 2004 22:41:10 -0500",
"msg_from": "Marcus Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Failed System follow up"
}
] |
[
{
"msg_contents": "Hi All,\n\nIm testing my postgres 7.3.2 db on redhat 9.0 using osdb 0.16. I couldnot find any docuementation for interpreting the results obtained. Im doing multiuser (10 users) test. I've two different con figurations for postgresql.conf. In one it takes 44 mins to finish while in other it takes 18 mins to finish. But tps (both mixed IR & mixed OLTP) is very high in first one. Also i want to know relation between tuples/sec and transactions/sec. \n\nThanks & Regards\n\n\n\n\nNeeraj Malhotra\n\n\nHi All,\n\nIm testing my postgres 7.3.2 db on redhat 9.0 using osdb 0.16. I couldnot find any docuementation for interpreting the results obtained. Im doing multiuser (10 users) test. I've two different con figurations for postgresql.conf. In one it takes 44 mins to finish while in other it takes 18 mins to finish. But tps (both mixed IR & mixed OLTP) is very high in first one. Also i want to know relation between tuples/sec and transactions/sec. \n\nThanks & Regards\n\n\n\nNeeraj Malhotra",
"msg_date": "15 Jun 2004 06:03:40 -0000",
"msg_from": "\"Neeraj \" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Interpreting OSDB Results"
}
] |
[
{
"msg_contents": "Hi all,\n\nI noticed the following. Given two tables, just simply articles and their \npackages:\n\n\tarticle(id int)\n\tpackage( id int, article_id int, amount)\n\nWhen taking the minimum package for articles given some constraint on the \narticle table\n\n\tselect \n\t\tarticle.id, \n\t\tp_min \n\tfrom \n\t\tarticle, \n\t\t(select \n\t\t\tarticle_id, \n\t\t\tmin(amount) as p_min \n\t\tfrom \n\t\t\tpackage \n\t\tgroup by \n\t\t\tarticle_id\n\t\t) as foo \n\twhere \n\t\tarticle.id = foo.article_id and \n\t\tarticle.id < 50;\n\nThe query plan shows\n\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Hash Join (cost=730.11..808.02 rows=1 width=8)\n Hash Cond: (\"outer\".article_id = \"inner\".id)\n -> Subquery Scan foo (cost=726.10..781.74 rows=4451 width=8)\n -> HashAggregate (cost=726.10..737.23 rows=4451 width=8)\n -> Seq Scan on package (cost=0.00..635.40 rows=18140 width=8)\n -> Hash (cost=4.01..4.01 rows=1 width=4)\n -> Index Scan using article_pkey on article (cost=0.00..4.01 rows=1 \nwidth=4)\n Index Cond: (id < 50)\n\nSo, a seq scan on the complete package table is done, although it seems the \nplanner could have deduced that only the part that has 'article_id < 50' \nwould be relevant, since 'article.id < 50' and 'article.id = foo.article_id'\n\nIf I ditch the group by, then this contraint does get pushed into the \nsubselect :\n\n\tselect \n\t\tarticle.id, \n\t\tp_min \n\tfrom \n\t\tarticle, \n\t\t(select \n\t\t\tarticle_id, \n\t\t\tamount as p_min \n\t\tfrom \n\t\t\tpackage\n\t\t) as foo \n\twhere \n\t\tarticle.id = foo.article_id and \n\t\tarticle.id < 50;\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Nested Loop (cost=0.00..14.22 rows=5 width=8)\n -> Index Scan using article_pkey on article (cost=0.00..4.01 rows=1 \nwidth=4)\n Index Cond: (id < 50)\n -> Index Scan using package_idx3 on package (cost=0.00..10.16 rows=4 \nwidth=8)\n Index Cond: (\"outer\".id = package.article_id)\n\n\nSo it seems the math-knowledge is there ;)\nI'm wondering about what I'm overlooking here. When I take the first query and \nadd the article constraint 'manually', I get:\n\n\tselect \n\t\tarticle.id, \n\t\tp_min \n\tfrom \n\t\tarticle, \n\t\t(select \n\t\t\tarticle_id, \n\t\t\tmin(amount) as p_min \n\t\tfrom \n\t\t\tpackage \n\t\twhere \n\t\t\tarticle_id < 50 \n\t\tgroup by \n\t\t\tarticle_id\n\t\t) as foo \n\twhere \n\t\tarticle.id = foo.article_id and \n\t\tarticle.id < 50;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..9.65 rows=1 width=8)\n Join Filter: (\"outer\".id = \"inner\".article_id)\n -> Index Scan using article_pkey on article (cost=0.00..4.01 rows=1 \nwidth=4)\n Index Cond: (id < 50)\n -> Subquery Scan foo (cost=0.00..5.62 rows=1 width=8)\n -> GroupAggregate (cost=0.00..5.61 rows=1 width=8)\n -> Index Scan using package_idx3 on package (cost=0.00..5.60 \nrows=2 width=8)\n Index Cond: (article_id < 50)\n\n\nwhich would have been a nice plan for the first query ;)\n\n\nCould someone shine a light on this, please?\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Tue, 15 Jun 2004 08:49:29 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "why is a constraint not 'pushed down' into a subselect when this\n\tsubselect is using a 'group by' ??"
},
{
"msg_contents": "Frank van Vugt <[email protected]> writes:\n> If I ditch the group by, then this contraint does get pushed into the \n> subselect :\n\nYou're misinterpreting it. Without the group by, the plan is a\ncandidate for nestloop-with-inner-index-scan; with the group by,\nthere's another step in the way.\n\nPushing down into subselects does get done, for instance in CVS tip\nI can change the last part of your query to \"foo.article_id < 50\"\nand get\n\n Hash Join (cost=25.18..53.53 rows=335 width=8)\n Hash Cond: (\"outer\".id = \"inner\".article_id)\n -> Seq Scan on article (cost=0.00..20.00 rows=1000 width=4)\n -> Hash (cost=25.01..25.01 rows=67 width=8)\n -> Subquery Scan foo (cost=24.17..25.01 rows=67 width=8)\n -> HashAggregate (cost=24.17..24.34 rows=67 width=8)\n -> Seq Scan on package (cost=0.00..22.50 rows=334 width=8)\n Filter: (article_id < 50)\n\nObviously this is on toy tables, but the point is that the constraint\ndoes get pushed down through the GROUP BY when appropriate.\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Tue, 15 Jun 2004 09:23:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why is a constraint not 'pushed down' into a subselect when this\n\tsubselect is using a 'group by' ??"
},
{
"msg_contents": "> Obviously this is on toy tables\n\nThe query is simplified, yes. But the data in the tables is real, albeit \nthey're not that large.\n\n> You're misinterpreting it.\n\nI might very well be ;)\nBut I also get the feeling I didn't explain to you well enough what I meant...\n\n> Without the group by, the plan is a candidate for \nnestloop-with-inner-index-scan\n\nYes, I understand that. I only ditched the group by to check whether the \ncontraint on the article table was indeed recognized as a constraint on the \npackage table based on 'article.id = foo.article_id'. And it is/was.\n\n> with the group by, there's another step in the way.\n\nYep, but on my system, package gets seq-scanned *without* any additional \nconstraint, resulting in a loooooong processing time.\n\n> Pushing down into subselects does get done, for instance in CVS tip\n> I can change the last part of your query to \"foo.article_id < 50\"\n> and get ...\n\nThis is why I think I wasn't clear enough.\n\nIn the real thing, the constraint on the article table is built by some \nexternal source and I cannot easily make assumptions to translate these to a \nconstraint on the package table, especially since I expect the planner to be \nfar better in that than me ;)\n\nSo, my base query is this:\n\n\tselect \n\t\tarticle.id, p_min \n\tfrom \n\t\tarticle, \n\t\t(select \n\t\t\tarticle_id, min(amount) as p_min \n\t\tfrom \n\t\t\tpackage \n\t\tgroup by \n\t\t\tarticle_id\n\t\t) as foo \n\twhere \n\t\tarticle.id = foo.article_id and \n\t\t<some constraint on article table>;\n\n\nNow, when <constraint> = true, this obviously results in seqscans:\n\n Hash Join (cost=1106.79..1251.46 rows=4452 width=8)\n Hash Cond: (\"outer\".article_id = \"inner\".id)\n -> Subquery Scan foo (cost=726.10..781.74 rows=4451 width=8)\n -> HashAggregate (cost=726.10..737.23 rows=4451 width=8)\n -> Seq Scan on package (cost=0.00..635.40 rows=18140 width=8)\n -> Hash (cost=369.35..369.35 rows=4535 width=4)\n -> Seq Scan on article (cost=0.00..369.35 rows=4535 width=4)\n\nBut when <constraint> = 'article.id < 50', only article is indexscanned:\n\n Hash Join (cost=730.11..808.02 rows=1 width=8)\n Hash Cond: (\"outer\".article_id = \"inner\".id)\n -> Subquery Scan foo (cost=726.10..781.74 rows=4451 width=8)\n -> HashAggregate (cost=726.10..737.23 rows=4451 width=8)\n -> Seq Scan on package (cost=0.00..635.40 rows=18140 width=8)\n -> Hash (cost=4.01..4.01 rows=1 width=4)\n -> Index Scan using article_pkey on article (cost=0.00..4.01 rows=1 \nwidth=4)\n Index Cond: (id < 50)\n\n\nWhich still results in poor performance due to the seqscan on package.\n\nPutting the constraint on package is boosting performance indeed, but I cannot \nmake that assumption.\n\nSo, what I was asking was:\n\nWhen the 'article.id < 50' constraint is added, it follows that \n'foo.article_id < 50' is a constraint as well. Why is this constraint not \nused to avoid the seqscan on package?\n\n> Obviously this is on toy tables, but the point is that the constraint\n> does get pushed down through the GROUP BY when appropriate.\n\nI've seen it being pushed down when it already was defined as a constraint on \nthe group by, like in your example. \n\nIf necessary, I'll throw together a few commands that build some example \ntables to show what I mean.\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Tue, 15 Jun 2004 17:30:40 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why is a constraint not 'pushed down' into a subselect when this\n\tsubselect is using a 'group by' ??"
},
{
"msg_contents": "Frank van Vugt <[email protected]> writes:\n> When the 'article.id < 50' constraint is added, it follows that \n> 'foo.article_id < 50' is a constraint as well. Why is this constraint not \n> used to avoid the seqscan on package?\n\nWe don't attempt to make every possible inference (and I don't think\nyou'd like it if we did). The current code will draw inferences about\ntransitive equality, for instance given a = b and b = c it will infer\na = c, if all three operators involved are mergejoinable. But given\na = b and some arbitrary other constraint on b, it won't consider\nsubstituting a into that other constraint. This example doesn't\npersuade me that it would be worth expending the cycles to do so.\n\nAside from the sheer cost of planning time, there are semantic\npitfalls to consider. In some datatypes there are values that are\n\"equal\" according to the = operator but are distinguishable by other\noperators --- for example, zero and minus zero in IEEE-standard\nfloat arithmetic. We'd need a great deal of caution in determining\nwhat inferences can be drawn.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Jun 2004 14:04:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why is a constraint not 'pushed down' into a subselect when this\n\tsubselect is using a 'group by' ??"
},
{
"msg_contents": "> We don't attempt to make every possible inference (and I don't think\n> you'd like it if we did).\n\nI wasn't really asking you to, either ;))\n\nJust trying to achieve a more in-depth understanding of the way things work.\n\n> This example doesn't\n> persuade me that it would be worth expending the cycles to do so.\n\nIn the real thing I can easily get good processing times by adding an extra \njoin with article inside in the group by and simply use the constraint on \nthat as well, so I'm ok with any choice you make on this.\n\nI thought this might have been some kind of special case though, given its \noccurence on the use of group by.\n\n> for example, zero and minus zero in IEEE-standard float arithmetic.\n\nGood example, brings back a few memories as well ;)\n\n\n\n\nThanks for your explanation, Tom!\n\n\n\n-- \nBest,\n\n\n\n\nFrank.\n\n",
"msg_date": "Wed, 16 Jun 2004 10:42:10 +0200",
"msg_from": "Frank van Vugt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why is a constraint not 'pushed down' into a subselect when this\n\tsubselect is using a 'group by' ??"
}
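For reference, the workaround Frank describes (repeating the join and the constraint inside the grouped subselect) looks like this on the toy article/package schema from the first message. It is a sketch only, since in his real application the constraint is generated externally rather than being a literal id < 50:

    select article.id, foo.p_min
    from article,
         (select p.article_id, min(p.amount) as p_min
          from package p, article a
          where p.article_id = a.id
            and a.id < 50            -- the same constraint, repeated by hand
          group by p.article_id) as foo
    where article.id = foo.article_id
      and article.id < 50;

This hands the planner the restriction inside the subselect explicitly, which is exactly the inference it does not attempt to make on its own.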
] |
[
{
"msg_contents": "Hello all,\n\nI have a rather large table (~20 GB) of network logs taken over the period\nof a month and stored under postgres 7.3. I'm trying to create an indexing\nscheme so that I can do a query with a \"where time > foo and time < bar\"\nand get the results without too much waiting. With this in mind, I created\na new database ordering each log entry by the time values (seconds, and\nmicroseconds), and then created an index on the time value for seconds. I\ndidn't use the clusterdb command, but I did do a \"select * into foo_tbl\norder by time\", which I understand to be the functional equivalent. The\nresult looks something like this: \n\n Table \"public.allflow_tv_mydoom\"\n Column | Type | Modifiers\n------------+---------+-----------\n tv_s | bigint |\n tv_us | bigint |\n src | inet |\n dst | inet |\n sport | integer |\n dport | integer |\n bytes_in | bigint |\n bytes_out | bigint |\n isn_in | bigint |\n isn_out | bigint |\n num_starts | integer |\n result | integer |\n flags | integer |\n age_s | bigint |\n age_us | bigint |\n ttl_left | bigint |\n end_s | bigint |\n end_us | bigint |\nIndexes: allflow_tv_mydoom_x btree (tv_s) \n\nwith tv_s, of course, being my value in seconds. I followed this up with\nan ANALYZE so postrgres could absorb this structure into its logic. \nHowever, whenever I want to do my query, it *insists* on doing a\nsequential scan, even when I explicitly turn sequential scan off in\npostgres.conf... here's an example: \n\nstandb=# explain select * from allflow_tv_mydoom where tv_s < 1074200099\nand tv_s > 107506499;\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Seq Scan on allflow_tv_mydoom (cost=100000000.00..102303307.94 rows=1\nwidth=132)\n Filter: ((tv_s < 1074200099) AND (tv_s > 107506499))\n(2 rows) \n\nIn this query I'm basically asking for a day's worth of data. It should be\nstraightforward: it's all laid out in order and indexed, but it still\ninsists on (what I believe to be) tearing through the entire DB and\nfiltering out the other time values as it sees them. Even if I want just\nthe data from a particular second, it does the same thing:\n\nstandb=# explain select * from allflow_tv_mydoom where tv_s = 1074200099;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Seq Scan on allflow_tv_mydoom (cost=100000000.00..102140682.45 rows=145\nwidth=132)\n Filter: (tv_s = 1074200099)\n(2 rows) \n\nNaturally, this is incredibly frustrating because it takes forever,\nregardless of the query.\n\nThe funny thing though is that I have the same data in another table\nwhere it is ordered and indexed by IP address, and the queries use the\nindex and work the way I want them to. Example:\n\nstandb=# explain select * from flow_ip_mydoom where src = '10.0.5.5';\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Index Scan using flow_ip_mydoom_x on flow_ip_mydoom (cost=0.00..333.41\nrows=376 width=132)\n Index Cond: (src = '10.0.5.5'::inet)\n(2 rows)\n\nCan someone please explain to me what's going on and how to fix it? If\nthere's an easier way to 'jump' to different time regions in the table\nwithout indexing all the different seconds values, I'd like to know this\nas well. \n\nThanks a bunch,\n-S\n\n\n\n\n",
"msg_date": "Tue, 15 Jun 2004 17:29:59 -0400 (EDT)",
"msg_from": "Stan Bielski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Indexing question"
},
{
"msg_contents": "Stan Bielski <[email protected]> writes:\n> Table \"public.allflow_tv_mydoom\"\n> Column | Type | Modifiers\n> ------------+---------+-----------\n> tv_s | bigint |\n ^^^^^^\n> Indexes: allflow_tv_mydoom_x btree (tv_s) \n\n> standb=# explain select * from allflow_tv_mydoom where tv_s < 1074200099\n> and tv_s > 107506499;\n> [ gives seqscan ]\n\nThis is a FAQ :-(. Unadorned integer constants are taken to be int4\nnot int8 (unless they are too large for int4), and cross-data-type\ncomparisons are not indexable in existing releases. So you have to\nexplicitly cast the comparison values to int8:\n\nexplain select * from allflow_tv_mydoom where tv_s < 1074200099::bigint\nand tv_s > 107506499::bigint;\n\n(or use the standard CAST syntax if you prefer).\n\n7.5 will have a fix for this ancient annoyance.\n\nBTW, is there a reason to be using tv_s+tv_us and not just a single\ntimestamptz column?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 20 Jun 2004 14:45:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Indexing question "
}
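The same fix written with the standard CAST syntax Tom mentions; only the literals change, the table and its index are used as-is:

    SELECT *
    FROM allflow_tv_mydoom
    WHERE tv_s > CAST(107506499 AS bigint)
      AND tv_s < CAST(1074200099 AS bigint);

With the constants typed as bigint, the comparison matches the bigint column and the planner can consider allflow_tv_mydoom_x instead of the sequential scans shown above.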
] |
[
{
"msg_contents": "I've known about this tool for a while, but it seems many people do not \nknow of its existence and I think it would be useful to a lot of people \nwho have a hard time reading explain analyze output. (And even those \nwho can read them without blinking.. when you get deep in join hell it \ngets tricky!)\n\nRed Hat Visual Explain - part of Red Hat Database.\n\nIt is what the name implies - a graphical (java) program to draw a \npicture of your query plan (along with all the juicy information \nexplain analyze provides). I just tried it out today and after \nupgrading my JDBC to 7.4 it worked fine (If you get a message about SET \nAUTOCOMMIT then you need to upgrade your jdbc jar)\n\nQuite handy for getting a grasp on stupidly large query plans.\n\nhttp://sources.redhat.com/rhdb/visualexplain.html\n\nI used the CVS version, I have no idea how well the \"official\" releases \nwork.\nAnyone else using it?\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Wed, 16 Jun 2004 10:01:32 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Visual Explain"
},
{
"msg_contents": "\nWill this run on other platforms? OSX maybe?\n\n> I've known about this tool for a while, but it seems many people do not\n> know of its existence and I think it would be useful to a lot of people\n> who have a hard time reading explain analyze output. (And even those\n> who can read them without blinking.. when you get deep in join hell it\n> gets tricky!)\n> \n> Red Hat Visual Explain - part of Red Hat Database.\n> \n> It is what the name implies - a graphical (java) program to draw a\n> picture of your query plan (along with all the juicy information\n> explain analyze provides). I just tried it out today and after\n> upgrading my JDBC to 7.4 it worked fine (If you get a message about SET\n> AUTOCOMMIT then you need to upgrade your jdbc jar)\n> \n> Quite handy for getting a grasp on stupidly large query plans.\n> \n> http://sources.redhat.com/rhdb/visualexplain.html\n> \n> I used the CVS version, I have no idea how well the \"official\" releases\n> work.\n> Anyone else using it?\n> \n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n\n\n-- \nThis message has been scanned for viruses and\ndangerous content by MailScanner, and is\nbelieved to be clean.\n\n",
"msg_date": "Thu, 17 Jun 2004 12:10:30 +0100",
"msg_from": "Adam Witney <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "\nOn 17/06/2004 12:10 Adam Witney wrote:\n> \n> Will this run on other platforms? OSX maybe?\n\nIt's a Java app so it runs on any any platform with a reasonably modern \nJava VM.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 17 Jun 2004 13:16:34 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Visual Explain"
},
{
"msg_contents": "\nOn 17/06/2004 12:10 Adam Witney wrote:\n> \n> Will this run on other platforms? OSX maybe?\n\nIt's a Java app so it runs on any any platform with a reasonably modern\nJava VM.\n\n--\nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n\n",
"msg_date": "Thu, 17 Jun 2004 13:52:15 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "\nOn Jun 17, 2004, at 7:10 AM, Adam Witney wrote:\n\n>\n> Will this run on other platforms? OSX maybe?\n>\n\nI've run it on both linux (rh8) and osx (panther).\n\nits java so it *should* run anywhere.\n\nIt isn't the fastest beast in the world though. takes a bit of time to \nrender the plan.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Thu, 17 Jun 2004 09:37:29 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "Is it possible to download the Visual Explain only (link)? I only see\nthat you can donwload the whole ISO (which I hardly need).\n\nOn Thu, 17 Jun 2004 13:52:15 +0100, Paul Thomas <[email protected]> wrote:\n> \n> \n> On 17/06/2004 12:10 Adam Witney wrote:\n> >\n> > Will this run on other platforms? OSX maybe?\n> \n> It's a Java app so it runs on any any platform with a reasonably modern\n> Java VM.\n> \n> --\n> Paul Thomas\n> +------------------------------+---------------------------------------------+\n> | Thomas Micro Systems Limited | Software Solutions for\n> Business |\n> | Computer Consultants |\n> http://www.thomas-micro-systems-ltd.co.uk |\n> +------------------------------+---------------------------------------------+\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n",
"msg_date": "Thu, 17 Jun 2004 19:54:41 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "\nOn Jun 17, 2004, at 12:54 PM, Vitaly Belman wrote:\n\n> Is it possible to download the Visual Explain only (link)? I only see\n> that you can donwload the whole ISO (which I hardly need).\n>\nyou'll need to snag it out of cvs:\n\nhttp://sources.redhat.com/rhdb/cvs.html\n\nYou can checkout just visual explain so you won't need to grab \neverything.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Thu, 17 Jun 2004 13:05:19 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "On 17/06/2004 17:54 Vitaly Belman wrote:\n> Is it possible to download the Visual Explain only (link)? I only see\n> that you can donwload the whole ISO (which I hardly need).\n\nYou can get it from CVS and build it yourself.\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 17 Jun 2004 18:35:11 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Visual Explain"
},
{
"msg_contents": "I see. Thanks :).\n\nOn Thu, 17 Jun 2004 18:35:11 +0100, Paul Thomas <[email protected]> wrote:\n> \n> On 17/06/2004 17:54 Vitaly Belman wrote:\n> > Is it possible to download the Visual Explain only (link)? I only see\n> > that you can donwload the whole ISO (which I hardly need).\n> \n> You can get it from CVS and build it yourself.\n> \n> \n> \n> --\n> Paul Thomas\n> +------------------------------+---------------------------------------------+\n> | Thomas Micro Systems Limited | Software Solutions for\n> Business |\n> | Computer Consultants |\n> http://www.thomas-micro-systems-ltd.co.uk |\n> +------------------------------+---------------------------------------------+\n>\n",
"msg_date": "Thu, 17 Jun 2004 21:13:41 +0300",
"msg_from": "Vitaly Belman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Visual Explain"
}
] |
[
{
"msg_contents": "Hi,\nI currently have a mysql server running with a database of around 800\ngb. The problem is that the server is old (500 MHz Pentium III with 512\nMB RAM) and I want to change this to a new server and convert the\nexisting database to Postgresql on Debian (I assume that Postgresql\noffers better performance for complex read only queries on large\ndatabases), though I was wondering if\n\n1. It is possible to have some sort of load-balancing through buying\nmany computers without replication, i.e have one server which has the\ndatabases and then other servers which has no database but just exists\nto balance the memory and processor load? (I have heard this is possible\nwith C-JDBC)It is difficult to have enough space to replicate a 600 gb\ndatabase across all computers)\n\n2. It is advantageous to buy AMD 64 rather than the Pentium IV?\n\nAny thoughts?\nThanks. \n\n",
"msg_date": "Wed, 16 Jun 2004 14:15:27 -0500",
"msg_from": "Bill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Basic Postgresql Performance Question"
},
{
"msg_contents": "* Bill ([email protected]) wrote:\n> I currently have a mysql server running with a database of around 800\n> gb. The problem is that the server is old (500 MHz Pentium III with 512\n> MB RAM) and I want to change this to a new server and convert the\n> existing database to Postgresql on Debian (I assume that Postgresql\n> offers better performance for complex read only queries on large\n> databases), though I was wondering if\n\nExcellent plan. If you're looking at using Debian stable I'd encourage\nyou to consider using the PostgreSQL back-port of 7.4.2 to Debian stable\non backports.org.\n\n> 1. It is possible to have some sort of load-balancing through buying\n> many computers without replication, i.e have one server which has the\n> databases and then other servers which has no database but just exists\n> to balance the memory and processor load? (I have heard this is possible\n> with C-JDBC)It is difficult to have enough space to replicate a 600 gb\n> database across all computers)\n\nI don't think so... There's something called OpenMosix which does this\non independent processes but not for threaded programs (since it doesn't\nmake sense to split them across different nodes if they're accessing the\nsame memory- every memory access would have to be checked) like the\nPostgreSQL server.\n\n> 2. It is advantageous to buy AMD 64 rather than the Pentium IV?\n\nPersonally, I certainly think so. More registers, more memory possible\ninside one application, other stuff...\n\n\tStephen",
"msg_date": "Wed, 16 Jun 2004 15:35:56 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Basic Postgresql Performance Question"
},
{
"msg_contents": "Stephen Frost wrote:\n\n>>1. It is possible to have some sort of load-balancing through buying\n>>many computers without replication, i.e have one server which has the\n>>databases and then other servers which has no database but just exists\n>>to balance the memory and processor load? (I have heard this is possible\n>>with C-JDBC)It is difficult to have enough space to replicate a 600 gb\n>>database across all computers)\n>> \n>>\n>\n>I don't think so... There's something called OpenMosix which does this\n>on independent processes but not for threaded programs (since it doesn't\n>make sense to split them across different nodes if they're accessing the\n>same memory- every memory access would have to be checked) like the\n>PostgreSQL server.\n>\n> \n>\nLook at www.linuxlabs.com they have a clustering system. Not exactly \nthe same thing but close to what I think you are looking for.\n\n-- \nKevin Barnard\n\n\n\n\n\n\n\n\n\n\nStephen Frost wrote:\n\n\n1. It is possible to have some sort of load-balancing through buying\nmany computers without replication, i.e have one server which has the\ndatabases and then other servers which has no database but just exists\nto balance the memory and processor load? (I have heard this is possible\nwith C-JDBC)It is difficult to have enough space to replicate a 600 gb\ndatabase across all computers)\n \n\n\nI don't think so... There's something called OpenMosix which does this\non independent processes but not for threaded programs (since it doesn't\nmake sense to split them across different nodes if they're accessing the\nsame memory- every memory access would have to be checked) like the\nPostgreSQL server.\n\n \n\nLook at www.linuxlabs.com they have a clustering system. Not exactly\nthe same thing but close to what I think you are looking for.\n-- \nKevin Barnard",
"msg_date": "Wed, 16 Jun 2004 15:08:38 -0500",
"msg_from": "Kevin Barnard <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Basic Postgresql Performance Question"
},
{
"msg_contents": "On Wed, 2004-06-16 at 13:15, Bill wrote:\n> Hi,\n> I currently have a mysql server running with a database of around 800\n> gb. The problem is that the server is old (500 MHz Pentium III with 512\n> MB RAM) and I want to change this to a new server and convert the\n> existing database to Postgresql on Debian (I assume that Postgresql\n> offers better performance for complex read only queries on large\n> databases),\n\nUsually, but there are always queries that run faster or slower on a\ngiven database due to differences in architecture and design. For\ninstance PostgreSQL tends to be slow when doing max/min aggs, but faster\nwhen doing things involving complex joins and unions.\n\n> though I was wondering if\n> \n> 1. It is possible to have some sort of load-balancing through buying\n> many computers without replication, i.e have one server which has the\n> databases and then other servers which has no database but just exists\n> to balance the memory and processor load? (I have heard this is possible\n> with C-JDBC)It is difficult to have enough space to replicate a 600 gb\n> database across all computers)\n\nThat depends. Most databases are first I/O bound, then memory bound,\nthen CPU bound, in that order. With an 800Gb database your main \"cost\"\nis gonna be moving data off of the platters and into memory, then having\nthe memory to hold the working sets, then the CPU to mush it together.\n\nNow, if you're reading only tiny portions at a time, but doing lots of\nstrange work on them, say weather forcasting, then you might be CPU\nbound. But without knowing what your usage patterns are like, we don't\nknow whether or not running on multiple boxes would help. There are\nreplication systems, but there's no such thing as a free lunch.\n\n> 2. It is advantageous to buy AMD 64 rather than the Pentium IV?\n\nYes and no. If having more than 2 gigs of ram is important, 64 bit\narchitecures run faster than 32 bit, where having over 2 gigs usually\nresults in a slow down due to the memory switching they use.\n\n",
"msg_date": "Wed, 16 Jun 2004 14:18:45 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Basic Postgresql Performance Question"
},
{
"msg_contents": "* Scott Marlowe ([email protected]) wrote:\n> On Wed, 2004-06-16 at 13:15, Bill wrote:\n> > 2. It is advantageous to buy AMD 64 rather than the Pentium IV?\n> \n> Yes and no. If having more than 2 gigs of ram is important, 64 bit\n> architecures run faster than 32 bit, where having over 2 gigs usually\n> results in a slow down due to the memory switching they use.\n\nThis is truer on more traditional 64bit platforms than on amd64 which\nhas more differences than just the ability to handle 64bit size things.\namd64 also gives you access to more registers than were on the\nregister-starved i386 platforms which increases the speed for most\napplications where it usually wouldn't when recompiled for 64bit.\n\n\tStephen",
"msg_date": "Wed, 16 Jun 2004 16:49:18 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Basic Postgresql Performance Question"
}
] |
[
{
"msg_contents": "> Was that 15% before or after updating from CVS?\n> \n> The more I think about the looping aspect the less I like it, so I'd\n> prefer not to pursue making the unlock change for real. But if it's\n> really a 15% win then maybe we need to...\n> \n> \t\t\tregards, tom lane\n\nAfter. So far, I haven't been able to reproduce original the insert\nproblem. A word of warning: the version I was testing with was patched\nwith some unapproved patches and that may have been part of the issue.\n\nHere are my results (10k drive, NTFS):\nfsync off, in xact: ~ 398 i/sec\nfsync off, outside xact: ~ 369 i/sec\nfsync on, in xact: ~ 374 i/sec\nfsync on, outside xact: ~ 35 i/sec\n\nwith your code change:\nfsync on, in xact: ~ 465 i/sec\nfsync on, outside xact: ~ 42 i/sec\n\nDon't put too much faith in these results. If you are still\ncontemplating a code change, I'll set up a test unit for more accurate\nresults and post the code (my current tests are in COBOL). \n\nMerlin\n",
"msg_date": "Thu, 17 Jun 2004 12:35:19 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: in-transaction insert performance in 7.5devel "
}
] |
[
{
"msg_contents": "\n\n\n\nPg: 7.4.2\nRedHat 7.3\nRam: 8gig\n\nI have 6 million row table that I vacuum full analyze each night. The time\nseems to be streching out further and further as I add more rows. I read\nthe archives and Josh's annotated pg.conf guide that setting the FSM higher\nmight help. Currently, my memory settings are set as such. Does this seem\nlow?\n\nLast reading from vaccum verbose:\n INFO: analyzing \"cdm.cdm_ddw_customer\"\nINFO: \"cdm_ddw_customer\": 209106 pages, 3000 rows sampled, 6041742\nestimated total rows\n>>I think I should now set my max FSM to at least 210000 but wanted to make\nsure\n\nshared_buffers = 2000 # min 16, at least max_connections*2, 8KB\neach\nsort_mem = 12288 # min 64, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n\nTIA\n\nPatrick Hatcher\nMacys.Com\n\n",
"msg_date": "Thu, 17 Jun 2004 13:09:47 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow vacuum performance"
},
{
"msg_contents": "On Thu, 17 Jun 2004, Patrick Hatcher wrote:\n\n> I have 6 million row table that I vacuum full analyze each night. The time\n> seems to be streching out further and further as I add more rows. I read\n\nYou could try to run normal (non full) vacuum every hour or so. If you do \nnormal vacuum often enough you probably don't need to run vacuum full at \nall.\n\n-- \n/Dennis Bj�rklund\n\n",
"msg_date": "Fri, 18 Jun 2004 06:30:12 +0200 (CEST)",
"msg_from": "Dennis Bjorklund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow vacuum performance"
},
{
"msg_contents": "On Thu, 2004-06-17 at 13:09 -0700, Patrick Hatcher wrote:\n> \n> \n> \n> Pg: 7.4.2\n> RedHat 7.3\n> Ram: 8gig\n> \n> I have 6 million row table that I vacuum full analyze each night. The time\n> seems to be streching out further and further as I add more rows. I read\n> the archives and Josh's annotated pg.conf guide that setting the FSM higher\n> might help. Currently, my memory settings are set as such. Does this seem\n> low?\n> \n> Last reading from vaccum verbose:\n> INFO: analyzing \"cdm.cdm_ddw_customer\"\n> INFO: \"cdm_ddw_customer\": 209106 pages, 3000 rows sampled, 6041742\n> estimated total rows\n> >>I think I should now set my max FSM to at least 210000 but wanted to make\n> sure\n\nYes, that's my interpretation of those numbers too. I would set\nmax_fsm_pages to 300000 (or more) in that case.\n\nIf you have 8G of RAM in the machine your shared_buffers seems very low\ntoo. Depending on how it is used I would increase that to at least the\nrecommended maximum (10000 - 80M).\n\nYou don't quote your setting for effective_cache_size, but you should\nprobably look at what \"/usr/bin/free\" reports as \"cached\", divide that\nby 10, and set it to that as a quick rule of thumb...\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n\n\n> shared_buffers = 2000 # min 16, at least max_connections*2, 8KB\n> each\n> sort_mem = 12288 # min 64, size in KB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 100000 # min max_fsm_relations*16, 6 bytes each\n> #max_fsm_relations = 1000 # min 100, ~50 bytes each\n> \n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Make things as simple as possible, but no simpler -- Einstein\n-------------------------------------------------------------------------",
"msg_date": "Fri, 18 Jun 2004 20:15:44 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow vacuum performance"
}
] |
[
{
"msg_contents": "I'm trying to migrate an application from an Oracle\nbackend to PostgreSQL and have a performance question.\n\nThe hardware for the database is the same, a SunFire\nv120, 2x73GB U2W SCSI disks, 1GB RAM, 650MHz US-IIe\nCPU. Running Solaris 8.\n\nThe table in question has 541741 rows. Under Oracle,\nthe query ' select distinct version from vers where\nversion is not null ' returns 534 rows in 6.14\nseconds, with an execution plan showing a table scan\nof vers followed by a sort.\n\nThe explain output on postgres shows the same\nexecution with a scan on vers and a sort but the query\ntime is 78.6 seconds.\n\nThe explain output from PostgreSQL is:\n QUERY PLAN\n---------------------------------------------------------------------------------\n Unique (cost=117865.77..120574.48 rows=142\nwidth=132)\n -> Sort (cost=117865.77..119220.13 rows=541741\nwidth=132)\n Sort Key: \"version\"\n -> Seq Scan on vers (cost=0.00..21367.41\nrows=541741 width=132)\n Filter: (\"version\" IS NOT NULL)\n\nI do have an index on the column in question but\nneither oracle nor postgresql choose to use it (which\ngiven that we're visiting all rows is perhaps not\nsurprising).\n\nI'm not as familiar with postgresql as I am with\nOracle but I think I've configured comparible\nbuffering and sort area sizes, certainly there isn't\nmuch physical IO going on in either case.\n\nWhat can I do to speed up this query? Other queries\nare slightly slower than under Oracle on the same\nhardware but nothing like this.\n\nThanks!\n\nG\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n",
"msg_date": "Fri, 18 Jun 2004 12:31:57 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Major differences between oracle and postgres performance - what can\n\tI do ?"
},
{
"msg_contents": "Gary Cowell wrote:\n> \n> I'm not as familiar with postgresql as I am with\n> Oracle but I think I've configured comparible\n> buffering and sort area sizes, certainly there isn't\n> much physical IO going on in either case.\n\nPeople are going to want to know:\n1. version of PG\n2. explain analyse output, rather than just explain\n3. What values you've used for the postgresql.conf file\n\nThe actual plan from explain analyse isn't going to be much use - as you \nsay, a scan of the whole table followed by sorting is the best you'll \nget. However, the actual costs of these steps might say something useful.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 18 Jun 2004 13:07:15 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance"
},
{
"msg_contents": "\nOn 18/06/2004 12:31 Gary Cowell wrote:\n> [snip]\n> I'm not as familiar with postgresql as I am with\n> Oracle but I think I've configured comparible\n> buffering and sort area sizes, certainly there isn't\n> much physical IO going on in either case.\n> \n> What can I do to speed up this query? Other queries\n> are slightly slower than under Oracle on the same\n> hardware but nothing like this.\n\nUsual questions:\n\nhave you vacuumed the table recently?\nwhat are your postgresql.conf settings?\ncan you show us explain ANALYZE output rather than just explain output?\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Fri, 18 Jun 2004 13:09:27 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "\nOn Jun 18, 2004, at 7:31 AM, Gary Cowell wrote:\n\n> The explain output on postgres shows the same\n> execution with a scan on vers and a sort but the query\n> time is 78.6 seconds.\n>\n\nDoes it run just as slow if you run it again?\nIt could be a case of the caches being empty\n\n> Oracle but I think I've configured comparible\n> buffering and sort area sizes, certainly there isn't\n> much physical IO going on in either case.\n>\n\nConfiguring PG like Oracle isn't the best thing in the world. The \ngeneral PG philosophy is to let the OS do all the caching & buffering \n- this is reversed in the Oracle world. In 7.4 the rule of thumb is no \nmore than 10k shared_buffers.. beyond that the overhead of maintaining \nit becomes excessive. (This isn't really the case in 7.5)\n\nCuriously, what are your sort_mem and shared_buffers settings?\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Fri, 18 Jun 2004 08:14:22 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "Gary Cowell wrote:\n> The explain output on postgres shows the same\n> execution with a scan on vers and a sort but the query\n> time is 78.6 seconds.\n> \n> The explain output from PostgreSQL is:\n> QUERY PLAN\n> ---------------------------------------------------------------------------------\n> Unique (cost=117865.77..120574.48 rows=142\n> width=132)\n> -> Sort (cost=117865.77..119220.13 rows=541741\n> width=132)\n> Sort Key: \"version\"\n> -> Seq Scan on vers (cost=0.00..21367.41\n> rows=541741 width=132)\n> Filter: (\"version\" IS NOT NULL)\n> \n> I do have an index on the column in question but\n> neither oracle nor postgresql choose to use it (which\n> given that we're visiting all rows is perhaps not\n> surprising).\n\nCan you post explain analyze for the same query? It contains actual numbers \nalond side the chosen plan.\n\n> \n> I'm not as familiar with postgresql as I am with\n> Oracle but I think I've configured comparible\n> buffering and sort area sizes, certainly there isn't\n> much physical IO going on in either case.\n\nWell, for postgresql you should check out\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nHTH\n\n Shridhar\n",
"msg_date": "Fri, 18 Jun 2004 17:47:55 +0530",
"msg_from": "Shridhar Daithankar <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance"
}
] |
[
{
"msg_contents": "Hi ,\nI have similare problem and found that the problem is by pg sort.\nIt is extremly slow by me.\n\nAlso in my case I tryed to migrate one db from oracle to pg .\n\nTo solve this problem I dinamicaly set sort_mem to some big value.\nIn this case the sort is working into RAM and is relative fast.\nYou can try this and remember sort mem is per sort, not per connection.\n\nIn my migration I found the only advantage for oracle is the very good sort.\n\nregards,\nivan.\n\nGary Cowell wrote:\n\n>--- [email protected] wrote: > You can roughly estimate time\n>spent for just scaning\n> \n>\n>>the table using\n>>something like this: \n>>\n>>\tselect sum(version) from ... where version is not\n>>null\n>>\n>>\tand just \n>>\n>>\tselect sum(version) from ...\n>>\n>>The results would be interesting to compare. \n>> \n>>\n>\n>To answer (I hope) everyones questions at once:\n>\n>1) Oracle and postmaster were not running at the same\n>time\n>2) The queries were run once, to cache as much as\n>possible then run again to get the timing\n>\n>3) Distinct vs. no distinct (i.e. sort performance).\n>\n>select length(version) from vers where version is not\n>null;\n>\n>Time: 9748.174 ms\n>\n>select distinct(version) from vers where version is\n>not null;\n>\n>Time: 67988.972 ms\n>\n>So about an extra 60 seconds with the distinct on.\n>\n>Here is the explain analyze output from psql:\n>\n># explain analyze select distinct version from vers\n>where version is not null;\n> \n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=117865.77..120574.48 rows=142\n>width=132) (actual time=63623.428..68269.111 rows=536\n>loops=1)\n> -> Sort (cost=117865.77..119220.13 rows=541741\n>width=132) (actual time=63623.417..66127.641\n>rows=541741 loops=1)\n> Sort Key: \"version\"\n> -> Seq Scan on vers (cost=0.00..21367.41\n>rows=541741 width=132) (actual time=0.218..7214.903\n>rows=541741 loops=1)\n> Filter: (\"version\" IS NOT NULL)\n> Total runtime: 68324.215 ms\n>(6 rows)\n>\n>Time: 68326.062 ms\n>\n>\n>And the non-default .conf parameters:\n>\n>tcpip_socket = true\n>max_connections = 100\n>password_encryption = true\n>shared_buffers = 2000\n>sort_mem = 16384 \n>vacuum_mem = 8192 \n>effective_cache_size = 4000\n>syslog = 2 \n>\n>postgresql version is 7.4.3\n>compiled with GCC 3.3.2 on sun4u architecture.\n>\n>\n>\n>\n>\n>\t\n>\t\n>\t\t\n>___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n> \n>\n\n\n\n\n\n\n\nHi ,\nI have similare problem and found that the problem is by pg sort.\nIt is extremly slow by me.\n\nAlso in my case I tryed to migrate one db from oracle to pg .\n\nTo solve this problem I dinamicaly set sort_mem to some big value.\nIn this case the sort is working into RAM and is relative fast.\nYou can try this and remember sort mem is per sort, not per connection.\n\nIn my migration I found the only advantage for oracle is the very good sort.\n\nregards,\nivan.\n\nGary Cowell wrote:\n\n--- [email protected] wrote: > You can roughly estimate time\nspent for just scaning\n \n\nthe table using\nsomething like this: \n\n\tselect sum(version) from ... 
where version is not\nnull\n\n\tand just \n\n\tselect sum(version) from ...\n\nThe results would be interesting to compare. \n \n\n\nTo answer (I hope) everyones questions at once:\n\n1) Oracle and postmaster were not running at the same\ntime\n2) The queries were run once, to cache as much as\npossible then run again to get the timing\n\n3) Distinct vs. no distinct (i.e. sort performance).\n\nselect length(version) from vers where version is not\nnull;\n\nTime: 9748.174 ms\n\nselect distinct(version) from vers where version is\nnot null;\n\nTime: 67988.972 ms\n\nSo about an extra 60 seconds with the distinct on.\n\nHere is the explain analyze output from psql:\n\n# explain analyze select distinct version from vers\nwhere version is not null;\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=117865.77..120574.48 rows=142\nwidth=132) (actual time=63623.428..68269.111 rows=536\nloops=1)\n -> Sort (cost=117865.77..119220.13 rows=541741\nwidth=132) (actual time=63623.417..66127.641\nrows=541741 loops=1)\n Sort Key: \"version\"\n -> Seq Scan on vers (cost=0.00..21367.41\nrows=541741 width=132) (actual time=0.218..7214.903\nrows=541741 loops=1)\n Filter: (\"version\" IS NOT NULL)\n Total runtime: 68324.215 ms\n(6 rows)\n\nTime: 68326.062 ms\n\n\nAnd the non-default .conf parameters:\n\ntcpip_socket = true\nmax_connections = 100\npassword_encryption = true\nshared_buffers = 2000\nsort_mem = 16384 \nvacuum_mem = 8192 \neffective_cache_size = 4000\nsyslog = 2 \n\npostgresql version is 7.4.3\ncompiled with GCC 3.3.2 on sun4u architecture.\n\n\n\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])",
"msg_date": "Fri, 18 Jun 2004 14:36:00 +0200",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major differences between oracle and postgres performance"
},
{
"msg_contents": "--- [email protected] wrote: > You can roughly estimate time\nspent for just scaning\n> the table using\n> something like this: \n> \n> \tselect sum(version) from ... where version is not\n> null\n> \n> \tand just \n> \n> \tselect sum(version) from ...\n> \n> The results would be interesting to compare. \n\nTo answer (I hope) everyones questions at once:\n\n1) Oracle and postmaster were not running at the same\ntime\n2) The queries were run once, to cache as much as\npossible then run again to get the timing\n\n3) Distinct vs. no distinct (i.e. sort performance).\n\nselect length(version) from vers where version is not\nnull;\n\nTime: 9748.174 ms\n\nselect distinct(version) from vers where version is\nnot null;\n\nTime: 67988.972 ms\n\nSo about an extra 60 seconds with the distinct on.\n\nHere is the explain analyze output from psql:\n\n# explain analyze select distinct version from vers\nwhere version is not null;\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=117865.77..120574.48 rows=142\nwidth=132) (actual time=63623.428..68269.111 rows=536\nloops=1)\n -> Sort (cost=117865.77..119220.13 rows=541741\nwidth=132) (actual time=63623.417..66127.641\nrows=541741 loops=1)\n Sort Key: \"version\"\n -> Seq Scan on vers (cost=0.00..21367.41\nrows=541741 width=132) (actual time=0.218..7214.903\nrows=541741 loops=1)\n Filter: (\"version\" IS NOT NULL)\n Total runtime: 68324.215 ms\n(6 rows)\n\nTime: 68326.062 ms\n\n\nAnd the non-default .conf parameters:\n\ntcpip_socket = true\nmax_connections = 100\npassword_encryption = true\nshared_buffers = 2000\nsort_mem = 16384 \nvacuum_mem = 8192 \neffective_cache_size = 4000\nsyslog = 2 \n\npostgresql version is 7.4.3\ncompiled with GCC 3.3.2 on sun4u architecture.\n\n\n\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n",
"msg_date": "Fri, 18 Jun 2004 14:25:54 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "Gary Cowell wrote:\n> --- [email protected] wrote: > You can roughly estimate time\n> spent for just scaning\n> \n>>the table using\n>>something like this: \n>>\n>>\tselect sum(version) from ... where version is not\n>>null\n>>\n>>\tand just \n>>\n>>\tselect sum(version) from ...\n>>\n>>The results would be interesting to compare. \n> \n> \n> To answer (I hope) everyones questions at once:\n> \n> 1) Oracle and postmaster were not running at the same\n> time\n> 2) The queries were run once, to cache as much as\n> possible then run again to get the timing\n> \n> 3) Distinct vs. no distinct (i.e. sort performance).\n> \n> select length(version) from vers where version is not\n> null;\n> \n> Time: 9748.174 ms\n> \n> select distinct(version) from vers where version is\n> not null;\n> \n> Time: 67988.972 ms\n> \n> So about an extra 60 seconds with the distinct on.\n\nWhich is basically the sorting time...\n\n> Here is the explain analyze output from psql:\n> \n> # explain analyze select distinct version from vers\n> where version is not null;\n> \n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=117865.77..120574.48 rows=142\n> width=132) (actual time=63623.428..68269.111 rows=536\n> loops=1)\n> -> Sort (cost=117865.77..119220.13 rows=541741\n> width=132) (actual time=63623.417..66127.641\n> rows=541741 loops=1)\n> Sort Key: \"version\"\n> -> Seq Scan on vers (cost=0.00..21367.41\n> rows=541741 width=132) (actual time=0.218..7214.903\n> rows=541741 loops=1)\n> Filter: (\"version\" IS NOT NULL)\n> Total runtime: 68324.215 ms\n> (6 rows)\n> \n> Time: 68326.062 ms\n\nYep - the seq-scan takes 7214.903 ms, there's a huge setup time for the \nsort (63623.417) and it's not finished until 66127.641ms have elapsed.\n\n> \n> And the non-default .conf parameters:\n> \n> tcpip_socket = true\n> max_connections = 100\n> password_encryption = true\n> shared_buffers = 2000\n> sort_mem = 16384 \n> vacuum_mem = 8192 \n> effective_cache_size = 4000\n> syslog = 2 \n\nWell, I'd probably up vacuum_mem, and check how much RAM is being used \nfor disk cache - I'd guess it's more than 32MB (4000 * 8kb).\n\nYou might want to up the shared_buffers, but that's going to depend on \nthe load.\n\nTry increasing sort_mem temporarily, and see if that makes a difference:\n SET sort_mem = 64000;\n EXPLAIN ANALYSE ...\nThe only thing I can think is that you're getting disk activity to get a \nsort that slow. I'd be expecting a hash-sort if PG thought it could fit \nthe distinct values in memory.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 18 Jun 2004 14:56:14 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance"
},
{
"msg_contents": "Hi,\n\nTom Lane wrote:\n\n>=?iso-8859-1?q?Gary=20Cowell?= <[email protected]> writes:\n> \n>\n>> -> Sort (cost=117865.77..119220.13 rows=541741\n>>width=132) (actual time=63623.417..66127.641\n>>rows=541741 loops=1)\n>> \n>>\n>\n>This is clearly where the time is going.\n>\n> \n>\n>>sort_mem = 16384 \n>> \n>>\n>\n>Probably not enough for this problem. The estimated data size is\n>upwards of 60 meg (132 bytes * half a mil rows); allowing for per-row\n>overhead I suspect that you'd need sort_mem approaching 100 meg for\n>a fully-in-memory sort. (Also I'd take the width=132 with a *big*\n>grain of salt, unless you have reason to know that it's accurate.)\n>\n>The on-disk sorting algorithm that we use is designed to favor minimum\n>disk space consumption over speed. It has a fairly nonrandom access\n>pattern that can be pretty slow if your disks don't have good seek-time\n>specs.\n>\n>I don't know whether Oracle's performance advantage is because they're\n>not swapping the sort to disk at all, or because they use a different\n>on-disk sort method with a more sequential access pattern.\n>\n>[... thinks for awhile ...] It seems possible that they may use sort\n>code that knows it is performing a DISTINCT operation and discards\n>duplicates on sight. Given that there are only 534 distinct values,\n>the sort would easily stay in memory if that were happening.\n>\n>It would be interesting to compare Oracle and PG times for a straight\n>sort of half a million rows, without the DISTINCT part; that would\n>give us a clue whether they simply have much better sort technology,\n>or whether they have a special optimization for sort+unique.\n> \n>\nI was tested this situation and found that oracle is working also in \nthis case much faster (in some cases x10 ) compared to pg.\nAlso by in memory sort oracle is faster but the diferenc is not so big.\nSo I have oracle 8 and oracle 10 (also pg - it is my primary platform) \ninstalled and can run some tests.\nI am ready to help in this direction or if you can send any example I \nwill run it and post the result .\n\nregards,\nivan.\n\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>\n> \n>\n\n\n\n\n\n\n\n\nHi,\n\nTom Lane wrote:\n\n=?iso-8859-1?q?Gary=20Cowell?= <[email protected]> writes:\n \n\n -> Sort (cost=117865.77..119220.13 rows=541741\nwidth=132) (actual time=63623.417..66127.641\nrows=541741 loops=1)\n \n\n\nThis is clearly where the time is going.\n\n \n\nsort_mem = 16384 \n \n\n\nProbably not enough for this problem. The estimated data size is\nupwards of 60 meg (132 bytes * half a mil rows); allowing for per-row\noverhead I suspect that you'd need sort_mem approaching 100 meg for\na fully-in-memory sort. (Also I'd take the width=132 with a *big*\ngrain of salt, unless you have reason to know that it's accurate.)\n\nThe on-disk sorting algorithm that we use is designed to favor minimum\ndisk space consumption over speed. It has a fairly nonrandom access\npattern that can be pretty slow if your disks don't have good seek-time\nspecs.\n\nI don't know whether Oracle's performance advantage is because they're\nnot swapping the sort to disk at all, or because they use a different\non-disk sort method with a more sequential access pattern.\n\n[... thinks for awhile ...] It seems possible that they may use sort\ncode that knows it is performing a DISTINCT operation and discards\nduplicates on sight. 
Given that there are only 534 distinct values,\nthe sort would easily stay in memory if that were happening.\n\nIt would be interesting to compare Oracle and PG times for a straight\nsort of half a million rows, without the DISTINCT part; that would\ngive us a clue whether they simply have much better sort technology,\nor whether they have a special optimization for sort+unique.\n \n\nI was tested this situation and found that oracle is working also in this\ncase much faster (in some cases x10 ) compared to pg.\nAlso by in memory sort oracle is faster but the diferenc is not so big.\nSo I have oracle 8 and oracle 10 (also pg - it is my primary platform) installed\nand can run some tests.\nI am ready to help in this direction or if you can send any example I will\nrun it and post the result .\n\nregards,\nivan.\n\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend",
"msg_date": "Fri, 18 Jun 2004 15:59:02 +0200",
"msg_from": "pginfo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major differences between oracle and postgres performance"
},
{
"msg_contents": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]> writes:\n> -> Sort (cost=117865.77..119220.13 rows=541741\n> width=132) (actual time=63623.417..66127.641\n> rows=541741 loops=1)\n\nThis is clearly where the time is going.\n\n> sort_mem = 16384 \n\nProbably not enough for this problem. The estimated data size is\nupwards of 60 meg (132 bytes * half a mil rows); allowing for per-row\noverhead I suspect that you'd need sort_mem approaching 100 meg for\na fully-in-memory sort. (Also I'd take the width=132 with a *big*\ngrain of salt, unless you have reason to know that it's accurate.)\n\nThe on-disk sorting algorithm that we use is designed to favor minimum\ndisk space consumption over speed. It has a fairly nonrandom access\npattern that can be pretty slow if your disks don't have good seek-time\nspecs.\n\nI don't know whether Oracle's performance advantage is because they're\nnot swapping the sort to disk at all, or because they use a different\non-disk sort method with a more sequential access pattern.\n\n[... thinks for awhile ...] It seems possible that they may use sort\ncode that knows it is performing a DISTINCT operation and discards\nduplicates on sight. Given that there are only 534 distinct values,\nthe sort would easily stay in memory if that were happening.\n\nIt would be interesting to compare Oracle and PG times for a straight\nsort of half a million rows, without the DISTINCT part; that would\ngive us a clue whether they simply have much better sort technology,\nor whether they have a special optimization for sort+unique.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 2004 10:57:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "* Tom Lane ([email protected]) wrote:\n> [... thinks for awhile ...] It seems possible that they may use sort\n> code that knows it is performing a DISTINCT operation and discards\n> duplicates on sight. Given that there are only 534 distinct values,\n> the sort would easily stay in memory if that were happening.\n\nCould this optimization be added to PostgreSQL? It sounds like a very\nreasonable thing to do. Hopefully there wouldn't be too much complexity\nneeded to add it.\n\n\tStephen",
"msg_date": "Fri, 18 Jun 2004 12:30:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "Stephen Frost <[email protected]> writes:\n> * Tom Lane ([email protected]) wrote:\n>> [... thinks for awhile ...] It seems possible that they may use sort\n>> code that knows it is performing a DISTINCT operation and discards\n>> duplicates on sight. Given that there are only 534 distinct values,\n>> the sort would easily stay in memory if that were happening.\n\n> Could this optimization be added to PostgreSQL? It sounds like a very\n> reasonable thing to do.\n\nThat's what I was wondering about too. But first I'd like to get\nsome kind of reading on how effective it would be. If someone can\ndemonstrate that Oracle can do sort-and-drop-dups a lot faster than\nit can do a straight sort of the same amount of input data, that\nwould be a strong indication that it's worth doing. At this point\nwe don't know if that's the source of their win or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 2004 13:01:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "* Tom Lane ([email protected]) wrote:\n> Stephen Frost <[email protected]> writes:\n> > * Tom Lane ([email protected]) wrote:\n> >> [... thinks for awhile ...] It seems possible that they may use sort\n> >> code that knows it is performing a DISTINCT operation and discards\n> >> duplicates on sight. Given that there are only 534 distinct values,\n> >> the sort would easily stay in memory if that were happening.\n> \n> > Could this optimization be added to PostgreSQL? It sounds like a very\n> > reasonable thing to do.\n> \n> That's what I was wondering about too. But first I'd like to get\n> some kind of reading on how effective it would be. If someone can\n> demonstrate that Oracle can do sort-and-drop-dups a lot faster than\n> it can do a straight sort of the same amount of input data, that\n> would be a strong indication that it's worth doing. At this point\n> we don't know if that's the source of their win or not.\n\nAlright, I did a couple tests, these are different systems with\ndifferent hardware, but in the end I think the difference is clear:\n\ntsf=# explain analyze select distinct access_type_id from p_gen_dom_dedicated_swc_access ;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------- \n Unique (cost=321591.00..333205.56 rows=16 width=10) (actual time=32891.141..37420.429 rows=16 loops=1)\n -> Sort (cost=321591.00..327398.28 rows=2322912 width=10) (actual time=32891.137..35234.810 rows=2322912 loops=1)\n Sort Key: access_type_id\n -> Seq Scan on p_gen_dom_dedicated_swc_access (cost=0.00..55492.12 rows=2322912 width=10) (actual time=0.013..3743.470 rows=2322912 loops=1)\n Total runtime: 37587.519 ms\n(5 rows)\n\ntsf=# explain analyze select access_type_id from p_gen_dom_dedicated_swc_access order by access_type_id;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=321591.00..327398.28 rows=2322912 width=10) (actual time=32926.696..35278.847 rows=2322912 loops=1)\n Sort Key: access_type_id \n -> Seq Scan on p_gen_dom_dedicated_swc_access (cost=0.00..55492.12 rows=2322912 width=10) (actual time=0.014..3753.443 rows=2322912 loops=1)\n Total runtime: 36737.628 ms \n(4 rows) \n\nSo, about the same from postgres in each case. From Oracle:\n\n(select access_type_id from p_gen_dom_dedicated_swc_access order by access_type_id)\nsauron:/home/sfrost> time sqlplus mci_vendor/mci @test.sql > /dev/null\n\nreal 3m55.12s\nuser 2m25.87s\nsys 0m10.59s\n\n(select distinct access_type_id from p_gen_dom_dedicated_swc_access)\nsauron:/home/sfrost> time sqlplus mci_vendor/mci @test.sql > /dev/null\n\nreal 0m5.08s\nuser 0m3.86s\nsys 0m0.95s\n\nAll the queries were run multiple times, though there wasn't all that\nmuch difference in the times. Both systems are pretty speedy, but I\ntend to feel the Postgres box is faster in CPU/disk access time, which\nis probably why the Oracle system took 4 minutes to do what the Postgres\nsystems does in 40 seconds. My only other concern is the Oracle system\nhaving to do the write I/O while the postgres one doesn't... I don't\nsee an obvious way to get around that though, and I'm not sure if it'd\nreally make *that* big of a difference.\n\n\tStephen",
"msg_date": "Fri, 18 Jun 2004 13:53:17 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "* Stephen Frost ([email protected]) wrote:\n> systems does in 40 seconds. My only other concern is the Oracle system\n> having to do the write I/O while the postgres one doesn't... I don't\n> see an obvious way to get around that though, and I'm not sure if it'd\n> really make *that* big of a difference.\n\nAlright, after talking with some people on #postgresql I found that in\nOracle you can do 'set autotrace traceonly', which removes the I/O\nfactor from the Oracle query. Doing this I also discovered that it\nappears Oracle actually uses an index on that field that it knows about\nto derive what the distinct results would be. That probably invalidates\nthis test for what we were specifically looking for, but, hey, using the\nindex to figure out what the distinct values for the key are isn't\nexactly a bad idea. :)\n\nHere's the new results:\n\n(select access_type_id from p_gen_dom_dedicated_swc_access order by access_type_id;)\n-----------------------------------------------------------------------------------\nsauron:/home/sfrost> time sqlplus mci_vendor/mci @test.sql \n\nSQL*Plus: Release 9.2.0.1.0 - Production on Fri Jun 18 14:10:12 2004\n\nCopyright (c) 1982, 2002, Oracle Corporation. All rights reserved.\n\n\nConnected to:\nOracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production\nWith the Partitioning, OLAP and Oracle Data Mining options\nJServer Release 9.2.0.1.0 - Production\n\n\n2322912 rows selected.\n\n\nExecution Plan\n----------------------------------------------------------\n 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=11459 Card=1303962 B\n ytes=16951506)\n\n 1 0 SORT* (ORDER BY) (Cost=11459 Card=1303962 Bytes=16951506) :Q457001\n 2 1 TABLE ACCESS* (FULL) OF 'P_GEN_DOM_DEDICATED_SWC_ACCESS' :Q457000\n (Cost=1550 Card=1303962 Bytes=16951506)\n\n\n\n 1 PARALLEL_TO_SERIAL SELECT A1.C0 C0 FROM :Q457000 A1 ORDER BY A1\n .C0\n\n 2 PARALLEL_TO_PARALLEL SELECT /*+ NO_EXPAND ROWID(A1) */ A1.\"ACCESS\n _TYPE_ID\" C0 FROM \"P_GEN_DOM_DEDICAT\n\n\n\nStatistics\n----------------------------------------------------------\n 32 recursive calls\n 1594 db block gets\n 64495 consistent gets\n 105975 physical reads\n 0 redo size\n 40109427 bytes sent via SQL*Net to client\n 1704111 bytes received via SQL*Net from client\n 154862 SQL*Net roundtrips to/from client\n 2 sorts (memory)\n 4 sorts (disk)\n 2322912 rows processed\n\nDisconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production\nWith the Partitioning, OLAP and Oracle Data Mining options\nJServer Release 9.2.0.1.0 - Production\n\nreal 1m38.55s\nuser 0m23.36s\nsys 0m9.61s\n\n-----------------------------------------------------------------------------------\n(select distinct access_type_id from p_gen_dom_dedicated_swc_access)\n-----------------------------------------------------------------------------------\nsauron:/home/sfrost> time sqlplus mci_vendor/mci @test.sql\n\nSQL*Plus: Release 9.2.0.1.0 - Production on Fri Jun 18 14:13:54 2004\n\nCopyright (c) 1982, 2002, Oracle Corporation. 
All rights reserved.\n\n\nConnected to:\nOracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production\nWith the Partitioning, OLAP and Oracle Data Mining options\nJServer Release 9.2.0.1.0 - Production\n\n\n16 rows selected.\n\n\nExecution Plan\n----------------------------------------------------------\n 0 SELECT STATEMENT Optimizer=CHOOSE (Cost=44874 Card=1303962 B\n ytes=16951506)\n\n 1 0 SORT (UNIQUE) (Cost=44874 Card=1303962 Bytes=16951506)\n 2 1 INDEX (FAST FULL SCAN) OF 'TABLE_8111_DUPLICATE_CHECK' (\n UNIQUE) (Cost=4 Card=1303962 Bytes=16951506)\n\n\n\n\n\nStatistics\n----------------------------------------------------------\n 0 recursive calls\n 0 db block gets\n 47069 consistent gets\n 47067 physical reads\n 0 redo size\n 841 bytes sent via SQL*Net to client\n 662 bytes received via SQL*Net from client\n 3 SQL*Net roundtrips to/from client\n 1 sorts (memory)\n 0 sorts (disk)\n 16 rows processed\n\nDisconnected from Oracle9i Enterprise Edition Release 9.2.0.1.0 - 64bit Production\nWith the Partitioning, OLAP and Oracle Data Mining options\nJServer Release 9.2.0.1.0 - Production\n\nreal 0m5.36s\nuser 0m0.04s\nsys 0m0.07s\n-----------------------------------------------------------------------------------\n\n\tStephen",
"msg_date": "Fri, 18 Jun 2004 14:16:03 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "Don't know about Oracle, but select-distinct in MSSQL2K will indeed throw\naway duplicates, which chops the CPU time. Very easy to see in the graphic\nquery plan, both in terms of CPU and the number of rows retrieved from a\nsingle-node or nested-loop subtree. Definitely a worthwhile optimization.\n\n\"Tom Lane\" <[email protected]> wrote in message\nnews:[email protected]...\n> Stephen Frost <[email protected]> writes:\n> > * Tom Lane ([email protected]) wrote:\n> >> [... thinks for awhile ...] It seems possible that they may use sort\n> >> code that knows it is performing a DISTINCT operation and discards\n> >> duplicates on sight. Given that there are only 534 distinct values,\n> >> the sort would easily stay in memory if that were happening.\n>\n> > Could this optimization be added to PostgreSQL? It sounds like a very\n> > reasonable thing to do.\n>\n> That's what I was wondering about too. But first I'd like to get\n> some kind of reading on how effective it would be. If someone can\n> demonstrate that Oracle can do sort-and-drop-dups a lot faster than\n> it can do a straight sort of the same amount of input data, that\n> would be a strong indication that it's worth doing. At this point\n> we don't know if that's the source of their win or not.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n\n",
"msg_date": "Tue, 22 Jun 2004 04:46:44 GMT",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
}
] |
[
{
"msg_contents": "Hi everyone .\n\n \n\nHow much memory should I give to the kernel and postgresql\n\n \n\nI have 1G of memory and 120G of HD\n\n \n\nShared Buffers = ?\n\nVacuum Mem = ?\n\nSHMAX = ?\n\n \n\nSorry I have so many question .I am a newbie :-(\n\n \n\nI have 30G of data \n\nAt least 30 simultaneus users\n\nBut I will use it only for query with lot of sorting\n\n \n\nthanks\n\n\n\n\n\n\n\n\n\n\nHi everyone .\n \nHow much memory should I give to the kernel and postgresql\n \nI have 1G of memory and 120G of HD\n \nShared Buffers = ?\nVacuum Mem = ?\nSHMAX = ?\n \nSorry I have so many question .I am a newbie L\n \nI have 30G of data \nAt least 30 simultaneus users\nBut I will use it only for query with lot of sorting\n \nthanks",
"msg_date": "Fri, 18 Jun 2004 21:34:40 +0800",
"msg_from": "\"Michael Ryan S. Puncia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "memory allocation"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nHi,\n\nOn Fri, 18 Jun 2004, Michael Ryan S. Puncia wrote:\n\n> How much memory should I give to the kernel and postgresql\n> \n> I have 1G of memory and 120G of HD\n> \n> Shared Buffers = ?\n> \n> Vacuum Mem = ?\n\nMaybe you should read\n\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.sxw\nOR\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\n> SHMAX = ?\n\nSHMMAX is not that relevant with PostgreSQL, it's rather relevant with \nyour operating system.\n\nRegards,\n- -- \nDevrim GUNDUZ\t \ndevrim~gunduz.org\t\t\t\tdevrim.gunduz~linux.org.tr \n\t\t\thttp://www.tdmsoft.com\n\t\t\thttp://www.gunduz.org\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.1 (GNU/Linux)\n\niD8DBQFA0vCUtl86P3SPfQ4RAvITAJ48FV24aBN+nc2+lkRwXc79HlHV6QCfSvRA\nYuGjn8hs1jvOJ2Ah9oamIJQ=\n=96+i\n-----END PGP SIGNATURE-----\n\n",
"msg_date": "Fri, 18 Jun 2004 16:39:30 +0300 (EEST)",
"msg_from": "Devrim GUNDUZ <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory allocation"
},
{
"msg_contents": "Michael Ryan S. Puncia wrote:\n> Hi everyone .\n> \n> \n> \n> How much memory should I give to the kernel and postgresql\n> \n> I have 1G of memory and 120G of HD\n\nDevrim's pointed you to a guide to the configuration file. There's also \nan introduction to performance tuning on the same site.\n\nAn important thing to remember is that the sort_mem is the amount of \nmemory available *per sort* and some queries can use several sorts.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 18 Jun 2004 14:58:16 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: memory allocation"
}
] |
[
{
"msg_contents": "> Try increasing sort_mem temporarily, and see if that\n> makes a difference:\n> SET sort_mem = 64000;\n> EXPLAIN ANALYSE ...\n\nI did this (actualy 65536) and got the following:\npvcsdb=# explain analyze select distinct version from\nvers where version is not null;\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=117865.77..120574.48 rows=142\nwidth=132) (actual time=81595.178..86573.228 rows=536\nloops=1)\n -> Sort (cost=117865.77..119220.13 rows=541741\nwidth=132) (actual time=81595.169..84412.069\nrows=541741 loops=1)\n Sort Key: \"version\"\n -> Seq Scan on vers (cost=0.00..21367.41\nrows=541741 width=132) (actual time=10.068..7397.374\nrows=541741 loops=1)\n Filter: (\"version\" IS NOT NULL)\n Total runtime: 86647.495 ms\n(6 rows)\n\n\nIn response to Tom Lane, I have compared a\nselect/order by on the same data in Oracle and PG to\nsee if this changes things:\n\n\nPG: Time: 67438.536 ms 541741 rows\nOracle: After an hour and a half I canned it\n\nSo it seems the idea that oracle is dropping duplicate\nrows prior to the sort when using distinct may indeed\nbe the case.\n\n From what I've seen here, it seems that PGs on-disk\nsort performance is exceeding that of Oracle - it's\njust that oracle sorts fewer rows for distinct.\n\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n",
"msg_date": "Fri, 18 Jun 2004 16:47:21 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]> writes:\n> So it seems the idea that oracle is dropping duplicate\n> rows prior to the sort when using distinct may indeed\n> be the case.\n\nOkay. We won't have any short-term solution for making DISTINCT do that,\nbut if you are on PG 7.4 you could get the same effect from using\nGROUP BY: instead of\n\tselect distinct version from vers where version is not null\ntry\n\tselect version from vers where version is not null group by version\nYou should get a HashAggregate plan out of that, and I'd think it'd be\npretty quick when there are not many distinct values of version.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 18 Jun 2004 16:47:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
},
{
"msg_contents": "--- Tom Lane <[email protected]> wrote: >\n=?iso-8859-1?q?Gary=20Cowell?=\n> <[email protected]> writes:\n> > So it seems the idea that oracle is dropping\n> duplicate\n> > rows prior to the sort when using distinct may\n> indeed\n> > be the case.\n> \n> Okay. We won't have any short-term solution for\n> making DISTINCT do that,\n> but if you are on PG 7.4 you could get the same\n> effect from using\n> GROUP BY: instead of\n> \tselect distinct version from vers where version is\n> not null\n> try\n> \tselect version from vers where version is not null\n> group by version\n> You should get a HashAggregate plan out of that, and\n> I'd think it'd be\n> pretty quick when there are not many distinct values\n> of version.\n> \n\nYeah out of the half million rows there are only ever\ngoing to be 500 or so distinct values.\n\nI do indeed get such a plan. It's much faster that\nway. Down to 16 seconds. I'll get the chap to rewrite\nhis app to use group by instead of distinct.\n\nThanks (everyone) for the top class help!\n\n\n\t\n\t\n\t\t\n___________________________________________________________ALL-NEW Yahoo! Messenger - sooooo many all-new ways to express yourself http://uk.messenger.yahoo.com\n",
"msg_date": "Sat, 19 Jun 2004 00:06:29 +0100 (BST)",
"msg_from": "=?iso-8859-1?q?Gary=20Cowell?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Major differences between oracle and postgres performance - what\n\tcan I do ?"
}
] |
[
{
"msg_contents": "Hi,\n\nI am trying install the postgresql-7.4.3 simple installation. I did ./configure command at the postgresql directory source. While the configuring proccess I receiving the follow message:\n\nchecking for tar... /bin/tar\nchecking for strip... strip\nchecking whether it is possible to strip libraries... yes\nchecking for bison... bison -y\n*** The installed version of Bison is too old. PostgreSQL needs\n*** Bison version 1.875 or later.\nchecking for perl... /usr/bin/perl\nchecking for main in -lbsd... no\nchecking for setproctitle in -lutil... no\nchecking for main in -lm... yes\n\nBut, after this message the install proccess continue like this message. The problem is that the installation never finish. I am thinking that the configure proccess is in loop. Have it anything relation with my hardware configuration? The computer where I did this is: AMD K6-II 200 MHZ; 64 MB memory;\nI would like why the configure proccess never finish.\n\nRegards,\n\nJanio\n\n\n\n\n\n\nHi,\n \nI am trying install the postgresql-7.4.3 simple \ninstallation. I did ./configure command at the postgresql directory source. \nWhile the configuring proccess I receiving the follow message:\n \nchecking for tar... /bin/tarchecking for \nstrip... stripchecking whether it is possible to strip libraries... \nyeschecking for bison... bison -y*** The installed version of Bison is \ntoo old. PostgreSQL needs*** Bison version 1.875 or later.checking \nfor perl... /usr/bin/perlchecking for main in -lbsd... nochecking for \nsetproctitle in -lutil... nochecking for main in -lm... yes\n \nBut, after this message the install proccess \ncontinue like this message. The problem is that the installation never finish. I \nam thinking that the configure proccess is in loop. Have it anything relation \nwith my hardware configuration? The computer where I did this is: AMD \nK6-II 200 MHZ; 64 MB memory;I would like why the configure proccess never \nfinish.\n \nRegards,\n \nJanio",
"msg_date": "Sun, 20 Jun 2004 13:39:38 -0300",
"msg_from": "\"Janio Rosa da Silva\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hi!"
},
{
"msg_contents": "Janio,\n\n> I am trying install the postgresql-7.4.3 simple installation. I did\n> ./configure command at the postgresql directory source. While the\n> configuring proccess I receiving the follow message:\n\nThis is the wrong list for this question. Please try PGSQL-ADMIN. You're \nmuch more likely to get help there.\n\nSorry!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 22 Jun 2004 09:23:26 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hi!"
}
] |
[
{
"msg_contents": "On Fri, 2004-06-18 at 19:51 -0700, Patrick Hatcher wrote:\n> \n> Thanks!\n> \n> My effective_cache_size = 625000\n> \n> I thought that having the shared_buffers above 2k or 3k didn't gain\n> any performance and may in fact degrade it?\n\nHi Patrick,\n\n\nQuoting from:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nshared_buffers\n Sets the size of PostgreSQL's' memory buffer where queries are\n held before being fed into the Kernel buffer of the host system.\n It's very important to remember that this is only a holding\n area, and not the total memory available for the server. As\n such, resist the urge to set this number to a large portion of\n your RAM, as this will actually degrade performance on many\n operating systems. Members of the pgsql-performance mailing list\n have found useful values in the range of 1000-6000, depending on\n available RAM, database size, and number of concurrent queries.\n For servers with very large amounts of available RAM (more than\n 1 GB) increasing this setting to 6-15% or available RAM has\n worked well for some users. The real analysis of the precise\n best setting is not fully understood and is more readily\n determined through testing than calculation. \n \n As a rule of thumb, observe shared memory usage of PostgreSQL\n with tools like ipcs and determine the setting. Remember that\n this is only half the story. You also need to set\n effective_cache_size so that postgreSQL will use available\n memory optimally.\n\nUsing this conservatively, on an 8G system, 6% would be roughly 60,000\npages - considerably higher than 2-3000...\n\nOne day when I wasn't timid (well, OK, I was desperate :-), I did see a\n_dramatic_ performance improvement in a single very narrow activity by\nsetting shared_buffers to 300000 on a 4G RAM system (I was rolling back\na transaction involving an update to 2.8 million rows) , but afterwards\nI set shared_buffers back to 10000, which I have now increased to 20000\non that system.\n\n\nYou may also want to look at:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nOr indeed, peruse the articles regularly as they appear:\nhttp://www.varlena.com/varlena/GeneralBits/\n\nRegards,\n\t\t\t\t\tAndrew McMillan\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Tomorrow will be cancelled due to lack of interest.\n-------------------------------------------------------------------------",
"msg_date": "Mon, 21 Jun 2004 22:11:49 +1200",
"msg_from": "Andrew McMillan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow vacuum performance"
},
{
"msg_contents": "Thanks!\n\n\nPatrick Hatcher\n\n\n\n\nAndrew McMillan <[email protected]> \nSent by: [email protected]\n06/21/04 03:11 AM\n\nTo\nPatrick Hatcher <[email protected]>\ncc\[email protected]\nSubject\nRe: [PERFORM] Slow vacuum performance\n\n\n\n\n\n\nOn Fri, 2004-06-18 at 19:51 -0700, Patrick Hatcher wrote:\n> \n> Thanks!\n> \n> My effective_cache_size = 625000\n> \n> I thought that having the shared_buffers above 2k or 3k didn't gain\n> any performance and may in fact degrade it?\n\nHi Patrick,\n\n\nQuoting from:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n\nshared_buffers\n Sets the size of PostgreSQL's' memory buffer where queries are\n held before being fed into the Kernel buffer of the host system.\n It's very important to remember that this is only a holding\n area, and not the total memory available for the server. As\n such, resist the urge to set this number to a large portion of\n your RAM, as this will actually degrade performance on many\n operating systems. Members of the pgsql-performance mailing list\n have found useful values in the range of 1000-6000, depending on\n available RAM, database size, and number of concurrent queries.\n For servers with very large amounts of available RAM (more than\n 1 GB) increasing this setting to 6-15% or available RAM has\n worked well for some users. The real analysis of the precise\n best setting is not fully understood and is more readily\n determined through testing than calculation. \n \n As a rule of thumb, observe shared memory usage of PostgreSQL\n with tools like ipcs and determine the setting. Remember that\n this is only half the story. You also need to set\n effective_cache_size so that postgreSQL will use available\n memory optimally.\n\nUsing this conservatively, on an 8G system, 6% would be roughly 60,000\npages - considerably higher than 2-3000...\n\nOne day when I wasn't timid (well, OK, I was desperate :-), I did see a\n_dramatic_ performance improvement in a single very narrow activity by\nsetting shared_buffers to 300000 on a 4G RAM system (I was rolling back\na transaction involving an update to 2.8 million rows) , but afterwards\nI set shared_buffers back to 10000, which I have now increased to 20000\non that system.\n\n\nYou may also want to look at:\nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\nOr indeed, peruse the articles regularly as they appear:\nhttp://www.varlena.com/varlena/GeneralBits/\n\nRegards,\n Andrew McMillan\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n Tomorrow will be cancelled due to lack of interest.\n-------------------------------------------------------------------------",
"msg_date": "Mon, 21 Jun 2004 08:24:14 -0700",
"msg_from": "Patrick Hatcher <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow vacuum performance"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWe're looking for an alternative to fiber-channel disk arrays for mass\nstorage. One of the ideas that we're exploring would involve having the\ncluster on an NFS mounted filesystem. Another technology we're looking\nat is the Linux NBD (Network Block Device).\n\nHas anyone had any experience with running postgres over either of these\ntechnologies? What issues do we need to know about / pay attention to?\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFA1yK6gfzn5SevSpoRAtYAAKCrghfAKV5kVuiTd/2TOwEbr4Q7hACgr3rT\nmEvFi8AOHX9I43T45fH1e0U=\n=1Cs9\n-----END PGP SIGNATURE-----",
"msg_date": "Mon, 21 Jun 2004 14:02:52 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres over Linux NBD or NFS"
},
{
"msg_contents": "Andrew Hammond <[email protected]> writes:\n> We're looking for an alternative to fiber-channel disk arrays for mass\n> storage. One of the ideas that we're exploring would involve having the\n> cluster on an NFS mounted filesystem. Another technology we're looking\n> at is the Linux NBD (Network Block Device).\n\nThere are a lot of horror stories concerning running databases (not only\nPostgres) over NFS. I wouldn't recommend it. Dunno anything about NBD\nthough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Jun 2004 21:33:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS "
},
{
"msg_contents": "\nOn Jun 21, 2004, at 2:02 PM, Andrew Hammond wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> We're looking for an alternative to fiber-channel disk arrays for mass\n> storage. One of the ideas that we're exploring would involve having the\n> cluster on an NFS mounted filesystem. Another technology we're looking\n> at is the Linux NBD (Network Block Device).\n>\n\nNo idea about NBDs, but its generally accepted that running over NFS \nwould significantly\ndecrease reliability and performance, i.e. it would be a Bad Move (tm). \nNot sure what you\nthink to gain. I sure wouldn't trust NFS with a production database.\n\nWhat exactly are you trying to gain, avoid, or do?\n\n> Has anyone had any experience with running postgres over either of \n> these\n> technologies? What issues do we need to know about / pay attention to?\n>\n> - --\n> Andrew Hammond 416-673-4138 [email protected]\n> Database Administrator, Afilias Canada Corp.\n> CB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.2.4 (GNU/Linux)\n> Comment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n>\n> iD8DBQFA1yK6gfzn5SevSpoRAtYAAKCrghfAKV5kVuiTd/2TOwEbr4Q7hACgr3rT\n> mEvFi8AOHX9I43T45fH1e0U=\n> =1Cs9\n> -----END PGP SIGNATURE-----\n> <ahammond.vcf>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> [email protected])\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Mon, 21 Jun 2004 22:46:41 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "On Mon, 2004-06-21 at 20:46, Andrew Rawnsley wrote:\n> On Jun 21, 2004, at 2:02 PM, Andrew Hammond wrote:\n> \n> > -----BEGIN PGP SIGNED MESSAGE-----\n> > Hash: SHA1\n> >\n> > We're looking for an alternative to fiber-channel disk arrays for mass\n> > storage. One of the ideas that we're exploring would involve having the\n> > cluster on an NFS mounted filesystem. Another technology we're looking\n> > at is the Linux NBD (Network Block Device).\n> >\n> \n> No idea about NBDs, but its generally accepted that running over NFS \n> would significantly\n> decrease reliability and performance, i.e. it would be a Bad Move (tm). \n> Not sure what you\n> think to gain. I sure wouldn't trust NFS with a production database.\n> \n> What exactly are you trying to gain, avoid, or do?\n\nI've gotten good performance over NFS using switched 100, then later\ngigabit. But I wouldn't trust it for diddly.\n\n",
"msg_date": "Mon, 21 Jun 2004 21:46:45 -0600",
"msg_from": "\"Scott Marlowe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "Anselmo bom dia!\n\nN�o � custoso monstar um Cluster (Storage caseiro) em PostgreSQL, Est� em\ndiscuss�o no forum da PostgreSQL a poss�bilidade de usar o NFS (Network file\nsystem) ou o NBD (Network Block Device), ambos consistem em \"Mapear\" a\nparti��o de dados do PostgreSQL em uma OUTRA m�quina com PostgreSQL a fim de\nque os as duas m�quinas trabalhem com a mesma base de dados.\n\nCarlos Eduardo Smanioto\nInfra Estrutura - Servidores e Seguran�a\nPlanae - Tecnologia da Informa��o\nFone/Fax +55 14 3224-3066 Ramal 207\nwww.planae.com.br\n\n----- Original Message ----- \nFrom: \"Scott Marlowe\" <[email protected]>\nTo: \"Andrew Rawnsley\" <[email protected]>\nCc: \"Andrew Hammond\" <[email protected]>;\n<[email protected]>\nSent: Tuesday, June 22, 2004 12:46 AM\nSubject: Re: [PERFORM] Postgres over Linux NBD or NFS\n\n\n> On Mon, 2004-06-21 at 20:46, Andrew Rawnsley wrote:\n> > On Jun 21, 2004, at 2:02 PM, Andrew Hammond wrote:\n> >\n> > > -----BEGIN PGP SIGNED MESSAGE-----\n> > > Hash: SHA1\n> > >\n> > > We're looking for an alternative to fiber-channel disk arrays for mass\n> > > storage. One of the ideas that we're exploring would involve having\nthe\n> > > cluster on an NFS mounted filesystem. Another technology we're looking\n> > > at is the Linux NBD (Network Block Device).\n> > >\n> >\n> > No idea about NBDs, but its generally accepted that running over NFS\n> > would significantly\n> > decrease reliability and performance, i.e. it would be a Bad Move (tm).\n> > Not sure what you\n> > think to gain. I sure wouldn't trust NFS with a production database.\n> >\n> > What exactly are you trying to gain, avoid, or do?\n>\n> I've gotten good performance over NFS using switched 100, then later\n> gigabit. But I wouldn't trust it for diddly.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n\n",
"msg_date": "Tue, 22 Jun 2004 08:25:38 -0300",
"msg_from": "\"Carlos Eduardo Smanioto\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Andrew Rawnsley) transmitted:\n> On Jun 21, 2004, at 2:02 PM, Andrew Hammond wrote:\n>> We're looking for an alternative to fiber-channel disk arrays for mass\n>> storage. One of the ideas that we're exploring would involve having the\n>> cluster on an NFS mounted filesystem. Another technology we're looking\n>> at is the Linux NBD (Network Block Device).\n>\n> No idea about NBDs, but its generally accepted that running over NFS\n> would significantly decrease reliability and performance, i.e. it\n> would be a Bad Move (tm). Not sure what you think to gain. I sure\n> wouldn't trust NFS with a production database.\n>\n> What exactly are you trying to gain, avoid, or do?\n\nThe point of the exercise is to try to come up with something that is\na near-substitute for a SAN.\n\nWith a SAN, you have a box with a whole lot of disk in it, and then\nyour database servers connect to that box, typically via something\nlike fibrechannel.\n\nOne of the goals is for this to allow trying out Opterons at low risk.\nShould performance turn out to suck or there be some other\ndisqualification, it's simple to hook the disk up to something else\ninstead.\n\nThe other goal is to be able to stick LOTS of disk into one box, and\ndole it out to multiple servers. It's more expensive to set up and\nmanage 3 RAID arrays than it is to set up and manage just 1, because\nyou have to manage 3 sets of disk hardware rather than 1.\n\nBut I'm getting convinced that the attempt to get this clever about it\nis counterproductive unless you have outrageous amounts of money to\nthrow at it.\n\n- NFS may well be acceptable if you buy into something with potent FS\n semantics, as with NetApp boxes. But they're REALLY expensive.\n\n- FibreChannel offers interesting options in conjunction with a fairly\n smart SAN box and Veritas, where you could have 5TB of storage in\n one box, and then assign 2TB apiece to two servers, and the other\n 1TB to a third. But the pricing premium again leaps out at ya.\n\nThe \"poor man's approach\" involves trying to fake this by building a\n\"disk box\" running Linux that exports the storage either as a\nfilesystem (using NFS) or as disk blocks (NBD). NFS clearly doesn't\nprovide the filesystem semantics needed to get decent reliability;\nwith NBD, it's not clear what happens :-(.\n\nBarring that, it means building a separate RAID array for each server,\nand living with the limitation that a full set of disk hardware has to\nbe devoted to each DB server.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\nhttp://www3.sympatico.ca/cbbrowne/\nRules of the Evil Overlord #46. \"If an advisor says to me \"My liege,\nhe is but one man. What can one man possibly do?\", I will reply\n\"This.\" and kill the advisor.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 22 Jun 2004 07:50:34 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "\nThere are some less expensive baby-SAN options coming out now - Dell \nhas an\nrebranded EMC baby SAN (which of course doesn't work with any other EMC\nsystem...) that starts at about $6000 or so. Just read the announcement \n- don't know\nanything else. While there have been some reports of undewhelming \nperformance for\ndatabase applications, the Apple XRaid has a sweet price point, \nparticularly if you're\nin an industry that they want some exposure in (we're in financial \nservices, they almost\ngave it to us...$7000 for 2TB, batteries, accessory kits, etc), and a \ndecent feature\nset. It works with non-Apple stuff..\n\nThe baby-SANs don't necessarily do many of the things that you can get \nout of a full-blown\nEMC/NetApp rig, but then again, you're not paying for it either.\n\nThere are a lot of lower-cost storage options popping up now, as \nIDE/SATA disks arrays\nproliferate. You can get external RAID boxes that talk SCSI or fiber \nwith IDE disks for\ndirt these days. Small, too. Portable. I'm see little need to buy \nmassive boxes with\ninternal storage arrays anymore.\n\n\nOn Jun 22, 2004, at 7:50 AM, Christopher Browne wrote:\n\n> In an attempt to throw the authorities off his trail, \n> [email protected] (Andrew Rawnsley) transmitted:\n>> On Jun 21, 2004, at 2:02 PM, Andrew Hammond wrote:\n>>> We're looking for an alternative to fiber-channel disk arrays for \n>>> mass\n>>> storage. One of the ideas that we're exploring would involve having \n>>> the\n>>> cluster on an NFS mounted filesystem. Another technology we're \n>>> looking\n>>> at is the Linux NBD (Network Block Device).\n>>\n>> No idea about NBDs, but its generally accepted that running over NFS\n>> would significantly decrease reliability and performance, i.e. it\n>> would be a Bad Move (tm). Not sure what you think to gain. I sure\n>> wouldn't trust NFS with a production database.\n>>\n>> What exactly are you trying to gain, avoid, or do?\n>\n> The point of the exercise is to try to come up with something that is\n> a near-substitute for a SAN.\n>\n> With a SAN, you have a box with a whole lot of disk in it, and then\n> your database servers connect to that box, typically via something\n> like fibrechannel.\n>\n> One of the goals is for this to allow trying out Opterons at low risk.\n> Should performance turn out to suck or there be some other\n> disqualification, it's simple to hook the disk up to something else\n> instead.\n>\n> The other goal is to be able to stick LOTS of disk into one box, and\n> dole it out to multiple servers. It's more expensive to set up and\n> manage 3 RAID arrays than it is to set up and manage just 1, because\n> you have to manage 3 sets of disk hardware rather than 1.\n>\n> But I'm getting convinced that the attempt to get this clever about it\n> is counterproductive unless you have outrageous amounts of money to\n> throw at it.\n>\n> - NFS may well be acceptable if you buy into something with potent FS\n> semantics, as with NetApp boxes. But they're REALLY expensive.\n>\n> - FibreChannel offers interesting options in conjunction with a fairly\n> smart SAN box and Veritas, where you could have 5TB of storage in\n> one box, and then assign 2TB apiece to two servers, and the other\n> 1TB to a third. But the pricing premium again leaps out at ya.\n>\n> The \"poor man's approach\" involves trying to fake this by building a\n> \"disk box\" running Linux that exports the storage either as a\n> filesystem (using NFS) or as disk blocks (NBD). 
NFS clearly doesn't\n> provide the filesystem semantics needed to get decent reliability;\n> with NBD, it's not clear what happens :-(.\n>\n> Barring that, it means building a separate RAID array for each server,\n> and living with the limitation that a full set of disk hardware has to\n> be devoted to each DB server.\n> -- \n> wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','acm.org').\n> http://www3.sympatico.ca/cbbrowne/\n> Rules of the Evil Overlord #46. \"If an advisor says to me \"My liege,\n> he is but one man. What can one man possibly do?\", I will reply\n> \"This.\" and kill the advisor.\" <http://www.eviloverlord.com/>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n--------------------\n\nAndrew Rawnsley\nPresident\nThe Ravensfield Digital Resource Group, Ltd.\n(740) 587-0114\nwww.ravensfield.com\n\n",
"msg_date": "Tue, 22 Jun 2004 08:49:31 -0400",
"msg_from": "Andrew Rawnsley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "How about iSCSI? This is exactly what it's for - presenting a bunch of\nremote SCSI hardware as if it were local. \n\nThere are several reference implementations on SourceForge from Intel, Cisco\n& others.\n\nI've never tried it myself, but I would if I had the need. And let's face\nit there are some very big players selling very pricey kit that uses it, so\nyou should have pretty high confidence that the fundamentals are strong.\n\nM\n\n\n\n> The other goal is to be able to stick LOTS of disk into one \n> box, and dole it out to multiple servers. It's more \n> expensive to set up and manage 3 RAID arrays than it is to \n> set up and manage just 1, because you have to manage 3 sets \n> of disk hardware rather than 1.\n[snip]\n> The \"poor man's approach\" involves trying to fake this by \n> building a \"disk box\" running Linux that exports the storage \n> either as a filesystem (using NFS) or as disk blocks (NBD). \n> NFS clearly doesn't provide the filesystem semantics needed \n> to get decent reliability; with NBD, it's not clear what happens :-(.\n\n\n",
"msg_date": "Tue, 22 Jun 2004 14:21:10 +0100",
"msg_from": "\"Matt Clark\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n|> What exactly are you trying to gain, avoid, or do?\n\nGain: seperate database storage from processing. This lets me move\nclusters from one server to another easily. Just stop the postgres\ninstance on server A and unmount it's filesystem. Then mount it on\nserver B and start postgres instance on server B. It gives me some\nfail-over capability as well as scalability and a lot of flexibility in\nbalancing load over multiple servers.\n\nAvoid: paying for brutally expensive FC gear.\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFA2Djagfzn5SevSpoRAj+bAKDFFgrhX+G1gkZRrydow3j/j35VaACbBN3Y\nC/0nWmqcwo/UlqvYpng06Ks=\n=k2vg\n-----END PGP SIGNATURE-----",
"msg_date": "Tue, 22 Jun 2004 09:49:15 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
},
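A minimal sketch of the manual failover described above, with made-up device and path names; the commands (pg_ctl, umount, mount) are standard, and the essential constraint is that the shared block device must never be mounted on both servers at the same time:

    # on server A: release the cluster
    pg_ctl stop -D /var/lib/pgsql/data -m fast
    umount /var/lib/pgsql/data

    # on server B: take over, only after A has fully let go
    mount /dev/nbd0 /var/lib/pgsql/data
    pg_ctl start -D /var/lib/pgsql/data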
{
"msg_contents": "I just got the BigAdmin newsletter from Sun today... interestingly enough it\nhad a link to an article described as:\n\n> Database Performance with NAS: Optimizing Oracle on NFS\n> This paper discusses the operation of relational databases with network \n> attached storage (NAS). Management implications and performance \n> expectations for databases using NAS are presented.\nThe link points to: http://www.sun.com/bigadmin/content/nas/\n\nI read just enough to see if it is relevant. Here is the first part of the\nsummary:\n\n> IT departments are increasingly utilizing Network Attached Storage (NAS) \n> and the Network File System (NFS) to meet the storage needs of mission-\n> critical relational databases. Reasons for this adoption include improved \n> storage virtualization, ease of storage deployment, decreased complexity,\n> and decreased total cost of ownership. This paper directly examines the\n> performance of databases with NAS. In laboratory tests comparing NFS with\n> local storage, NFS is shown capable of sustaining the same workload level\n> as local storage. Under similar workload conditions, NFS does consume an\n> increased number of CPU cycles; however, the proven benefits of NFS and\n> NAS outweigh this penalty in most production environments.\n\nMatthew Nuzum\t\t| ISPs: Make $200 - $5,000 per referral by\nwww.followers.net\t\t| recomending Elite CMS to your customers!\[email protected]\t| http://www.followers.net/isp\n\n\n\n",
"msg_date": "Tue, 22 Jun 2004 13:15:53 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres over Linux NBD or NFS"
}
] |
[
{
"msg_contents": "Hi, I am trying to make a cluster out of any database, postgresql or mysql or any other free database. I have looked at openmosix patched with the migshm patch for shared memory support and it seems that neither work fully. Postgresql in particular uses \"shared memory but not the system semaphores for locking it\". Thus apparently it won't benefit from an openmosix cluster. In addition mysql doesn't seem to migrate because it is multithreaded. Any ideas of how I can cluster my database (around 800 GB in size so even partial replication is not really practical)?\n\nIf interested this is my source for openmosix and migshm information http://howto.ipng.be/MigSHM-openMosix/x90.html\n\nThanks.\n\n\n\n\n\n\nHi, I am trying to make a cluster out of any \ndatabase, postgresql or mysql or any other free database. I have looked at \nopenmosix patched with the migshm patch for shared memory support and it seems \nthat neither work fully. Postgresql in particular uses \"shared memory but \nnot the system semaphores for locking it\". Thus apparently it won't \nbenefit from an openmosix cluster. In addition mysql doesn't seem to \nmigrate because it is multithreaded. Any ideas of how I can cluster my \ndatabase (around 800 GB in size so even partial replication is not really \npractical)?\n \nIf interested this is my source for openmosix and \nmigshm information http://howto.ipng.be/MigSHM-openMosix/x90.html\n \nThanks.",
"msg_date": "Tue, 22 Jun 2004 09:29:39 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgresql and openmosix migration"
},
{
"msg_contents": "Bill,\n\n> Any ideas of how I can cluster my database (around 800 GB\n> in size so even partial replication is not really practical)?\n\nUm, raise $150,000 to pay for a clustering implementation?\n\nVarious techniques of \"shared memory clustering\" have been tried with \nPostgreSQL, and none work. Neither does LinuxLabs \"ClusGres\", which is \nbased on similar principles -- unfortunately. (at least, LL repeatedly \npostponed the demo they said they'd give me. I've yet to see anything \nworking ...)\n\nFrankly, we're waiting for a well-funded corporation to jump in and decide \nthey want PostgreSQL clustering. Database server clustering is a \"big \nticket item\" requiring roughly 1,000 hours of programming and \ntroubleshooting. As such, you're not likely to see it come out of the OSS \ncommunity unaided.\n\nOh, and FYI, MySQL's \"clustering\" doesn't work either. It requires your \nentire database to fit into available RAM ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Tue, 22 Jun 2004 09:31:23 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "Ok, so maybe someone on this group will have a better idea. We have a\ndatabase of financial information, and this has literally millions of\nentries. I have installed indicies, but for the rather computationally\ndemanding processes we like to use, like a select query to find the\ncommodity with the highest monthly or annual returns, the computer generally\nruns unacceptably slow. So, other than clustring, how could I achieve a\nspeed increase in these complex queries? Is this better in mysql or\npostgresql?\n\nThanks.\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Bill\" <[email protected]>; <[email protected]>\nSent: Tuesday, June 22, 2004 11:31 AM\nSubject: Re: [PERFORM] postgresql and openmosix migration\n\n\n> Bill,\n>\n> > Any ideas of how I can cluster my database (around 800 GB\n> > in size so even partial replication is not really practical)?\n>\n> Um, raise $150,000 to pay for a clustering implementation?\n>\n> Various techniques of \"shared memory clustering\" have been tried with\n> PostgreSQL, and none work. Neither does LinuxLabs \"ClusGres\", which is\n> based on similar principles -- unfortunately. (at least, LL repeatedly\n> postponed the demo they said they'd give me. I've yet to see anything\n> working ...)\n>\n> Frankly, we're waiting for a well-funded corporation to jump in and decide\n> they want PostgreSQL clustering. Database server clustering is a \"big\n> ticket item\" requiring roughly 1,000 hours of programming and\n> troubleshooting. As such, you're not likely to see it come out of the\nOSS\n> community unaided.\n>\n> Oh, and FYI, MySQL's \"clustering\" doesn't work either. It requires your\n> entire database to fit into available RAM ....\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n\n",
"msg_date": "Tue, 22 Jun 2004 12:31:15 -0500",
"msg_from": "\"Bill\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "On Tue, Jun 22, 2004 at 12:31:15 -0500,\n Bill <[email protected]> wrote:\n> Ok, so maybe someone on this group will have a better idea. We have a\n> database of financial information, and this has literally millions of\n> entries. I have installed indicies, but for the rather computationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer generally\n> runs unacceptably slow. So, other than clustring, how could I achieve a\n> speed increase in these complex queries? Is this better in mysql or\n> postgresql?\n\nQueries using max (or min) can often be rewritten as queries using ORDER BY\nand LIMIT so that they can take advantage of indexes. Doing this might help\nwith some of the problems you are seeing.\nIf you commonly query on aggregated data it might be better to create\nderived tables of the aggregated data maintained by triggers, and query\nagainst them. If you do lots of selects relative to inserts and updates,\nthis could be a big win.\n",
"msg_date": "Tue, 22 Jun 2004 12:53:28 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
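A sketch of the rewrite suggested above, using invented table and column names for illustration; in PostgreSQL of this era, max() cannot use an index, whereas the ORDER BY ... DESC LIMIT 1 form can, and it also returns the row itself rather than just the value:

    -- hypothetical schema, for illustration only
    CREATE INDEX commodity_returns_annual_idx
        ON commodity_returns (annual_return);

    -- instead of: SELECT max(annual_return) FROM commodity_returns;
    SELECT commodity, annual_return
      FROM commodity_returns
     ORDER BY annual_return DESC
     LIMIT 1;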
{
"msg_contents": "On Tue, 22 Jun 2004 12:31:15 -0500 Bill <[email protected]> wrote:\n> I have installed indicies,\n\nbut are there any statistics? vacuum analyze is your friend\n\n> but for the rather computationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer generally\n> runs unacceptably slow. So, other than clustring, how could I achieve a\n> speed increase in these complex queries?\n\n1) have you gone to the effort to tune the values in postgresql.conf?\n\n2) have you tried using explain to find out what the query planner is\n up to?\n\n> Is this better in mysql or\n> postgresql?\n\nif there is any complexity to the queries, postgresql will serve you better\nif you learn how to use it properly.\n\nrichard\n-- \nRichard Welty [email protected]\nAverill Park Networking 518-573-7592\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n\n",
"msg_date": "Tue, 22 Jun 2004 14:08:10 -0400 (EDT)",
"msg_from": "Richard Welty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
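Following on from the two points above, a minimal sequence that refreshes the planner's statistics and then shows what the planner actually does with a given query (the table and column names are the same invented ones as in the earlier sketch):

    VACUUM ANALYZE commodity_returns;

    EXPLAIN ANALYZE
    SELECT commodity, annual_return
      FROM commodity_returns
     ORDER BY annual_return DESC
     LIMIT 1;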
{
"msg_contents": "Hi Bill, I am more often in the \"needing help\" category than the \"giving\nhelp\" when it comes to advise about using postgresql. I have found it to be\nan extremely powerful tool and by far the best performance/price for my\nwork.\n\nI think you will get some excellent answers and help to your performance\nquestions if you send the list details about specific queries that are\nrunning too slow. If you are willing to throw more/bigger hardware at the\nproblem, let people know that when you ask and they will tell you if your\nbottleneck can be alleviated through more ram, disks, cpu or whatever.\nHaving been watching this list for some time now, I suspect most of the\nperformance problems can be improved using non-intuitive query or\nconfiguration modifications (for example, replacing min()/max() as suggested\nby Mr. Wolf).\n\nThe heavy hitters on the list will usually ask for an \"explain analyze\" of\nyour query. If your query is \"select * from foo\", then change it to\n\"EXPLAIN ANALYZE select * from foo\" and post the output. It will look\nsomething like this:\n QUERY PLAN\n\n----------------------------------------------------------------------------\n-------------------------------\n Seq Scan on foo (cost=0.00..1.04 rows=4 width=44) (actual time=8.46..8.47\nrows=4 loops=1)\n Total runtime: 19.63 msec\n(2 rows)\n\nI'm sure your data is confidential; mine is too. The good news is that none\nof your data is included in the query. Only technical details about what the\ndatabase is doing.\n\nIf your problem might involve the application that works with the data, give\nsome details about that. For example, if you're using a Java application,\nlet people know what driver version you use, what jvm and other related\ninfo. There are lurkers on this list using just about every programming\nlanguage imaginable on more platforms than you can shake a stick at (I don't\ncare how good you are at shaking sticks, either).\n\nThe more details you give the better help you're going to get and you'd be\namazed at the results I've seen people get with a judicious amount of\ntweaking. The other day someone had a query that took hours decrease to less\nthan 10 minutes by using some techniques prescribed by members on the list.\nBringing 30 - 60 second queries down to 2-3 seconds is commonplace.\n\nYou seem to be ready to throw money at the problem by investing in new\nhardware but I would suggest digging into the performance problems first.\nToo many times we've seen people on the list say, \"I've just spent $x0,000\non a new xyz and I'm still having problems with this query.\" Often times\nthe true solution is rewriting queries, tweaking config parameters, adding\nRAM and upgrading disks (in that order I believe).\n\nAs I found out even today on the SQL list, it's best to ask questions in\nthis form:\n\"I want to do this... I've been trying this... I'm getting this... which\nis problematic because...\"\n\nThe more clearly you state the abstract goal the more creative answers\nyou'll get with people often suggesting things you'd never considered.\n\nI hope this helps and I hope that you achieve your goals of a well\nperforming application. 
\n\nMatthew Nuzum\t\t| Makers of \"Elite Content Management System\"\nwww.followers.net\t\t| View samples of Elite CMS in action\[email protected]\t| http://www.followers.net/portfolio/\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Bill\n> Sent: Tuesday, June 22, 2004 1:31 PM\n> To: Josh Berkus\n> Cc: [email protected]\n> Subject: Re: [PERFORM] postgresql and openmosix migration\n> \n> Ok, so maybe someone on this group will have a better idea. We have a\n> database of financial information, and this has literally millions of\n> entries. I have installed indicies, but for the rather computationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer\n> generally\n> runs unacceptably slow. So, other than clustring, how could I achieve a\n> speed increase in these complex queries? Is this better in mysql or\n> postgresql?\n> \n> Thanks.\n\n",
"msg_date": "Tue, 22 Jun 2004 14:49:48 -0400",
"msg_from": "\"Matthew Nuzum\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "Bill wrote:\n> Ok, so maybe someone on this group will have a better idea. We have a\n> database of financial information, and this has literally millions of\n> entries. I have installed indicies, but for the rather computationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer generally\n> runs unacceptably slow. So, other than clustring, how could I achieve a\n> speed increase in these complex queries? Is this better in mysql or\n> postgresql?\n\nIf the bottleneck is really computational, not I/O, you might try PL/R \nin conjunction with the rpvm R package. rpvm allows R to make use of pvm \nto split its load among a cluster. See:\n\nR:\n http://www.r-project.org/\n\nPL/R:\n http://www.joeconway.com/plr/\n\nrpvm:\n http://cran.r-project.org/src/contrib/Descriptions/rpvm.html\n http://cran.r-project.org/doc/packages/rpvm.pdf\n\nI haven't had a chance to play with this myself yet, but I hope to \nrelatively soon.\n\nHTH,\n\nJoe\n",
"msg_date": "Tue, 22 Jun 2004 18:25:16 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBill wrote:\n| Ok, so maybe someone on this group will have a better idea. We have a\n| database of financial information, and this has literally millions of\n| entries. I have installed indicies, but for the rather computationally\n| demanding processes we like to use, like a select query to find the\n| commodity with the highest monthly or annual returns, the computer\ngenerally\n| runs unacceptably slow. So, other than clustring, how could I achieve a\n| speed increase in these complex queries? Is this better in mysql or\n| postgresql?\n\nPostgres generally beats MySQL on complex queries. The easiest solution\nto speed issues is to throw hardware at it. Generally, you're first\nbound by disk, RAM then CPU.\n\n1) Move your data over to an array of smallish 15kRPM disks. The more\nspindles the better.\n2) Use a 64 bit platform and take advantage of >4 GB memory.\n\nThere are dozens of options for the disk array. For the processing\nplatform, I'd recommend looking at Opteron. I've heard only good things\nand their price is much more reasonable than the other options.\n\n- --\nAndrew Hammond 416-673-4138 [email protected]\nDatabase Administrator, Afilias Canada Corp.\nCB83 2838 4B67 D40F D086 3568 81FC E7E5 27AF 4A9A\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\nComment: Using GnuPG with Thunderbird - http://enigmail.mozdev.org\n\niD8DBQFA2Zf3gfzn5SevSpoRAr0HAJ0S/uVjuqYEuhMgdSAI3rfHK0ga1wCgwpHl\ng+yuBYpAt58vnJWtX+wii1s=\n=2fGN\n-----END PGP SIGNATURE-----",
"msg_date": "Wed, 23 Jun 2004 10:47:21 -0400",
"msg_from": "Andrew Hammond <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "Bill,\n\n> Ok, so maybe someone on this group will have a better idea. We have a\n> database of financial information, and this has literally millions of\n> entries. I have installed indicies, but for the rather computationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer\n> generally runs unacceptably slow. So, other than clustring, how could I\n> achieve a speed increase in these complex queries? \n\nWell, you can do this 2 ways:\n1) you can pick out one query at a time, and send us complete information on \nit, like Matt's really nice e-mail describes. People on this list will \nhelp you troubleshoot it. It will take a lot of time, but no money.\n\n2) You can hire a PG database expert. This will be much faster, but cost \nyou a lot of money.\n\n>Is this better in mysql\n> or postgresql?\n\nComplex queries? Large databases? That's us. MySQL is obtimized for \nsimple queries on small databases.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Wed, 23 Jun 2004 10:31:45 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "> 2) You can hire a PG database expert. This will be much faster, but cost \n> you a lot of money.\n\nI wouldn't exactly say \"a lot of money\". Lots of consulters out there\nare willing to put in a weeks worth of effort, on site, for\nsignificantly less than a support contract with most commercial DB\norganizations (including MySQL) -- and often give better results since\nthey're on-site rather than over phone or via email.\n\nBut yes, doing it via this mailing list is probably the cheapest option.\n\n\n",
"msg_date": "Wed, 23 Jun 2004 13:52:39 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
},
{
"msg_contents": "On Wed, 23 Jun 2004 13:52:39 -0400 Rod Taylor <[email protected]> wrote:\n> But yes, doing it via this mailing list is probably the cheapest option.\n\nyes, he just needs to decide how big a hurry he's in.\n\nalso, if he does decide to hire a consultant, i suggest he pop over\nto pgsql-jobs and ask there.\n\nrichard\n-- \nRichard Welty [email protected]\nAverill Park Networking 518-573-7592\n Java, PHP, PostgreSQL, Unix, Linux, IP Network Engineering, Security\n\n",
"msg_date": "Wed, 23 Jun 2004 14:20:24 -0400 (EDT)",
"msg_from": "Richard Welty <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgresql and openmosix migration"
}
] |
[
{
"msg_contents": "Sounds like an issue I have experienced in Oracle as well. If you can\nyou might want consider breaking out your database into oltp (on line\ntransaction processing) and data warehouse db. You run you any reports\nyou can nightly into a set of warehouse tables and save your daytime\ncpus for incoming info and special real-time (hottest commodity of the\nday) reports that you have tuned the best you can. Anything you can\ncalculate in advance that won't change over time, should be saved in the\nwarehouse tables, so you don't waste cpus, re-working data in real time.\nPre-running your reports won't speed them up but your users won't be\nwaiting for a report to calculate while they are looking at the screen. \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Bill\nSent: Tuesday, June 22, 2004 11:31 AM\nTo: Josh Berkus\nCc: [email protected]\nSubject: Re: [PERFORM] postgresql and openmosix migration\n\nOk, so maybe someone on this group will have a better idea. We have a\ndatabase of financial information, and this has literally millions of\nentries. I have installed indicies, but for the rather computationally\ndemanding processes we like to use, like a select query to find the\ncommodity with the highest monthly or annual returns, the computer\ngenerally\nruns unacceptably slow. So, other than clustring, how could I achieve a\nspeed increase in these complex queries? Is this better in mysql or\npostgresql?\n\nThanks.\n----- Original Message ----- \nFrom: \"Josh Berkus\" <[email protected]>\nTo: \"Bill\" <[email protected]>; <[email protected]>\nSent: Tuesday, June 22, 2004 11:31 AM\nSubject: Re: [PERFORM] postgresql and openmosix migration\n\n\n> Bill,\n>\n> > Any ideas of how I can cluster my database (around 800 GB\n> > in size so even partial replication is not really practical)?\n>\n> Um, raise $150,000 to pay for a clustering implementation?\n>\n> Various techniques of \"shared memory clustering\" have been tried with\n> PostgreSQL, and none work. Neither does LinuxLabs \"ClusGres\", which\nis\n> based on similar principles -- unfortunately. (at least, LL repeatedly\n> postponed the demo they said they'd give me. I've yet to see anything\n> working ...)\n>\n> Frankly, we're waiting for a well-funded corporation to jump in and\ndecide\n> they want PostgreSQL clustering. Database server clustering is a\n\"big\n> ticket item\" requiring roughly 1,000 hours of programming and\n> troubleshooting. As such, you're not likely to see it come out of\nthe\nOSS\n> community unaided.\n>\n> Oh, and FYI, MySQL's \"clustering\" doesn't work either. It requires\nyour\n> entire database to fit into available RAM ....\n>\n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\n",
"msg_date": "Tue, 22 Jun 2004 11:58:39 -0600",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql and openmosix migration"
}
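One way to realize the nightly-report idea above is a pre-computed summary table that a scheduled job rebuilds overnight, so daytime reports read a few thousand aggregated rows instead of scanning millions; all names below are invented for illustration:

    -- rebuilt nightly by a cron job, for example
    CREATE TABLE monthly_returns_summary AS
    SELECT commodity,
           date_trunc('month', trade_date) AS month,
           sum(daily_return) AS monthly_return
      FROM trades
     GROUP BY commodity, date_trunc('month', trade_date);

    CREATE INDEX monthly_returns_summary_return_idx
        ON monthly_returns_summary (monthly_return);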
] |
[
{
"msg_contents": "The pg_resetxlog was run as root. It caused ownership problems of\npg_control and xlog files.\nNow we have no access to the data now through psql. The data is still\nthere under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But\nthere is no reference to 36 of our tables in pg_class. Also the 18\nother tables that are reported in this database have no data in them.\nIs there anyway to have the database resync or make it aware of the data\nunder /var/lib/pgsql/data/base/17347?\nHow can this problem be resolved?\n\nThere is actually 346 db files adding up to 134 GB in this database.\n\n\nBelow are error messages of when the database trying to be started. I\nam not sure of the when pg_resetxlog was run. I suspect it was run to\nget rid ot the \"invalid primary checkpoint record\".\n\nThe postgresql DB had an error trying to be started up. \nThe error was\nJun 22 13:17:53 murphy postgres[27430]: [4-1] LOG: invalid primary\ncheckpoint record\nJun 22 13:17:53 murphy postgres[27430]: [5-1] LOG: could not open file\n\"/var/lib/pgsql/data/pg_xlog/0000000000000000\" (log file 0, segment 0):\nNo such file or directory\nJun 22 13:18:49 murphy postgres[28778]: [6-1] LOG: invalid secondary\ncheckpoint record\nJun 22 13:18:49 murphy postgres[28778]: [7-1] PANIC: could not locate a\nvalid checkpoint record\n\n\nJun 22 13:26:01 murphy postgres[30770]: [6-1] LOG: database system is\nready\nJun 22 13:26:02 murphy postgresql: Starting postgresql service:\nsucceeded\nJun 22 13:26:20 murphy postgres[30789]: [2-1] PANIC: could not access\nstatus of transaction 553\nJun 22 13:26:20 murphy postgres[30789]: [2-2] DETAIL: could not open\nfile \"/var/lib/pgsql/data/pg_clog/0000\": No such file or directory\nJun 22 13:26:20 murphy postgres[30789]: [2-3] STATEMENT: COMMIT\n\nand\nJun 22 13:26:20 murphy postgres[30791]: [10-1] LOG: redo starts at\n0/2000050\nJun 22 13:26:20 murphy postgres[30791]: [11-1] LOG: file\n\"/var/lib/pgsql/data/pg_clog/0000\" doesn't exist, reading as zeroes\nJun 22 13:26:20 murphy postgres[30791]: [12-1] LOG: record with zero\nlength at 0/2000E84\nJun 22 13:26:20 murphy postgres[30791]: [13-1] LOG: redo done at\n0/2000E60\nJun 22 13:26:20 murphy postgres[30791]: [14-1] WARNING: xlog flush\nrequest 213/7363F354 is not satisfied --- flushed only to 0/2000E84\nJun 22 13:26:20 murphy postgres[30791]: [14-2] CONTEXT: writing block\n840074 of relation 17347/356768772\nJun 22 13:26:20 murphy postgres[30791]: [15-1] WARNING: xlog flush\nrequest 213/58426648 is not satisfied --- flushed only to 0/2000E84\n\nand\nJun 22 13:38:23 murphy postgres[1460]: [2-1] ERROR: xlog flush request\n210/E757F150 is not satisfied --- flushed only to 0/2074CA0\nJun 22 13:38:23 murphy postgres[1460]: [2-2] CONTEXT: writing block\n824605 of relation 17347/356768772\n\nWe are using a san for our storage device.\n",
"msg_date": "Tue, 22 Jun 2004 14:42:56 -0400",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "after using pg_resetxlog, db lost"
},
{
"msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n> The pg_resetxlog was run as root. It caused ownership problems of\n> pg_control and xlog files.\n> Now we have no access to the data now through psql. The data is still\n> there under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But\n> there is no reference to 36 of our tables in pg_class. Also the 18\n> other tables that are reported in this database have no data in them.\n> Is there anyway to have the database resync or make it aware of the data\n> under /var/lib/pgsql/data/base/17347?\n> How can this problem be resolved?\n\nWhat this sounds like is that you reset the transaction counter along\nwith the xlog, so that those tables appear to have been created by\ntransactions \"in the future\". This could be repaired by doing\npg_resetxlog with a more appropriate initial transaction ID, but\nfiguring out what that value should be is not easy :-(\n\nWhat I'd suggest is grabbing pg_filedump from\nhttp://sources.redhat.com/rhdb/\nand using it to look through pg_class (which will be file\n$PGDATA/base/yourdbnumber/1259) to see the highest transaction ID\nmentioned in any row of pg_class. Then pg_resetxlog with a value\na bit larger than that. Now you should be able to see all the rows\nin pg_class ... but this doesn't get you out of the woods yet, unless\nthere are very-recently-created tables shown in pg_class. I'd suggest\nnext looking through whichever tables you know to be recently modified\nto find the highest transaction ID mentioned in them, and finally doing\nanother pg_resetxlog with a value a few million greater than that. Then\nyou should be okay.\n\nThe reason you need to do this in two steps is that you'll need to look\nat pg_class.relfilenode to get the file names of your recently-modified\ntables. Do NOT modify the database in any way while you are running\nwith the intermediate transaction ID setting.\n\n> Jun 22 13:38:23 murphy postgres[1460]: [2-1] ERROR: xlog flush request\n> 210/E757F150 is not satisfied --- flushed only to 0/2074CA0\n\nLooks like you also need a larger initial WAL offset in your\npg_resetxlog command. Unlike the case with transaction IDs, there's\nno need to try to be somewhat accurate in the setting --- I'd just\nuse a number WAY beyond what you had, maybe like 10000/0.\n\nFinally, the fact that all this happened suggests that you lost the\ncontents of pg_control (else pg_resetxlog would have picked up the right\nvalues from it). Be very sure that you run pg_resetxlog under the same\nlocale settings (LC_COLLATE,LC_CTYPE) that you initially initdb'd with.\nOtherwise you're likely to have nasty index-corruption problems later.\n\nGood luck. Next time, don't let amateurs fool with pg_resetxlog (and\nanyone who'd run it as root definitely doesn't know what they're doing).\nIt is a wizard's tool. Get knowledgeable advice from the PG lists\nbefore you use it rather than after.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jun 2004 15:36:03 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after using pg_resetxlog, db lost "
},
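The first half of the procedure described above, spelled out as a sketch with the database OID used in this thread (17347); the -x value below is only a placeholder, since the real number has to come from inspecting the dump, and everything should be run as the postgres user, not root:

    # dump pg_class (relfilenode 1259) with transaction info
    pg_filedump -i -f $PGDATA/base/17347/1259 > pg_class.dump

    # once the highest XID in the dump is known, reset with some headroom
    pg_resetxlog -x 15000000 /var/lib/pgsql/data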
{
"msg_contents": "Tom Lane wrote:\n> \"Shea,Dan [CIS]\" <[email protected]> writes:\n> \n>>The pg_resetxlog was run as root. It caused ownership problems of\n>>pg_control and xlog files.\n>>Now we have no access to the data now through psql. The data is still\n>>there under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But\n>>there is no reference to 36 of our tables in pg_class. Also the 18\n>>other tables that are reported in this database have no data in them.\n>>Is there anyway to have the database resync or make it aware of the data\n>>under /var/lib/pgsql/data/base/17347?\n>>How can this problem be resolved?\n> \n> \n> What this sounds like is that you reset the transaction counter along\n> with the xlog, so that those tables appear to have been created by\n> transactions \"in the future\". This could be repaired by doing\n> pg_resetxlog with a more appropriate initial transaction ID, but\n> figuring out what that value should be is not easy :-(\n\nTom - would there be any value in adding this to a pg_dump? I'm assuming \nthe numbers attached to tables etc are their OIDs anyway, so it might be \na useful reference in cases like this.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Tue, 22 Jun 2004 21:05:07 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after using pg_resetxlog, db lost"
},
{
"msg_contents": "Richard Huxton <[email protected]> writes:\n> Tom Lane wrote:\n>> This could be repaired by doing\n>> pg_resetxlog with a more appropriate initial transaction ID, but\n>> figuring out what that value should be is not easy :-(\n\n> Tom - would there be any value in adding this to a pg_dump?\n\nPossibly. CVS tip pg_dump has been changed to not output OIDs by\ndefault, as a result of gripes from people who wanted to be able to\n\"diff\" dumps from different servers and not have the diff cluttered\nby irrelevant OID differences. But a single header line showing\ncurrent XID and OID values doesn't seem like it would be a big problem.\nWe could put current timestamp there too, which was another recent topic\nof discussion.\n\nBring it up on pghackers and see if anyone has an objection...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Jun 2004 16:25:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after using pg_resetxlog, db lost "
}
] |
[
{
"msg_contents": ">>>>> \"Matthieu\" == Matthieu Compin <[email protected]> writes:\n\n[...]\n\n >> mais je r�it�re ma proposition de m'occuper de la partie qui a\n >> pos� probl�me cette fois-ci, c'est � dire la prise de contact\n >> avec les diff�rentes personnes qui pourraient �tre int�ress�es.\n\n Matthieu> La balle est dans ton camps. Prend contact avec Les\n Matthieu> Projets Importants, fixe moi une date et je te trouve des\n Matthieu> salles et du r�seau.\n\n Matthieu> On a donc la possibilit� de faire une belle grossse\n Matthieu> manifestation maitenant et tu peux pas dire non ;)\n\nOuf! Je suis soulag� de la tournure que �a prends.\n\n-- \nLaurent Martelli vice-pr�sident de Parinux\nhttp://www.bearteam.org/~laurent/ http://www.parinux.org/\[email protected] \n\n",
"msg_date": "Wed, 23 Jun 2004 18:02:23 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Traduc Party"
},
{
"msg_contents": "\nHow in hell did could this mail be sent to pgsql-performance ??? I\nmust have inadvertently hit a fatal and obscure keystroke in\nEmacs/Gnus.\n\nSorry for the noise.\n\n-- \nLaurent Martelli\[email protected] Java Aspect Components\nhttp://www.aopsys.com/ http://jac.objectweb.org\n\n",
"msg_date": "Wed, 23 Jun 2004 20:47:04 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Traduc Party"
},
{
"msg_contents": "\nOn 23/06/2004 19:47 Laurent Martelli wrote:\n> \n> How in hell did could this mail be sent to pgsql-performance ??? I\n> must have inadvertently hit a fatal and obscure keystroke in\n> Emacs/Gnus.\n\nThat sort of implies that there are Emacs keystrokes which aren't obsure. \nI've been using it dayly for 2 years now and have yet to discover any key \nsequence which makes any sense. But then I don't do drugs so my perseption \nis probably at odds with the origators of Emacs ;)\n\n-- \nPaul Thomas\n+------------------------------+---------------------------------------------+\n| Thomas Micro Systems Limited | Software Solutions for \nBusiness |\n| Computer Consultants | \nhttp://www.thomas-micro-systems-ltd.co.uk |\n+------------------------------+---------------------------------------------+\n",
"msg_date": "Thu, 24 Jun 2004 00:07:32 +0100",
"msg_from": "Paul Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Traduc Party"
}
] |
[
{
"msg_contents": "Bill wrote:\n> Ok, so maybe someone on this group will have a better idea. We have a\n> database of financial information, and this has literally millions of\n> entries. I have installed indicies, but for the rather\ncomputationally\n> demanding processes we like to use, like a select query to find the\n> commodity with the highest monthly or annual returns, the computer\n> generally\n> runs unacceptably slow. So, other than clustring, how could I achieve\na\n> speed increase in these complex queries? Is this better in mysql or\n> postgresql?\n\nThis is a very broad question. Optimizing your SQL to run fast as on\nany other database is something of an art form. This is a very broad\ntopic that could fill a book. For example, a common performance killer\nis not having enough sort memory for large ordered result sets.\n\nA critical skill is being able to figure out if the planner is\noptimizing your queries badly. Knowing this is a mixture of observation\nand intuition that comes with experience. The absolute best case\nperformance of a query is roughly defined by the data that is looked at\nto generate the result set and the size of the result set itself when\nthe query is pulling data from the cache. The cache problem is\ncompromisable by throwing more money at the problem but a poorly planned\nquery will run slowly on any hardware.\n\nI would suggest isolating particular problems and posting them to the\nlist. (explain analyze works wonders).\n\nMerlin\n",
"msg_date": "Wed, 23 Jun 2004 12:34:42 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgresql and openmosix migration"
}
] |
[
{
"msg_contents": "Tom I see you from past emails that you reference using -i -f with pg_filedump. I have tried this, but do not know what I am looking at. What would be the the transaction id? What parameter am I supposed to pass to find it?\n\n\n*******************************************************************\n* PostgreSQL File/Block Formatted Dump Utility - Version 3.0\n*\n* File: /npmu_base/data/base/17347/1259\n* Options used: -i -f\n*\n* Dump created on: Thu Jun 24 02:44:59 2004\n*******************************************************************\n\nBlock 0 ********************************************************\n<Header> -----\n Block Offset: 0x00000000 Offsets: Lower 232 (0x00e8)\n Block: Size 8192 Version 1 Upper 268 (0x010c)\n LSN: logid 0 recoff 0x00632c08 Special 8192 (0x2000)\n Items: 53 Free Space: 36\n Length (including item array): 236\n\n 0000: 00000000 082c6300 0b000000 e8000c01 .....,c.........\n 0010: 00200120 c4908801 00908801 3c8f8801 . . ........<...\n 0020: 788e8801 b48d8801 f08c8801 2c8c8801 x...........,...\n 0030: 689f3001 688b8801 a48a8801 e0898801 h.0.h...........\n 0040: 1c898801 58888801 94878801 d0868801 ....X...........\n 0050: 3c862801 a8852801 e4848801 50842801 <.(...(.....P.(.\n 0060: bc832801 f8828801 64822801 d0812801 ..(.....d.(...(.\n 0070: 0c818801 6c110000 d8100000 44100000 ....l.......D...\n 0080: b00f0000 1c0f0000 d49e2801 409e2801 ..........(.@.(.\n 0090: ac9d2801 189d2801 849c2801 f09b2801 ..(...(...(...(.\n 00a0: 5c9b2801 c89a2801 349a2801 a0992801 \\.(...(.4.(...(.\n 00b0: 0c992801 78982801 e4972801 50972801 ..(.x.(...(.P.(.\n 00c0: bc962801 28962801 94952801 00952801 ..(.(.(...(...(.\n 00d0: 6c942801 d8932801 44932801 b0922801 l.(...(.D.(...(.\n 00e0: 1c922801 88912801 00000000 ..(...(.....\n\n<Data> ------\n Item 1 -- Length: 196 Offset: 4292 (0x10c4) Flags: USED\n XID: min (2) CMIN|XMAX: 211 CMAX|XVAC: 469\n Block Id: 0 linp Index: 1 Attributes: 24 Size: 28\n infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n\n 10c4: 02000000 d3000000 d5010000 00000000 ................\n 10d4: 01001800 12291c00 cc420000 7461626c .....)...B..tabl\n 10e4: 655f636f 6e737472 61696e74 73000000 e_constraints...\n 10f4: 00000000 00000000 00000000 00000000 ................\n 1104: 00000000 00000000 00000000 00000000 ................\n 1114: 00000000 00000000 00000000 51420000 ............QB..\n 1124: cd420000 01000000 00000000 cc420000 .B...........B..\n 1134: 00000000 00000000 00000000 00000000 ................\n 1144: 00007600 09000000 00000000 00000000 ..v.............\n 1154: 00000100 30000000 01000000 00000000 ....0...........\n 1164: 09040000 02000000 00000000 01000000 ................\n 1174: 01000000 7f803f40 00000000 01000000 ......?@........\n 1184: 02000000 ....\n\n Item 2 -- Length: 196 Offset: 4096 (0x1000) Flags: USED\n XID: min (2) CMIN|XMAX: 215 CMAX|XVAC: 469\n Block Id: 0 linp Index: 2 Attributes: 24 Size: 28\n infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n\n 1000: 02000000 d7000000 d5010000 00000000 ................\n 1010: 02001800 12291c00 d0420000 7461626c .....)...B..tabl\n 1020: 655f7072 6976696c 65676573 00000000 e_privileges....\n 1030: 00000000 00000000 00000000 00000000 ................\n 1040: 00000000 00000000 00000000 00000000 ................\n 1050: 00000000 00000000 00000000 51420000 ............QB..\n 1060: d1420000 01000000 00000000 d0420000 .B...........B..\n 1070: 00000000 00000000 00000000 00000000 ................\n 1080: 00007600 08000000 00000000 00000000 
..v.............\n 1090: 00000100 30000000 01000000 00000000 ....0...........\n 10a0: 09040000 02000000 00000000 01000000 ................\n 10b0: 01000000 7f803f40 00000000 01000000 ......?@........\n 10c0: 02000000 ....\n\n Item 3 -- Length: 196 Offset: 3900 (0x0f3c) Flags: USED\n XID: min (2) CMIN|XMAX: 219 CMAX|XVAC: 469\n Block Id: 0 linp Index: 3 Attributes: 24 Size: 28\n\nDan.\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Tuesday, June 22, 2004 3:36 PM\nTo: Shea,Dan [CIS]\nCc: [email protected]\nSubject: Re: [PERFORM] after using pg_resetxlog, db lost \n\n\n\"Shea,Dan [CIS]\" <[email protected]> writes:\n> The pg_resetxlog was run as root. It caused ownership problems of\n> pg_control and xlog files.\n> Now we have no access to the data now through psql. The data is still\n> there under /var/lib/pgsql/data/base/17347 (PWFPM_DEV DB name). But\n> there is no reference to 36 of our tables in pg_class. Also the 18\n> other tables that are reported in this database have no data in them.\n> Is there anyway to have the database resync or make it aware of the data\n> under /var/lib/pgsql/data/base/17347?\n> How can this problem be resolved?\n\nWhat this sounds like is that you reset the transaction counter along\nwith the xlog, so that those tables appear to have been created by\ntransactions \"in the future\". This could be repaired by doing\npg_resetxlog with a more appropriate initial transaction ID, but\nfiguring out what that value should be is not easy :-(\n\nWhat I'd suggest is grabbing pg_filedump from\nhttp://sources.redhat.com/rhdb/\nand using it to look through pg_class (which will be file\n$PGDATA/base/yourdbnumber/1259) to see the highest transaction ID\nmentioned in any row of pg_class. Then pg_resetxlog with a value\na bit larger than that. Now you should be able to see all the rows\nin pg_class ... but this doesn't get you out of the woods yet, unless\nthere are very-recently-created tables shown in pg_class. I'd suggest\nnext looking through whichever tables you know to be recently modified\nto find the highest transaction ID mentioned in them, and finally doing\nanother pg_resetxlog with a value a few million greater than that. Then\nyou should be okay.\n\nThe reason you need to do this in two steps is that you'll need to look\nat pg_class.relfilenode to get the file names of your recently-modified\ntables. Do NOT modify the database in any way while you are running\nwith the intermediate transaction ID setting.\n\n> Jun 22 13:38:23 murphy postgres[1460]: [2-1] ERROR: xlog flush request\n> 210/E757F150 is not satisfied --- flushed only to 0/2074CA0\n\nLooks like you also need a larger initial WAL offset in your\npg_resetxlog command. Unlike the case with transaction IDs, there's\nno need to try to be somewhat accurate in the setting --- I'd just\nuse a number WAY beyond what you had, maybe like 10000/0.\n\nFinally, the fact that all this happened suggests that you lost the\ncontents of pg_control (else pg_resetxlog would have picked up the right\nvalues from it). Be very sure that you run pg_resetxlog under the same\nlocale settings (LC_COLLATE,LC_CTYPE) that you initially initdb'd with.\nOtherwise you're likely to have nasty index-corruption problems later.\n\nGood luck. Next time, don't let amateurs fool with pg_resetxlog (and\nanyone who'd run it as root definitely doesn't know what they're doing).\nIt is a wizard's tool. Get knowledgeable advice from the PG lists\nbefore you use it rather than after.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jun 2004 22:46:49 -0400",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: after using pg_resetxlog, db lost "
},
{
"msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n> Tom I see you from past emails that you reference using -i -f with\n> pg_filedump. I have tried this, but do not know what I am looking at.\n\nWhat you want to look at is valid XMIN and XMAX values. In this\nexample:\n\n> Item 1 -- Length: 196 Offset: 4292 (0x10c4) Flags: USED\n> XID: min (2) CMIN|XMAX: 211 CMAX|XVAC: 469\n> Block Id: 0 linp Index: 1 Attributes: 24 Size: 28\n> infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n\nthe infomask shows XMIN_COMMITTED, so xmin (here 2) is valid, but it also\nshows XMAX_INVALID, so the putative XMAX (211) should be ignored.\n\nIn general the xmin field should be valid, but xmax shares storage with\ncmin and so you have to look at the infomask bits to know whether to\nbelieve that the cmin/xmax field represents a transaction ID.\n\nThe cmax/xvac field could also hold a transaction ID. If I had only\nthe above data to go on, I'd guess that the current transaction counter\nis at least 469.\n\nUnder normal circumstances, command counter values (cmin or cmax) are\nunlikely to exceed a few hundred, while the transaction IDs you are\nlooking for are likely to be much larger. So you could get away with\njust computing the max of *all* the numbers you see in xmin, cmin/xmax,\nor cmax/cvac, and then using something a million or so bigger for safety\nfactor.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Jun 2004 23:41:26 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after using pg_resetxlog, db lost "
}
] |
[
{
"msg_contents": ">>>>> \"Laurent\" == Laurent Rathle <[email protected]> writes:\n\n Laurent> Le mardi 22 Juin 2004 19:59, Giancarlo a �crit�:\n >> 1ere r�gle: il n'y a pas de CVS chez Parinux 2eme r�gle: il n'y a\n >> pas de CVS chez Parinux, et... 3eme r�gle: il n'y a pas de CVS\n >> chez Parinux\n >> \n >> ;-)\n >> \n >> Bon en fait c plus compliqu� que ca, pour r�sumer: c'�tait trop\n >> le foutoir dans l'arbo du site, il faudrait faire du m�nage (afin\n >> notement de pouvoir faire des modules)\n\n Laurent> Et subversion ?\n\nPuisqu'on a pas d'existant sous CVS � migrer, je suis aussi plut�t\ncommencer directement avec subversion, qui est un CVS en mieux. \n\n-- \nLaurent Martelli vice-pr�sident de Parinux\nhttp://www.bearteam.org/~laurent/ http://www.parinux.org/\[email protected] \n\n",
"msg_date": "Thu, 24 Jun 2004 14:25:33 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Pas la samedi"
}
] |
[
{
"msg_contents": "I determined the largest was 12,293,162 and set it to \npg_resetxlog -x 15000000 /var/lib/pgsql/data\n\nI am now able to see all the data.\n\nI actually checked the log for the previous successfull startup before it the pg_control file was reset and it reported \nJun 22 11:55:44 pascal postgres[24993]: [5-1] LOG: next transaction ID: 14820367; next OID: 727013114\n\nSo I entered \npg_resetxlog -o 750000000 /var/lib/pgsql/data Setting oid value\n\nI couldn't set 10000/0, so tried below\npg_resetxlog -l 10000,0 /var/lib/pgsql/data \n\nThis seems to be wrong because the databse is complaining and shutting down\nJun 24 15:02:05 murphy postgres[28061]: [6-1] LOG: checkpoint record is at 2710/1000050\nJun 24 15:02:05 murphy postgres[28061]: [7-1] LOG: redo record is at 2710/1000050; undo record is at 0/0; shutdown TRUE\nJun 24 15:02:05 murphy postgres[28061]: [8-1] LOG: next transaction ID: 15000010; next OID: 750000000\nJun 24 15:02:05 murphy postgres[28061]: [9-1] LOG: database system was not properly shut down; automatic recovery in progress\nJun 24 15:02:05 murphy postgres[28062]: [5-1] FATAL: the database system is starting up\nJun 24 15:02:05 murphy postgres[28063]: [5-1] FATAL: the database system is starting up\nJun 24 15:02:05 murphy postgres[28061]: [10-1] LOG: redo starts at 2710/1000090\nJun 24 15:02:05 murphy postgres[28061]: [11-1] PANIC: could not access status of transaction 15000030\nJun 24 15:02:05 murphy postgres[28061]: [11-2] DETAIL: could not read from file \"/var/lib/pgsql/data/pg_clog/000E\" at offset 73728: Success\nJun 24 15:02:05 murphy postgres[24771]: [5-1] LOG: startup process (PID 28061) was terminated by signal 6\nJun 24 15:02:05 murphy postgres[24771]: [6-1] LOG: aborting startup due to startup process failure\nJun 24 15:50:51 murphy sshd(pam_unix)[690]: session opened for user root by (uid=0)\nJun 24 15:54:47 murphy su(pam_unix)[1541]: session opened for user postgres by root(uid=0)\nJun 24 16:03:47 murphy su(pam_unix)[2911]: session opened for user postgres by root(uid=0)\nJun 24 16:03:48 murphy su(pam_unix)[2911]: session closed for user postgres\nJun 24 16:03:48 murphy postgres[3182]: [1-1] LOG: could not create IPv6 socket: Address family not supported by protocol\nJun 24 16:03:48 murphy postgres[3188]: [2-1] LOG: database system was interrupted while in recovery at 2004-06-24 15:02:05 GMT\nJun 24 16:03:48 murphy postgres[3188]: [2-2] HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.\nJun 24 16:03:48 murphy postgres[3188]: [3-1] LOG: checkpoint record is at 2710/1000050\nJun 24 16:03:48 murphy postgres[3188]: [4-1] LOG: redo record is at 2710/1000050; undo record is at 0/0; shutdown TRUE\nJun 24 16:03:48 murphy postgres[3188]: [5-1] LOG: next transaction ID: 15000010; next OID: 750000000\nJun 24 16:03:48 murphy postgres[3188]: [6-1] LOG: database system was not properly shut down; automatic recovery in progress\nJun 24 16:03:48 murphy postgres[3188]: [7-1] LOG: redo starts at 2710/1000090\nJun 24 16:03:48 murphy postgres[3188]: [8-1] PANIC: could not access status of transaction 15000030\nJun 24 16:03:48 murphy postgres[3188]: [8-2] DETAIL: could not read from file \"/var/lib/pgsql/data/pg_clog/000E\" at offset 73728: Success\nJun 24 16:03:48 murphy postgres[3182]: [2-1] LOG: startup process (PID 3188) was terminated by signal 6\nJun 24 16:03:48 murphy postgres[3182]: [3-1] LOG: aborting startup due to startup process failure\n\nHow do I set the xlog properly, or rather to 
10000/0?\nDan.\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Wednesday, June 23, 2004 11:41 PM\nTo: Shea,Dan [CIS]\nCc: [email protected]\nSubject: Re: [PERFORM] after using pg_resetxlog, db lost \n\n\n\"Shea,Dan [CIS]\" <[email protected]> writes:\n> Tom I see you from past emails that you reference using -i -f with\n> pg_filedump. I have tried this, but do not know what I am looking at.\n\nWhat you want to look at is valid XMIN and XMAX values. In this\nexample:\n\n> Item 1 -- Length: 196 Offset: 4292 (0x10c4) Flags: USED\n> XID: min (2) CMIN|XMAX: 211 CMAX|XVAC: 469\n> Block Id: 0 linp Index: 1 Attributes: 24 Size: 28\n> infomask: 0x2912 (HASVARWIDTH|HASOID|XMIN_COMMITTED|XMAX_INVALID|UPDATED)\n\nthe infomask shows XMIN_COMMITTED, so xmin (here 2) is valid, but it also\nshows XMAX_INVALID, so the putative XMAX (211) should be ignored.\n\nIn general the xmin field should be valid, but xmax shares storage with\ncmin and so you have to look at the infomask bits to know whether to\nbelieve that the cmin/xmax field represents a transaction ID.\n\nThe cmax/xvac field could also hold a transaction ID. If I had only\nthe above data to go on, I'd guess that the current transaction counter\nis at least 469.\n\nUnder normal circumstances, command counter values (cmin or cmax) are\nunlikely to exceed a few hundred, while the transaction IDs you are\nlooking for are likely to be much larger. So you could get away with\njust computing the max of *all* the numbers you see in xmin, cmin/xmax,\nor cmax/cvac, and then using something a million or so bigger for safety\nfactor.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jun 2004 12:14:54 -0400",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: after using pg_resetxlog, db lost "
},
{
"msg_contents": "\"Shea,Dan [CIS]\" <[email protected]> writes:\n> I determined the largest was 12,293,162 and set it to \n> pg_resetxlog -x 15000000 /var/lib/pgsql/data\n\nOkay, but it looks like you will also need to adjust pg_clog to cover\nthat transaction ID range. (I had thought pg_resetxlog would handle\nthis for you, but it looks like not.)\n\n> Jun 24 15:02:05 murphy postgres[28061]: [11-1] PANIC: could not access status of transaction 15000030\n> Jun 24 15:02:05 murphy postgres[28061]: [11-2] DETAIL: could not read from file \"/var/lib/pgsql/data/pg_clog/000E\" at offset 73728: Success\n\nYou need to append zeroes (8K at a time) to\n/var/lib/pgsql/data/pg_clog/000E until it's longer than 73728 bytes.\nI'd use something like \n\tdd bs=8k count=1 </dev/zero >>/var/lib/pgsql/data/pg_clog/000E\nassuming that your system has /dev/zero.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jun 2004 12:33:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after using pg_resetxlog, db lost "
}
] |
[
{
"msg_contents": "Tom, thank you for your help.\nI increased 000E to 81920 and the databse is working now.\nWe are using RHAS 3.0 and it does have /dev/zero.\n\nDan.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Thursday, June 24, 2004 12:34 PM\nTo: Shea,Dan [CIS]\nCc: [email protected]\nSubject: Re: [PERFORM] after using pg_resetxlog, db lost \n\n\n\"Shea,Dan [CIS]\" <[email protected]> writes:\n> I determined the largest was 12,293,162 and set it to \n> pg_resetxlog -x 15000000 /var/lib/pgsql/data\n\nOkay, but it looks like you will also need to adjust pg_clog to cover\nthat transaction ID range. (I had thought pg_resetxlog would handle\nthis for you, but it looks like not.)\n\n> Jun 24 15:02:05 murphy postgres[28061]: [11-1] PANIC: could not access status of transaction 15000030\n> Jun 24 15:02:05 murphy postgres[28061]: [11-2] DETAIL: could not read from file \"/var/lib/pgsql/data/pg_clog/000E\" at offset 73728: Success\n\nYou need to append zeroes (8K at a time) to\n/var/lib/pgsql/data/pg_clog/000E until it's longer than 73728 bytes.\nI'd use something like \n\tdd bs=8k count=1 </dev/zero >>/var/lib/pgsql/data/pg_clog/000E\nassuming that your system has /dev/zero.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 24 Jun 2004 13:17:13 -0400",
"msg_from": "\"Shea,Dan [CIS]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: after using pg_resetxlog, db lost "
}
] |
[
{
"msg_contents": "I'm trying to make a (qua-technical, qua-business) case for switching from\nMS SQL, and one of the types of query that really doesn't sit well with MS\nSQL2K is:\n\n-- All fields integers or equivalent.\n-- Table T(k, x: nonkey fields...)\n-- Table U(k, a, z: m) -- for each value of (k) a set of non-intersecting\nranges [a,z) that map to (m) values.\n\n select T.*, U.m from T join U on T.k=U.k and T.x >= U.a and T.x < U.z\n\nTypically there are are about 1000-2000 U rows per value of (k), about 100K\nvalues of (k) and about 50M\nvalues of T.\n\nBy itself, this type of query grinds the CPU to dust. A clustered index on\nfields of U (take your pick) barely halves the problem of the loop through\n1000-2000 rows of U for each row of T. Hash join likewise.\nThe current workaround is a 'manual' radix index on top of the range table,\nbut it's something of a hack.\n\nWould the geometric of extensions handle such queries efficiently? I'm not\nfamiliar with applying R-trees to linear range problems.\n----\n\"Dreams come true, not free.\" -- S.Sondheim\n\n\n",
"msg_date": "Thu, 24 Jun 2004 21:24:58 GMT",
"msg_from": "\"Mischa Sandberg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Range query optimization"
}
] |
[
{
"msg_contents": "Hi all,\n\nI was running Postgres 7.3 and it was running at about 15% with my\napplication. On Postgres 7.4 on another box, it was running at 100%...\nMy settings are default on both boxes I think.\n\nThere are only about 20 inserts per second, which is really low. \nAnyone have any ideas as to something I have to do to Postgres 7.4 to\nchange it from the default so that it's not eating up all my CPU? I\nhave no clue how to debug this...\n\nHelp please!!!! Should I downgrade to 7.3 to see what happens? BTW\nI'm running Postgres 7.3.2 on:\n\nLinux box 2.4.25-040218 #1 SMP Wed Feb 18 17:59:29 CET 2004 i686 i686\ni386 GNU/Linux\n\non a single processor P4 1.4GHz, 512 MB RAM. Does the SMP kernel do\nsomething with the single processor CPU? or should this not affect\npsql?\n\nThanks in advance!!!\nChris\n",
"msg_date": "Thu, 24 Jun 2004 23:59:32 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres 7.4 at 100%"
},
{
"msg_contents": "Chris Cheston wrote:\n> Hi all,\n> \n> I was running Postgres 7.3 and it was running at about 15% with my\n> application. On Postgres 7.4 on another box, it was running at 100%...\n\nPeople are going to need more information. Are you talking about \nCPU/disk IO/memory?\n\n> My settings are default on both boxes I think.\n\nDoubtful - PG crawls with the default settings. Check your old \npostgresql.conf file and compare. Also, read the tuning article at:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n\n> There are only about 20 inserts per second, which is really low. \n> Anyone have any ideas as to something I have to do to Postgres 7.4 to\n> change it from the default so that it's not eating up all my CPU? I\n> have no clue how to debug this...\n\nWhat does top/vmstat/iostat show during heavy usage?\n\n> Help please!!!! Should I downgrade to 7.3 to see what happens? BTW\n> I'm running Postgres 7.3.2 on:\n> \n> Linux box 2.4.25-040218 #1 SMP Wed Feb 18 17:59:29 CET 2004 i686 i686\n> i386 GNU/Linux\n> \n> on a single processor P4 1.4GHz, 512 MB RAM. Does the SMP kernel do\n> something with the single processor CPU? or should this not affect\n> psql?\n\nDon't know about the SMP thing. Unlikely that one of the big \ndistributions would mess that up much though.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 25 Jun 2004 10:09:51 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Hi Richard,\nThanks so much for replying. Pls see below.\nThanks in advance for any advice,\nChris\n\n> People are going to need more information. Are you talking about\n> CPU/disk IO/memory?\n\n## CPU is at 100%.\n\n> \n> > My settings are default on both boxes I think.\n> \n> Doubtful - PG crawls with the default settings. Check your old\n> postgresql.conf file and compare. Also, read the tuning article at:\n> http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\n> \n> > There are only about 20 inserts per second, which is really low.\n> > Anyone have any ideas as to something I have to do to Postgres 7.4 to\n> > change it from the default so that it's not eating up all my CPU? I\n> > have no clue how to debug this...\n> \n> What does top/vmstat/iostat show during heavy usage?\n> \nTOP:\n\n137 processes: 135 sleeping, 2 running, 0 zombie, 0 stopped\nCPU states: 81.4% user 17.9% system 0.0% nice 0.0% iowait 0.5% idle\nMem: 507036k av, 500244k used, 6792k free, 0k shrd, 68024k buff\n 133072k active, 277368k inactive\nSwap: 787176k av, 98924k used, 688252k free 232500k cached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n20734 postgres 15 0 4028 4028 3244 R 83.6 0.7 348:03 0 postmaster\n21249 numnet 9 0 78060 76M 8440 S 7.5 15.3 32:51 0 myapp\n18478 user 12 0 1224 1224 884 S 5.7 0.2 57:01 0 top\n\n[root@live root]# vmstat\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 98924 5980 68024 233528 4 3 6 22 28 10 13 6 24\n\niostat:\nTime: 03:18:18 AM \nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\ndev3-0 11.00 0.80 142.40 4 712\n\nTime: 03:18:23 AM \nDevice: tps kB_read/s kB_wrtn/s kB_read kB_wrtn\ndev3-0 10.60 0.00 143.20 0 716\n\nor blocks:\n\nTime: 03:20:58 AM \nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\ndev3-0 25.40 3.20 756.80 16 3784\n\nTime: 03:21:03 AM \nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\ndev3-0 30.20 3.20 841.60 16 4208\n\nextended:\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\n/dev/hda3 0.00 79.20 0.60 30.60 4.80 878.40 2.40 439.20\n 28.31 0.36 1.15 0.45 1.40\n\navg-cpu: %user %nice %sys %idle\n 31.00 0.00 18.20 50.80\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\n/dev/hda3 0.00 45.80 0.00 10.80 0.00 452.80 0.00 226.40\n 41.93 0.08 0.74 0.37 0.40\n\navg-cpu: %user %nice %sys %idle\n 83.20 0.00 16.60 0.20\n\nDevice: rrqm/s wrqm/s r/s w/s rsec/s wsec/s rkB/s wkB/s\navgrq-sz avgqu-sz await svctm %util\n/dev/hda 0.00 28.20 0.00 10.10 0.00 315.20 0.00 157.60\n 31.21 4294917.33 0.30 99.01 100.00\n/dev/hda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00 0.00\n/dev/hda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n 0.00 0.00 0.00 0.00 0.00\n/dev/hda3 0.00 28.20 0.00 10.10 0.00 315.20 0.00 157.60\n 31.21 0.03 0.30 0.20 0.20\n\nMy conf file:\n\n[root@live setup]# cat /var/lib/pgsql/data/postgresql.conf\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 
'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have \n# to SIGHUP the postmaster for the changes to take effect, or use \n# \"pg_ctl reload\".\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\ntcpip_socket = true\nmax_connections = 20\n # note: increasing max_connections costs about 500 bytes of shared\n # memory per connection slot, in addition to costs from shared_buffers\n # and max_locks_per_transaction.\n#superuser_reserved_connections = 2\n#port = 5432\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#virtual_host = '' # what interface to listen on; defaults to any\n#rendezvous_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = false\n#password_encryption = true\n#krb_server_keyfile = ''\n#db_user_namespace = false\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 40 # min 16, at least max_connections*2, 8KB each\n#sort_mem = 1024 # min 64, size in KB\n#vacuum_mem = 8192 # min 1024, size in KB\n\n# - Free Space Map -\n\n#max_fsm_pages = 20000 # min max_fsm_relations*16, 6 bytes each\n#max_fsm_relations = 1000 # min 100, ~50 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = true # turns forced synchronization on or off\n#wal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n#wal_buffers = 8 # min 4, 8KB each\n\n# - Checkpoints -\n\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5 # range 1-1000\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Enabling -\n\n#enable_hashagg = true\n#enable_hashjoin = true\n#enable_indexscan = true\n#enable_mergejoin = true\n#enable_nestloop = true\n#enable_seqscan = true\n#enable_sort = true\n#enable_tidscan = true\n\n# - Planner Cost Constants -\n\n#effective_cache_size = 1000 # typically 8KB each\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = true\n#geqo_threshold = 11\n#geqo_effort = 1\n#geqo_generations = 0\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#from_collapse_limit = 
8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Syslog -\n\nsyslog = 0 # range 0-2; 0=stdout; 1=both; 2=syslog\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\n\n# - When to Log -\n\nclient_min_messages = warning # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nlog_min_messages = log # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n\nlog_error_verbosity = default # terse, default, or verbose messages\n\nlog_min_error_statement = warning # Values in order of increasing severity:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, panic(off)\n \n#log_min_duration_statement = -1 # Log all statements whose\n # execution time exceeds the value, in\n # milliseconds. Zero prints all queries.\n # Minus-one disables.\n\n#silent_mode = false # DO NOT USE without Syslog!\n\n# - What to Log -\n\n#debug_print_parse = false\n#debug_print_rewritten = false\n#debug_print_plan = false\n#debug_pretty_print = false\n#log_connections = false\n#log_duration = false\n#log_pid = false\n#log_statement = false\n#log_timestamp = false\n#log_hostname = false\n#log_source_port = false\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = false\n#log_planner_stats = false\n#log_executor_stats = false\n#log_statement_stats = false\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = true\n#stats_command_string = false\n#stats_block_level = false\n#stats_row_level = false\n#stats_reset_on_server_start = true\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#check_function_bodies = true\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = false\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ environment setting\n#australian_timezones = false\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database encoding\n\n# These settings are initialized by initdb -- they may be changed\nlc_messages = 'en_US.UTF-8' # locale for system error\nmessage strings\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = true\n#dynamic_library_path = '$libdir'\n#max_expr_depth = 10000 # min 10\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10, ~260*max_connections bytes 
each\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = true\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = true\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = false\n",
"msg_date": "Sat, 26 Jun 2004 01:08:57 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Chris Cheston <[email protected]> writes:\n\n> shared_buffers = 40 # min 16, at least max_connections*2, 8KB each\n\nThis is ridiculously low for any kind of production server. Try\nsomething like 5000-10000 for a start.\n\n-Doug\n\n",
"msg_date": "Sat, 26 Jun 2004 09:32:08 -0400",
"msg_from": "Doug McNaught <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Hello,\n\nNot to mention upping your effective_cache.\n\nDoug McNaught wrote:\n\n>Chris Cheston <[email protected]> writes:\n>\n> \n>\n>>shared_buffers = 40 # min 16, at least max_connections*2, 8KB each\n>> \n>>\n>\n>This is ridiculously low for any kind of production server. Try\n>something like 5000-10000 for a start.\n>\n>-Doug\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 9: the planner will ignore your desire to choose an index scan if your\n> joining column's datatypes do not match\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n\n\n\n\n\n\n\n\nHello,\n\nNot to mention upping your effective_cache.\n\nDoug McNaught wrote:\n\nChris Cheston <[email protected]> writes:\n\n \n\nshared_buffers = 40 # min 16, at least max_connections*2, 8KB each\n \n\n\nThis is ridiculously low for any kind of production server. Try\nsomething like 5000-10000 for a start.\n\n-Doug\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n \n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL",
"msg_date": "Sat, 26 Jun 2004 07:11:49 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Hi all,\n\nI upped effective_cache to 16000 KB and I could only up the\nshared_buffers to 3000. Anything more and postgres would not start.\n\nPostmaster is still using lots of CPU. pg_stat_activity shows only\nquery is happening at a time so the requests are probably queueing on\nthis one thread. Is this the right way to go?\n\nAny other suggestions for me to figure out why Postmaster is using so much CPU?\n\nThanks in advance,\nChris\n\nnumnet=# select * from pg_stat_activity;\n datid | datname | procpid | usesysid | usename | \n \n current_query \n \n | query_start\n-------+---------+---------+----------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------------------------\n 17144 | numnet | 26120 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.02042-04\n 17144 | numnet | 26121 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.026025-04\n 17144 | numnet | 26122 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.030917-04\n 17144 | numnet | 26123 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.036266-04\n 17144 | numnet | 26124 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.041551-04\n 17144 | numnet | 26125 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.046449-04\n 17144 | numnet | 26126 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.051666-04\n 17144 | numnet | 26127 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.057398-04\n 17144 | numnet | 26128 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:01:24.06225-04\n 17144 | numnet | 26129 | 103 | numnet | SELECT id,name,number,\nsystemid,pin,last,to,from,lastest,start,end,continue,type,status,duration\nFROM logs WHERE (((from= 'me') and (to= 'you')) and (serverip=\n'23.6.6.33\n 17144 | numnet | 26147 | 103 | numnet | <IDLE> \n \n \n \n | 2004-06-26 18:03:46.175789-04\n(11 rows)\n\n\n\n\n----- Original Message -----\nFrom: Joshua D. Drake <[email protected]>\nDate: Sat, 26 Jun 2004 07:11:49 -0700\nSubject: Re: [PERFORM] postgres 7.4 at 100%\nTo: Doug McNaught <[email protected]>\nCc: [email protected]\n\n\nHello,\n\nNot to mention upping your effective_cache.\n\n\nDoug McNaught wrote:\n\nChris Cheston <[email protected]> writes:\n\n \nshared_buffers = 40 # min 16, at least max_connections*2, 8KB each\n This is ridiculously low for any kind of production server. Try\nsomething like 5000-10000 for a start.\n\n-Doug\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n \n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL\n",
"msg_date": "Sat, 26 Jun 2004 16:03:19 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "> I upped effective_cache to 16000 KB and I could only up the\n> shared_buffers to 3000. Anything more and postgres would not start.\n\nYou need to greatly incrase the shared memory max setting on your \nmachine so that you can use at the very least, 10000 shared buffers.\n\nChris\n\n",
"msg_date": "Sun, 27 Jun 2004 13:33:25 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:\n> > I upped effective_cache to 16000 KB and I could only up the\n> > shared_buffers to 3000. Anything more and postgres would not start.\n> \n> You need to greatly incrase the shared memory max setting on your \n> machine so that you can use at the very least, 10000 shared buffers.\n\n\nDoug said the same, yet the PG Tuning article recommends not make this\ntoo large as it is just temporary used by the query queue or so. (I\nguess the system would benefit using more memory for file system cache)\n\nSo who is correct? The tuning article or big-honking-shared-mem\nproponents?\n\nFWIW: I have a box with 512 MB RAM and see no different between 4096 and\n32758 shared buffers...\n\nRegards,\nFrank",
"msg_date": "Sun, 27 Jun 2004 23:46:58 -0500",
"msg_from": "Frank Knobbe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Frank Knobbe <[email protected]> writes:\n> On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:\n>>> I upped effective_cache to 16000 KB and I could only up the\n>>> shared_buffers to 3000. Anything more and postgres would not start.\n\n>> You need to greatly incrase the shared memory max setting on your\n>> machine so that you can use at the very least, 10000 shared buffers.\n\n> Doug said the same, yet the PG Tuning article recommends not make this\n> too large as it is just temporary used by the query queue or so.\n\nThe original report was that the guy had it set to 40 (!?), which is\nclearly far below the minimum reasonable value. But I'd not expect a\nhuge difference between 3000 and 10000 --- in my experience, 1000 is\nenough to get you over the \"knee\" of the performance curve and into\nthe domain of marginal improvements.\n\nSo while he surely should not go back to 40, it seems there's another\nfactor involved here that we've not recognized yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 2004 01:19:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100% "
},
{
"msg_contents": "Tom,\n\n> So while he surely should not go back to 40, it seems there's another\n> factor involved here that we've not recognized yet.\n\nI'd agree. Actually, the first thing I'd do, were it my machine, is reboot it \nand run memtest86 overnight. CPU thrashing like that may indicate bad RAM.\n\nIf the RAM checks out, I'd like to see the EXPLAIN ANALYZE for some of the \nlongest-running queries, and for those INSERTS.\n\nAlso, is the new machine running Software RAID?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n",
"msg_date": "Mon, 28 Jun 2004 09:47:59 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Frank,\n\n> Doug said the same, yet the PG Tuning article recommends not make this\n> too large as it is just temporary used by the query queue or so. (I\n> guess the system would benefit using more memory for file system cache)\n\nAs one of the writers of that article, let me point out:\n\n\" -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096) \n-- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) \"\n\nWhile this is probably a little conservative, it's still way bigger than 40.\n\nI would disagree with the folks who suggest 32,000 as a setting for you. On \nLinux, that's a bit too large; I've never seen performance improvements with \nshared_buffers greater than 18% of *available* RAM.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 28 Jun 2004 12:40:02 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "On Mon, 2004-06-28 at 14:40, Josh Berkus wrote:\n> As one of the writers of that article, let me point out:\n> \n> \" -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096) \n> -- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) \"\n> \n> While this is probably a little conservative, it's still way bigger than 40.\n\nI agree that 40 is a bit weak :) Chris' system has only 512 MB of RAM\nthough. I thought the quick response \"..for any kind of production\nserver, try 5000-10000...\" -- without considering how much memory he has\n-- was a bit... uhm... eager.\n\nBesides, if the shared memory is used to queue client requests,\nshouldn't that memory be sized according to workload (i.e. amount of\nclients, transactions per second, etc) instead of just taking a\npercentage of the total amount of memory? If there only a few\nconnections, why waste shared memory on that when the memory could be\nbetter used as file system cache to prevent PG from going to the disk so\noften? \n\nI understand tuning PG is almost an art form, yet it should be based on\nactual usage patterns, not just by system dimensions, don't you agree?\n\nRegards,\nFrank",
"msg_date": "Mon, 28 Jun 2004 16:46:55 -0500",
"msg_from": "Frank Knobbe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Frank,\n\n> I understand tuning PG is almost an art form, yet it should be based on\n> actual usage patterns, not just by system dimensions, don't you agree?\n \nWell, it's both. It's more that available RAM determines your *upper* limit; \nthat is, on Linux, you don't really want to have more than 20% allocated to \nthe shared_buffers or you'll be taking memory away from the kernel. \n\nWithin that limit, data size, query complexity and volume, and whether or not \nyou have long-running procedures tell you whether you're at the low end or \nthe high end.\n\nTo futher complicate things, these calculations are all going to change with \n7.5.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n",
"msg_date": "Mon, 28 Jun 2004 15:47:38 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Wow, this simple query is taking 676.24 ms to execute! it only takes\n18 ms on our other machine.\n\nThis table has 150,000 rows. Is this normal?\n\nno, the machine is not running software RAID. Anyone have any ideas\nnext as to what I should do to debug this? I'm really wondering if the\nLinux OS running SMP is the cause.\n\nThanks,\nChris\n\nlive=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\ntime=0.30..574.72 rows=143485 loops=1)\n Filter: (from = 'you'::character varying)\n Total runtime: 676.24 msec\n(3 rows)\n\nexplain analyze for inserts is fast too.\n\n\nOn Mon, 28 Jun 2004 09:47:59 -0700, Josh Berkus <[email protected]> wrote:\n> \n> Tom,\n> \n> > So while he surely should not go back to 40, it seems there's another\n> > factor involved here that we've not recognized yet.\n> \n> I'd agree. Actually, the first thing I'd do, were it my machine, is reboot it\n> and run memtest86 overnight. CPU thrashing like that may indicate bad RAM.\n> \n> If the RAM checks out, I'd like to see the EXPLAIN ANALYZE for some of the\n> longest-running queries, and for those INSERTS.\n> \n> Also, is the new machine running Software RAID?\n> \n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n",
"msg_date": "Tue, 29 Jun 2004 01:00:10 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "> live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------\n> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> time=0.30..574.72 rows=143485 loops=1)\n> Filter: (from = 'you'::character varying)\n> Total runtime: 676.24 msec\n> (3 rows)\n\nHave you got an index on calllogs(from)?\n\nHave you vacuumed and analyzed that table recently?\n\nChris\n\n",
"msg_date": "Tue, 29 Jun 2004 16:21:01 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "ok i just vacuumed it and it's taking slightly longer now to execute\n(only about 8 ms longer, to around 701 ms).\n\nNot using indexes for calllogs(from)... should I? The values for\ncalllogs(from) are not unique (sorry if I'm misunderstanding your\npoint).\n\nThanks,\n\nChris\n\nOn Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne\n<[email protected]> wrote:\n> \n> > live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> > QUERY PLAN\n> > ----------------------------------------------------------------------------------------------------------\n> > Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> > time=0.30..574.72 rows=143485 loops=1)\n> > Filter: (from = 'you'::character varying)\n> > Total runtime: 676.24 msec\n> > (3 rows)\n> \n> Have you got an index on calllogs(from)?\n> \n> Have you vacuumed and analyzed that table recently?\n> \n> Chris\n> \n>\n",
"msg_date": "Tue, 29 Jun 2004 01:37:30 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "On Tue, Jun 29, 2004 at 01:37:30 -0700,\n Chris Cheston <[email protected]> wrote:\n> ok i just vacuumed it and it's taking slightly longer now to execute\n> (only about 8 ms longer, to around 701 ms).\n> \n> Not using indexes for calllogs(from)... should I? The values for\n> calllogs(from) are not unique (sorry if I'm misunderstanding your\n> point).\n\nIf you are hoping for some other plan than a sequential scan through\nall of the records you are going to need an index. You can have an\nindex on a column (or function) that isn't unique for all rows.\n",
"msg_date": "Tue, 29 Jun 2004 08:28:13 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Chris Cheston <[email protected]> writes:\n> Wow, this simple query is taking 676.24 ms to execute! it only takes\n> 18 ms on our other machine.\n\n> This table has 150,000 rows. Is this normal?\n\n> live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------\n> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> time=0.30..574.72 rows=143485 loops=1)\n> Filter: (from = 'you'::character varying)\n> Total runtime: 676.24 msec\n> (3 rows)\n\nSo the query is pulling 140K+ rows out of a table with 150K entries?\nNo chance that an index will help for that. You're fortunate that the\nthing did not try to use an index though, because it thinks there are\nonly 24 rows matching 'you', which is one of the more spectacular\nstatistical failures I've seen lately. I take it you haven't ANALYZEd\nthis table in a long time?\n\nIt is hard to believe that your other machine can pull 140K+ rows in\n18 msec, though. Are you sure the table contents are the same in both\ncases?\n\nIf they are, the only reason I can think of for the discrepancy is a\nlarge amount of dead space in this copy of the table. What does VACUUM\nVERBOSE show for it, and how does that compare to what you see on the\nother machine? Try a CLUSTER or VACUUM FULL to see if you can shrink\nthe table's physical size (number of pages).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 29 Jun 2004 09:50:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100% "
},
{
"msg_contents": "Is the from field nullable? If not, try \"create index calllogs_from on \ncalllogs ( from );\" and then do an explain analyze of your query.\n\nGavin\n\n\nChris Cheston wrote:\n\n>ok i just vacuumed it and it's taking slightly longer now to execute\n>(only about 8 ms longer, to around 701 ms).\n>\n>Not using indexes for calllogs(from)... should I? The values for\n>calllogs(from) are not unique (sorry if I'm misunderstanding your\n>point).\n>\n>Thanks,\n>\n>Chris\n>\n>On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne\n><[email protected]> wrote:\n> \n>\n>>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n>>> QUERY PLAN\n>>>----------------------------------------------------------------------------------------------------------\n>>> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n>>>time=0.30..574.72 rows=143485 loops=1)\n>>> Filter: (from = 'you'::character varying)\n>>> Total runtime: 676.24 msec\n>>>(3 rows)\n>>> \n>>>\n>>Have you got an index on calllogs(from)?\n>>\n>>Have you vacuumed and analyzed that table recently?\n>>\n>>Chris\n>>\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n",
"msg_date": "Tue, 29 Jun 2004 09:03:24 -0700",
"msg_from": "\"Gavin M. Roy\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "Oh my, creating an index has absolutely reduced the times it takes to\nquery from around 700 ms to less than 1 ms!\n\nThanks so much for all your help. You've saved me!\n\nOne question:\n\nWhy would I or would I not create multiple indexes in a table? I\ncreated another index in the same table an it's improved performance\neven more.\n\nThanks,\nChris\n\nOn Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy <[email protected]> wrote:\n> \n> Is the from field nullable? If not, try \"create index calllogs_from on\n> calllogs ( from );\" and then do an explain analyze of your query.\n> \n> Gavin\n> \n> \n> \n> Chris Cheston wrote:\n> \n> >ok i just vacuumed it and it's taking slightly longer now to execute\n> >(only about 8 ms longer, to around 701 ms).\n> >\n> >Not using indexes for calllogs(from)... should I? The values for\n> >calllogs(from) are not unique (sorry if I'm misunderstanding your\n> >point).\n> >\n> >Thanks,\n> >\n> >Chris\n> >\n> >On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne\n> ><[email protected]> wrote:\n> >\n> >\n> >>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';\n> >>> QUERY PLAN\n> >>>----------------------------------------------------------------------------------------------------------\n> >>> Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual\n> >>>time=0.30..574.72 rows=143485 loops=1)\n> >>> Filter: (from = 'you'::character varying)\n> >>> Total runtime: 676.24 msec\n> >>>(3 rows)\n> >>>\n> >>>\n> >>Have you got an index on calllogs(from)?\n> >>\n> >>Have you vacuumed and analyzed that table recently?\n> >>\n> >>Chris\n> >>\n> >>\n> >>\n> >>\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to [email protected])\n> >\n> >\n> \n>\n",
"msg_date": "Wed, 30 Jun 2004 00:19:04 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "\n> Why would I or would I not create multiple indexes in a table? I\n> created another index in the same table an it's improved performance\n> even more.\n\nYou create indexes when you need indexes. Indexes are most helpful when \nthey match the WHERE clause of your selects.\n\nSo, if you commonly do one query that selects on one column, and another \nquery that selects on two other columns - then create one index on the \nfirst column and another index over the second two columns.\n\nChris\n",
"msg_date": "Wed, 30 Jun 2004 15:30:52 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "I see - thanks very much. I created an index for column 'oid' which I\nwas using in a WHERE. So rule of thumb- create an index for column(s)\nwhich I use in WHERE queries.\n\nThanks,\nChis\n\nOn Wed, 30 Jun 2004 15:30:52 +0800, Christopher Kings-Lynne\n<[email protected]> wrote:\n> \n> \n> > Why would I or would I not create multiple indexes in a table? I\n> > created another index in the same table an it's improved performance\n> > even more.\n> \n> You create indexes when you need indexes. Indexes are most helpful when\n> they match the WHERE clause of your selects.\n> \n> So, if you commonly do one query that selects on one column, and another\n> query that selects on two other columns - then create one index on the\n> first column and another index over the second two columns.\n> \n> Chris\n>\n",
"msg_date": "Wed, 30 Jun 2004 00:34:52 -0700",
"msg_from": "Chris Cheston <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 7.4 at 100%"
},
{
"msg_contents": "> I see - thanks very much. I created an index for column 'oid' which I\n> was using in a WHERE. So rule of thumb- create an index for column(s)\n> which I use in WHERE queries.\n\nSo to speak. They can also sometimes assist in sorting. The OID column \nis special. I suggest adding a unique index to that column. In \npostgresql it is _possible_ for the oid counter to wraparound, hence if \nyou rely on oids (not necessarily a good idea), it's best to put a \nunique index on the oid column.\n\nI _strongly_ suggest that you read this:\n\nhttp://www.postgresql.org/docs/7.4/static/indexes.html\n\nChris\n\n",
"msg_date": "Wed, 30 Jun 2004 18:02:50 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 7.4 at 100%"
}
] |
[
{
"msg_contents": "\nHi!\n\nI'd like to know if there is a way to see what queries are running\nwithin a certain postgres instance and how much resources (cpu/memory)\netc. they are using. Right now it's impossible to see what is happening\nwithin postgres when it's binaries are using 100% CPU.\n\nIn Sybase there is a command which let's you view what 'processes' are\nrunning within the server and how much cpu (according to Sybase) they\nare using. It also provides you with a stored procedure to kill off some\nbad behaving queries. How can one do this within postgres?\n\nThanks in advance!\n\nBest regards,\n\n\tPascal\n\n",
"msg_date": "Fri, 25 Jun 2004 21:37:49 +0200",
"msg_from": "\"P.A.M. van Dam\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can one see what queries are running withing a postgres instance?"
},
{
"msg_contents": "\n\nP.A.M. van Dam wrote:\n\n>Hi!\n>\n>I'd like to know if there is a way to see what queries are running\n>within a certain postgres instance and how much resources (cpu/memory)\n>etc. they are using. Right now it's impossible to see what is happening\n>within postgres when it's binaries are using 100% CPU.\n>\n>In Sybase there is a command which let's you view what 'processes' are\n>running within the server and how much cpu (according to Sybase) they\n>are using. It also provides you with a stored procedure to kill off some\n>bad behaving queries. How can one do this within postgres?\n>\n>Thanks in advance!\n>\n>Best regards,\n>\n>\tPascal\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\nselect * from pg_stat_activity. If you want to see the command that was \nrun, you will need to turn on stats_command_string = true in \npostgresql.conf and re-start server. PID shows up, so you can kill bad \nqueries from terminal and see CUP % in top\n\nRoger Ging\nV.P., Information Technology\nMusic Reports, Inc.\n\n",
"msg_date": "Fri, 25 Jun 2004 12:58:49 -0700",
"msg_from": "Roger Ging <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a"
},
{
"msg_contents": "Let see in contrib/ the application pg_who ... you will see the process, the \nqueries, and the CPU ... ;o)\n\nRegards,\n\nLe vendredi 25 Juin 2004 21:37, P.A.M. van Dam a écrit :\n> Hi!\n>\n> I'd like to know if there is a way to see what queries are running\n> within a certain postgres instance and how much resources (cpu/memory)\n> etc. they are using. Right now it's impossible to see what is happening\n> within postgres when it's binaries are using 100% CPU.\n>\n> In Sybase there is a command which let's you view what 'processes' are\n> running within the server and how much cpu (according to Sybase) they\n> are using. It also provides you with a stored procedure to kill off some\n> bad behaving queries. How can one do this within postgres?\n>\n> Thanks in advance!\n>\n> Best regards,\n>\n> \tPascal\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \nBill Footcow\n\n",
"msg_date": "Fri, 25 Jun 2004 22:51:09 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a postgres\n\tinstance?"
},
{
"msg_contents": "Sorry It's not in the contrib folder of PostgreSQL ... but you will find it on \ngborg.postgresql.org !\n\nRegards,\n\nLe vendredi 25 Juin 2004 22:51, Hervé Piedvache a écrit :\n> Let see in contrib/ the application pg_who ... you will see the process,\n> the queries, and the CPU ... ;o)\n>\n> Regards,\n>\n> Le vendredi 25 Juin 2004 21:37, P.A.M. van Dam a écrit :\n> > Hi!\n> >\n> > I'd like to know if there is a way to see what queries are running\n> > within a certain postgres instance and how much resources (cpu/memory)\n> > etc. they are using. Right now it's impossible to see what is happening\n> > within postgres when it's binaries are using 100% CPU.\n> >\n> > In Sybase there is a command which let's you view what 'processes' are\n> > running within the server and how much cpu (according to Sybase) they\n> > are using. It also provides you with a stored procedure to kill off some\n> > bad behaving queries. How can one do this within postgres?\n> >\n> > Thanks in advance!\n> >\n> > Best regards,\n> >\n> > \tPascal\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n\n-- \nBill Footcow\n\n",
"msg_date": "Sat, 26 Jun 2004 09:27:12 +0200",
"msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a postgres\n\tinstance?"
},
{
"msg_contents": "\n>>Let see in contrib/ the application pg_who ... you will see the process,\n>>the queries, and the CPU ... ;o)\n\nEven easier:\n\nSELECT * FROM pg_stat_activity;\n\nAs a superuser.\n\nChris\n",
"msg_date": "Sat, 26 Jun 2004 16:58:16 +0800",
"msg_from": "Christopher Kings-Lynne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a"
},
{
"msg_contents": "Christopher Kings-Lynne <[email protected]> writes:\n> Even easier:\n> SELECT * FROM pg_stat_activity;\n\nBut note you must enable stats_command_string to make this very useful.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 26 Jun 2004 10:45:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a "
},
{
"msg_contents": "On Sat, Jun 26, 2004 at 04:58:16PM +0800, Christopher Kings-Lynne wrote:\n> \n> >>Let see in contrib/ the application pg_who ... you will see the process,\n> >>the queries, and the CPU ... ;o)\n> \n> Even easier:\n> \n> SELECT * FROM pg_stat_activity;\n> \n> As a superuser.\n\nThanks!\n\nThat works as needed!\n\nBest regards,\n\n\tPascal\n\n> \n> Chris\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faqs/FAQ.html\n",
"msg_date": "Sun, 27 Jun 2004 19:25:20 +0200",
"msg_from": "\"P.A.M. van Dam \" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can one see what queries are running withing a"
}
] |
[
{
"msg_contents": "Hi,\n\nI have one performance issue... and realy have no idea what's going on...\nWhen I set enable_seqscan to 0, query2 runs the same way...\n\nupload => 60667 entities\nuploadfield => 506316 entities\n\nQuery1:\nselect count(*) from Upload NATURAL JOIN UploadField Where Upload.ShopID \n= 123123;\n\n181.944 ms\n\nQuery2:\nselect count(*) from Upload NATURAL JOIN UploadField Where \nUpload.UploadID = 123123;\n\n1136.024 ms\n\nGreetings,\nJim J.\n\n\n-------\nDetails:\nPostgreSQL 7.4.2 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.2 \n20030222 (Red Hat Linux 3.2.2-5)\n\n QUERY1 PLAN\n--------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1972.50..1972.50 rows=1 width=0) (actual \ntime=181.657..181.658 rows=1 loops=1)\n -> Nested Loop (cost=0.00..1972.46 rows=17 width=0) (actual \ntime=181.610..181.610 rows=0 loops=1)\n -> Seq Scan on upload (cost=0.00..1945.34 rows=2 width=8) \n(actual time=181.597..181.597 rows=0 loops=1)\n Filter: (shopid = 123123)\n -> Index Scan using relationship_3_fk on uploadfield \n(cost=0.00..13.44 rows=10 width=8) (never executed)\n Index Cond: (\"outer\".uploadid = uploadfield.uploadid)\n Total runtime: 181.944 ms\n\n QUERY2 PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=15886.74..15886.74 rows=1 width=0) (actual \ntime=1135.804..1135.806 rows=1 loops=1)\n -> Nested Loop (cost=1945.34..15886.69 rows=20 width=0) (actual \ntime=1135.765..1135.765 rows=0 loops=1)\n -> Seq Scan on uploadfield (cost=0.00..13940.95 rows=10 \nwidth=8) (actual time=1135.754..1135.754 rows=0 loops=1)\n Filter: (123123 = uploadid)\n -> Materialize (cost=1945.34..1945.36 rows=2 width=8) (never \nexecuted)\n -> Seq Scan on upload (cost=0.00..1945.34 rows=2 \nwidth=8) (never executed)\n Filter: (uploadid = 123123)\n Total runtime: 1136.024 ms\n\n\n Table \"public.upload\"\n Column | Type | Modifiers\n------------+------------------------+-----------\n uploadid | bigint | not null\n nativedb | text | not null\n shopid | bigint | not null\nIndexes:\n \"pk_upload\" primary key, btree (uploadid)\n \"nativedb\" btree (nativedb)\n \"uploadshopid\" btree (shopid)\n\n Table \"public.uploadfield\"\n Column | Type | Modifiers\n---------------+----------+-----------\n uploadfieldid | bigint | not null\n fieldnameid | smallint | not null\n uploadid | bigint | not null\n\nIndexes:\n \"pk_uploadfield\" primary key, btree (uploadfieldid)\n \"relationship_3_fk\" btree (uploadid)\n \"relationship_4_fk\" btree (fieldnameid)\nForeign-key constraints:\n \"fk_uploadfi_fieldname_fieldnam\" FOREIGN KEY (fieldnameid) \nREFERENCES fieldname(fieldnameid) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"fk_uploadfi_uploadfie_upload\" FOREIGN KEY (uploadid) REFERENCES \nupload(uploadid) ON UPDATE RESTRICT ON DELETE RESTRICT\n\n",
"msg_date": "Mon, 28 Jun 2004 02:37:33 +0200",
"msg_from": "Jim <[email protected]>",
"msg_from_op": true,
"msg_subject": "SQL stupid query plan... terrible performance !"
},
{
"msg_contents": "\nOn Jun 27, 2004, at 8:37 PM, Jim wrote:\n\n> Hi,\n>\n> I have one performance issue... and realy have no idea what's going \n> on...\n> When I set enable_seqscan to 0, query2 runs the same way...\n>\n> upload => 60667 entities\n> uploadfield => 506316 entities\n>\n\nHave you vacuum analyze'd recently?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n",
"msg_date": "Sun, 27 Jun 2004 22:48:16 -0400",
"msg_from": "Jeff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL stupid query plan... terrible performance !"
},
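A minimal sketch of Jeff's suggestion, using the table names from the schema in the first message (run as the table owner or a superuser):

    VACUUM ANALYZE upload;
    VACUUM ANALYZE uploadfield;

Refreshing the planner statistics this way is cheap and rules out stale row estimates before looking for a deeper cause; as the follow-ups show, statistics were not the culprit in this particular thread.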
{
"msg_contents": "Jim <[email protected]> writes:\n> I have one performance issue... and realy have no idea what's going on...\n\n[yawn...] Cast the constants to bigint. See previous discussions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 27 Jun 2004 23:29:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL stupid query plan... terrible performance ! "
},
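A hedged sketch of the fix Tom Lane is pointing at: in PostgreSQL 7.4 a bare integer literal compared against a bigint column keeps the planner from considering the bigint index, so the constant has to be cast explicitly (table and column names are taken from the schema posted in the first message):

    -- query2 rewritten so the literal matches the bigint column type
    SELECT count(*)
    FROM Upload NATURAL JOIN UploadField
    WHERE Upload.UploadID = 123123::bigint;

    -- equivalent standard-SQL spelling
    SELECT count(*)
    FROM Upload NATURAL JOIN UploadField
    WHERE Upload.UploadID = CAST(123123 AS bigint);

With the cast in place the planner can use pk_upload and relationship_3_fk instead of the sequential scans shown in the original EXPLAIN output.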
{
"msg_contents": "On Sun, 27 Jun 2004 23:29:46 -0400, Tom Lane <[email protected]> wrote:\n> Jim <[email protected]> writes:\n> > I have one performance issue... and realy have no idea what's going on...\n> \n> [yawn...] Cast the constants to bigint. See previous discussions.\n> \n> \t\t\tregards, tom lane\n\nWould there be any way of adding some sort of indicator to the plan as\nto why sequential was chosen?\n\neg \n Seq Scan on upload (type mismatch) (cost....)\n Seq Scan on upload (statistics) (cost....)\n Seq Scan on upload (catch-all) (cost....)\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n",
"msg_date": "Mon, 28 Jun 2004 15:29:57 +1000",
"msg_from": "Klint Gore <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL stupid query plan... terrible performance ! "
},
{
"msg_contents": "Klint Gore <[email protected]> writes:\n> On Sun, 27 Jun 2004 23:29:46 -0400, Tom Lane <[email protected]> wrote:\n>> [yawn...] Cast the constants to bigint. See previous discussions.\n\n> Would there be any way of adding some sort of indicator to the plan as\n> to why sequential was chosen?\n\nNot really ... the plan that's presented is the one that looked the\ncheapest out of the feasible plans. How are you going to identify a\nsingle reason as to why any other plan was not generated or lost out\non a cost-estimate basis? Humans might be able to do so (note that\nthe above quote is an off-the-cuff estimate, not something I'd care\nto defend rigorously) but I don't think software can do it.\n\nFWIW, the particular problem here should be fixed in 7.5.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 28 Jun 2004 01:48:43 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: SQL stupid query plan... terrible performance ! "
},
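Short of such an indicator, a rough way to probe "why not the index" by hand is the trick Jim already tried in his first message: disable the winning plan type for the session and compare what the planner does (a sketch, not an exhaustive diagnostic):

    SET enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT count(*) FROM Upload NATURAL JOIN UploadField
    WHERE Upload.UploadID = 123123;
    SET enable_seqscan = on;

If the planner still refuses the index even with sequential scans penalized, the problem is usually not statistics but something structural, such as the int-versus-bigint literal mismatch in this thread.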
{
"msg_contents": " 2004-06-28 07:48, Tom Lane wrote:\n\n>Klint Gore <[email protected]> writes:\n>> On Sun, 27 Jun 2004 23:29:46 -0400, Tom Lane <[email protected]> wrote:\n>>> [yawn...] Cast the constants to bigint. See previous discussions.\n> \n>\n[cuuuut]\n\nThanks a lot guys. The term \"Cast the constants to bigint\" It is what I \nwas looking for. I add explicitly ::data_type in my queries and \neverything works fine now.\n\nOne more thanks to Tom Lane - After your answer I found your post on the \nnewsgroup about this problem... the date of the post is 2001 year... You \nare really patience man.... :)\n\nBut I really have no idea what term I could use to force goggle to give \nme solution ;)\n\nGreetings,\nJim J.\n",
"msg_date": "Mon, 28 Jun 2004 19:44:03 +0200",
"msg_from": "Jim <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: SQL stupid query plan... terrible performance !"
}
] |