[ { "msg_contents": "We have a pretty busy linux server running postgres 8.1.4, waiting to\nupgrade until 8.3 to avoid dump/restoring twice.\n\n# cat /proc/meminfo\n total: used: free: shared: buffers: cached:\nMem: 3704217600 3592069120 112148480 0 39460864 2316271616\nSwap: 2516918272 270336 2516647936\n\n# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 15\nmodel : 4\nmodel name : Intel(R) Xeon(TM) CPU 3.00GHz\nstepping : 3\ncpu MHz : 2992.795\n\nThe postgresql.conf was basically the default so I decided to\nincrease the cache size and a couple paramaters to make more use of\nthat memory - here's what I did:\n\nshared_buffers = 16384 (was 1000)\nwork_mem = 16384 (was 1024)\nwal_buffers = 24 (was 8)\ncheckpoint_segments = 5 (was 3)\neffective_cache_size = 10000 (was 1000)\nstats_command_string = on (was off)\nstats_block_level = on (was off)\nstats_row_level = on (was off)\n\nIn order to do this I had to change /proc/sys/kernel/shmmax to\n536870912 (don't have /etc/sysctl)\n\nAlso, the entire cluster gets vacuumed analyzed nightly. \n\nAfter making these changes, the performance on the server actually\nworsened. I slowly backed off on some of the paramaters but didn't\nseem to help.\n\nWondering if those changes are silly? For a server this size I\ndidn't think this would be problematic.\n\nThank you,\n\nJosh\n\n", "msg_date": "Thu, 4 Oct 2007 10:28:04 -0500", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning Help - What did I do wrong?" }, { "msg_contents": "On 10/4/07, Josh Trutwin <[email protected]> wrote:\n> We have a pretty busy linux server running postgres 8.1.4, waiting to\n> upgrade until 8.3 to avoid dump/restoring twice.\n\nYou should immediate update your version to 8.1.whateverislatest.\nThat requires no dump / restore and it is a bug fix update. I doubt\nthis problem is because you're out of date on patches, but who\nknows...\n\n> # cat /proc/meminfo\n> total: used: free: shared: buffers: cached:\n> Mem: 3704217600 3592069120 112148480 0 39460864 2316271616\n> Swap: 2516918272 270336 2516647936\n\nWell, you've got plenty of memory, and a large chunk is being used as cache.\n\n> The postgresql.conf was basically the default so I decided to\n> increase the cache size and a couple paramaters to make more use of\n> that memory - here's what I did:\n>\n> shared_buffers = 16384 (was 1000)\n> work_mem = 16384 (was 1024)\n> wal_buffers = 24 (was 8)\n> checkpoint_segments = 5 (was 3)\n> effective_cache_size = 10000 (was 1000)\n> stats_command_string = on (was off)\n> stats_block_level = on (was off)\n> stats_row_level = on (was off)\n\nYour changes seem reasonable.\n\n> Also, the entire cluster gets vacuumed analyzed nightly.\n\nYou should look into running the autovacuum daemon. for heavily used\ndatabases nightly vacuuming may not be enough.\n\n> After making these changes, the performance on the server actually\n> worsened. I slowly backed off on some of the paramaters but didn't\n> seem to help.\n\nMost likely turning on stats collection slowed you down a bit.\n\nWe need to see examples of what's slow, including explain analyze\noutput for slow queries. Also a brief explanation of the type of load\nyour database server is seeing. I.e. is it a lot of little\ntransactions, mostly read, batch processing, lots of users, one user,\netc... 
Right now we don't have enough info to really help you.\n", "msg_date": "Thu, 4 Oct 2007 11:19:22 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Help - What did I do wrong?" }, { "msg_contents": "Oh, and in addition to my previous message, you should use tools like\nvmstat, iostat and top to get an idea of what your server is doing.\n\nWhat kind of drive subsystem do you have? What kind of raid controller? etc...\n", "msg_date": "Thu, 4 Oct 2007 11:22:17 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Help - What did I do wrong?" }, { "msg_contents": "On Thu, 4 Oct 2007 11:19:22 -0500\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> We need to see examples of what's slow, including explain analyze\n> output for slow queries. Also a brief explanation of the type of\n> load your database server is seeing. I.e. is it a lot of little\n> transactions, mostly read, batch processing, lots of users, one\n> user, etc... Right now we don't have enough info to really help\n> you.\n\nSorry, this server is for a few (100+?) websites so it's running\nalong site apache, php. All connections to postgresql (except for\nthe occaional psql console login) are done from php requests, using\nthe same user (basically there are two users, the one php uses and\npostgres). The bulk of the activity would be reads, but\ncertainly inesrts/updates/deletes would be interspersed in there.\nMost of the activity is done via auto-commits, not many long\ntransactions.\n\nFrom your followup email:\n\n> ... you should use tools like vmstat, iostat and top to get an idea\n> of what your server is doing.\n\n# vmstat\n procs memory swap io\nsystem cpu\n r b w swpd free buff cache si so bi bo in cs\nus sy id\n 3 1 0 268 68332 39016 2201436 0 0 3 3 4\n2 3 4 2\n\nsorry about the wrapping...\n\niostat is not found - will see if I can download it. top typically\nshows postmaster as the top process with 10-15% of the CPU, followed\nby apache threads.\n\n 12:01pm up 104 days, 12:05, 2 users, load average: 9.75, 9.30,\n7.70\n215 processes: 214 sleeping, 1 running, 0 zombie, 0 stopped\nCPU states: 0.1% user, 0.0% system, 0.0% nice, 0.4% idle\nMem: 3617400K av, 3552784K used, 64616K free, 0K shrd, 37456K\nbuff\nSwap: 2457928K av, 264K used, 2457664K free\n2273664K cached\n\n PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME\nCOMMAND\n31797 postgres 17 0 28836 28M 1784 S 0 8.5 0.7 10:15\npostmaster\n\n> What kind of drive subsystem do you have? What kind of raid\n> controller? etc...\n\nGathering more information on this - Raid is a software\nRAID-1. Some information: \nI believe itI believe it\n# df -h\nFilesystem Size Used Avail Use% Mounted on\n/dev/md0 66G 50G 16G 76% /\n/dev/sda1 15M 6.6M 8.5M 44% /boot\n\n# cat /proc/mdstat\nPersonalities : [raid0] [raid1]\nread_ahead 1024 sectors\nmd0 : active raid1 sdb3[0] sdc3[1]\n 70573440 blocks [2/2] [UU]\n\nunused devices: <none>\n\nThanks for your help, I'm more of a developer guy so let me know what\nelse is useful.\n\nJosh\n", "msg_date": "Thu, 4 Oct 2007 12:00:27 -0500", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning Help - What did I do wrong?" }, { "msg_contents": "On 10/4/07, Josh Trutwin <[email protected]> wrote:\n> On Thu, 4 Oct 2007 11:19:22 -0500\n> \"Scott Marlowe\" <[email protected]> wrote:\n>\n> > We need to see examples of what's slow, including explain analyze\n> > output for slow queries. 
Also a brief explanation of the type of\n> > load your database server is seeing. I.e. is it a lot of little\n> > transactions, mostly read, batch processing, lots of users, one\n> > user, etc... Right now we don't have enough info to really help\n> > you.\n>\n> Sorry, this server is for a few (100+?) websites so it's running\n> along site apache, php. All connections to postgresql (except for\n> the occaional psql console login) are done from php requests, using\n> the same user (basically there are two users, the one php uses and\n> postgres). The bulk of the activity would be reads, but\n> certainly inesrts/updates/deletes would be interspersed in there.\n> Most of the activity is done via auto-commits, not many long\n> transactions.\n\nSo, are there certain queries that are much slower than the others?\nRun them from psql with explain analyze in front of them and post the\nquery and the output here.\n\n> From your followup email:\n>\n> > ... you should use tools like vmstat, iostat and top to get an idea\n> > of what your server is doing.\n>\n> # vmstat\n> procs memory swap io\n> system cpu\n> r b w swpd free buff cache si so bi bo in cs\n> us sy id\n> 3 1 0 268 68332 39016 2201436 0 0 3 3 4\n> 2 3 4 2\n\nvmstat needs to be run for a while to give you useful numbers. try:\n\nvmstat 5\n\nand let it run for a few minutes. The first line won't count so much,\nbut after that you'll get more reasonable numbers.\n\n> iostat is not found - will see if I can download it. top typically\n> shows postmaster as the top process with 10-15% of the CPU, followed\n> by apache threads.\n\nWhat OS are you on?\n\n> 12:01pm up 104 days, 12:05, 2 users, load average: 9.75, 9.30,\n> 7.70\n\nThat's pretty heavy load. I notice there's no wait % listed for CPU,\nso I assume it's not a late model Linux kernel or anything.\n\n> 215 processes: 214 sleeping, 1 running, 0 zombie, 0 stopped\n> CPU states: 0.1% user, 0.0% system, 0.0% nice, 0.4% idle\n> Mem: 3617400K av, 3552784K used, 64616K free, 0K shrd, 37456K\n> buff\n> Swap: 2457928K av, 264K used, 2457664K free\n> 2273664K cached\n>\n> PID USER PRI NI SIZE RSS SHARE STAT LIB %CPU %MEM TIME\n> COMMAND\n> 31797 postgres 17 0 28836 28M 1784 S 0 8.5 0.7 10:15\n> postmaster\n\nAre the postmasters using most of the CPU? OR the other processes?\n\n> > What kind of drive subsystem do you have? What kind of raid\n> > controller? etc...\n>\n> Gathering more information on this - Raid is a software\n> RAID-1. Some information:\n\nOK, given that it's read mostly, it's likely not a problem that a\nfaster RAID controller would help. Possibly more drives in a RAID 10\nwould help a little, but let's look at optimizing your query and\npostmaster first.\n\nDo you have the postmaster configured to log long running queries?\nThat's a good starting point. also google pg_fouine (I think I spelt\nit right) for analyzing your logs.\n\nIt's quite likely the issue here is one long running query that\nchewing all your I/O or CPU and making everything else slow. Once we\nfind that query things should get better and we can worry about\nperformance tuning in a more leisurely manner.\n", "msg_date": "Thu, 4 Oct 2007 12:42:53 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Help - What did I do wrong?" 
}, { "msg_contents": ">>> On Thu, Oct 4, 2007 at 10:28 AM, in message\n<[email protected]>, Josh Trutwin\n<[email protected]> wrote: \n> running postgres 8.1.4\n \n> # cat /proc/meminfo\n> total: used: free: shared: buffers: cached:\n> Mem: 3704217600 3592069120 112148480 0 39460864 2316271616\n \n> shared_buffers = 16384 (was 1000)\n> effective_cache_size = 10000 (was 1000)\n \nIt's kind of silly to tell PostgreSQL that its total cache space is 10000\npages when you've got more than that in shared buffers plus all that OS\ncache space. Try something around 285000 pages for effective_cache_size.\n \n> stats_command_string = on (was off)\n> stats_block_level = on (was off)\n> stats_row_level = on (was off)\n \n> After making these changes, the performance on the server actually\n> worsened. I slowly backed off on some of the paramaters but didn't\n> seem to help.\n \nDid you try turning off the collection of those additional statistics?\nThat isn't free.\n \nYou didn't get specific about what you saw in performance problems. If\nyou are seeing occasional \"freezes\" of all queries, you are likely looking\nat a known issue with \"spikiness\" of disk output. For some this can be\ncorrected by using very aggressive background writer settings. Some have\nsolved it by disabling OS write delays. Some haven't found a solution and\nare waiting for 8.3; there have been some serious changes made to attempt\nto resolve this issue.\n \n-Kevin\n \n\n\n", "msg_date": "Thu, 04 Oct 2007 14:03:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Help - What did I do wrong?" }, { "msg_contents": "On Thu, 04 Oct 2007 14:03:07 -0500\n\"Kevin Grittner\" <[email protected]> wrote:\n\n> It's kind of silly to tell PostgreSQL that its total cache space is\n> 10000 pages when you've got more than that in shared buffers plus\n> all that OS cache space. Try something around 285000 pages for\n> effective_cache_size. \n\nGood point.\n\n> > stats_command_string = on (was off)\n> > stats_block_level = on (was off)\n> > stats_row_level = on (was off)\n> \n> > After making these changes, the performance on the server actually\n> > worsened. I slowly backed off on some of the paramaters but\n> > didn't seem to help.\n> \n> Did you try turning off the collection of those additional\n> statistics? That isn't free.\n\nI turned off all but row level since I decided to try turning\nautovacuum on.\n\n> You didn't get specific about what you saw in performance\n> problems. If you are seeing occasional \"freezes\" of all queries,\n> you are likely looking at a known issue with \"spikiness\" of disk\n> output. For some this can be corrected by using very aggressive\n> background writer settings. Some have solved it by disabling OS\n> write delays. Some haven't found a solution and are waiting for\n> 8.3; there have been some serious changes made to attempt to\n> resolve this issue. \n\nThanks - I put some additional information in replies to Scott, but\nmainly the performance of the web sites that are talking to postgres\nis the problem - people calling in, etc. I turned on slow query\nlogging to see if I can find if it's particular queries or something\nelse? \n\nJosh\n", "msg_date": "Thu, 4 Oct 2007 14:28:52 -0500", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning Help - What did I do wrong?" } ]
[ { "msg_contents": "Hi,\n\nI am new to postgres having worked with Oracle in the past. I am interested \nin understanding Postgres's table partition functionality better. \nSpecifically, I have a third party application running against my postgres \ndatabase, but the database is becoming rather large to maintain. I am \nthinking about partitioning the biggest table.\n\nWould I be able to set-up partitioning on this table with it being seemless \nto the third party app (assuming that it performs pretty standard DML \nstatements against the table in question)?\n\nThanks\nTore \n\n\n", "msg_date": "Thu, 4 Oct 2007 18:51:14 +0100", "msg_from": "\"Tore Lukashaugen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioning in postgres - basic question" }, { "msg_contents": "Tore Lukashaugen wrote:\n> Hi,\n> \n> I am new to postgres having worked with Oracle in the past. I am interested \n> in understanding Postgres's table partition functionality better. \n> Specifically, I have a third party application running against my postgres \n> database, but the database is becoming rather large to maintain. I am \n> thinking about partitioning the biggest table.\n> \n> Would I be able to set-up partitioning on this table with it being seemless \n> to the third party app (assuming that it performs pretty standard DML \n> statements against the table in question)?\n\nhttp://www.postgresql.org/docs/8.2/static/ddl-partitioning.html#DDL-PARTITIONING-IMPLEMENTATION\n\nThe examples use rules but some on the list have said triggers work \nbetter if you have a lot of partitions.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Fri, 05 Oct 2007 15:19:23 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioning in postgres - basic question" } ]
[ { "msg_contents": "If I have this:\n\ncreate table foo (bar int primary key);\n\n...then in my ideal world, Postgres would be able to use that index on bar \nto help me with this:\n\nselect bar from foo order by bar desc limit 20;\n\nBut in my experience, PG8.2 is doing a full table scan on foo, then \nsorting it, then doing the limit. I have a more complex primary key, but I \nwas hoping the same concept would still apply. Am I doing something wrong, \nor just expecting something that doesn't exist?\n", "msg_date": "Thu, 4 Oct 2007 11:00:30 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "quickly getting the top N rows" }, { "msg_contents": "On Thu, 2007-10-04 at 11:00 -0700, Ben wrote:\n> If I have this:\n> \n> create table foo (bar int primary key);\n> \n> ...then in my ideal world, Postgres would be able to use that index on bar \n> to help me with this:\n> \n> select bar from foo order by bar desc limit 20;\n> \n> But in my experience, PG8.2 is doing a full table scan on foo, then \n> sorting it, then doing the limit. I have a more complex primary key, but I \n> was hoping the same concept would still apply. Am I doing something wrong, \n> or just expecting something that doesn't exist?\n\nIt has to do with the way that NULL values are stored in the index.\nThis page has details and instructions for how to get it to work:\n\nhttp://developer.postgresql.org/pgdocs/postgres/indexes-ordering.html\n\n-- Mark Lewis\n", "msg_date": "Thu, 04 Oct 2007 11:10:24 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "In response to Ben <[email protected]>:\n\n> If I have this:\n> \n> create table foo (bar int primary key);\n> \n> ...then in my ideal world, Postgres would be able to use that index on bar \n> to help me with this:\n> \n> select bar from foo order by bar desc limit 20;\n> \n> But in my experience, PG8.2 is doing a full table scan on foo, then \n> sorting it, then doing the limit. I have a more complex primary key, but I \n> was hoping the same concept would still apply. Am I doing something wrong, \n> or just expecting something that doesn't exist?\n\nShow us the explain.\n\nHowever, 2 guesses:\n1) You never analyzed the table, thus PG has awful statistics and\n doesn't know how to pick a good plan.\n2) You have so few rows in the table that a seq scan is actually\n faster than an index scan, which is why PG uses it instead.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 4 Oct 2007 14:14:11 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "Ben <[email protected]> schrieb:\n\n> If I have this:\n> \n> create table foo (bar int primary key);\n> \n> ...then in my ideal world, Postgres would be able to use that index on bar \n> to help me with this:\n> \n> select bar from foo order by bar desc limit 20;\n> \n> But in my experience, PG8.2 is doing a full table scan on foo, then sorting \n> it, then doing the limit. I have a more complex primary key, but I was \n\nPlease show us the output from\n\nEXPLAIN ANALYSE select bar from foo order by bar desc limit 20;\n\nI try it, with 8.1, and i can see an index scan. 
You have, maybe, wrong\nstatistics ot not enough (to few) rows in your table.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Thu, 4 Oct 2007 20:22:42 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "On Thu, 4 Oct 2007, Bill Moran wrote:\n\n> However, 2 guesses:\n> 1) You never analyzed the table, thus PG has awful statistics and\n> doesn't know how to pick a good plan.\n> 2) You have so few rows in the table that a seq scan is actually\n> faster than an index scan, which is why PG uses it instead.\n\nNo, the tables are recently analyzed and there are a couple hundred \nthousand rows in there. But I think I just figured it out.... it's a \n3-column index, and two columns of that index are the same for every row. \nWhen I drop those two columns from the ordering restriction, the index \ngets used and things speed up 5 orders of magnitude.\n\nMaybe the planner is smart enough to think that if a column in the order \nby clause is identical for most rows, then using an index won't help.... \nbut not smart enough to realize that if said column is at the *end* of the \norder by arguments, after columns which do sort quite well, then it should \nuse an index after all.\n", "msg_date": "Thu, 4 Oct 2007 11:33:39 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "Ben <[email protected]> writes:\n> No, the tables are recently analyzed and there are a couple hundred \n> thousand rows in there. But I think I just figured it out.... it's a \n> 3-column index, and two columns of that index are the same for every row. \n> When I drop those two columns from the ordering restriction, the index \n> gets used and things speed up 5 orders of magnitude.\n\n> Maybe the planner is smart enough to think that if a column in the order \n> by clause is identical for most rows, then using an index won't help.... \n> but not smart enough to realize that if said column is at the *end* of the \n> order by arguments, after columns which do sort quite well, then it should \n> use an index after all.\n\nYou're being about as clear as mud here, except that you obviously lied\nabout what you were doing in your first message. If you have a planner\nproblem, show us the *exact* query, the *exact* table definition, and\nunfaked EXPLAIN ANALYZE output.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2007 14:52:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows " }, { "msg_contents": "On Thu, 4 Oct 2007, Tom Lane wrote:\n\n> You're being about as clear as mud here, except that you obviously lied\n> about what you were doing in your first message. 
If you have a planner\n> problem, show us the *exact* query, the *exact* table definition, and\n> unfaked EXPLAIN ANALYZE output.\n\nI didn't realize that simplification was viewed as so sinister, but \nthanks, I'll remember that in the future.\n\nThe table:\n Table \"public.log\"\n Column | Type | Modifiers\n----------------+-----------------------------+---------------------\n clientkey | character(30) | not null\n premiseskey | character(30) | not null\n logkey | character(30) | not null\n logicaldel | character(1) | default 'N'::bpchar\n lockey | character(30) |\n devlockey | character(30) |\n eventkey | character(30) |\n logshorttext | character varying(255) |\n logdesc | character varying(255) |\n loguserkey | character(30) |\n logassetkey | character(30) |\n logresourcekey | character(30) |\n logtime | timestamp without time zone |\n logip | character varying(50) |\n logarchived | character(1) |\n logarchivedate | timestamp without time zone |\n loghasvideo | character(1) |\n loghasaudio | character(1) |\n resvehiclekey | character(30) |\n synccreated | character(1) |\n logtypekey | character(30) |\nIndexes:\n \"log_pkey\" PRIMARY KEY, btree (clientkey, premiseskey, logkey)\n \"eventkey_idx\" btree (eventkey),\n \"log_ak1\" btree (clientkey, premiseskey, logtime, logkey)\n\n\nThe original, slow query:\n\nexplain analyze SELECT * FROM log WHERE clientkey in \n('000000004500000000010000000001') AND premiseskey in \n('000000004500000000010000000001') and logicaldel = 'N'\nORDER BY logtime desc, logkey desc, clientkey desc, premiseskey desc LIMIT 20 offset 0;\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=356402.58..356402.63 rows=20 width=563) (actual time=215858.481..215858.527 rows=20 loops=1)\n -> Sort (cost=356402.58..357598.25 rows=478267 width=563) (actual time=215858.478..215858.498 rows=20 loops=1)\n Sort Key: logtime, logkey, clientkey, premiseskey\n -> Seq Scan on log (cost=0.00..52061.67 rows=478267 width=563) (actual time=29.340..100043.313 rows=475669 loops=1)\n Filter: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar) AND (logicaldel = 'N'::bpchar))\n Total runtime: 262462.582 ms\n(6 rows)\n\n\nEvery row in log has identical clientkey and premiseskey values, so if I \njust remove those columns from the order by clause, I get this far \nsuperior plan:\n\nexplain analyze SELECT * FROM log WHERE clientkey in \n('000000004500000000010000000001') AND premiseskey in\n('000000004500000000010000000001') and logicaldel = 'N'\nORDER BY logtime desc, logkey desc LIMIT 20 offset 0;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..12.33 rows=20 width=563) (actual time=0.047..0.105 rows=20 loops=1)\n -> Index Scan Backward using log_ak1 on log (cost=0.00..294735.70 rows=478267 width=563) (actual time=0.044..0.076 rows=20 loops=1)\n Index Cond: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar))\n Filter: (logicaldel = 'N'::bpchar)\n Total runtime: 0.165 ms\n(5 rows)\n\n\n...which made me to think that maybe postgres is not using log_ak1 in the \nformer case because two of the columns in the order by match every row.\n\nUnfortunately, in this case it's not an option to alter the 
query. I'm \njust trying to figure out an explaination.\n", "msg_date": "Thu, 4 Oct 2007 12:52:36 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: quickly getting the top N rows " }, { "msg_contents": "On Thu, 2007-10-04 at 12:52 -0700, Ben wrote:\n\n> The original, slow query:\n> \n> explain analyze SELECT * FROM log WHERE clientkey in \n> ('000000004500000000010000000001') AND premiseskey in \n> ('000000004500000000010000000001') and logicaldel = 'N'\n> ORDER BY logtime desc, logkey desc, clientkey desc, premiseskey desc LIMIT 20 offset 0;\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=356402.58..356402.63 rows=20 width=563) (actual time=215858.481..215858.527 rows=20 loops=1)\n> -> Sort (cost=356402.58..357598.25 rows=478267 width=563) (actual time=215858.478..215858.498 rows=20 loops=1)\n> Sort Key: logtime, logkey, clientkey, premiseskey\n> -> Seq Scan on log (cost=0.00..52061.67 rows=478267 width=563) (actual time=29.340..100043.313 rows=475669 loops=1)\n> Filter: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar) AND (logicaldel = 'N'::bpchar))\n> Total runtime: 262462.582 ms\n> (6 rows)\n> \n> \n> Every row in log has identical clientkey and premiseskey values, so if I \n> just remove those columns from the order by clause, I get this far \n> superior plan:\n> \n> explain analyze SELECT * FROM log WHERE clientkey in \n> ('000000004500000000010000000001') AND premiseskey in\n> ('000000004500000000010000000001') and logicaldel = 'N'\n> ORDER BY logtime desc, logkey desc LIMIT 20 offset 0;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..12.33 rows=20 width=563) (actual time=0.047..0.105 rows=20 loops=1)\n> -> Index Scan Backward using log_ak1 on log (cost=0.00..294735.70 rows=478267 width=563) (actual time=0.044..0.076 rows=20 loops=1)\n> Index Cond: ((clientkey = '000000004500000000010000000001'::bpchar) AND (premiseskey = '000000004500000000010000000001'::bpchar))\n> Filter: (logicaldel = 'N'::bpchar)\n> Total runtime: 0.165 ms\n> (5 rows)\n> \n> \n> ...which made me to think that maybe postgres is not using log_ak1 in the \n> former case because two of the columns in the order by match every row.\n> \n> Unfortunately, in this case it's not an option to alter the query. I'm \n> just trying to figure out an explaination.\n\nIn the first query, Postgres cannot use the index because the sort order\nof the index does not match the sort order of the query. When you change\nthe sort order of the query so that it matches that of the index, then\nthe index is used. \n\nIf you define your index on (logtime, logkey, clientkey, premiseskey)\nrather than on (clientkey, premiseskey, logtime, logkey) you will have a\nfast query. 
Yes, the column order matters.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 04 Oct 2007 21:23:05 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "On 10/4/07, Ben <[email protected]> wrote:\n> If I have this:\n>\n> create table foo (bar int primary key);\n>\n> ...then in my ideal world, Postgres would be able to use that index on bar\n> to help me with this:\n>\n> select bar from foo order by bar desc limit 20;\n>\n> But in my experience, PG8.2 is doing a full table scan on foo, then\n> sorting it, then doing the limit. I have a more complex primary key, but I\n> was hoping the same concept would still apply. Am I doing something wrong,\n> or just expecting something that doesn't exist?\n\npg uses an intelligent planner. It looks at the table, the number of\nrows, the distribution of values, and makes a decision whether to use\nseq scan or index. Do you have any evidence that in your case seq\nscan is a bad choice?\n\ntry this experiment:\n\npsql mydb\n=# select * from foo; -- this will prime the db and put the table in\nmemory if it will fit\n=# \\timing\n=# set enable_seqscan=off;\n=# select bar from foo order by bar desc limit 20;\n=# set enable_seqscan=on;\n=# select bar from foo order by bar desc limit 20;\n\nand compare the times each takes. run each way a few times to be sure\nyou're not getting random variance.\n\nOn my reporting db with somewhere around 75 million tables, a similar\nquery 0.894 mS and uses an index scan. Which is good, because a\nsequential scan on that table takes about 15 to 30 minutes.\n", "msg_date": "Thu, 4 Oct 2007 15:32:55 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "On Thu, 4 Oct 2007, Simon Riggs wrote:\n\n> In the first query, Postgres cannot use the index because the sort order\n> of the index does not match the sort order of the query. When you change\n> the sort order of the query so that it matches that of the index, then\n> the index is used.\n>\n> If you define your index on (logtime, logkey, clientkey, premiseskey)\n> rather than on (clientkey, premiseskey, logtime, logkey) you will have a\n> fast query. Yes, the column order matters.\n\nI thought that might explain it, but then I'm surprised that it can still \nuse an index when the first two columns of the index aren't in the query. \nWouldn't that mean that it might have to walk the entire index to find \nmatching rows?\n\n....unless it's smart enough to realize that the first two columns will \nmatch everything. Which would be cool.\n", "msg_date": "Thu, 4 Oct 2007 13:34:22 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "On 10/4/07, Ben <[email protected]> wrote:\n> On Thu, 4 Oct 2007, Tom Lane wrote:\n>\n> > You're being about as clear as mud here, except that you obviously lied\n> > about what you were doing in your first message. 
If you have a planner\n> > problem, show us the *exact* query, the *exact* table definition, and\n> > unfaked EXPLAIN ANALYZE output.\n>\n> I didn't realize that simplification was viewed as so sinister, but\n> thanks, I'll remember that in the future.\n\nIt's not sinister, it's counterproductive and wastes people's time.\n", "msg_date": "Thu, 4 Oct 2007 15:35:46 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "Scott Marlowe wrote:\n> On 10/4/07, Ben <[email protected]> wrote:\n>> On Thu, 4 Oct 2007, Tom Lane wrote:\n>>\n>>> You're being about as clear as mud here, except that you obviously lied\n>>> about what you were doing in your first message. If you have a planner\n>>> problem, show us the *exact* query, the *exact* table definition, and\n>>> unfaked EXPLAIN ANALYZE output.\n>> I didn't realize that simplification was viewed as so sinister, but\n>> thanks, I'll remember that in the future.\n> \n> It's not sinister, it's counterproductive and wastes people's time.\n\nUnless \"Ben\" is an alias for someone high up with an unnamed rival \ndatabase, seeking to distract us all... How do you find out the IP \naddress of a yacht? ;-)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 04 Oct 2007 22:01:45 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows" }, { "msg_contents": "Ben <[email protected]> writes:\n> On Thu, 4 Oct 2007, Simon Riggs wrote:\n> I thought that might explain it, but then I'm surprised that it can still \n> use an index when the first two columns of the index aren't in the query. \n> Wouldn't that mean that it might have to walk the entire index to find \n> matching rows?\n\n> ....unless it's smart enough to realize that the first two columns will \n> match everything. Which would be cool.\n\nThere's some limited smarts in there about deciding that leading columns\nof an index don't matter to the sort ordering if they're constrained to\njust one value by the query. But it doesn't catch the case you need,\nwhich is that columns of an ORDER BY request are no-ops when they're\nconstrained to just one value.\n\nThat whole area has been rewritten for 8.3 and I believe it will handle\nthis case. No time to try it right now though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Oct 2007 17:04:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: quickly getting the top N rows " }, { "msg_contents": "\n\nOn Thu, 4 Oct 2007, Tom Lane wrote:\n\n> There's some limited smarts in there about deciding that leading columns\n> of an index don't matter to the sort ordering if they're constrained to\n> just one value by the query. But it doesn't catch the case you need,\n> which is that columns of an ORDER BY request are no-ops when they're\n> constrained to just one value.\n\nOh, no, that explains it perfectly, because that's precisely the case I \nhave - I dropped the columns from the ordering, but not the where clause. \nThanks, now I understand the current behavior.\n", "msg_date": "Thu, 4 Oct 2007 14:16:49 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": true, "msg_subject": "Re: quickly getting the top N rows " } ]
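In concrete terms, the two fixes that fall out of this thread look like the statements below (the second index name is invented). On 8.1/8.2 the ORDER BY list has to line up with an index's column order before a backward index scan can replace the sort:

-- Option 1: drop the single-valued columns from the ORDER BY so the existing
-- log_ak1 (clientkey, premiseskey, logtime, logkey) index can be walked backward
SELECT *
FROM log
WHERE clientkey = '000000004500000000010000000001'
  AND premiseskey = '000000004500000000010000000001'
  AND logicaldel = 'N'
ORDER BY logtime DESC, logkey DESC
LIMIT 20;

-- Option 2: if the query text cannot be changed, add an index whose column
-- order matches the ORDER BY exactly, as Simon suggests
CREATE INDEX log_ak2 ON log (logtime, logkey, clientkey, premiseskey);

Either way EXPLAIN ANALYZE should then show an Index Scan Backward feeding the Limit instead of a full sort, and as Tom notes the 8.3 planner is expected to recognize the constant ORDER BY columns on its own.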
[ { "msg_contents": "Hi,\n\nI have very slow performance for a TSearch2 table. I have pasted the \nEXPLAIN ANALYZE queries below. 12 seconds is slow for almost any \npurpose. Is there any way to speed this up?\n\n# explain analyze select * FROM fulltext_article, to_tsquery \n('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q) DESC;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------\nSort (cost=6576.74..6579.07 rows=933 width=774) (actual \ntime=12969.237..12970.490 rows=5119 loops=1)\n Sort Key: rank(fulltext_article.idxfti, q.q)\n -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n(actual time=209.513..12955.498 rows=5119 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.005..0.006 rows=1 loops=1)\n -> Bitmap Heap Scan on fulltext_article \n(cost=3069.79..6516.70 rows=933 width=742) (actual \ntime=209.322..234.390 rows=5119 loops=1)\n Recheck Cond: (fulltext_article.idxfti @@ q.q)\n -> Bitmap Index Scan on fulltext_article_idxfti_idx \n(cost=0.00..3069.56 rows=933 width=0) (actual time=208.373..208.373 \nrows=5119 loops=1)\n Index Cond: (fulltext_article.idxfti @@ q.q)\nTotal runtime: 12973.035 ms\n(9 rows)\n\n# select count(*) from fulltext_article;\ncount\n--------\n933001\n(1 row)\n\n# select COUNT(*) FROM fulltext_article, to_tsquery('simple','blue & \ngreen') AS q WHERE idxfti @@ q;\ncount\n-------\n 6308\n(1 row)\n\nBenjamin\n", "msg_date": "Fri, 5 Oct 2007 00:50:17 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": true, "msg_subject": "Slow TSearch2 performance for table with 1 million documents." }, { "msg_contents": "Benjamin Arai wrote:\n> Hi,\n> \n> I have very slow performance for a TSearch2 table. I have pasted the\n> EXPLAIN ANALYZE queries below. 12 seconds is slow for almost any\n> purpose. Is there any way to speed this up?\n> \n> # explain analyze select * FROM fulltext_article,\n> to_tsquery('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti,\n> q) DESC;\n\nAdmittedly I'm kind of new to tsearch, but wouldn't\n\nSELECT *\n FROM fulltext_article\n WHERE idxfti @@ to_tsquery('simple','dog')\n ORDER BY rank(idxfti, to_tsquery('simple', 'dog')) DESC;\n\nbe faster?\n\nQuick testing shows a similar query in our database to not use a nested\nloop and a function scan. 
For comparison, here are our plans:\n\nYour approach:\n\nQUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4.86..4.87 rows=1 width=164) (actual time=0.151..0.161\nrows=5 loops=1)\n Sort Key: rank(fulltext_article.idxfti, q.q)\n -> Nested Loop (cost=0.00..4.85 rows=1 width=164) (actual\ntime=0.067..0.119 rows=5 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\n(actual time=0.010..0.012 rows=1 loops=1)\n -> Index Scan using fulltext_article_idxfti_idx on\nfulltext_article (cost=0.00..4.82 rows=1 width=132) (actual\ntime=0.033..0.056 rows=5 loops=1)\n Index Cond: (fulltext_article.idxfti @@ \"outer\".q)\n Filter: (fulltext_article.idxfti @@ \"outer\".q)\n Total runtime: 0.242 ms\n(8 rows)\n\n\nMy suggested approach:\n\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4.84..4.84 rows=1 width=132) (actual time=0.085..0.095\nrows=5 loops=1)\n Sort Key: rank(idxfti, '''dog'''::tsquery)\n -> Index Scan using fulltext_article_idxfti_idx on fulltext_article\n (cost=0.00..4.83 rows=1 width=132) (actual time=0.025..0.052 rows=5\nloops=1)\n Index Cond: (idxfti @@ '''dog'''::tsquery)\n Filter: (idxfti @@ '''dog'''::tsquery)\n Total runtime: 0.163 ms\n(6 rows)\n\nI hope this helps.\n\n-- \nAlban Hertroys\[email protected]\n\nmagproductions b.v.\n\nT: ++31(0)534346874\nF: ++31(0)534346876\nM:\nI: www.magproductions.nl\nA: Postbus 416\n 7500 AK Enschede\n\n// Integrate Your World //\n", "msg_date": "Fri, 05 Oct 2007 14:00:59 +0200", "msg_from": "Alban Hertroys <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow TSearch2 performance for table with 1 million\n documents." }, { "msg_contents": "Benjamin Arai <[email protected]> writes:\n> # explain analyze select * FROM fulltext_article, to_tsquery \n> ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q) DESC;\n \n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> ------------------------------------------------------------------------ \n> ------------\n> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual \n> time=12969.237..12970.490 rows=5119 loops=1)\n> Sort Key: rank(fulltext_article.idxfti, q.q)\n> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n> (actual time=209.513..12955.498 rows=5119 loops=1)\n> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n> (actual time=0.005..0.006 rows=1 loops=1)\n> -> Bitmap Heap Scan on fulltext_article \n> (cost=3069.79..6516.70 rows=933 width=742) (actual \n> time=209.322..234.390 rows=5119 loops=1)\n> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n> -> Bitmap Index Scan on fulltext_article_idxfti_idx \n> (cost=0.00..3069.56 rows=933 width=0) (actual time=208.373..208.373 \n> rows=5119 loops=1)\n> Index Cond: (fulltext_article.idxfti @@ q.q)\n> Total runtime: 12973.035 ms\n> (9 rows)\n\nThe time seems all spent at the join step, which is odd because it\nreally hasn't got much to do. AFAICS all it has to do is compute the\nrank() values that the sort step will use. Is it possible that\nrank() is really slow?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Oct 2007 11:12:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow TSearch2 performance for table with 1 million documents. 
" }, { "msg_contents": "On Fri, 5 Oct 2007, Tom Lane wrote:\n\n> Benjamin Arai <[email protected]> writes:\n>> # explain analyze select * FROM fulltext_article, to_tsquery\n>> ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q) DESC;\n>\n>> QUERY PLAN\n>> ------------------------------------------------------------------------\n>> ------------------------------------------------------------------------\n>> ------------\n>> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual\n>> time=12969.237..12970.490 rows=5119 loops=1)\n>> Sort Key: rank(fulltext_article.idxfti, q.q)\n>> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774)\n>> (actual time=209.513..12955.498 rows=5119 loops=1)\n>> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\n>> (actual time=0.005..0.006 rows=1 loops=1)\n>> -> Bitmap Heap Scan on fulltext_article\n>> (cost=3069.79..6516.70 rows=933 width=742) (actual\n>> time=209.322..234.390 rows=5119 loops=1)\n>> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n>> -> Bitmap Index Scan on fulltext_article_idxfti_idx\n>> (cost=0.00..3069.56 rows=933 width=0) (actual time=208.373..208.373\n>> rows=5119 loops=1)\n>> Index Cond: (fulltext_article.idxfti @@ q.q)\n>> Total runtime: 12973.035 ms\n>> (9 rows)\n>\n> The time seems all spent at the join step, which is odd because it\n> really hasn't got much to do. AFAICS all it has to do is compute the\n> rank() values that the sort step will use. Is it possible that\n> rank() is really slow?\n\ncan you try rank_cd() instead ?\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Fri, 5 Oct 2007 19:32:32 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow TSearch2 performance for table with 1\n\tmillion documents." 
}, { "msg_contents": "\nOn Oct 5, 2007, at 8:32 AM, Oleg Bartunov wrote:\n\n> On Fri, 5 Oct 2007, Tom Lane wrote:\n>\n>> Benjamin Arai <[email protected]> writes:\n>>> # explain analyze select * FROM fulltext_article, to_tsquery\n>>> ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q) \n>>> DESC;\n>>\n>>> QUERY PLAN\n>>> -------------------------------------------------------------------- \n>>> ----\n>>> -------------------------------------------------------------------- \n>>> ----\n>>> ------------\n>>> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual\n>>> time=12969.237..12970.490 rows=5119 loops=1)\n>>> Sort Key: rank(fulltext_article.idxfti, q.q)\n>>> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774)\n>>> (actual time=209.513..12955.498 rows=5119 loops=1)\n>>> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\n>>> (actual time=0.005..0.006 rows=1 loops=1)\n>>> -> Bitmap Heap Scan on fulltext_article\n>>> (cost=3069.79..6516.70 rows=933 width=742) (actual\n>>> time=209.322..234.390 rows=5119 loops=1)\n>>> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n>>> -> Bitmap Index Scan on fulltext_article_idxfti_idx\n>>> (cost=0.00..3069.56 rows=933 width=0) (actual time=208.373..208.373\n>>> rows=5119 loops=1)\n>>> Index Cond: (fulltext_article.idxfti @@ q.q)\n>>> Total runtime: 12973.035 ms\n>>> (9 rows)\n>>\n>> The time seems all spent at the join step, which is odd because it\n>> really hasn't got much to do. AFAICS all it has to do is compute the\n>> rank() values that the sort step will use. Is it possible that\n>> rank() is really slow?\n>\n> can you try rank_cd() instead ?\n>\nUsing Rank:\n\n-# ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, q) \nDESC;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------\nSort (cost=6576.74..6579.07 rows=933 width=774) (actual \ntime=98083.081..98084.351 rows=5119 loops=1)\n Sort Key: rank(fulltext_article.idxfti, q.q)\n -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n(actual time=479.122..98067.594 rows=5119 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.003..0.004 rows=1 loops=1)\n -> Bitmap Heap Scan on fulltext_article \n(cost=3069.79..6516.70 rows=933 width=742) (actual \ntime=341.739..37112.110 rows=5119 loops=1)\n Recheck Cond: (fulltext_article.idxfti @@ q.q)\n -> Bitmap Index Scan on fulltext_article_idxfti_idx \n(cost=0.00..3069.56 rows=933 width=0) (actual time=321.443..321.443 \nrows=5119 loops=1)\n Index Cond: (fulltext_article.idxfti @@ q.q)\nTotal runtime: 98087.575 ms\n(9 rows)\n\nUsing Rank_cd:\n\n# explain analyze select * FROM fulltext_article, to_tsquery\n('simple','cat') AS q WHERE idxfti @@ q ORDER BY rank_cd(idxfti, q) \nDESC;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------\nSort (cost=6576.74..6579.07 rows=933 width=774) (actual \ntime=199316.648..199324.631 rows=26054 loops=1)\n Sort Key: rank_cd(fulltext_article.idxfti, q.q)\n -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n(actual time=871.428..199244.330 rows=26054 loops=1)\n -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=1)\n -> Bitmap Heap Scan on fulltext_article \n(cost=3069.79..6516.70 rows=933 width=742) (actual \ntime=850.674..50146.477 rows=26054 loops=1)\n 
Recheck Cond: (fulltext_article.idxfti @@ q.q)\n -> Bitmap Index Scan on fulltext_article_idxfti_idx \n(cost=0.00..3069.56 rows=933 width=0) (actual time=838.120..838.120 \nrows=26054 loops=1)\n Index Cond: (fulltext_article.idxfti @@ q.q)\nTotal runtime: 199338.297 ms\n(9 rows)\n\n>\n>>\n>> \t\t\tregards, tom lane\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n>>\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\n> Sternberg Astronomical Institute, Moscow University, Russia\n> Internet: [email protected], http://www.sai.msu.su/~megera/\n> phone: +007(495)939-16-83, +007(495)939-23-83\n>\n\n", "msg_date": "Fri, 5 Oct 2007 15:57:31 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Slow TSearch2 performance for table with 1 million\n\tdocuments." }, { "msg_contents": "It appears that the ORDER BY rank operation is the slowing factor. \nIf I remove it then the query is pretty fast. Is there another way \nto perform ORDER BY such that it does not do a sort?\n\nBenjamin\n\nOn Oct 5, 2007, at 3:57 PM, Benjamin Arai wrote:\n\n>\n> On Oct 5, 2007, at 8:32 AM, Oleg Bartunov wrote:\n>\n>> On Fri, 5 Oct 2007, Tom Lane wrote:\n>>\n>>> Benjamin Arai <[email protected]> writes:\n>>>> # explain analyze select * FROM fulltext_article, to_tsquery\n>>>> ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, \n>>>> q) DESC;\n>>>\n>>>> QUERY PLAN\n>>>> ------------------------------------------------------------------- \n>>>> -----\n>>>> ------------------------------------------------------------------- \n>>>> -----\n>>>> ------------\n>>>> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual\n>>>> time=12969.237..12970.490 rows=5119 loops=1)\n>>>> Sort Key: rank(fulltext_article.idxfti, q.q)\n>>>> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774)\n>>>> (actual time=209.513..12955.498 rows=5119 loops=1)\n>>>> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)\n>>>> (actual time=0.005..0.006 rows=1 loops=1)\n>>>> -> Bitmap Heap Scan on fulltext_article\n>>>> (cost=3069.79..6516.70 rows=933 width=742) (actual\n>>>> time=209.322..234.390 rows=5119 loops=1)\n>>>> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n>>>> -> Bitmap Index Scan on \n>>>> fulltext_article_idxfti_idx\n>>>> (cost=0.00..3069.56 rows=933 width=0) (actual time=208.373..208.373\n>>>> rows=5119 loops=1)\n>>>> Index Cond: (fulltext_article.idxfti @@ q.q)\n>>>> Total runtime: 12973.035 ms\n>>>> (9 rows)\n>>>\n>>> The time seems all spent at the join step, which is odd because it\n>>> really hasn't got much to do. AFAICS all it has to do is compute \n>>> the\n>>> rank() values that the sort step will use. 
Is it possible that\n>>> rank() is really slow?\n>>\n>> can you try rank_cd() instead ?\n>>\n> Using Rank:\n>\n> -# ('simple','dog') AS q WHERE idxfti @@ q ORDER BY rank(idxfti, \n> q) DESC;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> ----------------\n> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual \n> time=98083.081..98084.351 rows=5119 loops=1)\n> Sort Key: rank(fulltext_article.idxfti, q.q)\n> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n> (actual time=479.122..98067.594 rows=5119 loops=1)\n> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n> (actual time=0.003..0.004 rows=1 loops=1)\n> -> Bitmap Heap Scan on fulltext_article \n> (cost=3069.79..6516.70 rows=933 width=742) (actual \n> time=341.739..37112.110 rows=5119 loops=1)\n> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n> -> Bitmap Index Scan on \n> fulltext_article_idxfti_idx (cost=0.00..3069.56 rows=933 width=0) \n> (actual time=321.443..321.443 rows=5119 loops=1)\n> Index Cond: (fulltext_article.idxfti @@ q.q)\n> Total runtime: 98087.575 ms\n> (9 rows)\n>\n> Using Rank_cd:\n>\n> # explain analyze select * FROM fulltext_article, to_tsquery\n> ('simple','cat') AS q WHERE idxfti @@ q ORDER BY rank_cd(idxfti, \n> q) DESC;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------- \n> ---------------------------------------------------------------------- \n> -----------------\n> Sort (cost=6576.74..6579.07 rows=933 width=774) (actual \n> time=199316.648..199324.631 rows=26054 loops=1)\n> Sort Key: rank_cd(fulltext_article.idxfti, q.q)\n> -> Nested Loop (cost=3069.79..6530.71 rows=933 width=774) \n> (actual time=871.428..199244.330 rows=26054 loops=1)\n> -> Function Scan on q (cost=0.00..0.01 rows=1 width=32) \n> (actual time=0.006..0.007 rows=1 loops=1)\n> -> Bitmap Heap Scan on fulltext_article \n> (cost=3069.79..6516.70 rows=933 width=742) (actual \n> time=850.674..50146.477 rows=26054 loops=1)\n> Recheck Cond: (fulltext_article.idxfti @@ q.q)\n> -> Bitmap Index Scan on \n> fulltext_article_idxfti_idx (cost=0.00..3069.56 rows=933 width=0) \n> (actual time=838.120..838.120 rows=26054 loops=1)\n> Index Cond: (fulltext_article.idxfti @@ q.q)\n> Total runtime: 199338.297 ms\n> (9 rows)\n>\n>>\n>>>\n>>> \t\t\tregards, tom lane\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 5: don't forget to increase your free space map settings\n>>>\n>>\n>> \tRegards,\n>> \t\tOleg\n>> _____________________________________________________________\n>> Oleg Bartunov, Research Scientist, Head of AstroNet \n>> (www.astronet.ru),\n>> Sternberg Astronomical Institute, Moscow University, Russia\n>> Internet: [email protected], http://www.sai.msu.su/~megera/\n>> phone: +007(495)939-16-83, +007(495)939-23-83\n>>\n>\n\n", "msg_date": "Thu, 11 Oct 2007 07:52:29 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow TSearch2 performance for table with 1 million\n\tdocuments." }, { "msg_contents": "Benjamin Arai <[email protected]> writes:\n> It appears that the ORDER BY rank operation is the slowing factor. \n> If I remove it then the query is pretty fast. 
Is there another way \n> to perform ORDER BY such that it does not do a sort?\n\nI think you misunderstood: it's not the sort that's slow, it's the\ncomputation of the rank() values that are inputs to the sort.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2007 11:53:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow TSearch2 performance for table with 1 million\n\tdocuments." }, { "msg_contents": "Oh, I see. I didn't look carefully at the EXPLAIN ANALYZE I posted. \nSo, is there a solution to the rank problem?\n\nBenjamin\n\nOn Oct 11, 2007, at 8:53 AM, Tom Lane wrote:\n\n> Benjamin Arai <[email protected]> writes:\n>> It appears that the ORDER BY rank operation is the slowing factor.\n>> If I remove it then the query is pretty fast. Is there another way\n>> to perform ORDER BY such that it does not do a sort?\n>\n> I think you misunderstood: it's not the sort that's slow, it's the\n> computation of the rank() values that are inputs to the sort.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Thu, 11 Oct 2007 09:12:20 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Slow TSearch2 performance for table with 1 million\n\tdocuments." } ]
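Given that diagnosis, the remaining room for improvement is in making the rank() step cheaper rather than the index scan. Two variations on Alban's single-subexpression form may be worth trying; neither is guaranteed to help if detoasting each matching tsvector inside rank() is the dominant cost:

-- Same query without the function-scan join, plus a LIMIT so only the top
-- hits are returned (rank is still evaluated for every matching row)
SELECT *
FROM fulltext_article
WHERE idxfti @@ to_tsquery('simple', 'dog')
ORDER BY rank(idxfti, to_tsquery('simple', 'dog')) DESC
LIMIT 20;

-- Rank in a narrow subquery and fetch the wide article rows only for the top
-- hits; assumes a primary key column, here called "id"
SELECT fa.*
FROM (SELECT id, rank(idxfti, to_tsquery('simple', 'dog')) AS r
      FROM fulltext_article
      WHERE idxfti @@ to_tsquery('simple', 'dog')
      ORDER BY r DESC
      LIMIT 20) AS best
JOIN fulltext_article fa ON fa.id = best.id
ORDER BY best.r DESC;

If neither form moves the runtime, that points back at the per-row cost of rank() itself rather than the plan shape.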
[ { "msg_contents": "I'm new in PostGreSQL and I need some help.\nI have a table with ~2 million records. Queries in this table are too slow and some are not completed.I think it must be a simple question to solve but, I'm trying without success. I'm worried because next week I will need to work with tables with ~100 million records.I'm using:O.S.: Windows XP;PostgreSQL 8.2;Index type: btree.I have 2 GB of RAM.\nPOSTGRESQL XXX.LOG:\n\n<2007-10-05 09:01:42%SELECT> LOG: could not send data to client: Unknown winsock error 10061\n<2007-10-05 09:03:03%idle> LOG: could not receive data from client: Unknown winsock error 10061\n<2007-10-05 09:03:03%idle> LOG: unexpected EOF on client connection\n\n\nPSQLODBC.LOG:\n\n[13236.470] ------------------------------------------------------------\n[13236.470] hdbc=02DE3008, stmt=02C7B1A8, result=02C791D0\n[13236.470] prepare=0, internal=0\n[13236.470] bindings=32090580, bindings_allocated=20\n[13236.470] parameters=00000000, parameters_allocated=0\n[13236.470] statement_type=0, statement='select \n\na_teste_nestle.\"CODCLI\",\n\na_teste_nestle.\"CODFAB\",\n\na_teste_nestle.\"CODFAMILIANESTLE\",\n\na_teste_nestle.\"CODFILIAL\",\n\na_teste_nestle.\"CODGRUPONESTLE\",\n\na_teste_nestle.\"CODSUBGRUPONESTLE\",\n\na_teste_nestle.\"CONDVENDA\",\n\na_teste_nestle.\"DATA\",\n\na_teste_nestle.\"DESCRICAO\",\n\na_teste_nestle.\"PESO\",\n\na_teste_nestle.\"PRACA\",\n\na_teste_nestle.\"PUNIT\",\n\na_teste_nestle.\"PVENDA\",\n\na_teste_nestle.\"QT\",\n\na_teste_nestle.\"QTITVENDIDOS\",\n\na_teste_nestle.\"QTPESOPREV\",\n\na_teste_nestle.\"QTVENDAPREV\",\n\na_teste_nestle.\"SUPERVISOR\",\n\na_teste_nestle.\"VENDEDOR\",\n\na_teste_nestle.\"VLVENDAPREV\"\n\nfrom a_teste_nestle \n\n \n\n'\n[13236.486] stmt_with_params='select \na_teste_nestle.\"CODCLI\",\na_teste_nestle.\"CODFAB\",\na_teste_nestle.\"CODFAMILIANESTLE\",\na_teste_nestle.\"CODFILIAL\",\na_teste_nestle.\"CODGRUPONESTLE\",\na_teste_nestle.\"CODSUBGRUPONESTLE\",\na_teste_nestle.\"CONDVENDA\",\na_teste_nestle.\"DATA\",\na_teste_nestle.\"DESCRICAO\",\na_teste_nestle.\"PESO\",\na_teste_nestle.\"PRACA\",\na_teste_nestle.\"PUNIT\",\na_teste_nestle.\"PVENDA\",\na_teste_nestle.\"QT\",\na_teste_nestle.\"QTITVENDIDOS\",\na_teste_nestle.\"QTPESOPREV\",\na_teste_nestle.\"QTVENDAPREV\",\na_teste_nestle.\"SUPERVISOR\",\na_teste_nestle.\"VENDEDOR\",\na_teste_nestle.\"VLVENDAPREV\"\nfrom a_teste_nestle \n\n'\n[13236.486] data_at_exec=-1, current_exec_param=-1, put_data=0\n[13236.501] currTuple=-1, current_col=-1, lobj_fd=-1\n[13236.501] maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0, scroll_concurrency=1\n[13236.501] cursor_name='SQL_CUR02C7B1A8'\n[13236.501] ----------------QResult Info -------------------------------\n[13236.501] fields=02C7C9B8, backend_tuples=00000000, tupleField=0, conn=02DE3008\n[13236.501] fetch_count=0, num_total_rows=819200, num_fields=20, cursor='(NULL)'\n[13236.501] message='Out of memory while reading tuples.', command='(NULL)', notice='(NULL)'\n[13236.501] status=7, inTuples=1\n[13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109, errmsg='Out of memory while reading tuples.'\n[13236.517] ------------------------------------------------------------\n[13236.517] henv=02C727B8, conn=02DE3008, status=1, num_stmts=16\n[13236.517] sock=02DD3120, stmts=02DD8EE8, lobj_type=17288\n[13236.517] ---------------- Socket Info -------------------------------\n[13236.517] socket=512, reverse=0, errornumber=0, errormsg='(NULL)'\n[13236.517] buffer_in=46642688, buffer_out=46633712\n[13236.517] 
buffer_filled_in=4096, buffer_filled_out=0, buffer_read_in=3426\n[63860.095]conn=02DE3008, PGAPI_Disconnect\n[63880.251]conn=02C73A78, PGAPI_Disconnect\n\n\n\n\n\n\n\n\nPOSTGRESQL.CONF:\n\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 512MB # min 128kB or max_connections*16kB\n # (change requires restart)\ntemp_buffers = 32MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 256MB # min 64kB\nmaintenance_work_mem = 128MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 409600 # min max_fsm_relations*16, 6 bytes each\n # (change requires restart)\n#max_fsm_relations = 1000 # min 100, ~70 bytes each\n # (change requires restart)\n\n\n\n\nThe table structure is:\n\nCREATE TABLE \"public\".\"a_teste_nestle\" (\n \"DATA\" TIMESTAMP WITH TIME ZONE, \n \"CODCLI\" DOUBLE PRECISION, \n \"VENDEDOR\" DOUBLE PRECISION, \n \"SUPERVISOR\" DOUBLE PRECISION, \n \"CODFILIAL\" VARCHAR(2), \n \"PRACA\" DOUBLE PRECISION, \n \"CONDVENDA\" DOUBLE PRECISION, \n \"QTITVENDIDOS\" DOUBLE PRECISION, \n \"PVENDA\" DOUBLE PRECISION, \n \"PESO\" DOUBLE PRECISION, \n \"CODPROD\" VARCHAR(15), \n \"CODFAB\" VARCHAR(15), \n \"DESCRICAO\" VARCHAR(80), \n \"CODGRUPONESTLE\" DOUBLE PRECISION, \n \"CODSUBGRUPONESTLE\" DOUBLE PRECISION, \n \"CODFAMILIANESTLE\" DOUBLE PRECISION, \n \"QTPESOPREV\" DOUBLE PRECISION, \n \"QTVENDAPREV\" DOUBLE PRECISION, \n \"VLVENDAPREV\" DOUBLE PRECISION, \n \"QT\" DOUBLE PRECISION, \n \"PUNIT\" DOUBLE PRECISION\n) WITHOUT OIDS;\n\nCREATE INDEX \"a_teste_nestle_idx\" ON \"public\".\"a_teste_nestle\"\n USING btree (\"DATA\");\n\n\nThanks,\n\n\n\n_________________________\nCláudia Macedo Amorim\nConsultora de Desenvolvimento\nPC Sistemas - www.pcsist.com.br\n(62) 3250-0200\[email protected]\n\n \nAuto Serviço WinThor: um novo conceito em tecnologia, segurança e agilidade.", "msg_date": "Fri, 5 Oct 2007 11:34:07 -0300", "msg_from": "=?iso-8859-1?Q?Cl=E1udia_Macedo_Amorim?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Problems with + 1 million record table" },
{ "msg_contents": "On 10/5/07, Cláudia Macedo Amorim <[email protected]> wrote:\n>\n> I'm new in PostGreSQL and I need some help.\n>\n> I have a table with ~2 million records. Queries in this table are too slow\n> and some are not completed.\n> I think it must be a simple question to solve but, I'm trying without\n> success. I'm worried because next week I will need to work with tables\n> with ~100 million records.\n>\n> I'm using:\n> O.S.: Windows XP;\n> PostgreSQL 8.2;\n> Index type: btree.\n>\n> I have 2 GB of RAM.\n>\n> POSTGRESQL XXX.LOG:\n>\n> <2007-10-05 09:01:42%SELECT> LOG: could not send data to client: Unknown\n> winsock error 10061\n> <2007-10-05 09:03:03%idle> LOG: could not receive data from client: Unknown\n> winsock error 10061\n> <2007-10-05 09:03:03%idle> LOG: unexpected EOF on client connection\n\nThis looks like your client is dying on receiving too much data.
You\ncan either try to fix the client to handle more data, which isn't the\nbest way to proceed, or you can retrieve your data with a cursor a\nchunk at a time.\n\n> PSQLODBC.LOG:\n> [13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109,\n> errmsg='Out of memory while reading tuples.'\n\nAssuming this is the client side error, yes, you're simply reading too\nmany rows at once.\n\n> POSTGRESQL.CONF:\n> shared_buffers = 512MB # min 128kB or max_connections*16kB\n\nReasonable for a machine with 2 G ram.\n\n> work_mem = 256MB # min 64kB\n\nIf and only if you have one or two users, this is ok. Otherwise it's\na bit high.\n\nTake a look at cursors, here's the declare ref page:\n\nhttp://www.postgresql.org/docs/8.2/static/sql-declare.html\n", "msg_date": "Fri, 5 Oct 2007 10:01:59 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with + 1 million record table" }, { "msg_contents": "On 5-10-2007 16:34 Cl�udia Macedo Amorim wrote:\n> [13236.470] statement_type=0, statement='select\n> a_teste_nestle.\"CODCLI\",\n> a_teste_nestle.\"CODFAB\",\n> a_teste_nestle.\"CODFAMILIANESTLE\",\n> a_teste_nestle.\"CODFILIAL\",\n> a_teste_nestle.\"CODGRUPONESTLE\",\n> a_teste_nestle.\"CODSUBGRUPONESTLE\",\n> a_teste_nestle.\"CONDVENDA\",\n> a_teste_nestle.\"DATA\",\n> a_teste_nestle.\"DESCRICAO\",\n> a_teste_nestle.\"PESO\",\n> a_teste_nestle.\"PRACA\",\n> a_teste_nestle.\"PUNIT\",\n> a_teste_nestle.\"PVENDA\",\n> a_teste_nestle.\"QT\",\n> a_teste_nestle.\"QTITVENDIDOS\",\n> a_teste_nestle.\"QTPESOPREV\",\n> a_teste_nestle.\"QTVENDAPREV\",\n> a_teste_nestle.\"SUPERVISOR\",\n> a_teste_nestle.\"VENDEDOR\",\n> a_teste_nestle.\"VLVENDAPREV\"\n> from a_teste_nestle\n> \n> '\n\nIs that the entire query? Are you sure you really want to select the \nentire table without having a where-clause? That's normally not a very \nscalable aproach...\n\nBest regards,\n\nArjen\n", "msg_date": "Fri, 05 Oct 2007 18:38:22 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with + 1 million record table" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nCl�udia Macedo Amorim wrote:\n> I'm new in PostGreSQL and I need some help.\n> I have a table with ~2 million records. Queries in this table are too slow and some are not completed.I think it must be a simple question to solve but, I'm trying without success. I'm worried because next week I will need to work with tables with ~100 million records.I'm using:O.S.: Windows XP;PostgreSQL 8.2;Index type: btree.I have 2 GB of RAM.\n> POSTGRESQL XXX.LOG:\n> \n> <2007-10-05 09:01:42%SELECT> LOG: could not send data to client: Unknown winsock error 10061\n> <2007-10-05 09:03:03%idle> LOG: could not receive data from client: Unknown winsock error 10061\n> <2007-10-05 09:03:03%idle> LOG: unexpected EOF on client connection\n\n\nYou are not providing a where clause which means you are scanning all 2\nmillion records. If you need to do that, do it in a cursor.\n\n\nJoshua D. 
Drake\n\n\n\n> \n> \n> PSQLODBC.LOG:\n> \n> [13236.470] ------------------------------------------------------------\n> [13236.470] hdbc=02DE3008, stmt=02C7B1A8, result=02C791D0\n> [13236.470] prepare=0, internal=0\n> [13236.470] bindings=32090580, bindings_allocated=20\n> [13236.470] parameters=00000000, parameters_allocated=0\n> [13236.470] statement_type=0, statement='select \n> \n> a_teste_nestle.\"CODCLI\",\n> \n> a_teste_nestle.\"CODFAB\",\n> \n> a_teste_nestle.\"CODFAMILIANESTLE\",\n> \n> a_teste_nestle.\"CODFILIAL\",\n> \n> a_teste_nestle.\"CODGRUPONESTLE\",\n> \n> a_teste_nestle.\"CODSUBGRUPONESTLE\",\n> \n> a_teste_nestle.\"CONDVENDA\",\n> \n> a_teste_nestle.\"DATA\",\n> \n> a_teste_nestle.\"DESCRICAO\",\n> \n> a_teste_nestle.\"PESO\",\n> \n> a_teste_nestle.\"PRACA\",\n> \n> a_teste_nestle.\"PUNIT\",\n> \n> a_teste_nestle.\"PVENDA\",\n> \n> a_teste_nestle.\"QT\",\n> \n> a_teste_nestle.\"QTITVENDIDOS\",\n> \n> a_teste_nestle.\"QTPESOPREV\",\n> \n> a_teste_nestle.\"QTVENDAPREV\",\n> \n> a_teste_nestle.\"SUPERVISOR\",\n> \n> a_teste_nestle.\"VENDEDOR\",\n> \n> a_teste_nestle.\"VLVENDAPREV\"\n> \n> from a_teste_nestle \n> \n> \n> \n> '\n> [13236.486] stmt_with_params='select \n> a_teste_nestle.\"CODCLI\",\n> a_teste_nestle.\"CODFAB\",\n> a_teste_nestle.\"CODFAMILIANESTLE\",\n> a_teste_nestle.\"CODFILIAL\",\n> a_teste_nestle.\"CODGRUPONESTLE\",\n> a_teste_nestle.\"CODSUBGRUPONESTLE\",\n> a_teste_nestle.\"CONDVENDA\",\n> a_teste_nestle.\"DATA\",\n> a_teste_nestle.\"DESCRICAO\",\n> a_teste_nestle.\"PESO\",\n> a_teste_nestle.\"PRACA\",\n> a_teste_nestle.\"PUNIT\",\n> a_teste_nestle.\"PVENDA\",\n> a_teste_nestle.\"QT\",\n> a_teste_nestle.\"QTITVENDIDOS\",\n> a_teste_nestle.\"QTPESOPREV\",\n> a_teste_nestle.\"QTVENDAPREV\",\n> a_teste_nestle.\"SUPERVISOR\",\n> a_teste_nestle.\"VENDEDOR\",\n> a_teste_nestle.\"VLVENDAPREV\"\n> from a_teste_nestle \n> \n> '\n> [13236.486] data_at_exec=-1, current_exec_param=-1, put_data=0\n> [13236.501] currTuple=-1, current_col=-1, lobj_fd=-1\n> [13236.501] maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0, scroll_concurrency=1\n> [13236.501] cursor_name='SQL_CUR02C7B1A8'\n> [13236.501] ----------------QResult Info -------------------------------\n> [13236.501] fields=02C7C9B8, backend_tuples=00000000, tupleField=0, conn=02DE3008\n> [13236.501] fetch_count=0, num_total_rows=819200, num_fields=20, cursor='(NULL)'\n> [13236.501] message='Out of memory while reading tuples.', command='(NULL)', notice='(NULL)'\n> [13236.501] status=7, inTuples=1\n> [13236.501]CONN ERROR: func=SC_execute, desc='(null)', errnum=109, errmsg='Out of memory while reading tuples.'\n> [13236.517] ------------------------------------------------------------\n> [13236.517] henv=02C727B8, conn=02DE3008, status=1, num_stmts=16\n> [13236.517] sock=02DD3120, stmts=02DD8EE8, lobj_type=17288\n> [13236.517] ---------------- Socket Info -------------------------------\n> [13236.517] socket=512, reverse=0, errornumber=0, errormsg='(NULL)'\n> [13236.517] buffer_in=46642688, buffer_out=46633712\n> [13236.517] buffer_filled_in=4096, buffer_filled_out=0, buffer_read_in=3426\n> [63860.095]conn=02DE3008, PGAPI_Disconnect\n> [63880.251]conn=02C73A78, PGAPI_Disconnect\n> \n> \n> \n> \n> \n> \n> \n> \n> POSTGRESQL.CONF:\n> \n> \n> \n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> \n> # - Memory -\n> \n> shared_buffers = 512MB # min 128kB or 
max_connections*16kB\n> # (change requires restart)\n> temp_buffers = 32MB # min 800kB\n> #max_prepared_transactions = 5 # can be 0 or more\n> # (change requires restart)\n> # Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n> # per transaction slot, plus lock space (see max_locks_per_transaction).\n> work_mem = 256MB # min 64kB\n> maintenance_work_mem = 128MB # min 1MB\n> #max_stack_depth = 2MB # min 100kB\n> \n> # - Free Space Map -\n> \n> max_fsm_pages = 409600 # min max_fsm_relations*16, 6 bytes each\n> # (change requires restart)\n> #max_fsm_relations = 1000 # min 100, ~70 bytes each\n> # (change requires restart)\n> \n> \n> \n> \n> The table structure is:\n> \n> CREATE TABLE \"public\".\"a_teste_nestle\" (\n> \"DATA\" TIMESTAMP WITH TIME ZONE, \n> \"CODCLI\" DOUBLE PRECISION, \n> \"VENDEDOR\" DOUBLE PRECISION, \n> \"SUPERVISOR\" DOUBLE PRECISION, \n> \"CODFILIAL\" VARCHAR(2), \n> \"PRACA\" DOUBLE PRECISION, \n> \"CONDVENDA\" DOUBLE PRECISION, \n> \"QTITVENDIDOS\" DOUBLE PRECISION, \n> \"PVENDA\" DOUBLE PRECISION, \n> \"PESO\" DOUBLE PRECISION, \n> \"CODPROD\" VARCHAR(15), \n> \"CODFAB\" VARCHAR(15), \n> \"DESCRICAO\" VARCHAR(80), \n> \"CODGRUPONESTLE\" DOUBLE PRECISION, \n> \"CODSUBGRUPONESTLE\" DOUBLE PRECISION, \n> \"CODFAMILIANESTLE\" DOUBLE PRECISION, \n> \"QTPESOPREV\" DOUBLE PRECISION, \n> \"QTVENDAPREV\" DOUBLE PRECISION, \n> \"VLVENDAPREV\" DOUBLE PRECISION, \n> \"QT\" DOUBLE PRECISION, \n> \"PUNIT\" DOUBLE PRECISION\n> ) WITHOUT OIDS;\n> \n> CREATE INDEX \"a_teste_nestle_idx\" ON \"public\".\"a_teste_nestle\"\n> USING btree (\"DATA\");\n> \n> \n> Thanks,\n> \n> \n> \n> _________________________\n> Cl�udia Macedo Amorim\n> Consultora de Desenvolvimento\n> PC Sistemas - www.pcsist.com.br\n> (62) 3250-0200\n> [email protected]\n> \n> \n> Auto Servi�o WinThor: um novo conceito em tecnologia, seguran�a e agilidade.\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFHBnhJATb/zqfZUUQRAqarAKCk2VDeiHDFYBS8K7bT5yI7LavGSwCbBcHq\nhcJQZ8qPpfbbxSUVt1sMKFU=\n=Ju0i\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 05 Oct 2007 10:45:46 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with + 1 million record table" }, { "msg_contents": "Joshua D. Drake wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> Cl�udia Macedo Amorim wrote:\n>> I'm new in PostGreSQL and I need some help.\n>> I have a table with ~2 million records. Queries in this table are too slow and some are not completed.I think it must be a simple question to solve but, I'm trying without success. 
I'm worried because next week I will need to work with tables with ~100 million records.I'm using:O.S.: Windows XP;PostgreSQL 8.2;Index type: btree.I have 2 GB of RAM.\n>> POSTGRESQL XXX.LOG:\n>>\n>> <2007-10-05 09:01:42%SELECT> LOG: could not send data to client: Unknown winsock error 10061\n>> <2007-10-05 09:03:03%idle> LOG: could not receive data from client: Unknown winsock error 10061\n>> <2007-10-05 09:03:03%idle> LOG: unexpected EOF on client connection\n> \n> \n> You are not providing a where clause which means you are scanning all 2\n> million records. If you need to do that, do it in a cursor.\n> \n> \n> Joshua D. Drake\n> \n> \n\nI would also add that if you want to use anything other than the data \ncolumn in the where clause you should add an index to those columns as well.\n\n>>\n>> The table structure is:\n>>\n>> CREATE TABLE \"public\".\"a_teste_nestle\" (\n>> \"DATA\" TIMESTAMP WITH TIME ZONE, \n>> \"CODCLI\" DOUBLE PRECISION, \n>> \"VENDEDOR\" DOUBLE PRECISION, \n>> \"SUPERVISOR\" DOUBLE PRECISION, \n>> \"CODFILIAL\" VARCHAR(2), \n>> \"PRACA\" DOUBLE PRECISION, \n>> \"CONDVENDA\" DOUBLE PRECISION, \n>> \"QTITVENDIDOS\" DOUBLE PRECISION, \n>> \"PVENDA\" DOUBLE PRECISION, \n>> \"PESO\" DOUBLE PRECISION, \n>> \"CODPROD\" VARCHAR(15), \n>> \"CODFAB\" VARCHAR(15), \n>> \"DESCRICAO\" VARCHAR(80), \n>> \"CODGRUPONESTLE\" DOUBLE PRECISION, \n>> \"CODSUBGRUPONESTLE\" DOUBLE PRECISION, \n>> \"CODFAMILIANESTLE\" DOUBLE PRECISION, \n>> \"QTPESOPREV\" DOUBLE PRECISION, \n>> \"QTVENDAPREV\" DOUBLE PRECISION, \n>> \"VLVENDAPREV\" DOUBLE PRECISION, \n>> \"QT\" DOUBLE PRECISION, \n>> \"PUNIT\" DOUBLE PRECISION\n>> ) WITHOUT OIDS;\n>>\n>> CREATE INDEX \"a_teste_nestle_idx\" ON \"public\".\"a_teste_nestle\"\n>> USING btree (\"DATA\");\n>>\n>>\n>> Thanks,\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Sat, 06 Oct 2007 10:49:08 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with + 1 million record table" } ]
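A minimal sketch of the cursor-based fetching recommended in the replies above, using the table from this thread. The cursor name and the FETCH batch size of 1000 are arbitrary choices, and in practice a WHERE clause (plus an index on the filtered columns, as noted above) should narrow the result set before resorting to streaming the whole table:

BEGIN;
-- stream the result instead of pulling all ~2 million rows into the client at once
DECLARE nestle_cur CURSOR FOR
    SELECT * FROM a_teste_nestle;
-- repeat the FETCH until it returns no rows, processing each batch client-side
FETCH FORWARD 1000 FROM nestle_cur;
CLOSE nestle_cur;
COMMIT;

The psqlODBC driver can also do much the same thing transparently through its "Use Declare/Fetch" data-source option, without touching the application's SQL.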
[ { "msg_contents": "Postgresql: 8.2.4 and 8.2.5\n\nI've run into a really strange problem. I have a query that runs much\nslower after analyze. I load the DB, then run the query and it runs in\nabout 200ms. I then analyze the DB, run the query again and it takes about\n1500ms. Before analyze it seems to choose Bitmap Heap Scan on episodes\ncurrent_episode, but after it chooses Index Scan Backward using\nindex_episodes_on_publish_on on episodes current_episode. I've tried it on\ntwo different pg servers and the results are same with default\nrandom_page_cost and with random_page_cost = 2. The entire DB is only 8MB\nso easily fits in memory on all the test systems. Any ideas what's going\non? Setting enable_indexscan = 0 also yields a fast (200ms) plan, so that's\na workaround.\n\nHere are the plans:\n\n\njeff=# explain ANALYZE SELECT DISTINCT ON (shows.id) shows.id,\nseasons.position AS alias_0 FROM shows\n LEFT OUTER JOIN images ON images.id = shows.landing_page_image_id\n LEFT OUTER JOIN seasons ON seasons.show_id = shows.id\n LEFT OUTER JOIN episodes ON episodes.season_id = seasons.id AND\nepisodes.publish_on = (\n SELECT MAX(current_episode.publish_on) AS publish_on FROM episodes AS\ncurrent_episode\n WHERE current_episode.publish_on IS NOT NULL AND\ncurrent_episode.publish_on <= NOW()\n AND current_episode.season_id = episodes.season_id\n )\n LEFT OUTER JOIN seasons current_seasons_shows ON\ncurrent_seasons_shows.show_id = shows.id AND seasons.position = (\n SELECT MAX(latest_unvaulted_season.position) FROM seasons AS\nlatest_unvaulted_season, episodes\n WHERE latest_unvaulted_season.show_id = seasons.show_id AND (\n latest_unvaulted_season.vaults_on IS NULL OR\nlatest_unvaulted_season.vaults_on > NOW()\n )\n AND episodes.season_id = latest_unvaulted_season.id AND\nepisodes.publish_on IS NOT NULL\n AND episodes.publish_on <= NOW()\n )\n LEFT OUTER JOIN episodes current_episodes_seasons ON\ncurrent_episodes_seasons.season_id = current_seasons_shows.id\n AND episodes.publish_on = (\n SELECT MAX(current_episode.publish_on) AS publish_on FROM episodes AS\ncurrent_episode\n WHERE current_episode.publish_on IS NOT NULL AND\ncurrent_episode.publish_on <= NOW()\n AND current_episode.season_id = episodes.season_id\n )\n WHERE (\n ( shows.deleted_at IS NULL OR shows.deleted_at > '2007-10-05\n21:14:02.438466' )\n AND ( shows.archived_at IS NULL OR shows.archived_at > '2007-10-05\n21:14:02.438559' )\n )\n AND (episodes.id IS NOT NULL AND current_seasons_shows.id IS NOT NULL)\n;\n\n \nQUERY PLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n-------\n Unique (cost=106.63..106.64 rows=1 width=8) (actual time=178.931..181.528\nrows=29 loops=1)\n -> Sort (cost=106.63..106.64 rows=1 width=8) (actual\ntime=178.931..180.120 rows=6229 loops=1)\n Sort Key: shows.id\n -> Nested Loop Left Join (cost=2.27..106.62 rows=1 width=8)\n(actual time=0.302..175.506 rows=6229 loops=1)\n Join Filter: (episodes.publish_on = (subplan))\n -> Nested Loop Left Join (cost=2.27..80.66 rows=1 width=24)\n(actual time=0.262..21.725 rows=500 loops=1)\n -> Nested Loop (cost=2.27..76.88 rows=1 width=28)\n(actual time=0.221..19.641 rows=500 loops=1)\n Join Filter: (current_seasons_shows.show_id =\nshows.id)\n -> Nested Loop (cost=2.27..74.55 rows=1\nwidth=28) (actual time=0.215..10.716 rows=267 loops=1)\n -> Nested Loop 
(cost=0.00..45.72 rows=1\nwidth=20) (actual time=0.164..2.593 rows=29 loops=1\n)\n -> Seq Scan on shows\n(cost=0.00..1.51 rows=4 width=8) (actual time=0.010..0.033 rows=\n33 loops=1)\n Filter: (((deleted_at IS NULL)\nOR (deleted_at > '2007-10-05 21:14:02.438466'::tim\nestamp without time zone)) AND ((archived_at IS NULL) OR (archived_at >\n'2007-10-05 21:14:02.438559'::timestamp without time z\none)))\n -> Index Scan using\nindex_seasons_on_show_id_and_position on seasons (cost=0.00..11.0\n4 rows=1 width=12) (actual time=0.076..0.076 rows=1 loops=33)\n Index Cond: (seasons.show_id =\nshows.id)\n Filter: (\"position\" =\n(subplan))\n SubPlan\n -> Aggregate\n(cost=9.26..9.27 rows=1 width=4) (actual time=0.040..0.040 rows=\n1 loops=58)\n -> Nested Loop\n(cost=2.27..9.25 rows=1 width=4) (actual time=0.015..0.0\n36 rows=12 loops=58)\n -> Seq Scan on\nseasons latest_unvaulted_season (cost=0.00..2.03 r\nows=1 width=8) (actual time=0.004..0.009 rows=1 loops=58)\n Filter:\n((show_id = $1) AND ((vaults_on IS NULL) OR (vaults_o\nn > now())))\n -> Bitmap Heap\nScan on episodes (cost=2.27..7.18 rows=3 width=4)\n(actual time=0.009..0.019 rows=11 loops=62)\n Recheck\nCond: (episodes.season_id = latest_unvaulted_season.i\nd)\n Filter:\n((publish_on IS NOT NULL) AND (publish_on <= now()))\n -> Bitmap\nIndex Scan on index_episodes_on_season_id (cost=0\n.00..2.27 rows=3 width=0) (actual time=0.005..0.005 rows=12 loops=62)\n Index\nCond: (episodes.season_id = latest_unvaulted_seas\non.id)\n -> Bitmap Heap Scan on episodes\n(cost=2.27..28.80 rows=3 width=12) (actual time=0.041..0.27\n4 rows=9 loops=29)\n Recheck Cond: (episodes.season_id =\nseasons.id)\n Filter: ((id IS NOT NULL) AND\n(publish_on = (subplan)))\n -> Bitmap Index Scan on\nindex_episodes_on_season_id (cost=0.00..2.27 rows=3 width=0)\n(actual time=0.005..0.005 rows=11 loops=29)\n Index Cond: (episodes.season_id\n= seasons.id)\n SubPlan\n -> Aggregate (cost=7.20..7.21\nrows=1 width=8) (actual time=0.022..0.022 rows=1 loop\ns=324)\n -> Bitmap Heap Scan on\nepisodes current_episode (cost=2.27..7.19 rows=1 width\n=8) (actual time=0.007..0.016 rows=12 loops=324)\n Recheck Cond:\n(season_id = $0)\n Filter: ((publish_on IS\nNOT NULL) AND (publish_on <= now()))\n -> Bitmap Index Scan\non index_episodes_on_season_id (cost=0.00..2.27 ro\nws=3 width=0) (actual time=0.004..0.004 rows=12 loops=324)\n Index Cond:\n(season_id = $0)\n -> Seq Scan on seasons current_seasons_shows\n(cost=0.00..1.59 rows=59 width=8) (actual time=0.002\n..0.018 rows=59 loops=267)\n Filter: (id IS NOT NULL)\n -> Index Scan using images_pkey on images\n(cost=0.00..3.77 rows=1 width=4) (actual time=0.003..0.003 ro\nws=1 loops=500)\n Index Cond: (images.id =\nshows.landing_page_image_id)\n -> Index Scan using index_episodes_on_season_id on episodes\ncurrent_episodes_seasons (cost=0.00..4.30 rows=3\nwidth=4) (actual time=0.002..0.009 rows=12 loops=500)\n Index Cond: (current_episodes_seasons.season_id =\ncurrent_seasons_shows.id)\n SubPlan\n -> Aggregate (cost=7.20..7.21 rows=1 width=8) (actual\ntime=0.022..0.022 rows=1 loops=6229)\n -> Bitmap Heap Scan on episodes current_episode\n(cost=2.27..7.19 rows=1 width=8) (actual time=0.007..\n0.016 rows=13 loops=6229)\n Recheck Cond: (season_id = $0)\n Filter: ((publish_on IS NOT NULL) AND\n(publish_on <= now()))\n -> Bitmap Index Scan on\nindex_episodes_on_season_id (cost=0.00..2.27 rows=3 width=0) (actual ti\nme=0.004..0.004 rows=13 loops=6229)\n Index Cond: (season_id = $0)\n Total runtime: 181.829 ms\n(51 rows)\n\n\nNow, analyze comes along 
and updates the statistics for me:\n\njeff=# ANALYZE ;\nANALYZE\n\n \nQUERY\n PLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n-\n Unique (cost=0.00..1226.04 rows=1 width=8) (actual time=1.121..1472.459\nrows=29 loops=1)\n -> Nested Loop Left Join (cost=0.00..1226.03 rows=1 width=8) (actual\ntime=1.120..1470.904 rows=6229 loops=1)\n Join Filter: (episodes.publish_on = (subplan))\n -> Nested Loop Left Join (cost=0.00..1156.52 rows=1 width=24)\n(actual time=0.662..99.337 rows=500 loops=1)\n -> Nested Loop (cost=0.00..1155.40 rows=1 width=28) (actual\ntime=0.655..97.353 rows=500 loops=1)\n Join Filter: (current_seasons_shows.show_id = shows.id)\n -> Nested Loop (cost=0.00..1153.07 rows=1 width=28)\n(actual time=0.652..88.459 rows=267 loops=1)\n -> Nested Loop (cost=0.00..1062.02 rows=1\nwidth=20) (actual time=0.089..3.140 rows=29 loops=1)\n Join Filter: (seasons.show_id = shows.id)\n -> Index Scan using\nindex_seasons_on_show_id_and_position on seasons (cost=0.00..1060.12 ro\nws=1 width=12) (actual time=0.065..2.466 rows=30 loops=1)\n Filter: (\"position\" = (subplan))\n SubPlan\n -> Aggregate (cost=17.83..17.84\nrows=1 width=4) (actual time=0.039..0.040 rows=1 lo\nops=59)\n -> Nested Loop\n(cost=2.34..17.80 rows=11 width=4) (actual time=0.016..0.035 r\nows=12 loops=59)\n -> Seq Scan on seasons\nlatest_unvaulted_season (cost=0.00..2.03 rows=1\nwidth=8) (actual time=0.004..0.009 rows=1 loops=59)\n Filter: ((show_id\n= $2) AND ((vaults_on IS NULL) OR (vaults_on > no\nw())))\n -> Bitmap Heap Scan on\nepisodes (cost=2.34..15.62 rows=12 width=4) (act\nual time=0.009..0.017 rows=11 loops=63)\n Recheck Cond:\n(episodes.season_id = latest_unvaulted_season.id)\n Filter:\n((publish_on IS NOT NULL) AND (publish_on <= now()))\n -> Bitmap Index\nScan on index_episodes_on_season_id (cost=0.00..2\n.34 rows=12 width=0) (actual time=0.006..0.006 rows=11 loops=63)\n Index Cond:\n(episodes.season_id = latest_unvaulted_season.id)\n -> Seq Scan on shows (cost=0.00..1.51\nrows=31 width=8) (actual time=0.002..0.013 rows=33 lo\nops=30)\n Filter: (((deleted_at IS NULL) OR\n(deleted_at > '2007-10-05 21:14:02.438466'::timestamp\n without time zone)) AND ((archived_at IS NULL) OR (archived_at >\n'2007-10-05 21:14:02.438559'::timestamp without time zone)))\n -> Index Scan using index_episodes_on_season_id\non episodes (cost=0.00..91.04 rows=1 width=12) (a\nctual time=0.466..2.936 rows=9 loops=29)\n -> Bitmap Index\nScan on index_episodes_on_season_id (cost=0.00..2\n.34 rows=12 width=0) (actual time=0.006..0.006 rows=11 loops=63)\n Index Cond:\n(episodes.season_id = latest_unvaulted_season.id)\n -> Seq Scan on shows (cost=0.00..1.51\nrows=31 width=8) (actual time=0.002..0.013 rows=33 lo\nops=30)\n Filter: (((deleted_at IS NULL) OR\n(deleted_at > '2007-10-05 21:14:02.438466'::timestamp\n without time zone)) AND ((archived_at IS NULL) OR (archived_at >\n'2007-10-05 21:14:02.438559'::timestamp without time zone)))\n -> Index Scan using index_episodes_on_season_id\non episodes (cost=0.00..91.04 rows=1 width=12) (a\nctual time=0.466..2.936 rows=9 loops=29)\n Index Cond: (episodes.season_id =\nseasons.id)\n Filter: ((id IS NOT NULL) AND (publish_on =\n(subplan)))\n SubPlan\n -> Result (cost=5.70..5.71 rows=1\nwidth=0) (actual time=0.261..0.261 rows=1 loops=324)\n InitPlan\n -> Limit (cost=0.00..5.70\nrows=1 
width=8) (actual time=0.259..0.260 rows=1 loops=\n324)\n -> Index Scan Backward\nusing index_episodes_on_publish_on on episodes curren\nt_episode (cost=0.00..62.72 rows=11 width=8) (actual time=0.258..0.258\nrows=1 loops=324)\n Index Cond:\n(publish_on <= now())\n Filter: ((publish_on\nIS NOT NULL) AND (season_id = $0))\n -> Seq Scan on seasons current_seasons_shows\n(cost=0.00..1.59 rows=59 width=8) (actual time=0.002..0.01\n7 rows=59 loops=267)\n Filter: (id IS NOT NULL)\n -> Index Scan using images_pkey on images (cost=0.00..1.11\nrows=1 width=4) (actual time=0.002..0.003 rows=1 l\noops=500)\n Index Cond: (images.id = shows.landing_page_image_id)\n -> Index Scan using index_episodes_on_season_id on episodes\ncurrent_episodes_seasons (cost=0.00..0.77 rows=12 width\n=4) (actual time=0.002..0.008 rows=12 loops=500)\n Index Cond: (current_episodes_seasons.season_id =\ncurrent_seasons_shows.id)\n SubPlan\n -> Result (cost=5.70..5.71 rows=1 width=0) (actual\ntime=0.218..0.218 rows=1 loops=6229)\n InitPlan\n -> Limit (cost=0.00..5.70 rows=1 width=8) (actual\ntime=0.217..0.217 rows=1 loops=6229)\n -> Index Scan Backward using\nindex_episodes_on_publish_on on episodes current_episode (cost=0.00..6\n2.72 rows=11 width=8) (actual time=0.216..0.216 rows=1 loops=6229)\n Index Cond: (publish_on <= now())\n Filter: ((publish_on IS NOT NULL) AND\n(season_id = $0))\n Total runtime: 1472.613 ms\n(47 rows)\n\nset enable_indexscan = 0;\n\n \nQU\nERY PLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------\n----------------------------------------------------------------------------\n--------------------------------------------------\n-------\n Unique (cost=1456.34..1456.35 rows=1 width=8) (actual\ntime=180.423..183.024 rows=29 loops=1)\n -> Sort (cost=1456.34..1456.35 rows=1 width=8) (actual\ntime=180.422..181.624 rows=6229 loops=1)\n Sort Key: shows.id\n -> Nested Loop Left Join (cost=3.31..1456.33 rows=1 width=8)\n(actual time=0.165..177.225 rows=6229 loops=1)\n Join Filter: (episodes.publish_on = (subplan))\n -> Nested Loop Left Join (cost=2.92..1265.33 rows=1\nwidth=24) (actual time=0.133..22.742 rows=500 loops=1)\n -> Nested Loop (cost=2.34..1262.73 rows=1 width=28)\n(actual time=0.125..20.141 rows=500 loops=1)\n Join Filter: (current_seasons_shows.show_id =\nshows.id)\n -> Nested Loop (cost=2.34..1260.40 rows=1\nwidth=28) (actual time=0.122..11.087 rows=267 loops=1)\n -> Nested Loop (cost=0.00..1056.47 rows=1\nwidth=20) (actual time=0.087..2.966 rows=29 loops\n=1)\n Join Filter: (seasons.show_id =\nshows.id)\n -> Seq Scan on seasons\n(cost=0.00..1054.57 rows=1 width=12) (actual time=0.063..2.309\n rows=30 loops=1)\n Filter: (\"position\" =\n(subplan))\n SubPlan\n -> Aggregate\n(cost=17.83..17.84 rows=1 width=4) (actual time=0.037..0.038 row\ns=1 loops=59)\n -> Nested Loop\n(cost=2.34..17.80 rows=11 width=4) (actual time=0.014..0\n.033 rows=12 loops=59)\n -> Seq Scan on\nseasons latest_unvaulted_season (cost=0.00..2.03 r\nows=1 width=8) (actual time=0.004..0.009 rows=1 loops=59)\n Filter:\n((show_id = $1) AND ((vaults_on IS NULL) OR (vaults_o\nn > now())))\n -> Bitmap Heap\nScan on episodes (cost=2.34..15.62 rows=12 width=4\n) (actual time=0.008..0.016 rows=11 loops=63)\n Recheck\nCond: (episodes.season_id = latest_unvaulted_season.i\nd)\n Filter:\n((publish_on IS NOT NULL) AND (publish_on <= now()))\n -> Bitmap\nIndex Scan on index_episodes_on_season_id (cost=0\n.00..2.34 rows=12 width=0) (actual time=0.005..0.005 rows=11 
loops=63)\n Index\nCond: (episodes.season_id = latest_unvaulted_seas\non.id)\n -> Seq Scan on shows\n(cost=0.00..1.51 rows=31 width=8) (actual time=0.002..0.012 rows\n=33 loops=30)\n Filter: (((deleted_at IS NULL)\nOR (deleted_at > '2007-10-05 21:14:02.438466'::tim\nestamp without time zone)) AND ((archived_at IS NULL) OR (archived_at >\n'2007-10-05 21:14:02.438559'::timestamp without time z\none)))\n -> Bitmap Heap Scan on episodes\n(cost=2.34..203.90 rows=3 width=12) (actual time=0.041..0.2\n74 rows=9 loops=29)\n Recheck Cond: (episodes.season_id =\nseasons.id)\n Filter: ((id IS NOT NULL) AND\n(publish_on = (subplan)))\n -> Bitmap Index Scan on\nindex_episodes_on_season_id (cost=0.00..2.34 rows=12 width=0)\n (actual time=0.005..0.005 rows=11 loops=29)\n Index Cond: (episodes.season_id\n= seasons.id)\n SubPlan\n -> Aggregate (cost=15.68..15.69\nrows=1 width=8) (actual time=0.022..0.022 rows=1 lo\nops=324)\n -> Bitmap Heap Scan on\nepisodes current_episode (cost=2.34..15.65 rows=11 wid\nth=8) (actual time=0.007..0.016 rows=12 loops=324)\n Recheck Cond:\n(season_id = $0)\n Filter: ((publish_on IS\nNOT NULL) AND (publish_on <= now()))\n -> Bitmap Index Scan\non index_episodes_on_season_id (cost=0.00..2.34 ro\nws=12 width=0) (actual time=0.004..0.004 rows=12 loops=324)\n Index Cond:\n(season_id = $0)\n -> Seq Scan on seasons current_seasons_shows\n(cost=0.00..1.59 rows=59 width=8) (actual time=0.002\n..0.018 rows=59 loops=267)\n Filter: (id IS NOT NULL)\n -> Bitmap Heap Scan on images (cost=0.58..2.59 rows=1\nwidth=4) (actual time=0.003..0.003 rows=1 loops=5\n00)\n Recheck Cond: (images.id =\nshows.landing_page_image_id)\n -> Bitmap Index Scan on images_pkey\n(cost=0.00..0.58 rows=1 width=0) (actual time=0.002..0.002 ro\nws=1 loops=500)\n Index Cond: (images.id =\nshows.landing_page_image_id)\n -> Bitmap Heap Scan on episodes current_episodes_seasons\n(cost=0.39..2.51 rows=12 width=4) (actual time=0.006\n..0.010 rows=12 loops=500)\n Recheck Cond: (current_episodes_seasons.season_id =\ncurrent_seasons_shows.id)\n -> Bitmap Index Scan on index_episodes_on_season_id\n(cost=0.00..0.39 rows=12 width=0) (actual time=0.00\n4..0.004 rows=12 loops=500)\n Index Cond: (current_episodes_seasons.season_id =\ncurrent_seasons_shows.id)\n SubPlan\n -> Aggregate (cost=15.68..15.69 rows=1 width=8) (actual\ntime=0.022..0.022 rows=1 loops=6229)\n -> Bitmap Heap Scan on episodes current_episode\n(cost=2.34..15.65 rows=11 width=8) (actual time=0.007\n..0.016 rows=13 loops=6229)\n Recheck Cond: (season_id = $0)\n Filter: ((publish_on IS NOT NULL) AND\n(publish_on <= now()))\n -> Bitmap Index Scan on\nindex_episodes_on_season_id (cost=0.00..2.34 rows=12 width=0) (actual t\nime=0.004..0.004 rows=13 loops=6229)\n Index Cond: (season_id = $0)\n Total runtime: 183.160 ms\n(55 rows)\n\n\n----\nJeff Frost, Owner <[email protected]>\nFrost Consulting, LLC http://www.frostconsultingllc.com/\nPhone: 650-780-7908 FAX: 650-649-1954\n\n\n\n", "msg_date": "Fri, 5 Oct 2007 19:02:26 -0700", "msg_from": "\"Jeff Frost\" <[email protected]>", "msg_from_op": true, "msg_subject": "query plan worse after analyze" }, { "msg_contents": "* Jeff Frost ([email protected]) wrote:\n> Here are the plans:\n\nIt's probably just me but, honestly, I find it terribly frustrating to\ntry and read a line-wrapped explain-analyze output... 
I realize it\nmight not be something you can control in your mailer, but you might\nconsider putting the various plans up somewhere online (perhaps a\npastebin like http://pgsql.privatepaste.com) instead of or in addition\nto sending it in the email.\n\n\tThanks!\n\n\t\tStephen", "msg_date": "Fri, 5 Oct 2007 23:14:38 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze" }, { "msg_contents": "On 10/5/07, Stephen Frost <[email protected]> wrote:\n> ... (perhaps a pastebin like http://pgsql.privatepaste.com) ...\n\nThis is cool.\n", "msg_date": "Fri, 5 Oct 2007 22:58:51 -0500", "msg_from": "\"=?UTF-8?Q?Rodrigo_De_Le=C3=B3n?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze" }, { "msg_contents": "On Fri, 5 Oct 2007, Stephen Frost wrote:\n\n> * Jeff Frost ([email protected]) wrote:\n>> Here are the plans:\n>\n> It's probably just me but, honestly, I find it terribly frustrating to\n> try and read a line-wrapped explain-analyze output... I realize it\n> might not be something you can control in your mailer, but you might\n> consider putting the various plans up somewhere online (perhaps a\n> pastebin like http://pgsql.privatepaste.com) instead of or in addition\n> to sending it in the email.\n\nIt's not you. In fact, after I sent this and saw what it looked like, I put \nit into a txt file and replied with an attachment. Unfortunately, it didn't \nbounce, nor did it show up on the list. :-(\n\nSo, here's a pastebin...it's a bit better:\n\nhttp://pastebin.com/m4f0194b\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 5 Oct 2007 21:12:20 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze" }, { "msg_contents": "\"Jeff Frost\" <[email protected]> writes:\n> Before analyze it seems to choose Bitmap Heap Scan on episodes\n> current_episode, but after it chooses Index Scan Backward using\n> index_episodes_on_publish_on on episodes current_episode.\n\nHave you tried raising the stats target for \"episodes\"? Seems like\nthe problem is a misestimate of the frequency of matches for\nseason_id = something.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Oct 2007 01:50:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze " }, { "msg_contents": "On Sat, 6 Oct 2007, Tom Lane wrote:\n\n> \"Jeff Frost\" <[email protected]> writes:\n>> Before analyze it seems to choose Bitmap Heap Scan on episodes\n>> current_episode, but after it chooses Index Scan Backward using\n>> index_episodes_on_publish_on on episodes current_episode.\n>\n> Have you tried raising the stats target for \"episodes\"? Seems like\n> the problem is a misestimate of the frequency of matches for\n> season_id = something.\n\nCan you set the stats target for an entire table up?\n\nI tried this:\n\nALTER TABLE episodes ALTER COLUMN season_id SET STATISTICS 1000;\n\nand got the same plan.\n\nAnd since I had this on a test server, I set the default stats target \nup to 100, reran analyze and got the same plan.\n\nSame if I up it to 1000. 
:-(\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Fri, 5 Oct 2007 23:41:05 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze " }, { "msg_contents": "Stephen Frost wrote:\n> * Jeff Frost ([email protected]) wrote:\n> > Here are the plans:\n> \n> It's probably just me but, honestly, I find it terribly frustrating to\n> try and read a line-wrapped explain-analyze output... I realize it\n> might not be something you can control in your mailer, but you might\n> consider putting the various plans up somewhere online (perhaps a\n> pastebin like http://pgsql.privatepaste.com) instead of or in addition\n> to sending it in the email.\n\nI've considered writing a script to unwrap the stuff, but never got\naround to doing it, because I need it quite often. (One problem is\ndealing with the several different ways of wrapping).\n\nIf there are any takers I would be very thankful.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Sat, 6 Oct 2007 11:11:15 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze" }, { "msg_contents": "--- Stephen Frost <[email protected]> wrote:\n> (perhaps a\n> pastebin like http://pgsql.privatepaste.com) instead of or in addition\n> to sending it in the email.\n\nIt this new? I don't remember seeing this used before.\n\nRegards,\nRichard Broersma Jr.\n", "msg_date": "Sun, 7 Oct 2007 09:06:37 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan worse after analyze" } ]
[ { "msg_contents": "Greetings All,\n \nI have to authenticate against an existing (constantly modified) PostgreSQL\ndatabase under Solaris 10 (X86). While my PHP scripts are an obvious\nno-brainer, the rest of the contents need to be protected as well (images,\netc) so the http authentication is required. I am using the blastwave\nApache2 and PostgreSQL packages.\n\nI am concerned about slowing down apache too much with these postgres\nqueries on every get and post - and I would love to have some tips from you\nguys on how to avoid that (beyond the normal tuning for postgres). For now,\nI can not even get it working at all so I can test it. I have posted on\nblastwave and solaris groups without reply but I have been watching this\nlist for years and you guys always come through if you can.\n\nWhen I enable mod_dbd and set up the directory access, apache complains that\nit can not find the DBDriver pgsql and I read from the manual page that it\nis looking for apr_dbd_pgsql.so to be available to apache. Other research\nindicates libpq.so is what it is looking for. Yet other research says that\nlibaprutil-1.so should contain one of these two when you look at it in a ldd\ncommand. The closest thing that my system seems to have is the libpq.so in\nthe /opt/csw/postgres/lib directory and even when I provide a symbolic link\nfrom the /opt/csw/apache2/lib for this library I still get a not found\ncondition from the DBDriver setting. It seems Google only helps if you are\nnot on Solaris - and even then not so much.\n \nIs ANYONE using blastwave packages and getting the http authentication to\nwork against a PostgreSQL database on Solaris 10? If so, how? I am out of\nstraws to grasp. \n\nAs I say, from a performance point of view, I would really like to know if\nthere is anything I can do to make sure that postgres is performing as\nquickly as possible under apache2 so that my http authentication is not\nimpacted too significantly.\n\nThanks in advance for any help that anyone can be!\n \n \n Jeff Brower\n\n", "msg_date": "Sun, 7 Oct 2007 09:14:43 -0400", "msg_from": "\"Jeffrey Brower\" <[email protected]>", "msg_from_op": true, "msg_subject": "Apache2 PostgreSQL http authentication" }, { "msg_contents": "\nOn Oct 7, 2007, at 9:14 , Jeffrey Brower wrote:\n\n> Greetings All,\n>\n> I have to authenticate against an existing (constantly modified) \n> PostgreSQL\n> database under Solaris 10 (X86). While my PHP scripts are an obvious\n> no-brainer, the rest of the contents need to be protected as well \n> (images,\n> etc) so the http authentication is required. I am using the blastwave\n> Apache2 and PostgreSQL packages.\n\nI found it trivial to install mod_auth_pgsql.\nhttp://www.giuseppetanzilli.it/mod_auth_pgsql/\n\nAs far as performance, only your testing will tell if it is \nsufficient. In my setup, the authentication overhead is the least of \nmy worries.\n\nCheers,\nM\n", "msg_date": "Sun, 7 Oct 2007 11:19:51 -0400", "msg_from": "\"A.M.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Thanks for the reply! I have used this in the past on Linux systems with\nApache 1 - but I had no idea if the Apache2 version would compile under\nSolaris (let alone the Solaris X86 version) and run dependably. I sent\nGiuseppe an email and asked him, but I've gotten no reply. 
It looks like it\nhad been a while since his system was updated (the version for apache2 was\nlast updated in January of 2006) but that could easily be a testament to its\nsolid performance since then.\n\nThank you for letting me know that someone is actually using it under\nSolaris 10 X86 and that it will work dependably. I have heard tale of\nfailures using the apache supplied module so this makes me happy.\n\nI will post my results here.\n\nThanks again!\n\n Jeff \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of A.M.\nSent: Sunday, October 07, 2007 11:20 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\n\nOn Oct 7, 2007, at 9:14 , Jeffrey Brower wrote:\n\n> Greetings All,\n>\n> I have to authenticate against an existing (constantly modified) \n> PostgreSQL database under Solaris 10 (X86). While my PHP scripts are \n> an obvious no-brainer, the rest of the contents need to be protected \n> as well (images,\n> etc) so the http authentication is required. I am using the blastwave\n> Apache2 and PostgreSQL packages.\n\nI found it trivial to install mod_auth_pgsql.\nhttp://www.giuseppetanzilli.it/mod_auth_pgsql/\n\nAs far as performance, only your testing will tell if it is sufficient. In\nmy setup, the authentication overhead is the least of my worries.\n\nCheers,\nM\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Sun, 7 Oct 2007 15:07:36 -0400", "msg_from": "\"Jeffrey Brower\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Not so trivial for me as it turns out. \n\nOnce I got the apxs command ironed out, I still could not compile it as I am\nmissing all the headers in the blastwave package: apr.h apr_hooks.h\napr_strings.h httpd.h and so on. Compilation aborted on me.\n\nI hope I am not looking at rebuilding from source downloads just to get an\nauthentication working with postgres.\n\nCertainly SOMEONE is doing http authentication under Solaris.\n\n Jeff\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeffrey Brower\nSent: Sunday, October 07, 2007 3:08 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\nThanks for the reply! I have used this in the past on Linux systems with\nApache 1 - but I had no idea if the Apache2 version would compile under\nSolaris (let alone the Solaris X86 version) and run dependably. I sent\nGiuseppe an email and asked him, but I've gotten no reply. It looks like it\nhad been a while since his system was updated (the version for apache2 was\nlast updated in January of 2006) but that could easily be a testament to its\nsolid performance since then.\n\nThank you for letting me know that someone is actually using it under\nSolaris 10 X86 and that it will work dependably. 
I have heard tale of\nfailures using the apache supplied module so this makes me happy.\n\nI will post my results here.\n\nThanks again!\n\n Jeff \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of A.M.\nSent: Sunday, October 07, 2007 11:20 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\n\nOn Oct 7, 2007, at 9:14 , Jeffrey Brower wrote:\n\n> Greetings All,\n>\n> I have to authenticate against an existing (constantly modified) \n> PostgreSQL database under Solaris 10 (X86). While my PHP scripts are \n> an obvious no-brainer, the rest of the contents need to be protected \n> as well (images,\n> etc) so the http authentication is required. I am using the blastwave\n> Apache2 and PostgreSQL packages.\n\nI found it trivial to install mod_auth_pgsql.\nhttp://www.giuseppetanzilli.it/mod_auth_pgsql/\n\nAs far as performance, only your testing will tell if it is sufficient. In\nmy setup, the authentication overhead is the least of my worries.\n\nCheers,\nM\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Sun, 7 Oct 2007 16:28:55 -0400", "msg_from": "\"Jeffrey Brower\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "On Sun, 7 Oct 2007 09:14:43 -0400\n\"Jeffrey Brower\" <[email protected]> wrote:\n> As I say, from a performance point of view, I would really like to know if\n> there is anything I can do to make sure that postgres is performing as\n> quickly as possible under apache2 so that my http authentication is not\n> impacted too significantly.\n\nHow often does the user information change? Can you simply create\nstandard Apache password files from cron during non-busy hours?\nSometimes the lower tech solution works best.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 7 Oct 2007 16:44:57 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Using a cron task was my first thought. Unfortunately, new users are given\na logon that they immediately use. I thought about shelling out and\nupdating a password file on an on-demand basis but I am not sure if that is\nsuch a great idea either - especially since users can change their passwords\nand renew their logons at will as well.\n\n Jeff\n \n\n-----Original Message-----\nFrom: D'Arcy J.M. Cain [mailto:[email protected]] \nSent: Sunday, October 07, 2007 4:45 PM\nTo: [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\nOn Sun, 7 Oct 2007 09:14:43 -0400\n\"Jeffrey Brower\" <[email protected]> wrote:\n> As I say, from a performance point of view, I would really like to \n> know if there is anything I can do to make sure that postgres is \n> performing as quickly as possible under apache2 so that my http \n> authentication is not impacted too significantly.\n\nHow often does the user information change? 
Can you simply create standard\nApache password files from cron during non-busy hours?\nSometimes the lower tech solution works best.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n\n", "msg_date": "Sun, 7 Oct 2007 16:58:29 -0400", "msg_from": "\"Jeffrey Brower\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Success!\n\nFirst you need to make sure that the blastwave package for apache\ndevelopment is on your machine. Use the blastwave command: \n\npkg-get -i apache2_devel\n\nThis gives you the headers you are missing from the apache binary install\n(as well as loading the libtool etc that the apxs command will use.\n\nNow go to http://www.giuseppetanzilli.it/mod_auth_pgsql2/ and download the\nsource. I put it in /Documents/mod_auth_pgsql2/mod_auth_pgsql-2.0.3\n\nExtract it in place (or you can move it someplace else, but you will have to\nchange where you execute the next command. This is the one that does the\nbusiness of compiling, installing and updating your httpd.conf file all at\nonce. I have Solaris SunStudio installed so it works rather neatly. I have\nthe Blastwave PostgreSQL package installed at it's default location too - so\nthis should work for you as long as you have the same packages installed.\nChange directories to where ever you extracted mod_auth_pgsql and enter this\ncommand: \n\n/opt/csw/apache2/sbin/apxs -i -a -c -I /opt/csw/postgresql/include -L\n/opt/csw/postgresql/lib -lpq mod_auth_pgsql.c\n\nIf this completed OK you are pretty much installed. Now you need to set up\nyour authentication. This had a speed bump in it too. You need to shut\ndown the basic authentication from apache if you are going to use the\nPostgreSQL authentication. This is not in any of the manuals but it seems\nto be required because it only works correctly this way. More on that\nlater.\n\nIn your httpd.conf you will need to add your configuration. You can also\nuse .htaccess but I don't like using that because it is yet another file the\napache server looks for on every request in every directory. My test\nconfiguration (which works) is:\n\n\n<Directory \"/path/to/apache2/htdocs/secretstuff\">\n AuthName \"My PostgreSQL Authenticator\"\n AuthType Basic\n AuthBasicAuthoritative Off\n Auth_PG_host localhost\n Auth_PG_port 5432\n Auth_PG_user mypostgresuserid\n Auth_PG_pwd mypostgrespassword\n Auth_PG_database mydatabasename\n Auth_PG_pwd_table mytablename\n Auth_PG_uid_field myuseridfieldname\n Auth_PG_pwd_field mypasswordfieldname\n Auth_PG_encrypted on\n Auth_PG_hash_type CRYPT\n Auth_PG_pwd_whereclause \" and myaccountstatus = 'Active' \"\n <LIMIT GET POST>\n require valid-user\n </LIMIT>\n</Directory>\n\nAnd that is it. A few notes are in order. The \"AuthBasicAuthoritative Off\"\nneeds to be there (this is the one that is not specified as required in any\nmanual I can find). If you use plain text passwords in the database (so\nthat you can do things like send them to users if they forget their\npassword), you will want to use \"Auth_PG_encrypted off\" and remove the\n\"Auth_PG_hash_type CRYPT\" (or what ever password encryption you use).\n\nThere is also a \"Auth_PG_cache_passwords\" setting you can use in case the\nsystem gets a lot of traffic and the lookups slow things down.\n\nI hope this helps someone searching for the same solutions. 
This really\ndoes work well.\n\n Jeff Brower\n\n\n\n\n-----Original Message-----\nFrom: Jeffrey Brower [mailto:[email protected]] \nSent: Sunday, October 07, 2007 4:29 PM\nTo: [email protected]; [email protected]\nSubject: RE: [PERFORM] Apache2 PostgreSQL http authentication\n\nNot so trivial for me as it turns out. \n\nOnce I got the apxs command ironed out, I still could not compile it as I am\nmissing all the headers in the blastwave package: apr.h apr_hooks.h\napr_strings.h httpd.h and so on. Compilation aborted on me.\n\nI hope I am not looking at rebuilding from source downloads just to get an\nauthentication working with postgres.\n\nCertainly SOMEONE is doing http authentication under Solaris.\n\n Jeff\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeffrey Brower\nSent: Sunday, October 07, 2007 3:08 PM\nTo: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\nThanks for the reply! I have used this in the past on Linux systems with\nApache 1 - but I had no idea if the Apache2 version would compile under\nSolaris (let alone the Solaris X86 version) and run dependably. I sent\nGiuseppe an email and asked him, but I've gotten no reply. It looks like it\nhad been a while since his system was updated (the version for apache2 was\nlast updated in January of 2006) but that could easily be a testament to its\nsolid performance since then.\n\nThank you for letting me know that someone is actually using it under\nSolaris 10 X86 and that it will work dependably. I have heard tale of\nfailures using the apache supplied module so this makes me happy.\n\nI will post my results here.\n\nThanks again!\n\n Jeff \n\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of A.M.\nSent: Sunday, October 07, 2007 11:20 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n\n\nOn Oct 7, 2007, at 9:14 , Jeffrey Brower wrote:\n\n> Greetings All,\n>\n> I have to authenticate against an existing (constantly modified) \n> PostgreSQL database under Solaris 10 (X86). While my PHP scripts are \n> an obvious no-brainer, the rest of the contents need to be protected \n> as well (images,\n> etc) so the http authentication is required. I am using the blastwave\n> Apache2 and PostgreSQL packages.\n\nI found it trivial to install mod_auth_pgsql.\nhttp://www.giuseppetanzilli.it/mod_auth_pgsql/\n\nAs far as performance, only your testing will tell if it is sufficient. In\nmy setup, the authentication overhead is the least of my worries.\n\nCheers,\nM\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Sun, 7 Oct 2007 20:29:27 -0400", "msg_from": "\"Jeffrey Brower\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apache2 PostgreSQL http authentication" } ]
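A minimal sketch of the password table that the <Directory> block above points at, reusing its placeholder names (mytablename, myuseridfieldname, mypasswordfieldname, myaccountstatus). The crypt() and gen_salt() calls assume the contrib pgcrypto module is available, which is an assumption rather than something the posters used; a traditional DES-style crypt hash is what Auth_PG_hash_type CRYPT expects, and plain-text storage with Auth_PG_encrypted off would avoid pgcrypto entirely.

    -- Hypothetical table matching the placeholder names in the Apache config above.
    CREATE TABLE mytablename (
        myuseridfieldname    varchar(64) PRIMARY KEY,
        mypasswordfieldname  varchar(64) NOT NULL,
        myaccountstatus      varchar(16) NOT NULL DEFAULT 'Active'
    );

    -- Create a login; gen_salt('des') produces a traditional crypt(3)-style salt.
    INSERT INTO mytablename (myuseridfieldname, mypasswordfieldname)
    VALUES ('someuser', crypt('secret', gen_salt('des')));

    -- A password change or renewal is just an UPDATE with a fresh salt.
    UPDATE mytablename
       SET mypasswordfieldname = crypt('newsecret', gen_salt('des'))
     WHERE myuseridfieldname = 'someuser';
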
[ { "msg_contents": "I'd consider having a small daemon LISTENing for NOTIFYs that you send by triggers whenever the table has changed. That'll make sure it only dumps if something actually changed. And you can also implement some ratelimiting if needed.\n\n/Magnus \n\n> ------- Original Message -------\n> From: \"Jeffrey Brower\" <[email protected]>\n> To: \"'D'Arcy J.M. Cain'\" <[email protected]>\n> Sent: 07-10-07, 22:58:29\n> Subject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n> \n> Using a cron task was my first thought. Unfortunately, new users are given\n> a logon that they immediately use. I thought about shelling out and\n> updating a password file on an on-demand basis but I am not sure if that is\n> such a great idea either - especially since users can change their passwords\n> and renew their logons at will as well.\n> \n> Jeff\n> \n> \n> -----Original Message-----\n> From: D'Arcy J.M. Cain [mailto:[email protected]] \n> Sent: Sunday, October 07, 2007 4:45 PM\n> To: [email protected]\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Apache2 PostgreSQL http authentication\n> \n> On Sun, 7 Oct 2007 09:14:43 -0400\n> \"Jeffrey Brower\" <[email protected]> wrote:\n> > As I say, from a performance point of view, I would really like to \n> > know if there is anything I can do to make sure that postgres is \n> > performing as quickly as possible under apache2 so that my http \n> > authentication is not impacted too significantly.\n> \n> How often does the user information change? Can you simply create standard\n> Apache password files from cron during non-busy hours?\n> Sometimes the lower tech solution works best.\n> \n> -- \n> D'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\n> http://www.druid.net/darcy/ | and a sheep voting on\n> +1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Mon, 8 Oct 2007 08:12:10 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Magnus Hagander schrieb:\n> I'd consider having a small daemon LISTENing for NOTIFYs that you send by triggers whenever the table has changed. That'll make sure it only dumps if something actually changed. And you can also implement some ratelimiting if needed.\nDo you really think such a homegrown solution will be more\nperformant then just accessing postgres? If you have\nmany users the lookup time in a .htaccess/.htpasswd is not for\nfree either.\n\nRegards\nTino\n", "msg_date": "Mon, 08 Oct 2007 19:09:48 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apache2 PostgreSQL http authentication" }, { "msg_contents": "Tino Wildenhain wrote:\n> Magnus Hagander schrieb:\n>> I'd consider having a small daemon LISTENing for NOTIFYs that you send\n>> by triggers whenever the table has changed. That'll make sure it only\n>> dumps if something actually changed. And you can also implement some\n>> ratelimiting if needed.\n> Do you really think such a homegrown solution will be more\n> performant then just accessing postgres? If you have\n> many users the lookup time in a .htaccess/.htpasswd is not for\n> free either.\n\nRight, that's what it depends on. I'd measure it. 
In systems with not\ntoo many users (say just a couple of thousand), I've measured great\nimprovements in speed. It depends on how you authenticate as well - if\nyou authenticate every single http request, the difference is greater.\n\n//Magnus\n", "msg_date": "Mon, 08 Oct 2007 19:12:23 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Apache2 PostgreSQL http authentication" } ]
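A minimal sketch of the trigger side of the LISTEN/NOTIFY arrangement described above, assuming a hypothetical app_users table and an arbitrary channel name, and assuming PL/pgSQL has been installed in the database (CREATE LANGUAGE plpgsql on 8.x). The small daemon itself is a separate client: it runs LISTEN auth_users_changed and rewrites the Apache password file whenever a notification arrives. NOTIFY carries no payload in these releases, so the daemon simply re-dumps the whole user table, and can rate-limit itself as suggested.

    -- Fire one notification per statement that touches the user table.
    CREATE OR REPLACE FUNCTION notify_auth_change() RETURNS trigger AS $$
    BEGIN
        NOTIFY auth_users_changed;   -- which rows changed does not matter to the daemon
        RETURN NULL;                 -- AFTER trigger; the return value is ignored
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER app_users_auth_notify
        AFTER INSERT OR UPDATE OR DELETE ON app_users
        FOR EACH STATEMENT
        EXECUTE PROCEDURE notify_auth_change();
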
[ { "msg_contents": "hi,\n\nis it recommended to run a postgresql server on a nfs-share \n(gigabit-network)? so basically i have a NAS + a database-server,\nand wonder if i should put the database on local hard-drives in the \ndb-server, or on the NAS, and mount it using NFS on the database-server.\n\ni haven't investigated the issue much yet (checked the \nmailing-list-archives, but couldn't find anything definitive.. ), so \nwould like to hear opinions/recommendations?\n\ncan the NAS solution be faster? how much is usually the NFS-overhead?\n\nor is there a consensus on this? saying for example \"generally, you \nshould never use NFS with postgresql?\" or it depends on some factors?\n\nintuitively it seems to me that NFS will be always an extra overhead, \nbut maybe it's an unmeasurably small overhead?\n\nthanks,\ngabor\n", "msg_date": "Mon, 08 Oct 2007 14:39:31 +0200", "msg_from": "=?ISO-8859-1?Q?G=E1bor_Farkas?= <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql on NFS.. recommended? not recommended?" }, { "msg_contents": "On 10/8/07, Gábor Farkas <[email protected]> wrote:\n> hi,\n>\n> is it recommended to run a postgresql server on a nfs-share\n> (gigabit-network)? so basically i have a NAS + a database-server,\n> and wonder if i should put the database on local hard-drives in the\n> db-server, or on the NAS, and mount it using NFS on the database-server.\n>\n> i haven't investigated the issue much yet (checked the\n> mailing-list-archives, but couldn't find anything definitive.. ), so\n> would like to hear opinions/recommendations?\n>\n> can the NAS solution be faster? how much is usually the NFS-overhead?\n>\n> or is there a consensus on this? saying for example \"generally, you\n> should never use NFS with postgresql?\" or it depends on some factors?\n>\n> intuitively it seems to me that NFS will be always an extra overhead,\n> but maybe it's an unmeasurably small overhead?\n>\n> thanks,\n> gabor\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nThis came up recently on pgsql-hackers.\nhttp://archives.postgresql.org/pgsql-hackers/2007-09/msg01182.php. See\ncontinuation of the thread here:\nhttp://archives.postgresql.org/pgsql-hackers/2007-10/msg00017.php\n\n- Josh/eggyknap\n", "msg_date": "Mon, 8 Oct 2007 07:43:42 -0600", "msg_from": "\"Josh Tolley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql on NFS.. recommended? not recommended?" } ]
[ { "msg_contents": "PGSQL 8.2.4\n\n \n\nI have noticed a slight spike in the amount of CPU usage in the last few\nweeks. I am sure it has to do with a change or two that was made to\nsome queries. What is the best way to log the SQL that is being\nexecuted? I would prefer to limit the size of the log file to 2 G. Is\nthere a way to do this?\n\n \n\nThanks for any help,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPGSQL 8.2.4\n \nI have noticed a slight spike in the amount of CPU usage in\nthe last few weeks.  I am sure it has to do with a change or two that was made\nto some queries.  What is the best way to log the SQL that is being executed? \nI would prefer to limit the size of the log file to 2 G.  Is there a way to do\nthis?\n \nThanks for any help,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Tue, 9 Oct 2007 09:11:43 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Monitoring" }, { "msg_contents": "On 10/9/07, Campbell, Lance <[email protected]> wrote:\n> I have noticed a slight spike in the amount of CPU usage in the last few\n> weeks. I am sure it has to do with a change or two that was made to some\n> queries. What is the best way to log the SQL that is being executed? I\n> would prefer to limit the size of the log file to 2 G. Is there a way to do\n> this?\n>\n\nUse http://pgfouine.projects.postgresql.org/.\n", "msg_date": "Tue, 9 Oct 2007 16:36:31 +0200", "msg_from": "\"=?UTF-8?Q?Marcin_St=C4=99pnicki?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" }, { "msg_contents": "Campbell, Lance wrote:\n> I have noticed a slight spike in the amount of CPU usage in the last few\n> weeks. I am sure it has to do with a change or two that was made to\n> some queries. What is the best way to log the SQL that is being\n> executed? \n\nTake a look at statement_timeout and log_statement configuration variables.\n\n> I would prefer to limit the size of the log file to 2 G. Is\n> there a way to do this?\n\nlog_rotation_size, together with an external tool to delete old log\nfiles. Or use log_truncate_on_rotation and log_rotation_age instead of\nlog_rotation_size.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 09 Oct 2007 16:07:59 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" }, { "msg_contents": "\n\"Heikki Linnakangas\" <[email protected]> writes:\n\n> Campbell, Lance wrote:\n>> I have noticed a slight spike in the amount of CPU usage in the last few\n>> weeks. I am sure it has to do with a change or two that was made to\n>> some queries. What is the best way to log the SQL that is being\n>> executed? 
\n>\n> Take a look at statement_timeout and log_statement configuration variables.\n\nI suspect he meant log_min_duration_statement which lets you log only queries\nwhich take too long and not statement_timeout which would actually kill your\nquery if it took too long.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 09 Oct 2007 20:00:06 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" }, { "msg_contents": "\n> On 10/9/07, Campbell, Lance <[email protected]> wrote:\n> \n>> I have noticed a slight spike in the amount of CPU usage in the last few\n>> weeks. I am sure it has to do with a change or two that was made to some\n>> queries. What is the best way to log the SQL that is being executed? I\n>> would prefer to limit the size of the log file to 2 G. Is there a way to do\n>> this?\n>>\n>> \n>\n> Use http://pgfouine.projects.postgresql.org/.\nThe best thing you can do is setting the log_min_duration_statement to \nsome reasonable value (say 200 ms or something like that), and then \nrepeatedly fix the worst queries (modifying them, adding indexes, ...) \netc. We've adopted this as a common part of weekly development / \nproduction tuning, and the performance of the apps shoot up (response \ntime of the web application dropped from 2 seconds to less than 0.5 second).\n\nActually we wrote something similar as pgfounie was not as nice as \ntoday, at that time (2005] - you can find that tool on \nhttp://opensource.pearshealthcyber.cz/. Actually I'm working on a \ncomplete rewrite of that tool into Java (new features, performance etc.) \n- it's almost done, the alpha release should be ready in two weeks or \nsomething like that. If you are interested in this, just let me know and \nI'll notify you once the first version is available on sf.net.\n\nTomas\n", "msg_date": "Wed, 10 Oct 2007 12:41:49 +0200", "msg_from": "=?UTF-8?B?VG9tw6HFoSBWb25kcmE=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" }, { "msg_contents": "On 10/10/07, Tomáš Vondra <[email protected]> wrote:\n> Actually we wrote something similar as pgfounie was not as nice as\n> today, at that time (2005] - you can find that tool on\n> http://opensource.pearshealthcyber.cz/. Actually I'm working on a\n> complete rewrite of that tool into Java (new features, performance etc.)\n> - it's almost done, the alpha release should be ready in two weeks or\n> something like that. If you are interested in this, just let me know and\n> I'll notify you once the first version is available on sf.net.\n\n+1\n", "msg_date": "Wed, 10 Oct 2007 09:58:51 -0500", "msg_from": "\"=?UTF-8?Q?Rodrigo_De_Le=C3=B3n?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" }, { "msg_contents": "On Wed, 10 Oct 2007 12:41:49 +0200\nTomáš Vondra <[email protected]> wrote:\n\n<snip>\n\n> Actually we wrote something similar as pgfounie was not as nice as \n> today, at that time (2005] - you can find that tool on \n> http://opensource.pearshealthcyber.cz/. Actually I'm working on a \n> complete rewrite of that tool into Java (new features, performance\n> etc.) \n> - it's almost done, the alpha release should be ready in two weeks\n> or something like that. 
If you are interested in this, just let me\n> know and I'll notify you once the first version is available on\n> sf.net.\n\nCan you post an announcement here?\n\nJosh\n", "msg_date": "Wed, 10 Oct 2007 12:25:31 -0500", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Monitoring" } ]
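Taken together, the settings mentioned in this thread amount to a postgresql.conf logging block along the following lines. The 200 ms threshold (Tomas's suggestion) and the rotation values are illustrative only, and total disk usage still has to be capped by deleting old log files externally, for example from cron. The resulting files can then be fed to pgFouine or a similar analyzer.

    # Log every statement that runs longer than 200 ms (0 logs everything, -1 disables).
    log_min_duration_statement = 200

    # 8.2 log collector and rotation; later releases rename redirect_stderr
    # to logging_collector.
    redirect_stderr = on
    log_rotation_age = 1d
    log_rotation_size = 100MB
    log_truncate_on_rotation = off
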
[ { "msg_contents": "Hi -,\nI have a very peculiar situation.\n\nI am running a postgres 7.4.6 database. It is running slow....... .\nI vacuum --analyze daily. I just did again.\nI did a vacuum full last night.\n\nBut to no avail. CPU usage and memory are normal, but the system is\ncrawling.\n\nHere is the info from vacuum.\n\nCPU 0.02s/0.01u sec elapsed 0.02 sec.\nINFO: free space map: 167 relations, 1412 pages stored; 3440 total pages\nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared\nmemory.\nVACUUM\n\n\nIs there anything else I should be looking at like FSM configuration in the\nconf file?\n\nAny help would be appreciated.\n\nThanks.\nRadhika\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nHi -,I have a very peculiar situation.I am running a postgres 7.4.6 database. It is running slow....... .I vacuum --analyze daily. I just did again. I did a vacuum full last night.But to no avail. CPU usage and memory are normal, but the system is crawling.\nHere is the info from vacuum.CPU 0.02s/0.01u sec elapsed 0.02 sec.INFO:  free space map: 167 relations, 1412 pages stored; 3440 total pages neededDETAIL:  Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared memory.\nVACUUMIs there anything else I should be looking at like FSM configuration in the conf file?Any help would be appreciated.Thanks.Radhika-- It is all a matter of perspective. You choose your view by choosing where to stand. --Larry Wall", "msg_date": "Tue, 9 Oct 2007 16:00:04 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres running Very slowly" }, { "msg_contents": "In response to \"Radhika S\" <[email protected]>:\n\n> Hi -,\n> I have a very peculiar situation.\n> \n> I am running a postgres 7.4.6 database. It is running slow....... .\n\n7.4.6 is very old. You're lucky it hasn't corrupted your data. At\nleast upgrade to the latest 7.4.18 (yes, that's 12 patches ahead of\nyou). Optimally, upgrade to 8.2.5, which has a huge number of\nperformance improvements.\n\n> I vacuum --analyze daily. I just did again.\n> I did a vacuum full last night.\n> \n> But to no avail. CPU usage and memory are normal, but the system is\n> crawling.\n\nYou need to specifically define \"crawling\" before anyone will be able\nto provide any useful advice. What queries are running slow? What\ndoes the explain output look like? The answers are in the details,\nso we can't provide the answers unless you provide the details. Like\nthe OS you're running it on, for example.\n\n> Here is the info from vacuum.\n> \n> CPU 0.02s/0.01u sec elapsed 0.02 sec.\n> INFO: free space map: 167 relations, 1412 pages stored; 3440 total pages\n> needed\n> DETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB shared\n> memory.\n> VACUUM\n\nThis doesn't look problematic, so I doubt your vacuum policy is to blame.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 9 Oct 2007 16:18:48 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres running Very slowly" } ]
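For reference, the "Allocated FSM size: 1000 relations + 20000 pages" line in the VACUUM output above simply reports the server's current free-space-map settings, while "167 relations, 1412 pages stored; 3440 total pages needed" is what the database actually requires: comfortably inside the allocation, which is why the reply concludes the vacuum/FSM side is not the problem. The equivalent postgresql.conf lines (pre-8.4 releases only; the free space map became self-managing in 8.4) are:

    max_fsm_relations = 1000      # well above the 167 relations reported as needed
    max_fsm_pages = 20000         # well above the 3440 pages reported as needed
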
[ { "msg_contents": "I have a situation where a query is running much slower than I would\nexpect. The ANALYZE showed that it is hashing some information which\nis rarely needed. When I set enable_hashjoin = off for the\nconnection the query run in 1/1000 the time.\n \nThis isn't a debilitating level of performance, but it would be nice\nto clean it up, and we haven't yet come up with a viable solution.\n \nThe runs below are after several identical runs to create a fully\ncached situation. Autovacuum is aggressive and there is a nightly\nvacuum analyze of the whole database. This box has 4 x 2 GHz Xeon\nCPUs, 6 GB RAM, RAID 5 with 13 spindles on 256 MB BBU controller.\n \nI simplified the original a bit; sorry it's still kinda big.\n \n-Kevin\n \nlisten_addresses = '*' \nmax_connections = 200\nshared_buffers = 160MB\ntemp_buffers = 50MB\nwork_mem = 10MB\nmaintenance_work_mem = 160MB\nmax_fsm_pages = 800000\nbgwriter_lru_percent = 20.0\nbgwriter_lru_maxpages = 200\nbgwriter_all_percent = 10.0\nbgwriter_all_maxpages = 600\nwal_buffers = 160kB\ncheckpoint_segments = 10\nrandom_page_cost = 2.0\neffective_cache_size = 5GB\nredirect_stderr = on\nlog_line_prefix = '[%m] %p %q<%u %d %r> '\nstats_block_level = on\nstats_row_level = on\nautovacuum = on\nautovacuum_naptime = 10s\nautovacuum_vacuum_threshold = 1\nautovacuum_analyze_threshold = 1\ndatestyle = 'iso, mdy'\nlc_messages = 'C'\nlc_monetary = 'C'\nlc_numeric = 'C'\nlc_time = 'C'\nescape_string_warning = off\nstandard_conforming_strings = on\nsql_inheritance = off\n \nbigbird=> select version();\n version\n-------------------------------------------------------------------------------------\n PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE Linux)\n(1 row)\n\nbigbird=> explain analyze\nSELECT\n \"CH\".\"caseNo\",\n \"CH\".\"countyNo\",\n \"CH\".\"chargeNo\",\n \"CH\".\"statuteCite\",\n \"CH\".\"sevClsCode\",\n \"CH\".\"modSevClsCode\",\n \"CH\".\"descr\",\n \"CH\".\"offenseDate\",\n \"CH\".\"pleaCode\",\n \"CH\".\"pleaDate\",\n \"CH\".\"chargeSeqNo\",\n \"CHST\".\"eventDate\" AS \"reopEventDate\",\n \"CTHE\".\"descr\" AS \"reopEventDescr\"\n FROM \"Charge\" \"CH\"\n LEFT OUTER JOIN \"CaseHist\" \"CHST\"\n ON ( \"CHST\".\"countyNo\" = \"CH\".\"countyNo\"\n AND \"CHST\".\"caseNo\" = \"CH\".\"caseNo\"\n AND \"CHST\".\"histSeqNo\" = \"CH\".\"reopHistSeqNo\"\n )\n LEFT OUTER JOIN \"CaseTypeHistEvent\" \"CTHE\"\n ON ( \"CHST\".\"eventType\" = \"CTHE\".\"eventType\"\n AND \"CHST\".\"caseType\" = \"CTHE\".\"caseType\"\n AND \"CHST\".\"countyNo\" = \"CTHE\".\"countyNo\"\n )\n WHERE (\n (\"CH\".\"caseNo\" = '2005CF000001')\n AND (\"CH\".\"countyNo\" = 13))\n ORDER BY\n \"chargeNo\",\n \"chargeSeqNo\"\n;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2554.50..2554.52 rows=7 width=146) (actual time=443.068..443.070 rows=3 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Hash Left Join (cost=2318.91..2554.40 rows=7 width=146) (actual time=443.004..443.039 rows=3 loops=1)\n Hash Cond: (((\"CHST\".\"eventType\")::bpchar = (\"CTHE\".\"eventType\")::bpchar) AND ((\"CHST\".\"caseType\")::bpchar = (\"CTHE\".\"caseType\")::bpchar))\n -> Nested Loop Left Join (cost=0.00..208.13 rows=7 width=131) (actual time=0.062..0.093 rows=3 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..15.37 rows=7 width=112) 
(actual time=0.052..0.059 rows=3 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2005CF000001'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..27.46 rows=6 width=41) (actual time=0.002..0.002 rows=0 loops=3)\n Index Cond: (((\"CHST\".\"countyNo\")::smallint = 13) AND ((\"CHST\".\"caseNo\")::bpchar = '2005CF000001'::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Hash (cost=2084.80..2084.80 rows=15607 width=98) (actual time=442.919..442.919 rows=15607 loops=1)\n -> Subquery Scan \"CTHE\" (cost=1630.43..2084.80 rows=15607 width=98) (actual time=331.665..411.390 rows=15607 loops=1)\n -> Merge Right Join (cost=1630.43..1928.73 rows=15607 width=89) (actual time=331.661..391.999 rows=15607 loops=1)\n Merge Cond: (((d.\"countyNo\")::smallint = \"inner\".\"?column9?\") AND ((d.\"caseType\")::bpchar = \"inner\".\"?column10?\") AND ((d.\"eventType\")::bpchar = \"inner\".\"?column11?\"))\n -> Index Scan using \"CaseTypeHistEventD_pkey\" on \"CaseTypeHistEventD\" d (cost=0.00..87.77 rows=2051 width=21) (actual time=0.026..0.730 rows=434 loops=1)\n -> Sort (cost=1630.43..1669.45 rows=15607 width=76) (actual time=331.022..341.450 rows=15607 loops=1)\n Sort Key: (c.\"countyNo\")::smallint, (b.\"caseType\")::bpchar, (b.\"eventType\")::bpchar\n -> Nested Loop (cost=0.00..543.41 rows=15607 width=76) (actual time=0.035..47.206 rows=15607 loops=1)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (actual time=0.010..0.017 rows=1 loops=1)\n Index Cond: ((\"countyNo\")::smallint = 13)\n -> Seq Scan on \"CaseTypeHistEventB\" b (cost=0.00..383.07 rows=15607 width=74) (actual time=0.019..14.634 rows=15607 loops=1)\n Total runtime: 444.452 ms\n(21 rows)\n\nbigbird=> set enable_hashjoin = off;\nSET\nbigbird=> [same query]\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=3497.26..3497.28 rows=7 width=146) (actual time=0.115..0.117 rows=3 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Merge Left Join (cost=3380.05..3497.17 rows=7 width=146) (actual time=0.091..0.097 rows=3 loops=1)\n Merge Cond: ((\"outer\".\"?column16?\" = \"inner\".\"?column5?\") AND (\"outer\".\"?column17?\" = \"inner\".\"?column6?\"))\n -> Sort (cost=208.23..208.25 rows=7 width=131) (actual time=0.087..0.089 rows=3 loops=1)\n Sort Key: (\"CHST\".\"caseType\")::bpchar, (\"CHST\".\"eventType\")::bpchar\n -> Nested Loop Left Join (cost=0.00..208.13 rows=7 width=131) (actual time=0.053..0.070 rows=3 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..15.37 rows=7 width=112) (actual time=0.043..0.048 rows=3 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2005CF000001'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..27.46 rows=6 width=41) (actual time=0.001..0.001 rows=0 loops=3)\n Index Cond: (((\"CHST\".\"countyNo\")::smallint = 13) AND ((\"CHST\".\"caseNo\")::bpchar = '2005CF000001'::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Sort (cost=3171.82..3210.84 rows=15607 width=98) (never executed)\n Sort Key: (\"CTHE\".\"caseType\")::bpchar, (\"CTHE\".\"eventType\")::bpchar\n -> Subquery Scan \"CTHE\" (cost=1630.43..2084.80 
rows=15607 width=98) (never executed)\n -> Merge Right Join (cost=1630.43..1928.73 rows=15607 width=89) (never executed)\n Merge Cond: (((d.\"countyNo\")::smallint = \"inner\".\"?column9?\") AND ((d.\"caseType\")::bpchar = \"inner\".\"?column10?\") AND ((d.\"eventType\")::bpchar = \"inner\".\"?column11?\"))\n -> Index Scan using \"CaseTypeHistEventD_pkey\" on \"CaseTypeHistEventD\" d (cost=0.00..87.77 rows=2051 width=21) (never executed)\n -> Sort (cost=1630.43..1669.45 rows=15607 width=76) (never executed)\n Sort Key: (c.\"countyNo\")::smallint, (b.\"caseType\")::bpchar, (b.\"eventType\")::bpchar\n -> Nested Loop (cost=0.00..543.41 rows=15607 width=76) (never executed)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (never executed)\n Index Cond: ((\"countyNo\")::smallint = 13)\n -> Seq Scan on \"CaseTypeHistEventB\" b (cost=0.00..383.07 rows=15607 width=74) (never executed)\n Total runtime: 0.437 ms\n(24 rows)\n\nbigbird=> \\d \"Charge\"\n Table \"public.Charge\"\n Column | Type | Modifiers\n--------------------+---------------------+-----------\n caseNo | \"CaseNoT\" | not null\n chargeSeqNo | \"ChargeSeqNoT\" | not null\n countyNo | \"CountyNoT\" | not null\n areSentCondsMet | boolean | not null\n caseType | \"CaseTypeT\" | not null\n chargeNo | \"ChargeNoT\" | not null\n descr | \"StatuteDescrT\" | not null\n isPartyTo | boolean | not null\n lastChargeModSeqNo | integer | not null\n lastJdgmtSeqNo | \"JdgmtSeqNoT\" | not null\n ordStatuteFlag | character(1) | not null\n plntfAgencyNo | \"PlntfAgencyNoT\" | not null\n statuteAgencyNo | \"PlntfAgencyNoT\" | not null\n statuteCite | \"StatuteCiteT\" | not null\n statuteEffDate | \"DateT\" |\n arrestCaseNo | \"ArrestCaseNoT\" |\n arrestDate | \"DateT\" |\n arrestTrackingNo | \"ArrestTrackingNoT\" |\n bookAgencyNo | \"IssAgencyNoT\" |\n bookCaseNo | \"BookCaseNoT\" |\n chargeId | \"ChargeIdT\" |\n dispoCode | \"DispoCodeT\" |\n issAgencyNo | \"IssAgencyNoT\" |\n modSevClsCode | \"SevClsCodeT\" |\n offenseDate | \"DateT\" |\n offenseDateRange | \"OffenseDateRangeT\" |\n pleaCode | \"PleaCodeT\" |\n pleaDate | \"DateT\" |\n reopHistSeqNo | \"HistSeqNoT\" |\n sevClsCode | \"SevClsCodeT\" |\n statuteSevSeqNo | \"StatuteSevSeqNoT\" |\n wcisClsCode | \"WcisClsCodeT\" |\n pleaHistSeqNo | \"HistSeqNoT\" |\n chargeStatusCode | \"ChargeStatusCodeT\" |\nIndexes:\n \"Charge_pkey\" PRIMARY KEY, btree (\"countyNo\", \"caseNo\", \"chargeSeqNo\")\n \"Charge_ArrestTrackingNo\" UNIQUE, btree (\"arrestTrackingNo\", \"countyNo\", \"caseNo\", \"chargeSeqNo\")\n \"Charge_OffenseDate\" btree (\"offenseDate\", \"countyNo\", \"issAgencyNo\")\n\nbigbird=> \\d \"CaseHist\"\n Table \"public.CaseHist\"\n Column | Type | Modifiers\n---------------+------------------+-----------\n caseNo | \"CaseNoT\" | not null\n histSeqNo | \"HistSeqNoT\" | not null\n countyNo | \"CountyNoT\" | not null\n caseType | \"CaseTypeT\" |\n eventAmt | \"MoneyT\" |\n eventDate | \"DateT\" |\n eventType | \"EventTypeT\" |\n userId | \"UserIdT\" |\n courtRptrCode | \"CtofcNoT\" |\n ctofcNo | \"CtofcNoT\" |\n dktTxt | \"TextT\" |\n prevRespCtofc | \"CtofcNoT\" |\n tag | \"TagTypeT\" |\n tapeCounterNo | \"TapeCounterNoT\" |\n tapeLoc | \"TapeLocT\" |\n wcisReported | \"DateT\" |\n weightPd | \"PdCodeT\" |\n weightTime | \"CalDurationT\" |\n sealCtofcNo | \"CtofcNoT\" |\n sccaCaseNo | \"SccaCaseNoT\" |\nIndexes:\n \"CaseHist_pkey\" PRIMARY KEY, btree (\"countyNo\", \"caseNo\", \"histSeqNo\")\n \"CaseHist_CaseHistCibRpt\" btree (\"countyNo\", 
\"eventDate\", \"eventType\", \"caseType\")\n\nbigbird=> \\d \"CaseTypeHistEvent\"\n View \"public.CaseTypeHistEvent\"\n Column | Type | Modifiers\n----------------+---------------+-----------\n caseType | \"CaseTypeT\" |\n eventType | \"EventTypeT\" |\n descr | \"EventDescrT\" |\n isActive | boolean |\n isKeyEvent | boolean |\n isMoneyEnabled | boolean |\n keyEventSeqNo | integer |\n countyNo | \"CountyNoT\" |\nView definition:\n SELECT b.\"caseType\", b.\"eventType\", b.descr, b.\"isActive\",\n CASE\n WHEN d.\"eventType\" IS NOT NULL THEN d.\"isKeyEvent\"\n ELSE b.\"isKeyEvent\"\n END AS \"isKeyEvent\",\n CASE\n WHEN d.\"eventType\" IS NOT NULL THEN d.\"isMoneyEnabled\"\n ELSE b.\"isMoneyEnabled\"\n END AS \"isMoneyEnabled\", COALESCE(\n CASE\n WHEN d.\"eventType\" IS NOT NULL THEN d.\"keyEventSeqNo\"::smallint\n ELSE b.\"keyEventSeqNo\"::smallint\n END::integer, 0) AS \"keyEventSeqNo\", c.\"countyNo\"\n FROM ONLY \"CaseTypeHistEventB\" b\n JOIN ONLY \"ControlRecord\" c ON 1 = 1\n LEFT JOIN ONLY \"CaseTypeHistEventD\" d ON d.\"caseType\"::bpchar = b.\"caseType\"::bpchar AND d.\"eventType\"::bpchar = b.\"eventType\"::bpchar AND d.\"countyNo\"::smallint = c.\"countyNo\"::smallint;\n\nbigbird=> \\d \"CaseTypeHistEventB\"\n Table \"public.CaseTypeHistEventB\"\n Column | Type | Modifiers\n----------------+----------------+-----------\n caseType | \"CaseTypeT\" | not null\n eventType | \"EventTypeT\" | not null\n descr | \"EventDescrT\" | not null\n isActive | boolean | not null\n isKeyEvent | boolean | not null\n isMoneyEnabled | boolean | not null\n keyEventSeqNo | \"KeyEventSeqT\" |\nIndexes:\n \"CaseTypeHistEventB_pkey\" PRIMARY KEY, btree (\"caseType\", \"eventType\") CLUSTER\n\nbigbird=> \\d \"CaseTypeHistEventD\"\n Table \"public.CaseTypeHistEventD\"\n Column | Type | Modifiers\n----------------+----------------+-----------\n countyNo | \"CountyNoT\" | not null\n caseType | \"CaseTypeT\" | not null\n eventType | \"EventTypeT\" | not null\n isMoneyEnabled | boolean | not null\n isKeyEvent | boolean | not null\n keyEventSeqNo | \"KeyEventSeqT\" |\nIndexes:\n \"CaseTypeHistEventD_pkey\" PRIMARY KEY, btree (\"countyNo\", \"caseType\", \"eventType\")\n \"CaseTypeHistEventD_CaseType\" btree (\"caseType\", \"eventType\")\n\nbigbird=> select count(*), count(\"reopHistSeqNo\") from \"Charge\";\n count | count\n----------+--------\n 14041511 | 141720\n(1 row)\n\n", "msg_date": "Tue, 09 Oct 2007 15:09:51 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "hashjoin chosen over 1000x faster plan" }, { "msg_contents": "On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:\n\n> I have a situation where a query is running much slower than I would\n> expect. The ANALYZE showed that it is hashing some information which\n> is rarely needed. When I set enable_hashjoin = off for the\n> connection the query run in 1/1000 the time.\n\nCan you confirm the two queries give identical outputs? 
It isn't clear\nto me why the second sort is (never executed) in your second plan, which\nI would only expect to see for an inner merge join.\n\nCan you show the details for ControlRecord also.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 10 Oct 2007 07:31:44 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 1:31 AM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:\n> \n>> I have a situation where a query is running much slower than I would\n>> expect. The ANALYZE showed that it is hashing some information which\n>> is rarely needed. When I set enable_hashjoin = off for the\n>> connection the query run in 1/1000 the time.\n> \n> Can you confirm the two queries give identical outputs?\n \nI checked; the output is identical.\n \n> It isn't clear\n> to me why the second sort is (never executed) in your second plan, which\n> I would only expect to see for an inner merge join.\n \nI assume that is because there were no rows to sort. The\nCaseTypeHistEvent view is only needed if there is a link to an event\nwhich reopens the charge after it is disposed. This only happens for\nabout 1% of the Charge records.\n \nThe view is a bit weird, but evolved this way. Originally, there\nwas a table by that name which was maintained statewide by our\norganization (the Consolidated Court Automation Programs, or CCAP).\nThen there was a decision to allow counties to override certain\ncolumns with their own values. Modifying the central copy and\nmerging the changes into 72 modified copies was nasty, so we split\nthe state-maintained portion and the county overrides into two tables\nwith a B (for Base) and D (for Distributed) suffix, and provided a\nview to present the merged form to the existing queries. That way\nonly the software to maintain the data needed to be modified, rather\nthan all the references to it.\n \nThere aren't a lot of rows in the distributed table; most counties\ntake the defaults. The ControlRecord table is joined to the base to\nshow one row of the base data per county in the database. 
This\nperformance problem is on the central, consolidated copy of all 72\ncounties.\n \n> Can you show the details for ControlRecord also.\n \nbigbird=> \\d \"ControlRecord\"\n Table \"public.ControlRecord\"\n Column | Type | Modifiers\n--------------------+------------------------+-----------\n countyNo | \"CountyNoT\" | not null\n dispEventTime | boolean | not null\n exportDeletes | boolean | not null\n standAloneMode | boolean | not null\n sysMailData | character(1) | not null\n chargeClsEvent | \"EventTypeT\" |\n checkPrinterDriver | character varying(50) |\n cofcCtofcNo | \"CtofcNoT\" |\n ctofcNo | \"CtofcNoT\" |\n defaultDaAttyNo | \"AttyNoT\" |\n districtNo | \"DistrictNoT\" |\n dktFee | \"MoneyT\" |\n dotCourtNo | character(8) |\n initialTrafCal | \"ActivityTypeT\" |\n maxToPrint | smallint |\n postJdgmtStatus | \"StatusCodeT\" |\n rcptPrinterDriver | character varying(50) |\n savedTxtFilePath | character varying(120) |\n scffAmt | \"MoneyT\" |\n scsfAmt | \"MoneyT\" |\n taxWarrantNo | \"CountyNoT\" |\n dorAgencyNo | character(10) |\n jurorMailerPrntDrv | character varying(50) |\n calKioskMessage | \"TextT\" |\n autoAssgnCaseEqual | boolean | not null\n sectionLimit | integer | not null\n sectionBufferLimit | integer | not null\n calKioskKeyboard | character(1) |\n saveCFRdoc | boolean |\n showAudioRecTab | boolean |\n weekdayStartTime | \"TimeT\" |\n weekdayEndTime | \"TimeT\" |\n saturdayStartTime | \"TimeT\" |\n saturdayEndTime | \"TimeT\" |\n sundayStartTime | \"TimeT\" |\n sundayEndTime | \"TimeT\" |\n reportStorageDays | integer |\nIndexes:\n \"ControlRecord_pkey\" PRIMARY KEY, btree (\"countyNo\")\n \n-Kevin\n \n\n", "msg_date": "Wed, 10 Oct 2007 09:15:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "On Wed, 2007-10-10 at 09:15 -0500, Kevin Grittner wrote:\n> >>> On Wed, Oct 10, 2007 at 1:31 AM, in message\n> <[email protected]>, Simon Riggs <[email protected]>\n> wrote: \n> > On Tue, 2007-10-09 at 15:09 -0500, Kevin Grittner wrote:\n> > \n> >> I have a situation where a query is running much slower than I would\n> >> expect. The ANALYZE showed that it is hashing some information which\n> >> is rarely needed. When I set enable_hashjoin = off for the\n> >> connection the query run in 1/1000 the time.\n> > \n> > Can you confirm the two queries give identical outputs?\n> \n> I checked; the output is identical.\n> \n> > It isn't clear\n> > to me why the second sort is (never executed) in your second plan, which\n> > I would only expect to see for an inner merge join.\n> \n> I assume that is because there were no rows to sort. The\n> CaseTypeHistEvent view is only needed if there is a link to an event\n> which reopens the charge after it is disposed. This only happens for\n> about 1% of the Charge records.\n\nSo CHST.EventType is mostly NULL? So the good news is that the default\nplan works best when it does actually find a match. So for 1% of cases\nyou will have an execution time of about 1s, <1ms for the others if you\nfiddle with the planner methods.\n\nThe planner thinks every row will find a match, yet the actual number is\nonly 1%. Hmmm, same section of code as last week.\n\nBasically the planner doesn't ever optimise for the possibility of the\nnever-executed case because even a single row returned would destroy\nthat assumption. \n\nIf we had an Option node in there, we could run the first part of the\nplan before deciding whether to do an MJ or an HJ. 
Doing that would\navoid doing 2 sorts and return even quicker in the common case (about\n80% time) without being slower in the slowest.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 10 Oct 2007 18:05:55 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Basically the planner doesn't ever optimise for the possibility of the\n> never-executed case because even a single row returned would destroy\n> that assumption. \n\nIt's worse than that: the outer subplan *does* return some rows.\nI suppose that all of them had NULLs in the join keys, which means\nthat (since 8.1 or so) nodeMergejoin discards them as unmatchable.\nHad even one been non-NULL the expensive subplan would have been run.\n\nThis seems like too much of a corner case to justify adding a lot of\nmachinery for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Oct 2007 14:07:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan " }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 1:07 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> Simon Riggs <[email protected]> writes:\n>> Basically the planner doesn't ever optimise for the possibility of the\n>> never-executed case because even a single row returned would destroy\n>> that assumption. \n> \n> It's worse than that: the outer subplan *does* return some rows.\n> I suppose that all of them had NULLs in the join keys, which means\n> that (since 8.1 or so) nodeMergejoin discards them as unmatchable.\n> Had even one been non-NULL the expensive subplan would have been run.\n \nWell, this query is run tens of thousands of times per day by our web\napplication; less than one percent of those runs would require the\nsubplan. (In my initial post I showed counts to demonstrate that 1%\nof the rows had a non-NULL value and, while I wouldn't expect the\nplanner to know this, these tend to be clustered on a lower\npercentage of cases.) If the philosophy of the planner is to go for\nthe lowest average cost (versus lowest worst case cost) shouldn't it\nuse the statistics for to look at the percentage of NULLs?\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 10 Oct 2007 13:30:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "On Wed, 2007-10-10 at 13:30 -0500, Kevin Grittner wrote:\n> >>> On Wed, Oct 10, 2007 at 1:07 PM, in message <[email protected]>,\n> Tom Lane <[email protected]> wrote: \n> > Simon Riggs <[email protected]> writes:\n> >> Basically the planner doesn't ever optimise for the possibility of the\n> >> never-executed case because even a single row returned would destroy\n> >> that assumption. \n> > \n> > It's worse than that: the outer subplan *does* return some rows.\n> > I suppose that all of them had NULLs in the join keys, which means\n> > that (since 8.1 or so) nodeMergejoin discards them as unmatchable.\n> > Had even one been non-NULL the expensive subplan would have been run.\n> \n> Well, this query is run tens of thousands of times per day by our web\n> application; less than one percent of those runs would require the\n> subplan. 
(In my initial post I showed counts to demonstrate that 1%\n> of the rows had a non-NULL value and, while I wouldn't expect the\n> planner to know this, these tend to be clustered on a lower\n> percentage of cases.) If the philosophy of the planner is to go for\n> the lowest average cost (versus lowest worst case cost) shouldn't it\n> use the statistics for to look at the percentage of NULLs?\n\nBut the planner doesn't work on probability. It works on a best-guess\nselectivity, as known at planning time.\n\nThat's why dynamic planning was invented, which we don't do yet. Don't\nhold your breath either.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 10 Oct 2007 19:54:52 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "On Wed, 2007-10-10 at 14:07 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > Basically the planner doesn't ever optimise for the possibility of the\n> > never-executed case because even a single row returned would destroy\n> > that assumption. \n> \n> It's worse than that: the outer subplan *does* return some rows.\n> I suppose that all of them had NULLs in the join keys, which means\n> that (since 8.1 or so) nodeMergejoin discards them as unmatchable.\n> Had even one been non-NULL the expensive subplan would have been run.\n\nYup\n\n> This seems like too much of a corner case to justify adding a lot of\n> machinery for.\n\nWell, I thought about that and it is pretty common to have root classes\nleft outer joined to sub-classes, if you are using the One Table per\nSubclass object-relational mapping. The joined-subclass mapping within\nHibernate implements this.\n\nNot everybody uses that, but it is an option some people take in some\ncircumstances. So we should keep it on our radar if we want to extend\nour for support complex applications.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 10 Oct 2007 20:01:03 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 1:54 PM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> \n> But the planner doesn't work on probability. It works on a best-guess\n> selectivity, as known at planning time.\n \nThe point I'm trying to make is that at planning time the\npg_statistic row for this \"Charge\".\"reopHistSeqNo\" column showed\nstanullfrac as 0.989; it doesn't seem to have taken this into account\nwhen making its guess about how many rows would be joined when it was\ncompared to the primary key column of the \"CaseHist\" table. I'm\nsuggesting that it might be a good thing if it did.\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 10 Oct 2007 14:35:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "On Wed, 2007-10-10 at 14:35 -0500, Kevin Grittner wrote:\n> >>> On Wed, Oct 10, 2007 at 1:54 PM, in message\n> <[email protected]>, Simon Riggs <[email protected]>\n> wrote: \n> > \n> > But the planner doesn't work on probability. 
It works on a best-guess\n> > selectivity, as known at planning time.\n> \n> The point I'm trying to make is that at planning time the\n> pg_statistic row for this \"Charge\".\"reopHistSeqNo\" column showed\n> stanullfrac as 0.989; it doesn't seem to have taken this into account\n> when making its guess about how many rows would be joined when it was\n> compared to the primary key column of the \"CaseHist\" table. I'm\n> suggesting that it might be a good thing if it did.\n\nUnderstood, it would be a good thing if it did.\n\nIt's more complex than you think:\n\nThe fast plan is an all-or-nothing plan. It is *only* faster when the\nnumber of matched rows is zero. You know it is zero, but currently the\nplanner doesn't, nor is it able to make use of the information when it\nhas it, half thru execution. Even if we could work out the high\nprobability of it being zero, we would still be left with the decision\nof whether to optimise for the zero or for the non-zero.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 10 Oct 2007 20:52:25 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 2:52 PM, in message\n<[email protected]>, Simon Riggs <[email protected]>\nwrote: \n> \n> The fast plan is an all-or-nothing plan. It is *only* faster when the\n> number of matched rows is zero. You know it is zero, but currently the\n> planner doesn't, nor is it able to make use of the information when it\n> has it, half thru execution. Even if we could work out the high\n> probability of it being zero, we would still be left with the decision\n> of whether to optimise for the zero or for the non-zero.\n \nFor a different case number which has four charges, two reopened:\n \n Sort (cost=2450.27..2450.28 rows=4 width=146) (actual time=463.048..463.052 rows=4 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Hash Left Join (cost=2318.93..2450.23 rows=4 width=146) (actual time=462.857..462.995 rows=4 loops=1)\n Hash Cond: (((\"CHST\".\"eventType\")::bpchar = (\"CTHE\".\"eventType\")::bpchar) AND ((\"CHST\".\"caseType\")::bpchar = (\"CTHE\".\"caseType\")::bpchar))\n -> Nested Loop Left Join (cost=0.00..115.67 rows=4 width=131) (actual time=0.045..0.165 rows=4 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..10.69 rows=4 width=112) (actual time=0.036..0.053 rows=4 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2004CF002575'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..26.18 rows=5 width=41) (actual time=0.018..0.019 rows=0 loops=4)\n Index Cond: (((\"CHST\".\"countyNo\")::smallint = 13) AND ((\"CHST\".\"caseNo\")::bpchar = '2004CF002575'::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Hash (cost=2084.82..2084.82 rows=15607 width=98) (actual time=462.780..462.780 rows=15607 loops=1)\n -> Subquery Scan \"CTHE\" (cost=1630.43..2084.82 rows=15607 width=98) (actual time=355.962..433.081 rows=15607 loops=1)\n -> Merge Right Join (cost=1630.43..1928.75 rows=15607 width=89) (actual time=355.960..414.249 rows=15607 loops=1)\n Merge Cond: (((d.\"countyNo\")::smallint = \"inner\".\"?column9?\") AND ((d.\"caseType\")::bpchar = \"inner\".\"?column10?\") AND ((d.\"eventType\")::bpchar = \"inner\".\"?column11?\"))\n -> Index Scan using \"CaseTypeHistEventD_pkey\" on 
\"CaseTypeHistEventD\" d (cost=0.00..87.77 rows=2051 width=21) (actual time=0.025..0.713 rows=434 loops=1)\n -> Sort (cost=1630.43..1669.45 rows=15607 width=76) (actual time=355.320..365.251 rows=15607 loops=1)\n Sort Key: (c.\"countyNo\")::smallint, (b.\"caseType\")::bpchar, (b.\"eventType\")::bpchar\n -> Nested Loop (cost=0.00..543.41 rows=15607 width=76) (actual time=0.035..46.914 rows=15607 loops=1)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (actual time=0.010..0.019 rows=1 loops=1)\n Index Cond: ((\"countyNo\")::smallint = 13)\n -> Seq Scan on \"CaseTypeHistEventB\" b (cost=0.00..383.07 rows=15607 width=74) (actual time=0.019..14.069 rows=15607 loops=1)\n Total runtime: 464.588 ms\n(21 rows)\n \nWith set enable_hashjoin = off:\n \n Sort (cost=3404.68..3404.69 rows=4 width=146) (actual time=448.049..448.053 rows=4 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Merge Left Join (cost=3287.55..3404.64 rows=4 width=146) (actual time=447.986..448.005 rows=4 loops=1)\n Merge Cond: ((\"outer\".\"?column16?\" = \"inner\".\"?column5?\") AND (\"outer\".\"?column17?\" = \"inner\".\"?column6?\"))\n -> Sort (cost=115.71..115.72 rows=4 width=131) (actual time=0.179..0.182 rows=4 loops=1)\n Sort Key: (\"CHST\".\"caseType\")::bpchar, (\"CHST\".\"eventType\")::bpchar\n -> Nested Loop Left Join (cost=0.00..115.67 rows=4 width=131) (actual time=0.051..0.139 rows=4 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..10.69 rows=4 width=112) (actual time=0.040..0.053 rows=4 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2004CF002575'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..26.18 rows=5 width=41) (actual time=0.013..0.014 rows=0 loops=4)\n Index Cond: (((\"CHST\".\"countyNo\")::smallint = 13) AND ((\"CHST\".\"caseNo\")::bpchar = '2004CF002575'::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Sort (cost=3171.84..3210.86 rows=15607 width=98) (actual time=446.459..446.936 rows=768 loops=1)\n Sort Key: (\"CTHE\".\"caseType\")::bpchar, (\"CTHE\".\"eventType\")::bpchar\n -> Subquery Scan \"CTHE\" (cost=1630.43..2084.82 rows=15607 width=98) (actual time=322.928..405.654 rows=15607 loops=1)\n -> Merge Right Join (cost=1630.43..1928.75 rows=15607 width=89) (actual time=322.922..381.371 rows=15607 loops=1)\n Merge Cond: (((d.\"countyNo\")::smallint = \"inner\".\"?column9?\") AND ((d.\"caseType\")::bpchar = \"inner\".\"?column10?\") AND ((d.\"eventType\")::bpchar = \"inner\".\"?column11?\"))\n -> Index Scan using \"CaseTypeHistEventD_pkey\" on \"CaseTypeHistEventD\" d (cost=0.00..87.77 rows=2051 width=21) (actual time=0.024..0.734 rows=434 loops=1)\n -> Sort (cost=1630.43..1669.45 rows=15607 width=76) (actual time=322.294..332.182 rows=15607 loops=1)\n Sort Key: (c.\"countyNo\")::smallint, (b.\"caseType\")::bpchar, (b.\"eventType\")::bpchar\n -> Nested Loop (cost=0.00..543.41 rows=15607 width=76) (actual time=0.035..45.539 rows=15607 loops=1)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (actual time=0.010..0.016 rows=1 loops=1)\n Index Cond: ((\"countyNo\")::smallint = 13)\n -> Seq Scan on \"CaseTypeHistEventB\" b (cost=0.00..383.07 rows=15607 width=74) (actual time=0.019..13.754 rows=15607 loops=1)\n Total runtime: 449.660 ms\n(24 rows)\n \nSo in all cases it is faster without the hashjoin; it's just a\nquestion of whether it 
is 4% faster or 1000 times faster, with a 99+%\nchance of being 1000 times faster.\n \nThis may get back to a question I've always had about the wisdom of\nrounding fractional reads to whole numbers. You lose information\nwhich might lead to better plan choices. You can't read half a row,\nbut you can read one row half the time.\n \n-Kevin\n \n\n", "msg_date": "Wed, 10 Oct 2007 15:11:48 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> The point I'm trying to make is that at planning time the\n> pg_statistic row for this \"Charge\".\"reopHistSeqNo\" column showed\n> stanullfrac as 0.989; it doesn't seem to have taken this into account\n> when making its guess about how many rows would be joined when it was\n> compared to the primary key column of the \"CaseHist\" table.\n\nIt certainly does take nulls into account, but the estimate of resulting\nrows was still nonzero; and even if it were zero, I'd be very hesitant\nto make it choose a plan that is fast only if there were exactly zero\nsuch rows and is slow otherwise. Most of the complaints we've had about\nissues of this sort involve the opposite problem, ie, the planner is\nchoosing a plan that works well for few rows but falls down because\nreality involves many rows. \"Fast-for-few-rows\" plans are usually a lot\nmore brittle than the alternatives in terms of the penalty you pay for\ntoo many rows, and so putting a thumb on the scales to push it towards a\n\"fast\" corner case sounds pretty unsafe to me.\n\nAs Simon notes, the only technically sound way to handle this would\ninvolve run-time plan changeover, which is something we're not nearly\nready to tackle.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Oct 2007 16:32:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan " }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 3:32 PM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \n> I'd be very hesitant\n> to make it choose a plan that is fast only if there were exactly zero\n> such rows and is slow otherwise.\n \nI'm not sure why it looks at the slow option at all; it seems like a remaining weakness in the OUTER JOIN optimizations. If I change the query to use an inner join between the CaseHist table and the view, I get more of what I was expecting for the \"slow\" option. 
This ten times faster, and I can't see why it would not be usable with an outer join.\n \nbigbird=# explain analyze\nbigbird-# SELECT\nbigbird-# \"CH\".\"caseNo\",\nbigbird-# \"CH\".\"countyNo\",\nbigbird-# \"CH\".\"chargeNo\",\nbigbird-# \"CH\".\"statuteCite\",\nbigbird-# \"CH\".\"sevClsCode\",\nbigbird-# \"CH\".\"modSevClsCode\",\nbigbird-# \"CH\".\"descr\",\nbigbird-# \"CH\".\"offenseDate\",\nbigbird-# \"CH\".\"pleaCode\",\nbigbird-# \"CH\".\"pleaDate\",\nbigbird-# \"CH\".\"chargeSeqNo\",\nbigbird-# \"CHST\".\"eventDate\" AS \"reopEventDate\",\nbigbird-# \"CTHE\".\"descr\" AS \"reopEventDescr\"\nbigbird-# FROM \"Charge\" \"CH\"\nbigbird-# LEFT OUTER JOIN \"CaseHist\" \"CHST\"\nbigbird-# ON ( \"CHST\".\"countyNo\" = \"CH\".\"countyNo\"\nbigbird(# AND \"CHST\".\"caseNo\" = \"CH\".\"caseNo\"\nbigbird(# AND \"CHST\".\"histSeqNo\" = \"CH\".\"reopHistSeqNo\"\nbigbird(# )\nbigbird-# JOIN \"CaseTypeHistEvent\" \"CTHE\"\nbigbird-# ON ( \"CHST\".\"eventType\" = \"CTHE\".\"eventType\"\nbigbird(# AND \"CHST\".\"caseType\" = \"CTHE\".\"caseType\"\nbigbird(# AND \"CHST\".\"countyNo\" = \"CTHE\".\"countyNo\"\nbigbird(# )\nbigbird-# WHERE (\nbigbird(# (\"CH\".\"caseNo\" = '2004CF002575')\nbigbird(# AND (\"CH\".\"countyNo\" = 13))\nbigbird-# ORDER BY\nbigbird-# \"chargeNo\",\nbigbird-# \"chargeSeqNo\"\nbigbird-# ;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=126.69..126.70 rows=1 width=168) (actual time=36.854..36.855 rows=2 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Nested Loop Left Join (cost=0.00..126.68 rows=1 width=168) (actual time=36.465..36.623 rows=2 loops=1)\n Join Filter: ((d.\"countyNo\")::smallint = (c.\"countyNo\")::smallint)\n -> Nested Loop (cost=0.00..123.44 rows=1 width=185) (actual time=24.264..24.408 rows=2 loops=1)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (actual time=9.424..9.427 rows=1 loops=1)\n Index Cond: (13 = (\"countyNo\")::smallint)\n -> Nested Loop (cost=0.00..119.16 rows=1 width=185) (actual time=14.835..14.975 rows=2 loops=1)\n -> Nested Loop (cost=0.00..115.67 rows=1 width=131) (actual time=8.346..8.463 rows=2 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..10.69 rows=4 width=112) (actual time=5.723..8.228 rows=4 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2004CF002575'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..26.18 rows=5 width=41) (actual time=0.052..0.053 rows=0 loops=4)\n Index Cond: ((13 = (\"CHST\".\"countyNo\")::smallint) AND ('2004CF002575'::bpchar = (\"CHST\".\"caseNo\")::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Index Scan using \"CaseTypeHistEventB_pkey\" on \"CaseTypeHistEventB\" b (cost=0.00..3.48 rows=1 width=69) (actual time=3.248..3.250 rows=1 loops=2)\n Index Cond: (((\"CHST\".\"caseType\")::bpchar = (b.\"caseType\")::bpchar) AND ((\"CHST\".\"eventType\")::bpchar = (b.\"eventType\")::bpchar))\n -> Index Scan using \"CaseTypeHistEventD_CaseType\" on \"CaseTypeHistEventD\" d (cost=0.00..3.23 rows=1 width=17) (actual time=6.103..6.103 rows=0 loops=2)\n Index Cond: (((d.\"caseType\")::bpchar = (b.\"caseType\")::bpchar) AND ((d.\"eventType\")::bpchar = (b.\"eventType\")::bpchar))\n Total runtime: 46.072 ms\n(18 
rows)\n \n-Kevin\n \n\n", "msg_date": "Wed, 10 Oct 2007 15:48:32 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 3:48 PM, in message\n<[email protected]>, \"Kevin Grittner\"\n<[email protected]> wrote: \n> \n> This ten times faster\n \nThat understates it -- I forgot to get things cached, as I had done\nfor all the other tests. When cached, this is sub-millisecond,\nalthough not quite the 1000-fold increase which I get when no matches\nare found.\n \n-Kevin\n \n Sort (cost=126.70..126.70 rows=1 width=168) (actual time=0.259..0.261 rows=2 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Nested Loop Left Join (cost=0.00..126.69 rows=1 width=168) (actual time=0.157..0.234 rows=2 loops=1)\n Join Filter: ((d.\"countyNo\")::smallint = (c.\"countyNo\")::smallint)\n -> Nested Loop (cost=0.00..123.44 rows=1 width=185) (actual time=0.139..0.203 rows=2 loops=1)\n -> Index Scan using \"ControlRecord_pkey\" on \"ControlRecord\" c (cost=0.00..4.27 rows=1 width=2) (actual time=0.024..0.026 rows=1 loops=1)\n Index Cond: (13 = (\"countyNo\")::smallint)\n -> Nested Loop (cost=0.00..119.17 rows=1 width=185) (actual time=0.109..0.169 rows=2 loops=1)\n -> Nested Loop (cost=0.00..115.67 rows=1 width=131) (actual time=0.087..0.127 rows=2 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..10.69 rows=4 width=112) (actual time=0.038..0.051 rows=4 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2004CF002575'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..26.18 rows=5 width=41) (actual time=0.014..0.015 rows=0 loops=4)\n Index Cond: ((13 = (\"CHST\".\"countyNo\")::smallint) AND ('2004CF002575'::bpchar = (\"CHST\".\"caseNo\")::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Index Scan using \"CaseTypeHistEventB_pkey\" on \"CaseTypeHistEventB\" b (cost=0.00..3.48 rows=1 width=69) (actual time=0.015..0.017 rows=1 loops=2)\n Index Cond: (((\"CHST\".\"caseType\")::bpchar = (b.\"caseType\")::bpchar) AND ((\"CHST\".\"eventType\")::bpchar = (b.\"eventType\")::bpchar))\n -> Index Scan using \"CaseTypeHistEventD_CaseType\" on \"CaseTypeHistEventD\" d (cost=0.00..3.23 rows=1 width=17) (actual time=0.011..0.011 rows=0 loops=2)\n Index Cond: (((d.\"caseType\")::bpchar = (b.\"caseType\")::bpchar) AND ((d.\"eventType\")::bpchar = (b.\"eventType\")::bpchar))\n Total runtime: 0.605 ms\n(18 rows)\n\n", "msg_date": "Wed, 10 Oct 2007 16:02:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> I'm not sure why it looks at the slow option at all; it seems like a remain=\n> ing weakness in the OUTER JOIN optimizations.\n\nI think that comes mostly from the fact that you've got non-nullable\ntargetlist entries in the definition of the CaseTypeHistEvent view.\nThose prevent that view from being flattened into the upper query when\nit's underneath an outer join, because the current variable-evaluation\nrules provide no other way to ensure that the values are forced NULL\nwhen they need to be. 
This is something we should fix someday but don't\nhold your breath waiting --- it's likely to take some pretty fundamental\nrejiggering of the planner's handling of Vars.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Oct 2007 18:08:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hashjoin chosen over 1000x faster plan " }, { "msg_contents": ">>> On Wed, Oct 10, 2007 at 3:48 PM, in message\n<[email protected]>, \"Kevin Grittner\"\n<[email protected]> wrote: \n> I'm not sure why it looks at the slow option at all; it seems like a \n> remaining weakness in the OUTER JOIN optimizations. If I change the query to \n> use an inner join between the CaseHist table and the view, I get more of what \n> I was expecting for the \"slow\" option.\n \nJust to wrap this up (from my perspective), it looks like we're\nheaded to a workaround of using the underlying \"base\" table instead\nof the view. We ignore any county override of our description, but\nperformance is good, and they were reluctant to change it to an inner\njoin.\n \n-Kevin\n \nSELECT\n \"CH\".\"caseNo\",\n \"CH\".\"countyNo\",\n \"CH\".\"chargeNo\",\n \"CH\".\"statuteCite\",\n \"CH\".\"sevClsCode\",\n \"CH\".\"modSevClsCode\",\n \"CH\".\"descr\",\n \"CH\".\"offenseDate\",\n \"CH\".\"pleaCode\",\n \"CH\".\"pleaDate\",\n \"CH\".\"chargeSeqNo\",\n \"CHST\".\"eventDate\" AS \"reopEventDate\",\n \"CTHE\".\"descr\" AS \"reopEventDescr\"\n FROM \"Charge\" \"CH\"\n LEFT OUTER JOIN \"CaseHist\" \"CHST\"\n ON ( \"CHST\".\"countyNo\" = \"CH\".\"countyNo\"\n AND \"CHST\".\"caseNo\" = \"CH\".\"caseNo\"\n AND \"CHST\".\"histSeqNo\" = \"CH\".\"reopHistSeqNo\"\n )\n LEFT OUTER JOIN \"CaseTypeHistEventB\" \"CTHE\"\n ON ( \"CHST\".\"eventType\" = \"CTHE\".\"eventType\"\n AND \"CHST\".\"caseType\" = \"CTHE\".\"caseType\"\n )\n WHERE (\n (\"CH\".\"caseNo\" = '2004CF002575')\n AND (\"CH\".\"countyNo\" = 13))\n ORDER BY\n \"chargeNo\",\n \"chargeSeqNo\"\n;\n \n Sort (cost=129.70..129.71 rows=4 width=168) (actual time=0.218..0.220 rows=4 loops=1)\n Sort Key: \"CH\".\"chargeNo\", \"CH\".\"chargeSeqNo\"\n -> Nested Loop Left Join (cost=0.00..129.66 rows=4 width=168) (actual time=0.059..0.190 rows=4 loops=1)\n -> Nested Loop Left Join (cost=0.00..115.67 rows=4 width=129) (actual time=0.055..0.139 rows=4 loops=1)\n -> Index Scan using \"Charge_pkey\" on \"Charge\" \"CH\" (cost=0.00..10.69 rows=4 width=112) (actual time=0.046..0.059 rows=4 loops=1)\n Index Cond: (((\"countyNo\")::smallint = 13) AND ((\"caseNo\")::bpchar = '2004CF002575'::bpchar))\n -> Index Scan using \"CaseHist_pkey\" on \"CaseHist\" \"CHST\" (cost=0.00..26.18 rows=5 width=41) (actual time=0.013..0.014 rows=0 loops=4)\n Index Cond: (((\"CHST\".\"countyNo\")::smallint = 13) AND ((\"CHST\".\"caseNo\")::bpchar = '2004CF002575'::bpchar) AND ((\"CHST\".\"histSeqNo\")::smallint = (\"CH\".\"reopHistSeqNo\")::smallint))\n -> Index Scan using \"CaseTypeHistEventB_pkey\" on \"CaseTypeHistEventB\" \"CTHE\" (cost=0.00..3.48 rows=1 width=69) (actual time=0.008..0.009 rows=0 loops=4)\n Index Cond: (((\"CHST\".\"caseType\")::bpchar = (\"CTHE\".\"caseType\")::bpchar) AND ((\"CHST\".\"eventType\")::bpchar = (\"CTHE\".\"eventType\")::bpchar))\n Total runtime: 0.410 ms\n(11 rows)\n\n", "msg_date": "Wed, 10 Oct 2007 17:09:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hashjoin chosen over 1000x faster plan" } ]
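A minimal sketch of the flattening behaviour described above, using made-up table and view names (not the actual CaseTypeHistEvent definition, which is not reproduced here): on the 8.2-era planner, a view whose target list contains a non-strict expression such as COALESCE() is not pulled up into a query that left-joins to it, while a view of plain columns is. Comparing the two EXPLAIN outputs makes the difference visible.

CREATE TABLE outer_tab (ref_id integer);
CREATE TABLE base_tab  (id integer PRIMARY KEY, descr text);

CREATE VIEW v_plain AS
    SELECT id, descr FROM base_tab;

CREATE VIEW v_coalesced AS
    SELECT id, COALESCE(descr, 'n/a') AS descr FROM base_tab;

-- flattened: the plan joins outer_tab to base_tab directly
EXPLAIN SELECT * FROM outer_tab o LEFT JOIN v_plain v ON v.id = o.ref_id;

-- not flattened: the COALESCE() result cannot simply be forced NULL for
-- unmatched rows, so the view is planned as its own subquery below the join
EXPLAIN SELECT * FROM outer_tab o LEFT JOIN v_coalesced v ON v.id = o.ref_id;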
[ { "msg_contents": "Hi,\nAlong with my previous message (slow postgres), I notice the shared buffer\nsetting for our production database is set to 1000.\nHow much higher can I go? I don't know how much my kernel can take?\n\nI am running postgres 7.4.6 on Redhat enterprise 3 server.\n\nThanks,\nRadhika\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nHi,Along with my previous message (slow postgres), I notice the shared buffer setting for our production database is set to 1000.How much higher can I go?  I don't know how much my kernel can take?I am running postgres \n7.4.6 on Redhat enterprise 3 server.Thanks,Radhika-- It is all a matter of perspective. You choose your view by choosing where to stand. --Larry Wall", "msg_date": "Tue, 9 Oct 2007 16:12:56 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Shared Buffer setting in postgresql.conf" }, { "msg_contents": "On 10/9/07, Radhika S <[email protected]> wrote:\n> Hi,\n> Along with my previous message (slow postgres), I notice the shared buffer\n> setting for our production database is set to 1000.\n> How much higher can I go? I don't know how much my kernel can take?\n\nA lot higher. How much memory do you have?\n\n> I am running postgres 7.4.6 on Redhat enterprise 3 server.\n\nUnless you've got a very good reason do yourself a favour and upgrade to 8.2.5.\n", "msg_date": "Wed, 10 Oct 2007 07:25:45 +0200", "msg_from": "\"=?UTF-8?Q?Marcin_St=C4=99pnicki?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "On 10/9/07, Radhika S <[email protected]> wrote:\n> Hi,\n> Along with my previous message (slow postgres), I notice the shared buffer\n> setting for our production database is set to 1000.\n> How much higher can I go? I don't know how much my kernel can take?\n>\n> I am running postgres 7.4.6 on Redhat enterprise 3 server.\n\nYour kernel can go much much higher. However, 7.4 was not very\nefficient at handling large amount of shared_buffers, so the rule of\nthumb is to make it big enough to hold your largest working set and\ntest to see if it's faster or slower.\n\nMost of the time it will be faster, but sometimes in 7.4 it will be\nslower due to the inefficient caching algorithm it used.\n\ntwo points:\n\n* 7.4.18 or so is the latest version in that branch. Updating it is a\nsimple pg_ctl stop;rpm -Uvh postgresql-7.4.18.rpm;pg_ctl start or\nequivalent. Painless and takes a minute or two, and there are actual\nfactual data eating bugs in 7.4.6.\n\n* 8.2 (8.3 due out soon) is MUCH faster than 7.4, AND it can handle\nmuch larger shared_buffer settings than 7.4\n\nBack to shared_buffer issues. Keep in mind the kernel caches too, and\nit pretty good at it. A common school of thought is to give\npostgresql about 25% of the memory in the machine for shared_buffers\nand let the kernel handle the rest. It's not a hard fast number. 
I\nrun about 35% of the memory for shared_buffers on my server, and it\nworks very well.\n\nKeep in mind, memory handed over to shared buffers means less memory\nfor other things, like sorts or kernel buffering / caching, so\nTANSTAAFL (There ain't no such thing as a free lunch) is the key word.\n\nIn 7.4, using 25% is often too high a setting for it to handle well,\nand the practical useful maximum is usually under 10,000\nshared_buffers, and often closer to 1,000 to 5,000\n", "msg_date": "Wed, 10 Oct 2007 10:20:02 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "On Wed, 10 Oct 2007 10:20:02 -0500\n\"Scott Marlowe\" <[email protected]> wrote:\n\n> In 7.4, using 25% is often too high a setting for it to handle well,\n> and the practical useful maximum is usually under 10,000\n> shared_buffers, and often closer to 1,000 to 5,000\n\nScott - interesting reply. Is this also true for 8.1? I currently\nhave mine set to 16384 - server has 3.5 GB of total memory.\n\nJosh\n", "msg_date": "Wed, 10 Oct 2007 15:32:01 -0500", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "On 10/10/07, Josh Trutwin <[email protected]> wrote:\n> On Wed, 10 Oct 2007 10:20:02 -0500\n> \"Scott Marlowe\" <[email protected]> wrote:\n>\n> > In 7.4, using 25% is often too high a setting for it to handle well,\n> > and the practical useful maximum is usually under 10,000\n> > shared_buffers, and often closer to 1,000 to 5,000\n>\n> Scott - interesting reply. Is this also true for 8.1? I currently\n> have mine set to 16384 - server has 3.5 GB of total memory.\n\nNo, starting with 8.0, the code to manage the shared_buffers is much\nmore efficient with large numbers of shared buffers. With 8.0 and up\nthe primary considerations are that the shared_buffers be big enough\nto hold your working set, but not so big as to run the system out of\nmemory for other things, sorts, kernel caching, etc...\n", "msg_date": "Wed, 10 Oct 2007 15:58:54 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "Thank you scott.\n\nWe plan on upgrading to Postgres 8.2 very soon.\nWould it be safe to say I can make my SHARED BUFFER setting 200MB (I have\n2GB memory ).\nThe default is 24MB.\n\nRegds,\nRadhika\n\nOn 10/10/07, Scott Marlowe <[email protected]> wrote:\n>\n> On 10/9/07, Radhika S <[email protected]> wrote:\n> > Hi,\n> > Along with my previous message (slow postgres), I notice the shared\n> buffer\n> > setting for our production database is set to 1000.\n> > How much higher can I go? I don't know how much my kernel can take?\n> >\n> > I am running postgres 7.4.6 on Redhat enterprise 3 server.\n>\n> Your kernel can go much much higher. However, 7.4 was not very\n> efficient at handling large amount of shared_buffers, so the rule of\n> thumb is to make it big enough to hold your largest working set and\n> test to see if it's faster or slower.\n>\n> Most of the time it will be faster, but sometimes in 7.4 it will be\n> slower due to the inefficient caching algorithm it used.\n>\n> two points:\n>\n> * 7.4.18 or so is the latest version in that branch. Updating it is a\n> simple pg_ctl stop;rpm -Uvh postgresql-7.4.18.rpm;pg_ctl start or\n> equivalent. 
Painless and takes a minute or two, and there are actual\n> factual data eating bugs in 7.4.6.\n>\n> * 8.2 (8.3 due out soon) is MUCH faster than 7.4, AND it can handle\n> much larger shared_buffer settings than 7.4\n>\n> Back to shared_buffer issues. Keep in mind the kernel caches too, and\n> it pretty good at it. A common school of thought is to give\n> postgresql about 25% of the memory in the machine for shared_buffers\n> and let the kernel handle the rest. It's not a hard fast number. I\n> run about 35% of the memory for shared_buffers on my server, and it\n> works very well.\n>\n> Keep in mind, memory handed over to shared buffers means less memory\n> for other things, like sorts or kernel buffering / caching, so\n> TANSTAAFL (There ain't no such thing as a free lunch) is the key word.\n>\n> In 7.4, using 25% is often too high a setting for it to handle well,\n> and the practical useful maximum is usually under 10,000\n> shared_buffers, and often closer to 1,000 to 5,000\n>\n\n\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nThank you scott.We plan on upgrading to Postgres 8.2 very soon.Would it be safe to say I can make my SHARED BUFFER setting 200MB (I have 2GB memory ).The default is 24MB.Regds,Radhika\nOn 10/10/07, Scott Marlowe <[email protected]> wrote:\nOn 10/9/07, Radhika S <[email protected]> wrote:> Hi,> Along with my previous message (slow postgres), I notice the shared buffer> setting for our production database is set to 1000.\n> How much higher can I go?  I don't know how much my kernel can take?>> I am running postgres 7.4.6 on Redhat enterprise 3 server.Your kernel can go much much higher.  However, 7.4 was not very\nefficient at handling large amount of shared_buffers, so the rule ofthumb is to make it big enough to hold your largest working set andtest to see if it's faster or slower.Most of the time it will be faster, but sometimes in \n7.4 it will beslower due to the inefficient caching algorithm it used.two points:* 7.4.18 or so is the latest version in that branch.  Updating it is asimple pg_ctl stop;rpm -Uvh postgresql-7.4.18.rpm\n;pg_ctl start orequivalent.  Painless and takes a minute or two, and there are actualfactual data eating bugs in 7.4.6.* 8.2 (8.3 due out soon) is MUCH faster than 7.4, AND it can handlemuch larger shared_buffer settings than \n7.4Back to shared_buffer issues.  Keep in mind the kernel caches too, andit pretty good at it.  A common school of thought is to givepostgresql about 25% of the memory in the machine for shared_buffers\nand let the kernel handle the rest.  It's not a hard fast number.  Irun about 35% of the memory for shared_buffers on my server, and itworks very well.Keep in mind, memory handed over to shared buffers means less memory\nfor other things, like sorts or kernel buffering / caching, soTANSTAAFL (There ain't no such thing as a free lunch) is the key word.In 7.4, using 25% is often too high a setting for it to handle well,\nand the practical useful maximum is usually under 10,000shared_buffers, and often closer to 1,000 to 5,000-- It is all a matter of perspective. You choose your view by choosing where to stand. 
--Larry Wall", "msg_date": "Wed, 10 Oct 2007 20:33:50 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "On 10/10/07, Radhika S <[email protected]> wrote:\n> Thank you scott.\n>\n> We plan on upgrading to Postgres 8.2 very soon.\n> Would it be safe to say I can make my SHARED BUFFER setting 200MB (I have\n> 2GB memory ).\n> The default is 24MB.\n\nOn a dedicated db machine with 2 Gigs of ram 500Meg is fine. I run\n768 Meg shared_buffers on a machine that runs postgresql 8.2 and\napache/php and routinely have 1Gig of kernel cache on it.\n\nSo yeah, 200M shared_buffers should be no problem.\n", "msg_date": "Wed, 10 Oct 2007 19:49:23 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" }, { "msg_contents": "\nOn Wed, 2007-10-10 at 19:49 -0500, Scott Marlowe wrote:\n> On 10/10/07, Radhika S <[email protected]> wrote:\n> > Thank you scott.\n> >\n> > We plan on upgrading to Postgres 8.2 very soon.\n> > Would it be safe to say I can make my SHARED BUFFER setting 200MB (I have\n> > 2GB memory ).\n> > The default is 24MB.\n> \n> On a dedicated db machine with 2 Gigs of ram 500Meg is fine. I run\n> 768 Meg shared_buffers on a machine that runs postgresql 8.2 and\n> apache/php and routinely have 1Gig of kernel cache on it.\n> \n> So yeah, 200M shared_buffers should be no problem.\n\nI have 768MB of ram and I'm allocatting 300MB as shared buffers.\n\n", "msg_date": "Thu, 18 Oct 2007 09:33:46 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared Buffer setting in postgresql.conf" } ]
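For anyone applying the sizing advice in this thread, here is a quick way to confirm what the server is actually running with. This is a sketch for 8.2, where memory settings may also be written with units (e.g. shared_buffers = 200MB) in postgresql.conf; the figures discussed above (200MB on a 2GB box, roughly 25% of RAM on a dedicated server) are starting points to test, not hard rules.

-- current values as the server sees them
SHOW shared_buffers;
SHOW effective_cache_size;

-- or the commonly tuned memory settings in one go
SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'effective_cache_size');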
[ { "msg_contents": "Hi\n\nI have been having some serious performance issues when using prepared\nstatements which I can not re-produce when using a direct statement. Let\nme try to explain\n\nThe query does an order by in descending order on several columns for\nwhich an index exists. \n\nThe explain output as follows\n\nrascal=# explain SELECT oid, * FROM calllog\nWHERE calllog_mainteng = '124 '\nAND calllog_phase = 8\nAND calllog_self < 366942\nOR calllog_mainteng = '124 '\nAND calllog_phase < 8\nORDER BY calllog_mainteng DESC,\n calllog_phase DESC,\n calllog_self DESC limit 25;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..111.62 rows=25 width=2164)\n -> Index Scan Backward using calllog_rmc_idx on calllog\n(cost=0.00..53475.22 rows=11977 width=2164)\n Index Cond: (calllog_mainteng = '124 '::bpchar)\n Filter: (((calllog_phase = 8) AND (calllog_self < 366942)) OR\n(calllog_phase < 8))\n\nWhen running the query directly from psql it returns the required rows\nin less than 100 milli-seconds.\n\nHowever, when using a prepared statement from my C application on the\nabove query and executing it the query duration is as follows\n\nSELECT oid, * FROM calllog\nWHERE calllog_mainteng = '124 '\nAND calllog_phase = 8\nAND calllog_self < 366942\nOR calllog_mainteng = '124 '\nAND calllog_phase < 8\nORDER BY calllog_mainteng DESC,\n calllog_phase DESC,\n calllog_self DESC limit 25\nRow[s] = 25, Duration = 435409.474 ms\n\nThe index as per the explain is defined as follows\n\n\"calllog_rmc_idx\" UNIQUE, btree (calllog_mainteng, calllog_phase,\ncalllog_self)\n\nVACUUM and all those good things done\n\nVersion of PostgreSQL 8.1 and 8.2\n\nenable_seqscan = off\nenable_sort = off\n\nAny advice/suggestions/thoughts much appreciated\n\n-- \nRegards\nTheo\n\n", "msg_date": "Wed, 10 Oct 2007 16:45:40 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with prepared statements" }, { "msg_contents": "Theo Kramer a �crit :\n> Hi\n>\n> I have been having some serious performance issues when using prepared\n> statements which I can not re-produce when using a direct statement. Let\n> me try to explain\n>\n> The query does an order by in descending order on several columns for\n> which an index exists. 
\n>\n> The explain output as follows\n>\n> rascal=# explain SELECT oid, * FROM calllog\n> WHERE calllog_mainteng = '124 '\n> AND calllog_phase = 8\n> AND calllog_self < 366942\n> OR calllog_mainteng = '124 '\n> AND calllog_phase < 8\n> ORDER BY calllog_mainteng DESC,\n> calllog_phase DESC,\n> calllog_self DESC limit 25;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..111.62 rows=25 width=2164)\n> -> Index Scan Backward using calllog_rmc_idx on calllog\n> (cost=0.00..53475.22 rows=11977 width=2164)\n> Index Cond: (calllog_mainteng = '124 '::bpchar)\n> Filter: (((calllog_phase = 8) AND (calllog_self < 366942)) OR\n> (calllog_phase < 8))\n>\n> When running the query directly from psql it returns the required rows\n> in less than 100 milli-seconds.\n>\n> However, when using a prepared statement from my C application on the\n> above query and executing it the query duration is as follows\n>\n> SELECT oid, * FROM calllog\n> WHERE calllog_mainteng = '124 '\n> AND calllog_phase = 8\n> AND calllog_self < 366942\n> OR calllog_mainteng = '124 '\n> AND calllog_phase < 8\n> ORDER BY calllog_mainteng DESC,\n> calllog_phase DESC,\n> calllog_self DESC limit 25\n> Row[s] = 25, Duration = 435409.474 ms\n>\n> The index as per the explain is defined as follows\n>\n> \"calllog_rmc_idx\" UNIQUE, btree (calllog_mainteng, calllog_phase,\n> calllog_self)\n>\n> VACUUM and all those good things done\n>\n> Version of PostgreSQL 8.1 and 8.2\n>\n> enable_seqscan = off\n> enable_sort = off\n>\n> Any advice/suggestions/thoughts much appreciated\n> \nReading the manual, you can learn that prepared statement can (not) \nfollow the same plan as direct query:\nthe plan is make before pg know the value of the variable.\n\nSee 'Notes' http://www.postgresql.org/docs/8.2/interactive/sql-prepare.html\n\n", "msg_date": "Wed, 10 Oct 2007 17:00:40 +0200", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Wed, 2007-10-10 at 17:00 +0200, Cédric Villemain wrote:\n> <snip> \n> Reading the manual, you can learn that prepared statement can (not) \n> follow the same plan as direct query:\n> the plan is make before pg know the value of the variable.\n> \n> See 'Notes' http://www.postgresql.org/docs/8.2/interactive/sql-prepare.html\n\nThanks, had missed that, however, I am afraid that I fail to see how\npreparing a query using PQprepare() and then executing it using\nPQexecPrepared(), is 8 thousand times slower than directly executing\nit.,, ( 403386.583ms/50.0ms = 8067 ).\n\nWhen doing a 'manual' prepare and explain analyze I get the following\n\nrascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\ncalllog_mainteng, calllog_phase, calllog_self FROM calllog\nWHERE calllog_mainteng = $1\nAND calllog_phase = $2\nAND calllog_self < $3 \nOR calllog_mainteng = $1 \nAND calllog_phase < $2\nORDER BY calllog_mainteng DESC,\n calllog_phase DESC,\n calllog_self DESC limit 25;\nPREPARE\nrascal=# explain analyze execute cq ('124 ', 8, 366942);\n QUERY\nPLAN \n---------------------------------------------------------------------------\n Limit (cost=0.00..232.73 rows=25 width=26) (actual time=2.992..3.178\nrows=25 loops=1)\n -> Index Scan Backward using calllog_rmc_idx on calllog\n(cost=0.00..38651.38 rows=4152 width=26) (actual time=2.986..3.116\nrows=25 loops=1)\n Index Cond: (calllog_mainteng = $1)\n Filter: 
(((calllog_phase = $2) AND (calllog_self < $3)) OR\n(calllog_phase < $2))\n Total runtime: 3.272 ms\n\n\nSo I suspect that there is something more fundamental here...\n\n-- \nRegards\nTheo\n\n", "msg_date": "Wed, 10 Oct 2007 21:34:00 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On 10/10/07, Theo Kramer <[email protected]> wrote:\n> When running the query directly from psql it returns the required rows\n> in less than 100 milli-seconds.\n>\n> However, when using a prepared statement from my C application on the\n> above query and executing it the query duration is as follows\n> ...\n> Row[s] = 25, Duration = 435409.474 ms\n>\n\nHow are you timing it? Does it really take 435 seconds to complete?\nTry the following in psql:\n\npostgres# PREPARE yourplan (VARCHAR, INT, INT) AS\nSELECT oid, * FROM calllog\nWHERE calllog_mainteng = $1\nAND calllog_phase = $2\nAND calllog_self < $3\nOR calllog_mainteng = $1\nAND calllog_phase < 8\nORDER BY calllog_mainteng DESC,\n calllog_phase DESC,\n calllog_self DESC limit 25;\n\npostgres# EXECUTE yourplan('124 ', 8, 366942);\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 10 Oct 2007 15:55:07 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Wed, 2007-10-10 at 15:55 -0400, Jonah H. Harris wrote:\n> On 10/10/07, Theo Kramer <[email protected]> wrote:\n> > When running the query directly from psql it returns the required rows\n> > in less than 100 milli-seconds.\n> >\n> > However, when using a prepared statement from my C application on the\n> > above query and executing it the query duration is as follows\n> > ...\n> > Row[s] = 25, Duration = 435409.474 ms\n> >\n> \n> How are you timing it? 
Does it really take 435 seconds to complete?\n\nFraid so - and I am running postgresql on a separate machine from the\nclient machine - with the load going way up on the postgresql machine\nand the client machine remains idle until the query returns.\n\nAlso the postgresql has only the one prepared statement executing during\nmy tests.\n\n> Try the following in psql:\n\nDid that - see my previous email.\n-- \nRegards\nTheo\n\n", "msg_date": "Wed, 10 Oct 2007 22:08:30 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "Theo Kramer a écrit :\n> On Wed, 2007-10-10 at 17:00 +0200, Cédric Villemain wrote:\n> \n>> <snip> \n>> Reading the manual, you can learn that prepared statement can (not) \n>> follow the same plan as direct query:\n>> the plan is make before pg know the value of the variable.\n>>\n>> See 'Notes' http://www.postgresql.org/docs/8.2/interactive/sql-prepare.html\n>> \n>\n> Thanks, had missed that, however, I am afraid that I fail to see how\n> preparing a query using PQprepare() and then executing it using\n> PQexecPrepared(), is 8 thousand times slower than directly executing\n> it.,, ( 403386.583ms/50.0ms = 8067 ).\n>\n> When doing a 'manual' prepare and explain analyze I get the following\n>\n> rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\n> calllog_mainteng, calllog_phase, calllog_self FROM calllog\n> WHERE calllog_mainteng = $1\n> AND calllog_phase = $2\n> AND calllog_self < $3 \n> OR calllog_mainteng = $1 \n> AND calllog_phase < $2\n> ORDER BY calllog_mainteng DESC,\n> calllog_phase DESC,\n> calllog_self DESC limit 25;\n> PREPARE\n> rascal=# explain analyze execute cq ('124 ', 8, 366942);\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------\n> Limit (cost=0.00..232.73 rows=25 width=26) (actual time=2.992..3.178\n> rows=25 loops=1)\n> -> Index Scan Backward using calllog_rmc_idx on calllog\n> (cost=0.00..38651.38 rows=4152 width=26) (actual time=2.986..3.116\n> rows=25 loops=1)\n> Index Cond: (calllog_mainteng = $1)\n> Filter: (((calllog_phase = $2) AND (calllog_self < $3)) OR\n> (calllog_phase < $2))\n> Total runtime: 3.272 ms\n>\n>\n> So I suspect that there is something more fundamental here...\n> \nmy two cents:\nperhaps ... please check that with your C code\nAnd be sure you are not providing time from application. If you have a \nlot of data and/or a lag on your lan, it can be the cause of your so \nbig difference between psql and C\n\n\n\n", "msg_date": "Thu, 11 Oct 2007 10:51:58 +0200", "msg_from": "=?UTF-8?B?Q8OpZHJpYyBWaWxsZW1haW4=?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "Theo Kramer wrote:\n> Thanks, had missed that, however, I am afraid that I fail to see how\n> preparing a query using PQprepare() and then executing it using\n> PQexecPrepared(), is 8 thousand times slower than directly executing\n> it.,, ( 403386.583ms/50.0ms = 8067 ).\n> \n> When doing a 'manual' prepare and explain analyze I get the following\n\n> rascal=# explain analyze execute cq ('124 ', 8, 366942);\n> Total runtime: 3.272 ms\n> \n> So I suspect that there is something more fundamental here...\n\nOK, so there must be something different between the two scenarios. It \ncan only be one of:\n 1. Query\n 2. DB Environment (user, locale, settings)\n 3. 
Network environment (server/client/network activity etc)\n\nAre you sure you have the parameter types correct in your long-running \nquery?\nTry setting log_min_duration_statement=9000 or so to capture \nlong-running queries.\n\nMake sure the user and any custom settings are the same. Compare SHOW \nALL for both ways.\n\nYou've said elsewhere you've ruled out the network environment, so \nthere's not point worrying about that further.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 11 Oct 2007 10:12:46 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote: \n> Theo Kramer wrote:\n> > \n> > So I suspect that there is something more fundamental here...\n> \n> OK, so there must be something different between the two scenarios. It \n> can only be one of:\n> 1. Query\n> 2. DB Environment (user, locale, settings)\n> 3. Network environment (server/client/network activity etc)\n\nI suspect that it could also be in the way the libpq PQprepare(), and\nPQexecPrepared() are handled... as opposed to the way PREPARE and\nEXECUTE are handled.\n\n> \n> Are you sure you have the parameter types correct in your long-running \n> query?\n\nYes - the problem surfaced during a going live session on an 80 user\nsystem... and we had to roll back to the previous system in a hurry.\nThis was a part of the application that had missed testing, but I have\nhad other reports from some of my other systems where this appears to be\na problem but not of the magnitude that this one is.\n\nIn any case I have managed to reproduce it in my test environment with\nconfiguration settings the same.\n\n> Try setting log_min_duration_statement=9000 or so to capture \n> long-running queries.\n\nThanks - will give that a try.\n\n> \n> Make sure the user and any custom settings are the same. Compare SHOW \n> ALL for both ways.\n\n> You've said elsewhere you've ruled out the network environment, so \n> there's not point worrying about that further.\n\nIt is definitely not a network problem - ie. the postgresql server load\ngoes way up when this query is run.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Thu, 11 Oct 2007 12:33:18 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On 10/11/07, Theo Kramer <[email protected]> wrote:\n> On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote:\n> > Theo Kramer wrote:\n> > >\n> > > So I suspect that there is something more fundamental here...\n> >\n> > OK, so there must be something different between the two scenarios. It\n> > can only be one of:\n> > 1. Query\n> > 2. DB Environment (user, locale, settings)\n> > 3. Network environment (server/client/network activity etc)\n>\n> I suspect that it could also be in the way the libpq PQprepare(), and\n> PQexecPrepared() are handled... as opposed to the way PREPARE and\n> EXECUTE are handled.\n\nPQexecPrepared is generally the fastest way to run queries from a C\napp as long as you get the right plan. 
Some suggestions\n\n* you can explain/explain analyze executing prepared statements from\npsql shell...try that and see if you can reproduce results\n* at worst case you can drop to execParams which is faster (and\nbetter) than PQexec, at least\n* if problem is plan related, you can always disable certain plan\ntypes (seqscan), prepare, and re-enable those plan types\n* do as Jonah suggested, first step is to try and reproduce problem from psql\n\nmerlin\n", "msg_date": "Thu, 11 Oct 2007 13:28:51 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On 2007-10-10, Theo Kramer <[email protected]> wrote:\n> When doing a 'manual' prepare and explain analyze I get the following\n>\n> rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\n> calllog_mainteng, calllog_phase, calllog_self FROM calllog\n> WHERE calllog_mainteng = $1\n> AND calllog_phase = $2\n> AND calllog_self < $3 \n> OR calllog_mainteng = $1 \n> AND calllog_phase < $2\n> ORDER BY calllog_mainteng DESC,\n> calllog_phase DESC,\n> calllog_self DESC limit 25;\n> PREPARE\n\nWhen you do this from the application, are you passing it 3 parameters,\nor 5? The plan is clearly taking advantage of the fact that the two\noccurrences of $1 and $2 are known to be the same value; if your app is\nusing some interface that uses ? placeholders rather than numbered\nparameters, then the planner will not be able to make this assumption.\n\nAlso, from the application, is the LIMIT 25 passed as a constant or is that\nalso a parameter?\n\n-- \nAndrew, Supernews\nhttp://www.supernews.com - individual and corporate NNTP services\n", "msg_date": "Thu, 11 Oct 2007 18:28:02 -0000", "msg_from": "Andrew - Supernews <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On 10/11/07, Andrew - Supernews <[email protected]> wrote:\n> On 2007-10-10, Theo Kramer <[email protected]> wrote:\n> > When doing a 'manual' prepare and explain analyze I get the following\n> >\n> > rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\n> > calllog_mainteng, calllog_phase, calllog_self FROM calllog\n> > WHERE calllog_mainteng = $1\n> > AND calllog_phase = $2\n> > AND calllog_self < $3\n> > OR calllog_mainteng = $1\n> > AND calllog_phase < $2\n> > ORDER BY calllog_mainteng DESC,\n> > calllog_phase DESC,\n> > calllog_self DESC limit 25;\n> > PREPARE\n>\n> When you do this from the application, are you passing it 3 parameters,\n> or 5? The plan is clearly taking advantage of the fact that the two\n> occurrences of $1 and $2 are known to be the same value; if your app is\n> using some interface that uses ? placeholders rather than numbered\n> parameters, then the planner will not be able to make this assumption.\n>\n> Also, from the application, is the LIMIT 25 passed as a constant or is that\n> also a parameter?\n\nalso, this looks a bit like a drilldown query, which is ordering the\ntable on 2+ fields. if that's the case, row wise comparison is a\nbetter and faster approach. 
is this a converted cobol app?\n\nmerlin\n", "msg_date": "Thu, 11 Oct 2007 16:04:01 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Thu, 2007-10-11 at 16:04 -0400, Merlin Moncure wrote:\n> On 10/11/07, Andrew - Supernews <[email protected]> wrote:\n> > On 2007-10-10, Theo Kramer <[email protected]> wrote:\n> > > When doing a 'manual' prepare and explain analyze I get the following\n> > >\n> > > rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\n> > > calllog_mainteng, calllog_phase, calllog_self FROM calllog\n> > > WHERE calllog_mainteng = $1\n> > > AND calllog_phase = $2\n> > > AND calllog_self < $3\n> > > OR calllog_mainteng = $1\n> > > AND calllog_phase < $2\n> > > ORDER BY calllog_mainteng DESC,\n> > > calllog_phase DESC,\n> > > calllog_self DESC limit 25;\n> > > PREPARE\n> >\n> > When you do this from the application, are you passing it 3 parameters,\n> > or 5? The plan is clearly taking advantage of the fact that the two\n> > occurrences of $1 and $2 are known to be the same value; if your app is\n> > using some interface that uses ? placeholders rather than numbered\n> > parameters, then the planner will not be able to make this assumption.\n> >\n> > Also, from the application, is the LIMIT 25 passed as a constant or is that\n> > also a parameter?\n> \n> also, this looks a bit like a drilldown query, which is ordering the\n> table on 2+ fields. if that's the case, row wise comparison is a\n> better and faster approach.\n\nAgreed - and having a look into that.\n\n> is this a converted cobol app?\n\n:) - on the right track - it is a conversion from an isam based package\nwhere I have changed the backed to PostgreSQL. Unfortunately there is\nway too much legacy design and application code to change things at a\nhigher level.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Fri, 12 Oct 2007 09:35:02 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Thu, 2007-10-11 at 13:28 -0400, Merlin Moncure wrote:\n> On 10/11/07, Theo Kramer <[email protected]> wrote:\n> > On Thu, 2007-10-11 at 10:12 +0100, Richard Huxton wrote:\n> > > Theo Kramer wrote:\n> > > >\n> > > > So I suspect that there is something more fundamental here...\n> > >\n> > > OK, so there must be something different between the two scenarios. It\n> > > can only be one of:\n> > > 1. Query\n> > > 2. DB Environment (user, locale, settings)\n> > > 3. Network environment (server/client/network activity etc)\n> >\n> > I suspect that it could also be in the way the libpq PQprepare(), and\n> > PQexecPrepared() are handled... as opposed to the way PREPARE and\n> > EXECUTE are handled.\n> \n> PQexecPrepared is generally the fastest way to run queries from a C\n> app as long as you get the right plan. 
Some suggestions\n> \n> * you can explain/explain analyze executing prepared statements from\n> psql shell...try that and see if you can reproduce results\n\nDid that - see previous emails in this thread.\n\n> * at worst case you can drop to execParams which is faster (and\n> better) than PQexec, at least\n\nThanks - will keep that option open.\n\n> * if problem is plan related, you can always disable certain plan\n> types (seqscan), prepare, and re-enable those plan types\n> * do as Jonah suggested, first step is to try and reproduce problem from psql\n\nNo success on that.\n-- \nRegards\nTheo\n\n", "msg_date": "Fri, 12 Oct 2007 09:38:01 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Thu, 2007-10-11 at 18:28 +0000, Andrew - Supernews wrote:\n> On 2007-10-10, Theo Kramer <[email protected]> wrote:\n> > When doing a 'manual' prepare and explain analyze I get the following\n> >\n> > rascal=# prepare cq (char(12), smallint, integer) as SELECT oid,\n> > calllog_mainteng, calllog_phase, calllog_self FROM calllog\n> > WHERE calllog_mainteng = $1\n> > AND calllog_phase = $2\n> > AND calllog_self < $3 \n> > OR calllog_mainteng = $1 \n> > AND calllog_phase < $2\n> > ORDER BY calllog_mainteng DESC,\n> > calllog_phase DESC,\n> > calllog_self DESC limit 25;\n> > PREPARE\n> \n> When you do this from the application, are you passing it 3 parameters,\n> or 5? The plan is clearly taking advantage of the fact that the two\n> occurrences of $1 and $2 are known to be the same value; if your app is\n> using some interface that uses ? placeholders rather than numbered\n> parameters, then the planner will not be able to make this assumption.\n\nYou may just have hit the nail on the head. I use numbered parameters\nbut have $1 to $5 ... let me take a look to see if I can change this.\n\n> Also, from the application, is the LIMIT 25 passed as a constant or is that\n> also a parameter?\n\nA constant.\n\n> \n-- \nRegards\nTheo\n\n", "msg_date": "Fri, 12 Oct 2007 10:05:35 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "Theo Kramer wrote:\n> On Thu, 2007-10-11 at 18:28 +0000, Andrew - Supernews wrote:\n>> When you do this from the application, are you passing it 3 parameters,\n>> or 5? The plan is clearly taking advantage of the fact that the two\n>> occurrences of $1 and $2 are known to be the same value; if your app is\n>> using some interface that uses ? placeholders rather than numbered\n>> parameters, then the planner will not be able to make this assumption.\n> \n> You may just have hit the nail on the head. I use numbered parameters\n> but have $1 to $5 ... let me take a look to see if I can change this.\n\nThat'll be it.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 12 Oct 2007 09:26:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On 10/12/07, Theo Kramer <[email protected]> wrote:\n> On Thu, 2007-10-11 at 16:04 -0400, Merlin Moncure wrote:\n> > is this a converted cobol app?\n>\n> :) - on the right track - it is a conversion from an isam based package\n> where I have changed the backed to PostgreSQL. 
Unfortunately there is\n> way too much legacy design and application code to change things at a\n> higher level.\n\nfwiw, I converted a pretty large cobol app (acucobol) to postgresql\nbackend translating queries on the fly. if this is a fresh effort,\nyou definately want to use the row-wise comparison feature of 8.2.\nnot only is it much simpler, it's much faster. with some clever\ncaching strategies i was able to get postgresql performance to exceed\nthe isam backend. btw, I used execprepared for virtually the entire\nsystem.\n\nexample read next:\nselect * from foo where (a,b,c) > (a1,b1,c1) order by a,b,c limit 25;\n\nexample read previous:\nselect * from foo where (a,b,c) < (a1,b1,c1) order by a desc, b desc,\nc desc limit 25;\n\netc. this will use complete index for a,b,c and is much cleaner to\nprepare, and parse for the planner (the best you can get with standard\ntactics is to get backend to use index on a).\n\nAnother big tip i can give you (also 8.2) is to check into advisory\nlocks for isam style pessimistic locking. With some thin wrappers you\ncan generate full row and table locking which is quite powerful.\n\nmerlin\n", "msg_date": "Fri, 12 Oct 2007 09:02:32 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": "On Fri, 2007-10-12 at 09:02 -0400, Merlin Moncure wrote:\n> fwiw, I converted a pretty large cobol app (acucobol) to postgresql\n> backend translating queries on the fly. if this is a fresh effort,\n> you definately want to use the row-wise comparison feature of 8.2.\n> not only is it much simpler, it's much faster. with some clever\n> caching strategies i was able to get postgresql performance to exceed\n> the isam backend. btw, I used execprepared for virtually the entire\n> system.\n> \n> example read next:\n> select * from foo where (a,b,c) > (a1,b1,c1) order by a,b,c limit 25;\n> \n> example read previous:\n> select * from foo where (a,b,c) < (a1,b1,c1) order by a desc, b desc,\n> c desc limit 25;\n> \n> etc. this will use complete index for a,b,c and is much cleaner to\n> prepare, and parse for the planner (the best you can get with standard\n> tactics is to get backend to use index on a).\n> \n> Another big tip i can give you (also 8.2) is to check into advisory\n> locks for isam style pessimistic locking. With some thin wrappers you\n> can generate full row and table locking which is quite powerful.\n\nVery interesting - I have largely done the same thing, creating tables\non the fly, translating isam calls, and creating, preparing and\nexecuting queries on the fly using the libpq PQprepare() and\nPQexecPrepared() statements... and it is running rather well at several\nsites, however, the initial port I did was for 8.0 and 8.1 so could not,\nat the time use, row level comparison, although I do have it on the\nlatest version of my code working on 8.2 which is not yet released.\n\nThe problem I have on row level comparison is that we have orders that\nare mixed, ie. a mixture of ascending and descending orders and do not\nknow if it is possible to use row level comparison on that... eg. 
I\nhaven't been able to transform the following it a row comparison query.\n\nselect * from foo where\n (a = a1 and b = b1 and c >= c1) or\n (a = a1 and b < b1) or\n (a > a1)\norder by a, b desc, c;\n\nI have, however, found that transforming the above into a union based\nquery performs substantially better.\n\nAlso indexes containing mixed order columns will only be available on\n8.3...\n\nBut many thanks for your advice.\n\n-- \nRegards\nTheo\n\n", "msg_date": "Fri, 12 Oct 2007 16:57:01 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with prepared statements" }, { "msg_contents": ">>> On Fri, Oct 12, 2007 at 9:57 AM, in message\n<[email protected]>, Theo Kramer\n<[email protected]> wrote: \n> \n> select * from foo where\n> (a = a1 and b = b1 and c >= c1) or\n> (a = a1 and b < b1) or\n> (a > a1)\n> order by a, b desc, c;\n> \n> I have, however, found that transforming the above into a union based\n> query performs substantially better.\n \nAnother approach which often performs better is to rearrange the logic\nso that the high-order predicate is AND instead of OR:\n \nselect * from foo where\n ( a >= a1\n and ( a > a1\n or ( b <= b1\n and ( b < b1\n or ( c >= c1 )))))\n order by a, b desc, c;\n \nWith the right index and a limit on rows, this can do particularly well.\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 12 Oct 2007 14:03:50 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with prepared statements" } ]
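Pulling together the two fixes from this thread (reusing the same numbered parameter wherever the same value is meant, and the 8.2 row-wise comparison Merlin describes), here is a hedged sketch of the ISAM-style "read previous" as a prepared statement against the calllog table. Parameter types are copied from the earlier PREPARE, the LIMIT stays a constant, and whether the full calllog_rmc_idx index is actually used should be confirmed with EXPLAIN ANALYZE on the real data.

PREPARE calllog_read_prev (char(12), smallint, integer) AS
SELECT oid, calllog_mainteng, calllog_phase, calllog_self
  FROM calllog
 WHERE (calllog_mainteng, calllog_phase, calllog_self) < ($1, $2, $3)
 ORDER BY calllog_mainteng DESC,
          calllog_phase DESC,
          calllog_self DESC
 LIMIT 25;

EXPLAIN ANALYZE EXECUTE calllog_read_prev ('124', 8, 366942);

If the result has to stay within a single calllog_mainteng, as the original query does, keep the equality on that column and row-compare only the trailing columns: WHERE calllog_mainteng = $1 AND (calllog_phase, calllog_self) < ($2, $3), which is logically the same predicate as the OR form shown earlier in the thread.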
[ { "msg_contents": "Hi List;\n\nI'm preparing to create a test suite of very complex queries that can be \nprofiled in terms of load and performance. The ultimate goal is to define a \nload/performance profile during a run of the old application code base and \nthen again with changes to the application code base.\n\nI suspect I should look at grabbing these things for load:\n# of queries running\n# of IDLE connections\n\nand these things for performance\nhow long the queries run\ndifference between the runs for # of queries running\n\nWhat do you'all think? are these valid metrics? should I be looking at others \nlike pg_statio or pg_stat_all_tables ? If so, what should I look for in these \n(or other) tables?\n\nThanks in advance...\n\n/Kevin\n", "msg_date": "Wed, 10 Oct 2007 16:21:41 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "building a performance test suite" }, { "msg_contents": "On Wed, 10 Oct 2007, Kevin Kempter wrote:\n\n> should I be looking at others like pg_statio or pg_stat_all_tables ? If \n> so, what should I look for in these (or other) tables?\n\nThere are a good set of monitoring scripts for performance-oriented things \nincluded with the dbt2 benchmarking package, \nhttp://sourceforge.net/projects/osdldbt\n\nYou can just use the SVN browse to take a look at the data collected by \nthat; see /trunk/dbt2/bin/pgsql/dbt2-pgsql-db-stat.in for some good things \nto get started with. For example, here's the pg_statio info they save:\n\nSELECT relid, relname, heap_blks_read, heap_blks_hit, idx_blks_read, \nidx_blks_hit FROM pg_statio_user_tables ORDER BY relname;\n\nSELECT relid, indexrelid, relname, indexrelname, idx_blks_read, \nidx_blks_hit FROM pg_statio_user_indexes ORDER BY indexrelname;\n\nPretty much everything in pg_stat_user_tables is worth collecting. And \nyou probably want to use the user oriented views rather than the all ones \n(pg_stat_user_tables instead of pg_stat_all_tables) so you don't clutter \nyour results with what's going on in the system tables--unless your test \nincudes lots of table modifications that is. Look at both of them and \nyou'll see what I mean.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 10 Oct 2007 23:14:02 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: building a performance test suite" }, { "msg_contents": "Hi,\n\nLe jeudi 11 octobre 2007, Kevin Kempter a écrit :\n> I'm preparing to create a test suite of very complex queries that can be\n> profiled in terms of load and performance. 
The ultimate goal is to define a\n> load/performance profile during a run of the old application code base and\n> then again with changes to the application code base.\n\nYou may want to consider using pgfouine and Tsung, the former to create tsung \nsessions from PostgreSQL logs and the latter to replay them simulating any \nnumber of concurrent users.\n\nTsung can also operate as PostgreSQL proxy recorder, you point your \napplication to it, it forwards the queries and record a session file for you.\n\nThe replay process can mix several sessions and use several phases of \ndifferent load behaviours.\n\nThen a little helper named tsung-plotter could be useful to draw several Tsung \nresults on the same charts for comparing.\n\nSome links:\n http://pgfouine.projects.postgresql.org/tsung.html\n http://tsung.erlang-projects.org/\n http://debian.dalibo.org/sid/tsung-ploter_0.1-1_all.deb\n http://debian.dalibo.org/sid/tsung-ploter_0.1-1.tar.gz\n\nHope this helps, regards,\n-- \ndim", "msg_date": "Thu, 11 Oct 2007 10:52:05 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: building a performance test suite" } ]
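A hedged sketch of the "load" half of that snapshot (queries running and idle connections), written for 8.2's pg_stat_activity. Note that stats_command_string must be on for current_query and query_start to be populated, and that these column names changed in later releases.

-- backends doing work vs. sitting idle
SELECT count(*) AS backends,
       sum(CASE WHEN current_query LIKE '<IDLE>%' THEN 1 ELSE 0 END) AS idle,
       sum(CASE WHEN current_query NOT LIKE '<IDLE>%' THEN 1 ELSE 0 END) AS running
  FROM pg_stat_activity;

-- longest-running statements at the moment of the snapshot
SELECT procpid,
       now() - query_start AS runtime,
       current_query
  FROM pg_stat_activity
 WHERE current_query NOT LIKE '<IDLE>%'
 ORDER BY runtime DESC;

Sampling these alongside the pg_stat_user_tables and pg_statio_user_tables queries quoted above, before and after each run, gives per-run deltas that can be compared between the old and new code base.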
[ { "msg_contents": "Hi,\n\nI'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction that does 2 things: 1) delete about 300.000 rows from a table with about 15 million rows and 2) do some (heavy) calculations and re-insert a litlte more than 300.000 new rows.\n\nMy problem is that this consumes huge amounts of memory. The transaction runs for about 20 minutes and during that transaction memory usage peaks to about 2GB. Over time, the more rows that are involved in this transaction, the higher the peak memory requirements.\n\nLately we increased our shared_buffers to 1.5GB, and during this transaction we reached the process memory limit, causing an out of memory and a rollback of the transaction:\n\nBEGIN\nDELETE 299980\nERROR: out of memory\nDETAIL: Failed on request of size 4194304.\nROLLBACK\nDROP SEQUENCE\n\nreal\t19m45.797s\nuser\t0m0.024s\nsys\t0m0.000s\n\nOn my development machine, which has less than 2GB of memory, I can not even finish the transaction.\n\nIs there a way to tell PG to start swapping to disk instead of using ram memory during such a transaction?\n\nThanks in advance for all help\n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\nHi,I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction that does 2 things: 1) delete about 300.000 rows from a table with about 15 million rows and 2) do some (heavy) calculations and re-insert a litlte more than 300.000 new rows.My problem is that this consumes huge amounts of memory. The transaction runs for about 20 minutes and during that transaction memory usage peaks to about 2GB. Over time, the more rows that are involved in this transaction, the higher the peak memory requirements.Lately we increased our shared_buffers to 1.5GB, and during this transaction we reached the process memory limit, causing an out of memory and a rollback of the transaction:BEGINDELETE 299980ERROR: out of memoryDETAIL: Failed on request of size 4194304.ROLLBACKDROP SEQUENCEreal\t19m45.797suser\t0m0.024ssys\t0m0.000sOn my development machine, which has less than 2GB of memory, I can not even finish the transaction.Is there a way to tell PG to start swapping to disk instead of using ram memory during such a transaction?Thanks in advance for all helpExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Thu, 11 Oct 2007 16:04:38 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Huge amount of memory consumed during transaction" }, { "msg_contents": "henk de wit wrote:\n> Hi,\n> \n> I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit\n> Debian Etch/2x dual core C2D/8GB mem). The thing is that I have a\n> huge transaction that does 2 things: 1) delete about 300.000 rows\n> from a table with about 15 million rows and 2) do some (heavy)\n> calculations and re-insert a litlte more than 300.000 new rows.\n> \n> My problem is that this consumes huge amounts of memory.\n\nWhat exactly consumes all your memory? 
I'm assuming it's not just \nstraight SQL.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 11 Oct 2007 15:16:20 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "On 10/11/07, henk de wit <[email protected]> wrote:\n>\n> Hi,\n>\n> I'm running into a problem with PostgreSQL 8.2.4 (running on 32 bit Debian\n> Etch/2x dual core C2D/8GB mem). The thing is that I have a huge transaction\n> that does 2 things: 1) delete about 300.000 rows from a table with about 15\n> million rows and 2) do some (heavy) calculations and re-insert a litlte more\n> than 300.000 new rows.\n>\n> My problem is that this consumes huge amounts of memory. The transaction\n> runs for about 20 minutes and during that transaction memory usage peaks to\n> about 2GB. Over time, the more rows that are involved in this transaction,\n> the higher the peak memory requirements.\n\nHow is the memory consumed? How are you measuring it? I assume you\nmean the postgres process that is running the query uses the memory.\nIf so, which tool(s) are you using and what's the output that shows it\nbeing used?\n\nI believe that large transactions with foreign keys are known to cause\nthis problem.\n\n> Lately we increased our shared_buffers to 1.5GB, and during this transaction\n> we reached the process memory limit, causing an out of memory and a rollback\n> of the transaction:\n\nHow much memory does this machine have? You do realize that\nshared_buffers are not a generic postgresql memory pool, but\nexplicitly used to hold data from the discs. If you need to sort and\nmaterialize data, that is done with memory allocated from the heap.\nIf you've given all your memory to shared_buffers, there might not be\nany left.\n\nHow much swap have you got configured?\n\nLastly, what does explain <your query here> say?\n", "msg_date": "Thu, 11 Oct 2007 09:27:23 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> ERROR: out of memory\n> DETAIL: Failed on request of size 4194304.\n\nThis error should have produced a map of per-context memory use in the\npostmaster log. Please show us that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2007 10:51:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " }, { "msg_contents": "On Oct 11, 2007, at 9:51 AM, Tom Lane wrote:\n\n> henk de wit <[email protected]> writes:\n>> ERROR: out of memory\n>> DETAIL: Failed on request of size 4194304.\n>\n> This error should have produced a map of per-context memory use in the\n> postmaster log. Please show us that.\n>\n> \t\t\tregards, tom lane\n\nTom, are there any docs anywhere that explain how to interpret those \nper-context memory dumps? 
For example, when I see an autovacuum \ncontext listed is it safe to assume that the error came from an \nautovac operation, etc.?\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Thu, 11 Oct 2007 11:51:08 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " }, { "msg_contents": "Erik Jones <[email protected]> writes:\n> Tom, are there any docs anywhere that explain how to interpret those =20\n> per-context memory dumps?\n\nNo, not really. What you have to do is grovel around in the code and\nsee where contexts with particular names might get created.\n\n> For example, when I see an autovacuum =20\n> context listed is it safe to assume that the error came from an =20\n> autovac operation, etc.?\n\nProbably, but I haven't looked.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2007 13:49:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " }, { "msg_contents": "> How is the memory consumed? How are you measuring it? I assume you\n> mean the postgres process that is running the query uses the memory.\n> If so, which tool(s) are you using and what's the output that shows it\n> being used?\n\nIt's periodically measured and recorded by a script from which the relevant parts are:\n\nGET_VSZ=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$5}'\n | sort -n | tail -n1\";\nGET_RSS=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$6}'\n | sort -n | tail -n1\";\n\n From this I draw graphs using Cacti. I just checked a recent transaction; during this transaction which involved about 900.000 rows, VSZ peakes at 2.36GB, with RSS then peaking at 2.27GB. This memory usage is on top of a shared_buffers being set back to 320MB. Shortly after the transaction finished, memory usage indeed drops back to a nearly steady 320MB.\n(btw, I mistyped the rows involved in the original post; the 2GB memory usage is for 900.000 rows, not 300.000).\n\nAfter some more digging, I found out that the immense increase of memory usage started fairly recently (but before the increase of my shared_buffers, that just caused the out of memory exception).\n\nE.g. for a transaction with 300.000 rows involved a few weeks back, the memory usage stayed at a rather moderate 546MB/408MB (including 320MB for shared_buffers), and for some 800.000 rows the memory usage peaked at 'only' 631/598. When I draw a graph of \"rows involved\" vs \"memory usage\" there is a direct relation; apart from a few exceptions its clearly that the more rows are involved, the more memory is consumed.\n\nI'll have to check what was exactly changed at the PG installation recently, but nevertheless even with the more moderate memory consumption it becomes clear that PG eventually runs out of memory when more and more rows are involved.\n\n> I believe that large transactions with foreign keys are known to cause\n> this problem.\n\nAs far as I can see there are no, or nearly no foreign keys involved in the transaction I'm having problems with.\n\n> How much memory does this machine have? 
\n\nIt's in the original post: 8GB ;)\n\n> If you've given all your memory to shared_buffers, there might not be\n> any left.\n\nI have of course not given all memory to shared_buffers. I tried to apply the rule of thumb of setting it to 1/4 of total memory. To be a little conservative, even a little less than that. 1/4 of 8GB is 2GB, so I tried with 1.5 to start. All other queries and small transactions run fine (we're talking about thousands upon thousands of queries and 100's of different ones. It's this huge transaction that occupies so much memory.\n\n> Lastly, what does explain <your query here> say?\n\nI can't really test that easily now and it'll be a huge explain anyway (the query is almost 500 lines :X). I'll try to get one though.\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> How is the memory consumed? How are you measuring it? I assume you> mean the postgres process that is running the query uses the memory.> If so, which tool(s) are you using and what's the output that shows it> being used?It's periodically measured and recorded by a script from which the relevant parts are:GET_VSZ=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$5}' | sort -n | tail -n1\";GET_RSS=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$6}' | sort -n | tail -n1\";From this I draw graphs using Cacti. I just checked a recent transaction; during this transaction which involved about 900.000 rows, VSZ peakes at 2.36GB, with RSS then peaking at 2.27GB. This memory usage is on top of a shared_buffers being set back to 320MB. Shortly after the transaction finished, memory usage indeed drops back to a nearly steady 320MB.(btw, I mistyped the rows involved in the original post; the 2GB memory usage is for 900.000 rows, not 300.000).After some more digging, I found out that the immense increase of memory usage started fairly recently (but before the increase of my shared_buffers, that just caused the out of memory exception).E.g. for a transaction with 300.000 rows involved a few weeks back, the memory usage stayed at a rather moderate 546MB/408MB (including 320MB for shared_buffers), and for some 800.000 rows the memory usage peaked at 'only' 631/598. When I draw a graph of \"rows involved\" vs \"memory usage\" there is a direct relation; apart from a few exceptions its clearly that the more rows are involved, the more memory is consumed.I'll have to check what was exactly changed at the PG installation recently, but nevertheless even with the more moderate memory consumption it becomes clear that PG eventually runs out of memory when more and more rows are involved.> I believe that large transactions with foreign keys are known to cause> this problem.As far as I can see there are no, or nearly no foreign keys involved in the transaction I'm having problems with.> How much memory does this machine have? It's in the original post: 8GB ;)> If you've given all your memory to shared_buffers, there might not be> any left.I have of course not given all memory to shared_buffers. I tried to apply the rule of thumb of setting it to 1/4 of total memory. To be a little conservative, even a little less than that. 1/4 of 8GB is 2GB, so I tried with 1.5 to start. All other queries and small transactions run fine (we're talking about thousands upon thousands of queries and 100's of different ones. 
It's this huge transaction that occupies so much memory.> Lastly, what does explain <your query here> say?I can't really test that easily now and it'll be a huge explain anyway (the query is almost 500 lines :X). I'll try to get one though.Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 01:21:35 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "> This error should have produced a map of per-context memory use in the> postmaster log. \n> Please show us that.\n\nI'm not exactly sure what to look for in the log. I'll do my best though and see what I can come up with.\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> This error should have produced a map of per-context memory use in the> postmaster log. > Please show us that.I'm not exactly sure what to look for in the log. I'll do my best though and see what I can come up with.Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 01:23:33 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> I'm not exactly sure what to look for in the log. I'll do my best though an=\n> d see what I can come up with.\n\nIt'll be a bunch of lines like\n\nTopMemoryContext: 49832 total in 6 blocks; 8528 free (6 chunks); 41304 used\n\nimmediately in front of the out-of-memory ERROR report.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 11 Oct 2007 21:26:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " }, { "msg_contents": "henk de wit <[email protected]> writes:\n> I indeed found them in the logs. Here they are:\n\nIt looks to me like you have work_mem set optimistically large. 
This\nquery seems to be doing *many* large sorts and hashes:\n\n> HashBatchContext: 262144236 total in 42 blocks; 3977832 free (40 chunks); 258166404 used\n> TupleSort: 9429016 total in 11 blocks; 1242544 free (16 chunks); 8186472 used\n> HashBatchContext: 262144236 total in 42 blocks; 3977832 free (40 chunks); 258166404 used\n> TupleSort: 9429016 total in 11 blocks; 674376 free (20 chunks); 8754640 used\n> TupleSort: 9429016 total in 11 blocks; 245496 free (9 chunks); 9183520 used\n> TupleSort: 17817624 total in 12 blocks; 3007648 free (14 chunks); 14809976 used\n> TupleSort: 276878852 total in 44 blocks; 243209288 free (1727136 chunks); 33669564 used\n> TupleSort: 37740568 total in 14 blocks; 5139552 free (21 chunks); 32601016 used\n> HashBatchContext: 2105428 total in 9 blocks; 271912 free (7 chunks); 1833516 used\n> HashBatchContext: 4202580 total in 10 blocks; 927408 free (13 chunks); 3275172 used\n> TupleSort: 75489304 total in 18 blocks; 7909776 free (29 chunks); 67579528 used\n> TupleSort: 9429016 total in 11 blocks; 155224 free (16 chunks); 9273792 used\n> TupleSort: 46129176 total in 15 blocks; 5787984 free (19 chunks); 40341192 used\n> TupleSort: 62906392 total in 17 blocks; 8340448 free (16 chunks); 54565944 used\n> HashBatchContext: 2105428 total in 9 blocks; 271912 free (7 chunks); 1833516 used\n> TupleSort: 134209560 total in 24 blocks; 4506232 free (41 chunks); 129703328 used\n> TupleSort: 18866200 total in 12 blocks; 2182552 free (17 chunks); 16683648 used\n> HashBatchContext: 2105428 total in 9 blocks; 271912 free (7 chunks); 1833516 used\n> HashBatchContext: 4202580 total in 10 blocks; 927408 free (13 chunks); 3275172 used\n> TupleSort: 37740568 total in 14 blocks; 1239480 free (21 chunks); 36501088 used\n> TupleSort: 4710424 total in 10 blocks; 307496 free (15 chunks); 4402928 used\n> TupleSort: 27254808 total in 13 blocks; 6921864 free (17 chunks); 20332944 used\n> TupleSort: 134209560 total in 25 blocks; 6873024 free (39 chunks); 127336536 used\n> TupleSort: 39837720 total in 15 blocks; 3136080 free (34 chunks); 36701640 used\n\nand you just plain don't have enough memory for that large a multiple of\nwork_mem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2007 15:59:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " }, { "msg_contents": "> It looks to me like you have work_mem set optimistically large. This\n> query seems to be doing *many* large sorts and hashes:\n\nI\nhave work_mem set to 256MB. Reading in PG documentation I now realize\nthat \"several sort or hash operations might be running in parallel\". So\nthis is most likely the problem, although I don't really understand why\nmemory never seems to increase for any of the other queries (not\nexecuted in a transaction). Some of these are at least the size of the\nquery that is giving problems.\n\nBtw, is there some way to determine up front how many sort or hash operations will be running in parallel for a given query?\n\nRegards\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n\n\n> It looks to me like you have work_mem set optimistically large. This> query seems to be doing *many* large sorts and hashes:I\nhave work_mem set to 256MB. 
Reading in PG documentation I now realize\nthat \"several sort or hash operations might be running in parallel\". So\nthis is most likely the problem, although I don't really understand why\nmemory never seems to increase for any of the other queries (not\nexecuted in a transaction). Some of these are at least the size of the\nquery that is giving problems.Btw, is there some way to determine up front how many sort or hash operations will be running in parallel for a given query?RegardsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 23:09:35 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "On Oct 12, 2007, at 4:09 PM, henk de wit wrote:\n\n> > It looks to me like you have work_mem set optimistically large. This\n> > query seems to be doing *many* large sorts and hashes:\n>\n> I have work_mem set to 256MB. Reading in PG documentation I now \n> realize that \"several sort or hash operations might be running in \n> parallel\". So this is most likely the problem, although I don't \n> really understand why memory never seems to increase for any of the \n> other queries (not executed in a transaction). Some of these are at \n> least the size of the query that is giving problems.\n\nWow. That's inordinately high. I'd recommend dropping that to 32-43MB.\n\n>\n> Btw, is there some way to determine up front how many sort or hash \n> operations will be running in parallel for a given query?\n\nExplain is your friend in that respect.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Fri, 12 Oct 2007 16:39:26 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "> > I have work_mem set to 256MB. \n> Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.\n\nOk, it seems I was totally wrong with the work_mem setting. I'll adjust it to a more saner level. Thanks a lot for the advice everyone!\n \n> Explain is your friend in that respect.\n\nIt shows all the operators, but it doesn't really say that these all will actually run in parallel right? Of course I guess it would give a good idea about what the upper bound is.\n\nregards\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> > I have work_mem set to 256MB. > Wow. That's inordinately high. I'd recommend dropping that to 32-43MB.Ok, it seems I was totally wrong with the work_mem setting. I'll adjust it to a more saner level. Thanks a lot for the advice everyone! > Explain is your friend in that respect.It shows all the operators, but it doesn't really say that these all will actually run in parallel right? Of course I guess it would give a good idea about what the upper bound is.regardsExpress yourself instantly with MSN Messenger! 
MSN Messenger", "msg_date": "Fri, 12 Oct 2007 23:48:59 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "On Oct 12, 2007, at 4:48 PM, henk de wit wrote:\n\n> > > I have work_mem set to 256MB.\n> > Wow. That's inordinately high. I'd recommend dropping that to \n> 32-43MB.\n>\n> Ok, it seems I was totally wrong with the work_mem setting. I'll \n> adjust it to a more saner level. Thanks a lot for the advice everyone!\n>\n> > Explain is your friend in that respect.\n>\n> It shows all the operators, but it doesn't really say that these \n> all will actually run in parallel right? Of course I guess it would \n> give a good idea about what the upper bound is.\n\nYou can determine what runs in parellel based on the indentation of \nthe output. Items at the same indentation level under the same \n\"parent\" line will run in parallel\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Fri, 12 Oct 2007 17:13:14 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "henk de wit escribi�:\n> > How is the memory consumed? How are you measuring it? I assume you\n> > mean the postgres process that is running the query uses the memory.\n> > If so, which tool(s) are you using and what's the output that shows it\n> > being used?\n> \n> It's periodically measured and recorded by a script from which the relevant parts are:\n> \n> GET_VSZ=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$5}'\n> | sort -n | tail -n1\";\n> GET_RSS=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$6}'\n> | sort -n | tail -n1\";\n\nHuh, this seems really ugly, have you tried something like just\n\n$ ps -o cmd:50,vsz,rss -C postmaster\nCMD VSZ RSS\n/pgsql/install/00head/bin/postmaster 51788 3992\npostgres: writer process 51788 1060\npostgres: wal writer process 51788 940\npostgres: autovacuum launcher process 51924 1236\npostgres: stats collector process 22256 896\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 17 Oct 2007 17:40:58 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> henk de wit escribi�:\n>>> How is the memory consumed? How are you measuring it? I assume you\n>>> mean the postgres process that is running the query uses the memory.\n>>> If so, which tool(s) are you using and what's the output that shows it\n>>> being used?\n>> \n>> It's periodically measured and recorded by a script from which the relevant parts are:\n>> \n>> GET_VSZ=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$5}'\n>> | sort -n | tail -n1\";\n>> GET_RSS=\"ps aux | grep $REQ_GREP | grep -v grep | grep -v $$ | awk '{print \\$6}'\n>> | sort -n | tail -n1\";\n\nOn many variants of Unix, this is going to lead to a totally misleading\nnumber. The first problem is that shared buffers will be counted\nmultiple times (once for each PG process). 
The second problem is that,\ndepending on platform, a specific page of shared memory is counted\nagainst a process only after it first touches that page. This means\nthat when you run a big seqscan, or anything else that touches a lot of\nbuffers, the reported size of the process gradually increases from just\nits local memory space to its local memory space plus the total size\nof the Postgres shared buffer arena. This change in the reported size\nis *utterly meaningless* in terms of actual memory consumption.\n\nIt's not easy to get useful measurements from \"ps\" when dealing with\nheavy memory sharing. There have been some discussions recently of\nalternative tools that let you get a clearer picture; check the\nPG list archives.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Oct 2007 19:28:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge amount of memory consumed during transaction " } ]
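A note pulling the advice in this thread together: work_mem is a per-sort and per-hash budget, not a per-connection cap, so one large statement containing dozens of Sort and Hash nodes can allocate many multiples of it. A minimal sketch of confining a smaller setting to just the heavy transaction instead of lowering it server-wide (the table and column names below are invented for illustration, and 32MB is only an example figure, not a recommendation from the thread):

BEGIN;
-- SET LOCAL reverts automatically at COMMIT or ROLLBACK, so the rest of
-- the workload keeps whatever work_mem the server is configured with
SET LOCAL work_mem = '32MB';
-- hypothetical stand-ins for the delete / recalculate / re-insert steps
DELETE FROM batch_results WHERE batch_id = 42;
INSERT INTO batch_results (batch_id, item_id, amount)
SELECT 42, item_id, sum(amount) FROM batch_staging GROUP BY item_id;
COMMIT;

Counting the Sort and Hash nodes in the EXPLAIN output of the big query gives a rough upper bound on how many work_mem-sized allocations it could need at once, which is the multiplication warned about earlier in the thread.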
[ { "msg_contents": "Hi,\n\nI have a table with some 50 millions rows in PG 8.2. The table has indexes on relevant columns. My problem is that most everything I do with this table (which are actually very basic selects) is unbearable slow. For example:\n\nselect max(payment_id) from transactions\n\nThis takes 161 seconds.\n\nThe plan looks like this:\n\n\"Result (cost=0.37..0.38 rows=1 width=0) (actual time=184231.636..184231.638 rows=1 loops=1)\"\n\" InitPlan\"\n\" -> Limit (cost=0.00..0.37 rows=1 width=8) (actual time=184231.620..184231.622 rows=1 loops=1)\"\n\" -> Index Scan Backward using trans_payment_id_index on transactions (cost=0.00..19144690.58 rows=51122691 width=8) (actual time=184231.613..184231.613 rows=1 loops=1)\"\n\" Filter: (payment_id IS NOT NULL)\"\n\"Total runtime: 184231.755 ms\"\n\nAs shown, in the plan, the index on the requested column \"payment_id\" is being used, but the query still takes quite a lot of time. If I use a where clause in a similar query, the query seemingly runs forever, e.g.\n\nselect min(time) from transactions where payment_id = 67\n\nThere are indexes on both the time (a timestamp with time zone) and payment_id (a bigint) columns. About 1 million rows satisfy the condition payment_id = 67. This query takes a totally unrealistic amount of time for execution (I have it running for >30 minutes now on a machine with 8GB and 4 [email protected], and it still isn't finished). With mpstat it becomes clear that the query is totally IO bound (what is expected of course). The machine I'm running this on has a fast RAID that can do about 320 MB/s.\n\nAre these normal execution times for these amount of rows and this hardware? Is there anything I can do to speed up these kind of simple queries on huge tables?\n\nThanks in advance for all suggestions\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\nHi,I have a table with some 50 millions rows in PG 8.2. The table has indexes on relevant columns. My problem is that most everything I do with this table (which are actually very basic selects) is unbearable slow. For example:select max(payment_id) from transactionsThis takes 161 seconds.The plan looks like this:\"Result  (cost=0.37..0.38 rows=1 width=0) (actual time=184231.636..184231.638 rows=1 loops=1)\"\"  InitPlan\"\"    ->  Limit  (cost=0.00..0.37 rows=1 width=8) (actual time=184231.620..184231.622 rows=1 loops=1)\"\"          ->  Index Scan Backward using trans_payment_id_index on transactions  (cost=0.00..19144690.58 rows=51122691 width=8) (actual time=184231.613..184231.613 rows=1 loops=1)\"\"                Filter: (payment_id IS NOT NULL)\"\"Total runtime: 184231.755 ms\"As shown, in the plan, the index on the requested column \"payment_id\" is being used, but the query still takes quite a lot of time. If I use a where clause in a similar query, the query seemingly runs forever, e.g.select min(time) from transactions where payment_id = 67There are indexes on both the time (a timestamp with time zone) and payment_id (a bigint) columns. About 1 million rows satisfy the condition payment_id = 67. This query takes a totally unrealistic amount of time for execution (I have it running for >30 minutes now on a machine with 8GB and 4 [email protected], and it still isn't finished). With mpstat it becomes clear that the query is totally IO bound (what is expected of course). 
The machine I'm running this on has a fast RAID that can do about 320 MB/s.Are these normal execution times for these amount of rows and this hardware? Is there anything I can do to speed up these kind of simple queries on huge tables?Thanks in advance for all suggestionsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 22:41:56 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": ">>> On Fri, Oct 12, 2007 at 3:41 PM, in message\n<[email protected]>, henk de wit\n<[email protected]> wrote: \n> \n> I have a table with some 50 millions rows in PG 8.2. The table has indexes \n> on relevant columns. My problem is that most everything I do with this table \n> (which are actually very basic selects) is unbearable slow.\n \nDo you have autovacuum turned on? With what settings?\n \nDo you run scheduled VACUUM ANALYZE?\n \nWhat does the tail of the output from your last\nVACUUM ANALYZE VERBOSE look like?\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 12 Oct 2007 15:53:07 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": "> Do you have autovacuum turned on? With what settings?\n\nYes, I have it turned on. The settings are:\n\nautovacuum on \nautovacuum_analyze_scale_factor 0.1\nautovacuum_analyze_threshold 250\nautovacuum_freeze_max_age 200000000\nautovacuum_naptime 1min \nautovacuum_vacuum_cost_delay -1\nautovacuum_vacuum_cost_limit -1\nautovacuum_vacuum_scale_factor 0.2\nautovacuum_vacuum_threshold 500\nvacuum_cost_delay 0\nvacuum_cost_limit 200\nvacuum_cost_page_dirty 20\nvacuum_cost_page_hit 1\nvacuum_cost_page_miss 10\nvacuum_freeze_min_age 100000000\n\n> Do you run scheduled VACUUM ANALYZE?\n\nThis too, every night.\n\n> What does the tail of the output from your last\n> VACUUM ANALYZE VERBOSE look like?\n\nI'll try to look it up and post it back here once I got it.\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> Do you have autovacuum turned on? With what settings?Yes, I have it turned on. The settings are:autovacuum     on autovacuum_analyze_scale_factor     0.1autovacuum_analyze_threshold     250autovacuum_freeze_max_age     200000000autovacuum_naptime     1min autovacuum_vacuum_cost_delay     -1autovacuum_vacuum_cost_limit     -1autovacuum_vacuum_scale_factor     0.2autovacuum_vacuum_threshold     500vacuum_cost_delay     0vacuum_cost_limit     200vacuum_cost_page_dirty     20vacuum_cost_page_hit     1vacuum_cost_page_miss     10vacuum_freeze_min_age     100000000> Do you run scheduled VACUUM ANALYZE?This too, every night.> What does the tail of the output from your last> VACUUM ANALYZE VERBOSE look like?I'll try to look it up and post it back here once I got it.Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 23:19:09 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": "> select payment_id from transactions order by payment_id desc limit 1;\n\nThis one is indeed instant! Less than 50ms. 
In my case I can't use it for max though because of the fact that payment_id can be null (which is an unfortunate design choice). The other variant however didn't become instant. I.e. I tried:\n\nselect time from transactions where payment_id = 67 order by time asc limit 1;\n\nBut this one is still really slow.\n\nRegards\n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> select payment_id from transactions order by payment_id desc limit 1;This one is indeed instant! Less than 50ms. In my case I can't use it for max though because of the fact that payment_id can be null (which is an unfortunate design choice). The other variant however didn't become instant. I.e. I tried:select time from transactions where payment_id = 67 order by time asc limit 1;But this one is still really slow.RegardsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 23:40:54 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> The plan looks like this:\n\n> \"Result (cost=3D0.37..0.38 rows=3D1 width=3D0) (actual time=3D184231.636..=\n> 184231.638 rows=3D1 loops=3D1)\"\n> \" InitPlan\"\n> \" -> Limit (cost=3D0.00..0.37 rows=3D1 width=3D8) (actual time=3D18423=\n> 1.620..184231.622 rows=3D1 loops=3D1)\"\n> \" -> Index Scan Backward using trans_payment_id_index on transact=\n> ions (cost=3D0.00..19144690.58 rows=3D51122691 width=3D8) (actual time=3D1=\n> 84231.613..184231.613 rows=3D1 loops=3D1)\"\n> \" Filter: (payment_id IS NOT NULL)\"\n> \"Total runtime: 184231.755 ms\"\n\nThe only way I can see for that to be so slow is if you have a very\nlarge number of rows where payment_id is null --- is that the case?\n\nThere's not a lot you could do about that in existing releases :-(.\nIn 8.3 it'll be possible to declare the index as NULLS FIRST, which\nmoves the performance problem from the max end to the min end ...\n\n> select min(time) from transactions where payment_id =3D 67\n\n> There are indexes on both the time (a timestamp with time zone) and payment=\n> _id (a bigint) columns.\n\nCreating indexes at random with no thought about how the system could\nuse them is not a recipe for speeding up your queries. What you'd need\nto make this query fast is a double-column index on (payment_id, time)\nso that a forward scan on the items with payment_id = 67 would\nimmediately find the minimum time entry. Neither of the single-column\nindexes offers any way to find the desired entry without scanning over\nlots of unrelated entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2007 18:03:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table? " }, { "msg_contents": "On Friday 12 October 2007, henk de wit <[email protected]> wrote:\n> > select payment_id from transactions order by payment_id desc limit 1;\n>\n> This one is indeed instant! Less than 50ms. In my case I can't use it for\n> max though because of the fact that payment_id can be null (which is an\n> unfortunate design choice). The other variant however didn't become\n> instant. I.e. 
I tried:\n>\n> select time from transactions where payment_id = 67 order by time asc\n> limit 1;\n>\n> But this one is still really slow.\n\nIf you had a compound index on payment_id,time (especially with a WHERE time \nIS NOT NULL conditional) it would likely speed it up.\n\n\n-- \nGhawar is dying\n\n", "msg_date": "Fri, 12 Oct 2007 15:04:40 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": "I wrote:\n> The only way I can see for that to be so slow is if you have a very\n> large number of rows where payment_id is null --- is that the case?\n\n> There's not a lot you could do about that in existing releases :-(.\n\nActually, there is a possibility if you are willing to change the query:\nmake a partial index that excludes nulls. Toy example:\n\nregression=# create table fooey(f1 int);\nCREATE TABLE\nregression=# create index fooeyi on fooey(f1) where f1 is not null;\nCREATE INDEX\nregression=# explain select max(f1) from fooey; \n QUERY PLAN \n---------------------------------------------------------------\n Aggregate (cost=36.75..36.76 rows=1 width=4)\n -> Seq Scan on fooey (cost=0.00..31.40 rows=2140 width=4)\n(2 rows)\n\nregression=# explain select max(f1) from fooey where f1 is not null;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------\n Result (cost=0.03..0.04 rows=1 width=0)\n InitPlan\n -> Limit (cost=0.00..0.03 rows=1 width=4)\n -> Index Scan Backward using fooeyi on fooey (cost=0.00..65.55 rows=2129 width=4)\n Filter: (f1 IS NOT NULL)\n(5 rows)\n\nProbably the planner ought to be smart enough to figure this out without\nthe explicit WHERE in the query, but right now it isn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2007 18:17:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table? " }, { "msg_contents": "> The only way I can see for that to be so slow is if you have a very\n> large number of rows where payment_id is null --- is that the case?\n\nThe number of rows where payment_id is null is indeed large. They increase every day to about 1 million at the end of the so-called \"payment period\" (so currently, about 1/100th of the table has nulls).\n\n> In 8.3 it'll be possible to declare the index as NULLS FIRST, which\n> moves the performance problem from the max end to the min end ...\n\nSounds interesting. I also noticed 8.3 is able to use an index for \"is null\". Luckily you've just released the beta release of 8.3. I'm going to setup a test system for 8.3 real soon then to try what difference it would make for my particular dataset.\n\n> Creating indexes at random with no thought about how the system could\n> use them is not a recipe for speeding up your queries. What you'd need\n> to make this query fast is a double-column index on (payment_id, time)\n> so that a forward scan on the items with payment_id = 67 would\n> immediately find the minimum time entry. Neither of the single-column\n> indexes offers any way to find the desired entry without scanning over\n> lots of unrelated entries.\n\nI see, that sounds very interesting too. As you might have noticed, I'm not an expert on this field but I'm trying to learn. 
I was under the impression that the last few incarnations of postgresql automatically combined single column indexes for cases where a multi-column index would be needed in earlier releases. But apparently this isn't true for every case and I still have a lot to learn about PG.\n\nRegards\n\n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> The only way I can see for that to be so slow is if you have a very> large number of rows where payment_id is null --- is that the case?The number of rows where payment_id is null is indeed large. They increase every day to about 1 million at the end of the so-called \"payment period\" (so currently, about 1/100th of the table has nulls).> In 8.3 it'll be possible to declare the index as NULLS FIRST, which> moves the performance problem from the max end to the min end ...Sounds interesting. I also noticed 8.3 is able to use an index for \"is null\". Luckily you've just released the beta release of 8.3. I'm going to setup a test system for 8.3 real soon then to try what difference it would make for my particular dataset.> Creating indexes at random with no thought about how the system could> use them is not a recipe for speeding up your queries. What you'd need> to make this query fast is a double-column index on (payment_id, time)> so that a forward scan on the items with payment_id = 67 would> immediately find the minimum time entry. Neither of the single-column> indexes offers any way to find the desired entry without scanning over> lots of unrelated entries.I see, that sounds very interesting too. As you might have noticed, I'm not an expert on this field but I'm trying to learn. I was under the impression that the last few incarnations of postgresql automatically combined single column indexes for cases where a multi-column index would be needed in earlier releases. But apparently this isn't true for every case and I still have a lot to learn about PG.RegardsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Sat, 13 Oct 2007 00:21:26 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": "henk de wit <[email protected]> writes:\n> I see, that sounds very interesting too. As you might have noticed, I'm not=\n> an expert on this field but I'm trying to learn. I was under the impressio=\n> n that the last few incarnations of postgresql automatically combined singl=\n> e column indexes for cases where a multi-column index would be needed in ea=\n> rlier releases.\n\nIt's possible to combine independent indexes for resolving AND-type\nqueries, but the combination process does not preserve ordering, so\nit's useless for this type of situation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 12 Oct 2007 18:51:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table? " }, { "msg_contents": "> It's possible to combine independent indexes for resolving AND-type\n> queries, but the combination process does not preserve ordering, so\n> it's useless for this type of situation.\n\nOk, I'm going to try the double column index. Your suggestion about the index with nulls left out worked great btw. Min/Max is instantly now. 
\n\nI figure though that the double column index would not work for queries like:\n\nselect min(time) from transactions where payment_id is null\n\nSo for that situation I tried whether a specific index helped, i.e. :\n\ncreate index transactions__time_payment_id__null__idx on transactions(time) where payment_id is null;\n\nBut this does not really seem to help. It might be better to see if I can refactor the DB design though to not use nulls.\n\nRegards\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> It's possible to combine independent indexes for resolving AND-type> queries, but the combination process does not preserve ordering, so> it's useless for this type of situation.Ok, I'm going to try the double column index. Your suggestion about the index with nulls left out worked great btw. Min/Max is instantly now. I figure though that the double column index would not work for queries like:select min(time) from transactions where payment_id is nullSo for that situation I tried whether a specific index helped, i.e. :create index transactions__time_payment_id__null__idx on transactions(time) where payment_id is null;But this does not really seem to help. It might be better to see if I can refactor the DB design though to not use nulls.RegardsExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Sat, 13 Oct 2007 01:13:38 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" }, { "msg_contents": ">select min(time) from transactions where payment_id is null\n>So for that situation I tried whether a specific index helped, i.e. :\n>create index transactions__time_payment_id__null__idx on transactions(time) where payment_id is null;\n>But this does not really seem to help. It might be better to see if I can refactor the DB design though to not use nulls.\n\nI was posting too fast again, the previous index -does- work, making the above query instant (<50ms) :) I actually mis-typed the test query before (it's rather late at my place):\n\nselect min(payment_id) from transactions where payment_id is null\n\nAlthough not useful, it's an interesting case. One would say that it could return quickly, but it takes 26 minutes to execute this.\n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n>select min(time) from transactions where payment_id is null>So for that situation I tried whether a specific index helped, i.e. :>create index transactions__time_payment_id__null__idx on transactions(time) where payment_id is null;>But this does not really seem to help. It might be better to see if I can refactor the DB design though to not use nulls.I was posting too fast again, the previous index -does- work, making the above query instant (<50ms) :) I actually mis-typed the test query before (it's rather late at my place):select min(payment_id) from transactions where payment_id is nullAlthough not useful, it's an interesting case. One would say that it could return quickly, but it takes 26 minutes to execute this.\nExpress yourself instantly with MSN Messenger! 
MSN Messenger", "msg_date": "Sat, 13 Oct 2007 01:48:23 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" } ]
[ { "msg_contents": "> This query takes a totally unrealistic amount of time for execution (I have it running for >30 minutes now on a machine with 8GB and 4 [email protected], and it still isn't finished).\n\nTo correct myself, I looked at the wrong window earlier, when I typed the email the query had in fact finished already and took about 18 minutes (which is still really long for such a query of course), but more than 30 minutes was wrong.\n\n\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> This query takes a totally unrealistic amount of time for execution (I have it running for >30 minutes now on a machine with 8GB and 4 [email protected], and it still isn't finished).To correct myself, I looked at the wrong window earlier, when I typed the email the query had in fact finished already and took about 18 minutes (which is still really long for such a query of course), but more than 30 minutes was wrong.\nExpress yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Fri, 12 Oct 2007 22:59:52 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up min/max(id) in 50M rows table?" } ]
[ { "msg_contents": "\nHi,\n\nI am trying to decide between using a temporary table or a stored proc that\nreturns a result set to solve a fairly complex problem, and was wondering if\nPostres, when it sees a stored proc reference in a SQL, is smart enough to,\nbehind the scenes, create a temporary table with the results of the stored\nproc such that the stored proc does not get executed multiple times within a\nsingle query execution??\n\nExample: suppose I had a stored proc called SP_bob that returns a result set\nincluding the column store_no\nand I wrote the following query:\n\nselect * from Order_Line as X\nwhere not exists (select 1 from SP_bob(parm1, parm2) as Y where X.store_no =\nY.store_no)\n\nCan I rest assured that the stored proc would only run once, or could it run\nonce for each row in Order_Line??\n\nThe only reason I am going down this road is because of the difficulty of\nusing temp tables ( i.e. needing to execute a SQL string). Does anyone know\nif this requirement may be removed in the near future? \n\n\n-- \nView this message in context: http://www.nabble.com/using-a-stored-proc-that-returns-a-result-set-in-a-complex-SQL-stmt-tf4628555.html#a13216092\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Mon, 15 Oct 2007 09:09:25 -0700 (PDT)", "msg_from": "chrisj <[email protected]>", "msg_from_op": true, "msg_subject": "using a stored proc that returns a result set in a\n complex SQL stmt" }, { "msg_contents": "chrisj wrote:\n> I am trying to decide between using a temporary table or a stored proc that\n> returns a result set to solve a fairly complex problem, and was wondering if\n> Postres, when it sees a stored proc reference in a SQL, is smart enough to,\n> behind the scenes, create a temporary table with the results of the stored\n> proc such that the stored proc does not get executed multiple times within a\n> single query execution??\n> \n> Example: suppose I had a stored proc called SP_bob that returns a result set\n> including the column store_no\n> and I wrote the following query:\n> \n> select * from Order_Line as X\n> where not exists (select 1 from SP_bob(parm1, parm2) as Y where X.store_no =\n> Y.store_no)\n> \n> Can I rest assured that the stored proc would only run once, or could it run\n> once for each row in Order_Line??\n\nIt depends on the exact query you're running. I think in the above\nexample, SP_bob would only be ran once. Function volatility affects the\nplanners decision as well (see\nhttp://www.postgresql.org/docs/8.2/interactive/xfunc-volatility.html).\n\n> The only reason I am going down this road is because of the difficulty of\n> using temp tables ( i.e. needing to execute a SQL string). Does anyone know\n> if this requirement may be removed in the near future? \n\nI don't understand what requirement you're referring to.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 16 Oct 2007 16:22:48 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using a stored proc that returns a result set in a\n\tcomplex SQL stmt" }, { "msg_contents": "On 10/16/07, Heikki Linnakangas <[email protected]> wrote:\n> > The only reason I am going down this road is because of the difficulty of\n> > using temp tables ( i.e. needing to execute a SQL string). 
Does anyone know\n> > if this requirement may be removed in the near future?\n>\n> I don't understand what requirement you're referring to.\n\nI think he means creating temporary tables in stored procedures as\ndescribed for example here ->\nhttp://svr5.postgresql.org/pgsql-sql/2007-01/msg00117.php . From what\nI see at http://www.postgresql.org/docs/8.3/static/release-8-3.html\nthe EXECUTE workaround is no longer necessary as plan invalidates upon\ntable schema changes.\n", "msg_date": "Wed, 17 Oct 2007 07:32:47 +0200", "msg_from": "\"=?UTF-8?Q?Marcin_St=C4=99pnicki?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: using a stored proc that returns a result set in a complex SQL\n\tstmt" } ]
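A sketch of making the single-evaluation intent explicit, under assumptions (the body of sp_bob, its parameter types and the stores table are invented here, since the original post does not show them): declaring the function with the volatility it really has gives the planner the freedom mentioned above, and materializing the result in a temporary table removes any doubt.

create or replace function sp_bob(parm1 integer, parm2 integer)
returns setof integer as $$
    -- hypothetical body: return the store numbers matching the two parameters
    select store_no from stores where region_id = $1 and status = $2;
$$ language sql stable;   -- stable (not the default volatile) tells the planner that repeated
                          -- calls with the same arguments within one statement return the same rows

select x.*
from order_line as x
where not exists (
    select 1 from sp_bob(1, 2) as y(store_no)
    where x.store_no = y.store_no
);

If the function is expensive no matter how it is planned, create temporary table bob_result as select * from sp_bob(1, 2) and join order_line against bob_result; that runs the function exactly once by construction.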
[ { "msg_contents": "Hi,\n\nFor some times, we have a vacuuming process on a specific table that\ngoes slower and slower. In fact, it took some 3 minutes a month ago, and\nnow it take almost 20 minutes. But, if one day it take so many time, it\nis possible that on the day after it will only take 4 minutes...\n\nI know the table in concern had 450000 tuples two months ago and now has\nmore than 700000 tuples in it.\n\nI wonder vacuum verbose would tell me if fsm parameters were not too\nbadly configured, but I can't get the 4 last lines of the output...\n\nIs there another way to get these info ? Or is it a parameter badly\nconfigured ?\n\nFor information, it's on AIX, PG8.1.9.\n\nSome configuration parameters :\nclient_min_messages : notice\nlog_error_verbosity : default\nlog_min_error_statement : panic\nlog_min_messages : notice.\n\nWhats's more, I wonder what we could monitor to get some explanation of\nthe recent time increase, and then have a quite-sure way of configuring\nthe server.\n\nI have to say the database is hosted, accessed in production on a 24/7\nbasis and then every change in configuration has to be scheduled.\n\nSome more information you may ask:\nchackpoint_segments : 32\ncheckpoint_timeout : 180\ncheckpoint_warning : 30\nwal_buffers : 64\nmaintenance_work_mem : 65536\nmax_fsm_pages : 400000\nmax_fsm_relations : 1000\nshared_buffers : 50000\ntemp_bufers : 1000\n\nWe also have 4Gb RAM.\n\nIsn't checkpoint_segments too low as all files in pg_xlogs seem to be\nrecycled within a few minutes. (In fact among the 60 files, at least 30\nhave been modified during the few minutes of that particular vacuum).\n\nThanks for any advice you could give me.\n\nBest regards,\n\n-- \nSt�phane SCHILDKNECHT\nPr�sident de PostgreSQLFr\nhttp://www.postgresqlfr.org\n\n", "msg_date": "Tue, 16 Oct 2007 10:48:24 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum goes worse" }, { "msg_contents": "St�phane Schildknecht wrote:\n> I wonder vacuum verbose would tell me if fsm parameters were not too\n> badly configured, but I can't get the 4 last lines of the output...\n\nWhy not?\n\n> Whats's more, I wonder what we could monitor to get some explanation of\n> the recent time increase, and then have a quite-sure way of configuring\n> the server.\n\nsar or iostat output would be a good start, to determine if it's waiting\nfor I/O or what.\n\n> I have to say the database is hosted, accessed in production on a 24/7\n> basis and then every change in configuration has to be scheduled.\n> \n> Some more information you may ask:\n> chackpoint_segments : 32\n> checkpoint_timeout : 180\n> checkpoint_warning : 30\n> wal_buffers : 64\n> maintenance_work_mem : 65536\n> max_fsm_pages : 400000\n> max_fsm_relations : 1000\n> shared_buffers : 50000\n> temp_bufers : 1000\n> \n> We also have 4Gb RAM.\n> \n> Isn't checkpoint_segments too low as all files in pg_xlogs seem to be\n> recycled within a few minutes. (In fact among the 60 files, at least 30\n> have been modified during the few minutes of that particular vacuum).\n\nIncreasing checkpoint_segments seems like a good idea then. You should\nincrease checkpoint_timeout as well, 180 is just 3 minutes. How much\nconcurrent activity is there in the database? 
30 pg_xlog files equals\n512 MB of WAL; that's quite a lot.\n\nHave you changed the vacuum cost delay settings from the defaults?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 16 Oct 2007 10:08:58 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Heikki Linnakangas a �crit :\n> St�phane Schildknecht wrote:\n> \n>> I wonder vacuum verbose would tell me if fsm parameters were not too\n>> badly configured, but I can't get the 4 last lines of the output...\n>> \n>\n> Why not?\n> \n\nI would like to know... Seems like vacuum does not want me to see these\nprecious line. I really don't know why.\n> \n>> Whats's more, I wonder what we could monitor to get some explanation of\n>> the recent time increase, and then have a quite-sure way of configuring\n>> the server.\n>> \n>\n> sar or iostat output would be a good start, to determine if it's waiting\n> for I/O or what.\n> \n\nOk, I'll try that.\n>\n> Increasing checkpoint_segments seems like a good idea then. You should\n> increase checkpoint_timeout as well, 180 is just 3 minutes. How much\n> concurrent activity is there in the database? 30 pg_xlog files equals\n> 512 MB of WAL; that's quite a lot.\n> \n\nI don't know exactly how far, but yes, activity is high.\n> Have you changed the vacuum cost delay settings from the defaults?\n> \n\nNot yet.\n\n\n-- \nSt�phane SCHILDKNECHT\nPr�sident de PostgreSQLFr\nhttp://www.postgresqlfr.org\n\n", "msg_date": "Tue, 16 Oct 2007 11:50:51 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> For some times, we have a vacuuming process on a specific table that\n> goes slower and slower. In fact, it took some 3 minutes a month ago, and\n> now it take almost 20 minutes. But, if one day it take so many time, it\n> is possible that on the day after it will only take 4 minutes...\n\n> I know the table in concern had 450000 tuples two months ago and now has\n> more than 700000 tuples in it.\n\nThe real question is how often do rows get updated? I suspect you\nprobably need to vacuum this table more than once a day.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 10:26:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " }, { "msg_contents": "Tom Lane a �crit :\n> =?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> \n>> For some times, we have a vacuuming process on a specific table that\n>> goes slower and slower. In fact, it took some 3 minutes a month ago, and\n>> now it take almost 20 minutes. But, if one day it take so many time, it\n>> is possible that on the day after it will only take 4 minutes...\n>> \n>\n> \n>> I know the table in concern had 450000 tuples two months ago and now has\n>> more than 700000 tuples in it.\n>> \n>\n> The real question is how often do rows get updated? I suspect you\n> probably need to vacuum this table more than once a day.\n>\n> \n\nTo be honest, I suspect it too. 
But, I have been told by people using\nthat database they can't do vacuum more frequently than once in a day as\nit increases the time to achieve concurrent operations.\nThat's also why they don't want to hear about autovacuum.\n\nAnd finally that's why I'm looking for everything I can monitor to\nobtain information to convince them they're wrong and I'm right ;-)\n\nThat's also why I am so disappointed vacuum doesn't give me these 4\nhints lines.\n\nRegards,\n\n-- \nSt�phane SCHILDKNECHT\nPr�sident de PostgreSQLFr\nhttp://www.postgresqlfr.org\n\n", "msg_date": "Tue, 16 Oct 2007 17:26:15 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> Tom Lane a �crit :\n>> The real question is how often do rows get updated? I suspect you\n>> probably need to vacuum this table more than once a day.\n\n> To be honest, I suspect it too. But, I have been told by people using\n> that database they can't do vacuum more frequently than once in a day as\n> it increases the time to achieve concurrent operations.\n\nvacuum_cost_delay can help here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 11:30:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " }, { "msg_contents": "On 10/16/07, Stéphane Schildknecht\n<[email protected]> wrote:\n> Tom Lane a écrit :\n> > =?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> >\n> >> For some times, we have a vacuuming process on a specific table that\n> >> goes slower and slower. In fact, it took some 3 minutes a month ago, and\n> >> now it take almost 20 minutes. But, if one day it take so many time, it\n> >> is possible that on the day after it will only take 4 minutes...\n> >>\n> >\n> >\n> >> I know the table in concern had 450000 tuples two months ago and now has\n> >> more than 700000 tuples in it.\n> >>\n> >\n> > The real question is how often do rows get updated? I suspect you\n> > probably need to vacuum this table more than once a day.\n> >\n> >\n>\n> To be honest, I suspect it too. But, I have been told by people using\n> that database they can't do vacuum more frequently than once in a day as\n> it increases the time to achieve concurrent operations.\n> That's also why they don't want to hear about autovacuum.\n\nSounds like somebody there is operating on the belief that vacuums\nalways cost the same amount i/o wise. With the vacuum_cost_delay\nsetting Tim mentioned this is not true. Their concern shouldn't be\nwith how you accomplish your job, but with you meeting certain\nperformance criteria, and with vacuum cost delay, it is quite possible\nto vacuum midday with affecting the db too much.\n\n> And finally that's why I'm looking for everything I can monitor to\n> obtain information to convince them they're wrong and I'm right ;-)\n\nGood luck with that. I still have a boss who thinks \"vacuum's not\nfast enough\". His last experience with pgsql was in the 7.2 days.\nGenerally he's a pretty smart guy, but he's convinced himself that\nPostgreSQL 8.3 and 7.2 are pretty much the same beasts.\n\n> That's also why I am so disappointed vacuum doesn't give me these 4\n> hints lines.\n\nWhat kind of error, or output, does it give you at the end? 
Any hint\nas to why they're missing?\n", "msg_date": "Tue, 16 Oct 2007 10:43:34 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "\"Scott Marlowe\" <[email protected]> writes:\n> On 10/16/07, St=E9phane Schildknecht\n> <[email protected]> wrote:\n>> That's also why I am so disappointed vacuum doesn't give me these 4\n>> hints lines.\n\n> What kind of error, or output, does it give you at the end? Any hint\n> as to why they're missing?\n\nIf you're talking about the FSM statistics display, that only gets\nprinted by a database-wide VACUUM (one that doesn't name a specific\ntable).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 12:03:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " }, { "msg_contents": "Tom Lane a �crit :\n> \"Scott Marlowe\" <[email protected]> writes:\n> \n>> On 10/16/07, St=E9phane Schildknecht\n>> <[email protected]> wrote:\n>> \n>>> That's also why I am so disappointed vacuum doesn't give me these 4\n>>> hints lines.\n>>> \n>\n> \n>> What kind of error, or output, does it give you at the end? Any hint\n>> as to why they're missing?\n>> \n>\n> If you're talking about the FSM statistics display, that only gets\n> printed by a database-wide VACUUM (one that doesn't name a specific\n> table).\n> \n\nYes, I am. The command line is (in a shell script whom ouput is\nredirected in a global file) :\n\nvacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n\n \nThat does not explain why we don't get FSM statitics display. The output\nends with:\nINFO: vacuuming \"public.sometable\"\nINFO: \"sometable\": removed 62 row versions in 3 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"sometable\": found 62 removable, 5 nonremovable row versions in 5\npages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 534 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.sometable\"\nINFO: \"sometable\": scanned 5 of 5 pages, containing 5 live rows and 0\ndead rows; 5 rows in sample, 5 estimated total rows\nVACUUM\n\nBest regards,\n\nSt�phane\n", "msg_date": "Wed, 17 Oct 2007 13:33:01 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Your first post says vacuum goes worse (slower).\nI see that you do not issue the -f option (FULL VACUUM).\n\nI had a similar situation with a server (with frequent update)\nperforming nightly vacuumdb. After a few many days it went\nslower and slower.\n\nThe first solution was to add the -f switch.\nNote that it leads to table lock (see docs :-)\n\nthe FULL option completely rewrite the table on disk making it much\nmore compact\n(i think of it similar to a \"defrag\" on windows). 
I had a dramatic\nspeed improvement\nafter the first vacuum full.\n\nlatest solution (psql 8.0.1) was a perl script which selectively\nchooses tables to\nfull vacuum basing on results from this select:\n\nSELECT a.relname, a.relpages FROM pg_class a ,pg_stat_user_tables b\nWHERE a.relname = b.relname order by relpages desc;\n\nthis was to see how much a table's size grows through time.\n\nWith psql 8.2.x we adopted pg_autovacuum which seems to perform good,\neven thought\ni do not clearly understand whether it occasionally performs a full\nvacuum (i think he does'nt).\n\nStefano\n\n\n\nOn 10/17/07, Stéphane Schildknecht\n<[email protected]> wrote:\n> Tom Lane a écrit :\n> > \"Scott Marlowe\" <[email protected]> writes:\n> >\n> >> On 10/16/07, St=E9phane Schildknecht\n> >> <[email protected]> wrote:\n> >>\n> >>> That's also why I am so disappointed vacuum doesn't give me these 4\n> >>> hints lines.\n> >>>\n> >\n> >\n> >> What kind of error, or output, does it give you at the end? Any hint\n> >> as to why they're missing?\n> >>\n> >\n> > If you're talking about the FSM statistics display, that only gets\n> > printed by a database-wide VACUUM (one that doesn't name a specific\n> > table).\n> >\n>\n> Yes, I am. The command line is (in a shell script whom ouput is\n> redirected in a global file) :\n>\n> vacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n>\n>\n> That does not explain why we don't get FSM statitics display. The output\n> ends with:\n> INFO: vacuuming \"public.sometable\"\n> INFO: \"sometable\": removed 62 row versions in 3 pages\n> DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"sometable\": found 62 removable, 5 nonremovable row versions in 5\n> pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 534 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: analyzing \"public.sometable\"\n> INFO: \"sometable\": scanned 5 of 5 pages, containing 5 live rows and 0\n> dead rows; 5 rows in sample, 5 estimated total rows\n> VACUUM\n>\n> Best regards,\n>\n> Stéphane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Wed, 17 Oct 2007 16:35:16 +0200", "msg_from": "\"Stefano Dal Pra\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> Yes, I am. The command line is (in a shell script whom ouput is\n> redirected in a global file) :\n\n> vacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n \n> That does not explain why we don't get FSM statitics display.\n\nIs $DBUSR a superuser? If not, some tables are likely getting skipped.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Oct 2007 11:53:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " }, { "msg_contents": "Stefano Dal Pra escribi�:\n> Your first post says vacuum goes worse (slower).\n> I see that you do not issue the -f option (FULL VACUUM).\n> \n> I had a similar situation with a server (with frequent update)\n> performing nightly vacuumdb. After a few many days it went\n> slower and slower.\n\nWhen you have that problem, the solution is to issue more plain vacuum\n(not full) more frequently. If it's a highly updated table, then maybe\nonce per hour or more. 
It depends on the update rate.\n\n> With psql 8.2.x we adopted pg_autovacuum which seems to perform good,\n> even thought\n> i do not clearly understand whether it occasionally performs a full\n> vacuum (i think he does'nt).\n\nIt doesn't because it's normally not necessary. Also, we don't want to\nbe acquiring exclusive locks in a background automatic process, so if\nyou really need vacuum full (and I question your need to) then you must\nissue it yourself.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 17 Oct 2007 13:07:42 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "On 10/17/07, Stéphane Schildknecht\n<[email protected]> wrote:\n> Tom Lane a écrit :\n>\n> Yes, I am. The command line is (in a shell script whom ouput is\n> redirected in a global file) :\n>\n> vacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n>\n>\n> That does not explain why we don't get FSM statitics display.\n\nHmmm. Have you tried running that command interactively? I'm just\nwondering if your redirect is somehow dropping bits of the output.\n", "msg_date": "Wed, 17 Oct 2007 13:43:57 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Tom Lane a �crit :\n> =?ISO-8859-1?Q?St=E9phane_Schildknecht?= <[email protected]> writes:\n> \n>> Yes, I am. The command line is (in a shell script whom ouput is\n>> redirected in a global file) :\n>> \n>\n> \n>> vacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n>> \n> \n> \n>> That does not explain why we don't get FSM statitics display.\n>> \n>\n> Is $DBUSR a superuser? If not, some tables are likely getting skipped.\n>\n> \t\t\tregards, tom lane\n> \nNo it's not a superuser as some pg_tables are skipped, according to logs.\n\nSo that's why these information are not diplayed in 8.19. Shame on me!\nThanks for all the advice. In fact, I did not take care of that as on\n8.2.x, these information are displayed whenever you run it with\nsuperuser or not...\nWhat's more it seems (here on my linux box) it does not depends on the\nclient but on the server version.\n\nBest regards,\n\n-- \nSt�phane SCHILDKNECHT\nPr�sident de PostgreSQLFr\nhttp://www.postgresqlfr.org\n\n", "msg_date": "Thu, 18 Oct 2007 08:17:27 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Scott Marlowe a �crit :\n> On 10/17/07, St�phane Schildknecht\n> <[email protected]> wrote:\n> \n>> Tom Lane a �crit :\n>>\n>> Yes, I am. The command line is (in a shell script whom ouput is\n>> redirected in a global file) :\n>>\n>> vacuumdb -d $DBNAME -p $DBPORT -U $DBUSR -z -v\n>>\n>>\n>> That does not explain why we don't get FSM statitics display.\n>> \n>\n> Hmmm. Have you tried running that command interactively? 
I'm just\n> wondering if your redirect is somehow dropping bits of the output.\n> \n\nI tried a few combinations of redirection tests to verify that, and\nalways get all of the output or none of it.\nAccording to Tom and tests I made on another box, superuser is the key.\nThanks anyway.\n\nBest regards,\n\n-- \nSt�phane SCHILDKNECHT\nPr�sident de PostgreSQLFr\nhttp://www.postgresqlfr.org\n\n", "msg_date": "Thu, 18 Oct 2007 08:19:43 +0200", "msg_from": "=?ISO-8859-1?Q?St=E9phane_Schildknecht?=\n\t<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" } ]
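To pull the thread's advice together in runnable form: the free-space-map summary lines only appear for a database-wide VACUUM run by a superuser (on the 8.1 server discussed here), and vacuum_cost_delay is what makes more frequent vacuums tolerable alongside concurrent work. A minimal sketch, assuming a superuser session; the throttle values are illustrative rather than tuned:

    -- Cost-based vacuum delay (off by default) throttles vacuum I/O so it
    -- can run during the day without starving concurrent queries.
    SET vacuum_cost_delay = 20;    -- sleep 20 ms whenever the cost budget is spent
    SET vacuum_cost_limit = 200;   -- the default budget; lower it to throttle harder

    -- Database-wide (no table named) and run by a superuser: this is the
    -- combination that prints the final FSM summary lines.
    VACUUM VERBOSE ANALYZE;

The shell equivalent is vacuumdb -U postgres -d DBNAME -z -v; as the replies establish, the missing summary came from the non-superuser account, not from the client or the output redirection.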
[ { "msg_contents": "Whenever I turn on Autovacuum on my database, I get a ton of error \nmessages like this in my Postgres log:\n\nOct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory\nOct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on request \nof size 524287998.\n\nIt always fails on the same request. When I turn off autovacuum, they \ngo away. However, when I run VACUUM FULL manually, I don't get this \nerror.\n\nMy server has 2gb of ram, and my postgres settings are:\n\nautovacuum = on # enable autovacuum subprocess?\n # 'on' requires \nstats_start_collector\n # and stats_row_level to \nalso be on\n#autovacuum_naptime = 1min # time between autovacuum runs\n#autovacuum_vacuum_threshold = 500 # min # of tuple updates before\n # vacuum\n#autovacuum_analyze_threshold = 250 # min # of tuple updates before\n # analyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of rel size before\n # vacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of rel size before\n # analyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before \nforced vacuum\n # (change requires restart)\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovacuum, -1 means use\n # vacuum_cost_delay\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovacuum, -1 means use\n # vacuum_cost_limit\n\nshared_buffers = 20000 # min 128kB or max_connections*16kB\n # (change requires restart)\n#temp_buffers = 8MB # min 800kB\n#max_prepared_transactions = 5 # can be 0 or more\n # (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of \nshared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 4096 # min 64kB\nmaintenance_work_mem = 500MB # min 1MB\n#max_stack_depth = 2MB # min 100kB\n\n\nAny ideas as to what might be going on?\n\nThanks\nJason\n", "msg_date": "Tue, 16 Oct 2007 07:12:11 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Autovacuum running out of memory" }, { "msg_contents": "Not really a performance question, but...\n\nJason Lustig wrote:\n> Whenever I turn on Autovacuum on my database, I get a ton of error \n> messages like this in my Postgres log:\n> \n> Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory\n> Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on request of \n> size 524287998.\n> \n> It always fails on the same request. When I turn off autovacuum, they go \n> away. However, when I run VACUUM FULL manually, I don't get this error.\n\nIs there nothing before this giving the error message some context?\nIs the user and database really blank, or have you just trimmed those?\nWhat version of PG is this, and running on what O.S.?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Oct 2007 12:45:28 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "There isn't any more error message than this... it simply repeats \nevery minute or so, which is really quite strange. 
And the user & db \nis really blank in the log, I didn't trim it (if I did I would have \nreplaced it with dummy values).\n\nI'm using pg 8.2.4 on Linux 2.6.15.\n\nJason\n\n--\nJason Lustig\nCTO, MavenHaven Inc.\nwww.mavenhaven.com\nWhere the Community Finds Wisdom\n\nIsrael:\t054-231-8476\nU.S.:\t716-228-8729\nSkype:\tjasonlustig\n\n\nOn Oct 16, 2007, at 7:45 AM, Richard Huxton wrote:\n\n> Not really a performance question, but...\n>\n> Jason Lustig wrote:\n>> Whenever I turn on Autovacuum on my database, I get a ton of error \n>> messages like this in my Postgres log:\n>> Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory\n>> Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on \n>> request of size 524287998.\n>> It always fails on the same request. When I turn off autovacuum, \n>> they go away. However, when I run VACUUM FULL manually, I don't \n>> get this error.\n>\n> Is there nothing before this giving the error message some context?\n> Is the user and database really blank, or have you just trimmed those?\n> What version of PG is this, and running on what O.S.?\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n\nThere isn't any more error message than this... it simply repeats every minute or so, which is really quite strange. And the user & db is really blank in the log, I didn't trim it (if I did I would have replaced it with dummy values).I'm using pg 8.2.4 on Linux 2.6.15.Jason--Jason LustigCTO, MavenHaven Inc.www.mavenhaven.comWhere the Community Finds WisdomIsrael: 054-231-8476U.S.: 716-228-8729Skype: jasonlustig On Oct 16, 2007, at 7:45 AM, Richard Huxton wrote:Not really a performance question, but...Jason Lustig wrote: Whenever I turn on Autovacuum on my database, I get a ton of error messages like this in my Postgres log:Oct 16 06:43:47 [2897]: [1-1]  user=,db= ERROR:  out of memoryOct 16 06:43:47 [2897]: [1-2]  user=,db= DETAIL:  Failed on request of size 524287998.It always fails on the same request. When I turn off autovacuum, they go away. However, when I run VACUUM FULL manually, I don't get this error. Is there nothing before this giving the error message some context?Is the user and database really blank, or have you just trimmed those?What version of PG is this, and running on what O.S.?--   Richard Huxton  Archonet Ltd", "msg_date": "Tue, 16 Oct 2007 07:55:35 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "Jason Lustig wrote:\n> There isn't any more error message than this... it simply repeats every \n> minute or so, which is really quite strange. And the user & db is really \n> blank in the log, I didn't trim it (if I did I would have replaced it \n> with dummy values).\n\nHmm - odd that you're not getting any connection details.\n\n> I'm using pg 8.2.4 on Linux 2.6.15.\n\nFair enough.\n\n>>> Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory\n>>> Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on request \n>>> of size 524287998.\n\nWell, since this is about 500MB and your maintenance_work_mem is set to \n500MB that's the obvious place to start. 
It might just be that you've \nnot got enough free memory.\n\nWhat happens if you set maintenance_work_mem to say 50MB?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Oct 2007 13:23:12 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "I lowered the maintenance_work_mem to 50MB and am still getting the \nsame errors:\n\nOct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory\nOct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request \nof size 52428798.\nOct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory\nOct 16 09:27:57 [16421]: [1-2] user=,db= DETAIL: Failed on request \nof size 52428798.\nOct 16 09:29:44 [16500]: [1-1] user=,db= ERROR: out of memory\nOct 16 09:29:44 [16500]: [1-2] user=,db= DETAIL: Failed on request \nof size 52428798.\n\nLooking at my free memory (from TOP) I find\n\nMem: 2062364k total, 1846696k used, 215668k free, 223324k buffers\nSwap: 2104496k total, 160k used, 2104336k free, 928216k cached\n\nSo I don't think that I'm running out of memory total... it seems \nlike it's continually trying to do it. Is there a reason why Postgres \nwould be doing something without a username or database? Or is that \njust how autovacuum works?\n\nThanks,\nJason\n\n--\nJason Lustig\nIsrael:\t054-231-8476\nU.S.:\t716-228-8729\nSkype:\tjasonlustig\n\n\nOn Oct 16, 2007, at 8:23 AM, Richard Huxton wrote:\n\n> Jason Lustig wrote:\n>> There isn't any more error message than this... it simply repeats \n>> every minute or so, which is really quite strange. And the user & \n>> db is really blank in the log, I didn't trim it (if I did I would \n>> have replaced it with dummy values).\n>\n> Hmm - odd that you're not getting any connection details.\n>\n>> I'm using pg 8.2.4 on Linux 2.6.15.\n>\n> Fair enough.\n>\n>>>> Oct 16 06:43:47 [2897]: [1-1] user=,db= ERROR: out of memory\n>>>> Oct 16 06:43:47 [2897]: [1-2] user=,db= DETAIL: Failed on \n>>>> request of size 524287998.\n>\n> Well, since this is about 500MB and your maintenance_work_mem is \n> set to 500MB that's the obvious place to start. It might just be \n> that you've not got enough free memory.\n>\n> What happens if you set maintenance_work_mem to say 50MB?\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\nI lowered the maintenance_work_mem to 50MB and am still getting the same errors:Oct 16 09:26:57 [16402]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:26:57 [16402]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798.Oct 16 09:27:57 [16421]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:27:57 [16421]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798.Oct 16 09:29:44 [16500]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:29:44 [16500]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798.Looking at my free memory (from TOP) I findMem:   2062364k total,  1846696k used,   215668k free,   223324k buffersSwap:  2104496k total,      160k used,  2104336k free,   928216k cachedSo I don't think that I'm running out of memory total... it seems like it's continually trying to do it. Is there a reason why Postgres would be doing something without a username or database? 
Or is that just how autovacuum works?Thanks,Jason--Jason LustigIsrael: 054-231-8476U.S.: 716-228-8729Skype: jasonlustig On Oct 16, 2007, at 8:23 AM, Richard Huxton wrote:Jason Lustig wrote: There isn't any more error message than this... it simply repeats every minute or so, which is really quite strange. And the user & db is really blank in the log, I didn't trim it (if I did I would have replaced it with dummy values). Hmm - odd that you're not getting any connection details. I'm using pg 8.2.4 on Linux 2.6.15. Fair enough. Oct 16 06:43:47 [2897]: [1-1]  user=,db= ERROR:  out of memoryOct 16 06:43:47 [2897]: [1-2]  user=,db= DETAIL:  Failed on request of size 524287998. Well, since this is about 500MB and your maintenance_work_mem is set to 500MB that's the obvious place to start. It might just be that you've not got enough free memory.What happens if you set maintenance_work_mem to say 50MB?--   Richard Huxton  Archonet Ltd---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives?              http://archives.postgresql.org", "msg_date": "Tue, 16 Oct 2007 09:32:54 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "Jason Lustig wrote:\n> I lowered the maintenance_work_mem to 50MB and am still getting the same \n> errors:\n> \n> Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory\n> Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request of \n> size 52428798.\n> Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory\n> Oct 16 09:27:57 [16421]: [1-2] user=,db= DETAIL: Failed on request of \n> size 52428798.\n> Oct 16 09:29:44 [16500]: [1-1] user=,db= ERROR: out of memory\n> Oct 16 09:29:44 [16500]: [1-2] user=,db= DETAIL: Failed on request of \n> size 52428798.\n\nHmm - it's now failing on a request of 50MB, which shows it is in fact \nmaintenance_work_mem that's the issue.\n\n> Looking at my free memory (from TOP) I find\n> \n> Mem: 2062364k total, 1846696k used, 215668k free, 223324k buffers\n> Swap: 2104496k total, 160k used, 2104336k free, 928216k cached\n> \n> So I don't think that I'm running out of memory total... it seems like \n> it's continually trying to do it. Is there a reason why Postgres would \n> be doing something without a username or database? Or is that just how \n> autovacuum works?\n\nI've not seen an error at startup before, but if it's not connected yet \nthen that would make sense.\n\nI'm guessing this is a per-user limit that the postgres user is hitting. \nIf you \"su\" to user postgres and run \"ulimit -a\" that should show you if \nyou have any limits defined. See \"man bash\" for more details on ulimit.\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Oct 2007 15:01:10 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "On 10/16/07, Jason Lustig <[email protected]> wrote:\n\n> Looking at my free memory (from TOP) I find\n>\n> Mem: 2062364k total, 1846696k used, 215668k free, 223324k buffers\n> Swap: 2104496k total, 160k used, 2104336k free, 928216k cached\n>\n> So I don't think that I'm running out of memory total... it seems like it's\n> continually trying to do it. Is there a reason why Postgres would be doing\n> something without a username or database? Or is that just how autovacuum\n> works?\n\nYou are NOT running out of memory. 
Look at the size of your cache and\nbuffers, together they add up to over 1 Gig of memory. You've got\nplenty of free memory.\n\nI'm betting you're running postgresql under an account with a ulimit\nsetting on your memory.\n", "msg_date": "Tue, 16 Oct 2007 09:08:20 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "I ran \"ulimit -a\" for the postgres user, and here's what I got:\n\ncore file size (blocks, -c) 200000\ndata seg size (kbytes, -d) 200000\nmax nice (-e) 0\nfile size (blocks, -f) unlimited\npending signals (-i) 32635\nmax locked memory (kbytes, -l) 32\nmax memory size (kbytes, -m) 200000\nopen files (-n) 100\npipe size (512 bytes, -p) 8\nPOSIX message queues (bytes, -q) 819200\nmax rt priority (-r) 0\nstack size (kbytes, -s) 8192\ncpu time (seconds, -t) unlimited\nmax user processes (-u) 100\nvirtual memory (kbytes, -v) 200000\nfile locks (-x) unlimited\n\n\n\n--\nJason Lustig\nIsrael:\t054-231-8476\nU.S.:\t716-228-8729\nSkype:\tjasonlustig\n\n\nOn Oct 16, 2007, at 10:01 AM, Richard Huxton wrote:\n\n> Jason Lustig wrote:\n>> I lowered the maintenance_work_mem to 50MB and am still getting \n>> the same errors:\n>> Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory\n>> Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on \n>> request of size 52428798.\n>> Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory\n>> Oct 16 09:27:57 [16421]: [1-2] user=,db= DETAIL: Failed on \n>> request of size 52428798.\n>> Oct 16 09:29:44 [16500]: [1-1] user=,db= ERROR: out of memory\n>> Oct 16 09:29:44 [16500]: [1-2] user=,db= DETAIL: Failed on \n>> request of size 52428798.\n>\n> Hmm - it's now failing on a request of 50MB, which shows it is in \n> fact maintenance_work_mem that's the issue.\n>\n>> Looking at my free memory (from TOP) I find\n>> Mem: 2062364k total, 1846696k used, 215668k free, 223324k \n>> buffers\n>> Swap: 2104496k total, 160k used, 2104336k free, 928216k \n>> cached\n>> So I don't think that I'm running out of memory total... it seems \n>> like it's continually trying to do it. Is there a reason why \n>> Postgres would be doing something without a username or database? \n>> Or is that just how autovacuum works?\n>\n> I've not seen an error at startup before, but if it's not connected \n> yet then that would make sense.\n>\n> I'm guessing this is a per-user limit that the postgres user is \n> hitting. If you \"su\" to user postgres and run \"ulimit -a\" that \n> should show you if you have any limits defined. 
See \"man bash\" for \n> more details on ulimit.\n>\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n\nI ran \"ulimit -a\" for the postgres user, and here's what I got:core file size          (blocks, -c) 200000data seg size           (kbytes, -d) 200000max nice                        (-e) 0file size               (blocks, -f) unlimitedpending signals                 (-i) 32635max locked memory       (kbytes, -l) 32max memory size         (kbytes, -m) 200000open files                      (-n) 100pipe size            (512 bytes, -p) 8POSIX message queues     (bytes, -q) 819200max rt priority                 (-r) 0stack size              (kbytes, -s) 8192cpu time               (seconds, -t) unlimitedmax user processes              (-u) 100virtual memory          (kbytes, -v) 200000file locks                      (-x) unlimited --Jason LustigIsrael: 054-231-8476U.S.: 716-228-8729Skype: jasonlustig On Oct 16, 2007, at 10:01 AM, Richard Huxton wrote:Jason Lustig wrote: I lowered the maintenance_work_mem to 50MB and am still getting the same errors:Oct 16 09:26:57 [16402]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:26:57 [16402]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798.Oct 16 09:27:57 [16421]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:27:57 [16421]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798.Oct 16 09:29:44 [16500]: [1-1]  user=,db= ERROR:  out of memoryOct 16 09:29:44 [16500]: [1-2]  user=,db= DETAIL:  Failed on request of size 52428798. Hmm - it's now failing on a request of 50MB, which shows it is in fact maintenance_work_mem that's the issue. Looking at my free memory (from TOP) I findMem:   2062364k total,  1846696k used,   215668k free,   223324k buffersSwap:  2104496k total,      160k used,  2104336k free,   928216k cachedSo I don't think that I'm running out of memory total... it seems like it's continually trying to do it. Is there a reason why Postgres would be doing something without a username or database? Or is that just how autovacuum works? I've not seen an error at startup before, but if it's not connected yet then that would make sense.I'm guessing this is a per-user limit that the postgres user is hitting. If you \"su\" to user postgres and run \"ulimit -a\" that should show you if you have any limits defined. 
See \"man bash\" for more details on ulimit.--   Richard Huxton  Archonet Ltd", "msg_date": "Tue, 16 Oct 2007 10:14:30 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "Jason Lustig wrote:\n> I ran \"ulimit -a\" for the postgres user, and here's what I got:\n\n> max memory size (kbytes, -m) 200000\n> virtual memory (kbytes, -v) 200000\n\nThere you go - you probably are exceeding these.\n\nAdd some lines to /etc/security/limits.conf to increase them.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 16 Oct 2007 15:22:35 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "On 10/16/07, Jason Lustig <[email protected]> wrote:\n> I ran \"ulimit -a\" for the postgres user, and here's what I got:\n>\n> core file size (blocks, -c) 200000\n> data seg size (kbytes, -d) 200000\n> max nice (-e) 0\n> file size (blocks, -f) unlimited\n> pending signals (-i) 32635\n> max locked memory (kbytes, -l) 32\n> max memory size (kbytes, -m) 200000\n> open files (-n) 100\n> pipe size (512 bytes, -p) 8\n> POSIX message queues (bytes, -q) 819200\n> max rt priority (-r) 0\n> stack size (kbytes, -s) 8192\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 100\n> virtual memory (kbytes, -v) 200000\n> file locks (-x) unlimited\n\nThere ya go. it's limited to 200M memory.\n\nGenerally speaking, limiting postgresql to something that small is not\na good idea. Set it to ~ 1 Gig or so and see how it works.\n", "msg_date": "Tue, 16 Oct 2007 09:22:54 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "On Tue, 2007-10-16 at 10:14 -0400, Jason Lustig wrote:\n> I ran \"ulimit -a\" for the postgres user, and here's what I got:\n...\n> max memory size (kbytes, -m) 200000\n> open files (-n) 100\n> max user processes (-u) 100\n> virtual memory (kbytes, -v) 200000\n...\n\nThese settings are all quite low for a dedicated database server, they\nwould be more appropriate for a small development instance of PG sharing\na machine with several other processes.\n\nOthers have commented on the memory settings, but depending on the\nmaximum number of connections you expect to have open at any time you\nmay want to consider increasing the max user processes and open files\nsettings as well.\n\n-- Mark Lewis\n", "msg_date": "Tue, 16 Oct 2007 07:33:15 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "On Oct 16, 2007, at 10:22 AM, Richard Huxton wrote:\n\n> Add some lines to /etc/security/limits.conf to increase them.\n\nSorry for being somewhat of a linux novice -- but what is the best \nway to do this? It doesn't seem to provide matching options from \nulimit to the limits.conf file.\n\nThanks,\nJason\n", "msg_date": "Tue, 16 Oct 2007 10:54:12 -0400", "msg_from": "Jason Lustig <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "Jason Lustig escribió:\n> On Oct 16, 2007, at 10:22 AM, Richard Huxton wrote:\n>\n>> Add some lines to /etc/security/limits.conf to increase them.\n>\n> Sorry for being somewhat of a linux novice -- but what is the best way \n> to do this? 
It doesn't seem to provide matching options from ulimit to \n> the limits.conf file.\n>\n> Thanks,\n> Jason\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\nhttp://www.userlocal.com/security/secpam.php", "msg_date": "Tue, 16 Oct 2007 12:00:36 -0300", "msg_from": "Rodrigo Gonzalez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Hmm - odd that you're not getting any connection details.\n\nNot really; the autovacuum process doesn't have any connection, so those\nlog_line_prefix fields will be left empty. The weird thing about this\nis why the large maintenance_work_mem works for a regular session and\nnot for autovacuum. There really shouldn't be much difference in the\nmaximum workable setting for the two cases, AFAICS.\n\nYour later suggestion to check out the postgres user's ulimit -a\nsettings seems like the appropriate next step, but I'm not seeing\nhow ulimit would affect only some of the postmaster's children.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 11:10:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory " }, { "msg_contents": "I wrote:\n> ... The weird thing about this\n> is why the large maintenance_work_mem works for a regular session and\n> not for autovacuum. There really shouldn't be much difference in the\n> maximum workable setting for the two cases, AFAICS.\n\nAfter re-reading the thread I realized that the OP is comparing manual\nVACUUM FULL to automatic plain VACUUM, so the mystery is solved.\nPlain VACUUM tries to grab a maintenance_work_mem-sized array of\ntuple IDs immediately at command startup. VACUUM FULL doesn't work\nlike that.\n\nGiven the 200M ulimit -v, and the shared_buffers setting of 20000\n(about 160M), the behavior is all explained if we assume that shared\nmemory counts against -v. Which I think it does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 11:38:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory " }, { "msg_contents": "How about shared memory related settings of your kernel? ie. \nkernel.shmall, kernel.shmmax . 
Have a check with sysctl, maybe they \nshould be raised:\n\nhttp://www.postgresql.org/docs/8.2/interactive/kernel-resources.html\n\nRegards\n\nJason Lustig wrote:\n> I ran \"ulimit -a\" for the postgres user, and here's what I got:\n> \n> core file size (blocks, -c) 200000\n> data seg size (kbytes, -d) 200000\n> max nice (-e) 0\n> file size (blocks, -f) unlimited\n> pending signals (-i) 32635\n> max locked memory (kbytes, -l) 32\n> max memory size (kbytes, -m) 200000\n> open files (-n) 100\n> pipe size (512 bytes, -p) 8\n> POSIX message queues (bytes, -q) 819200\n> max rt priority (-r) 0\n> stack size (kbytes, -s) 8192\n> cpu time (seconds, -t) unlimited\n> max user processes (-u) 100\n> virtual memory (kbytes, -v) 200000\n> file locks (-x) unlimited\n> \n> \n> \n> --\n> Jason Lustig\n> Israel: 054-231-8476\n> U.S.: 716-228-8729\n> Skype: jasonlustig\n> \n> \n> On Oct 16, 2007, at 10:01 AM, Richard Huxton wrote:\n> \n>> Jason Lustig wrote:\n>>> I lowered the maintenance_work_mem to 50MB and am still getting the \n>>> same errors:\n>>> Oct 16 09:26:57 [16402]: [1-1] user=,db= ERROR: out of memory\n>>> Oct 16 09:26:57 [16402]: [1-2] user=,db= DETAIL: Failed on request \n>>> of size 52428798.\n>>> Oct 16 09:27:57 [16421]: [1-1] user=,db= ERROR: out of memory\n>>> Oct 16 09:27:57 [16421]: [1-2] user=,db= DETAIL: Failed on request \n>>> of size 52428798.\n>>> Oct 16 09:29:44 [16500]: [1-1] user=,db= ERROR: out of memory\n>>> Oct 16 09:29:44 [16500]: [1-2] user=,db= DETAIL: Failed on request \n>>> of size 52428798.\n>>\n>> Hmm - it's now failing on a request of 50MB, which shows it is in fact \n>> maintenance_work_mem that's the issue.\n>>\n>>> Looking at my free memory (from TOP) I find\n>>> Mem: 2062364k total, 1846696k used, 215668k free, 223324k buffers\n>>> Swap: 2104496k total, 160k used, 2104336k free, 928216k cached\n>>> So I don't think that I'm running out of memory total... it seems \n>>> like it's continually trying to do it. Is there a reason why Postgres \n>>> would be doing something without a username or database? Or is that \n>>> just how autovacuum works?\n>>\n>> I've not seen an error at startup before, but if it's not connected \n>> yet then that would make sense.\n>>\n>> I'm guessing this is a per-user limit that the postgres user is \n>> hitting. If you \"su\" to user postgres and run \"ulimit -a\" that should \n>> show you if you have any limits defined. See \"man bash\" for more \n>> details on ulimit.\n>>\n>>\n>> -- \n>> Richard Huxton\n>> Archonet Ltd\n> \n\n", "msg_date": "Wed, 17 Oct 2007 00:00:57 +0800", "msg_from": "=?UTF-8?B?5p2O5b2mIElhbiBMaQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum running out of memory" } ]
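The thread stops just short of showing what the limits.conf change looks like. A sketch, assuming the limits are enforced through pam_limits for the postgres account; the numbers are illustrative for the 2 GB box described above, and the "as" (address space) values are in kilobytes:

    # /etc/security/limits.conf
    # Tom's point is that shared_buffers (~160 MB here) counts against the
    # 200 MB virtual-memory cap, so the cap must cover shared_buffers plus
    # maintenance_work_mem with headroom. Raise files/processes too, per Mark Lewis.
    postgres   soft   as       1048576
    postgres   hard   as       1048576
    postgres   soft   nofile   4096
    postgres   hard   nofile   4096
    postgres   soft   nproc    512
    postgres   hard   nproc    512

These take effect for new logins only, so check with ulimit -a from a fresh session as the postgres user and restart the service from there; if the init script sets its own ulimit -v, that line needs raising as well.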
[ { "msg_contents": "Would it make sense to show the FSM stats for individual table vaccums as well? I'm wondering if the reason they aren't shown is because it wouldn't be useful or isn't practical, or just that it hasn't been done.\n\nBrian\n\n----- Original Message ----\nFrom: Tom Lane <[email protected]>\n\nIf you're talking about the FSM statistics display, that only gets\nprinted by a database-wide VACUUM (one that doesn't name a specific\ntable).\n\n regards, tom lane\n\n\n\n", "msg_date": "Tue, 16 Oct 2007 17:03:39 -0700 (PDT)", "msg_from": "Brian Herlihy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Brian Herlihy <[email protected]> writes:\n> Would it make sense to show the FSM stats for individual table vaccums as w=\n> ell? I'm wondering if the reason they aren't shown is because it wouldn't =\n> be useful or isn't practical, or just that it hasn't been done.\n\nIt was intentionally omitted in the original design, on the grounds that\nafter a single-table VACUUM there's no very good reason to think that\nthe global FSM stats are sufficiently complete to be accurate. Of\ncourse, in a multi-database installation the same charge could be\nleveled against the situation after a single-database VACUUM, so maybe\nthere's not a lot of strength in the argument.\n\nIIRC the code change would be trivial, it's just a matter of judgment\nwhether the extra output is useful/trustworthy.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 16 Oct 2007 21:16:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " }, { "msg_contents": "On Tue, 16 Oct 2007 17:03:39 -0700 (PDT)\nBrian Herlihy <[email protected]> wrote:\n\n> Would it make sense to show the FSM stats for individual table\n> vaccums as well? I'm wondering if the reason they aren't shown is\n> because it wouldn't be useful or isn't practical, or just that it\n> hasn't been done.\n\nI am not sure how useful it would be as the FSM is global. However what\nwould be useful is something like VACUUM SUMMARY, where I could get\n\"just\" the stats instead of all the other output that comes along with\nVERBOSE.\n\nJoshua D. Drake\n\n> \n> Brian\n> \n> ----- Original Message ----\n> From: Tom Lane <[email protected]>\n> \n> If you're talking about the FSM statistics display, that only gets\n> printed by a database-wide VACUUM (one that doesn't name a specific\n> table).\n> \n> regards, tom lane\n> \n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: don't forget to increase\n> your free space map settings\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/", "msg_date": "Tue, 16 Oct 2007 20:19:51 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Joshua D. Drake wrote:\n> On Tue, 16 Oct 2007 17:03:39 -0700 (PDT)\n> Brian Herlihy <[email protected]> wrote:\n> \n> > Would it make sense to show the FSM stats for individual table\n> > vaccums as well? 
I'm wondering if the reason they aren't shown is\n> > because it wouldn't be useful or isn't practical, or just that it\n> > hasn't been done.\n> \n> I am not sure how useful it would be as the FSM is global. However what\n> would be useful is something like VACUUM SUMMARY, where I could get\n> \"just\" the stats instead of all the other output that comes along with\n> VERBOSE.\n\nWhat would be really useful is to remove all that noise from vacuum and\nmake it appear on a view. 8.4 material all of this, of course.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira s� existe y tu est�s mintiendo\" (G. Lama)\n", "msg_date": "Wed, 17 Oct 2007 10:43:28 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> What would be really useful is to remove all that noise from vacuum and\n> make it appear on a view.\n\nWell, if you want something decoupled from VACUUM there's already\ncontrib/pg_freespacemap.\n\n> 8.4 material all of this, of course.\n\nI am hoping that we rewrite FSM into the distributed DSM structure\nthat's been talked about, so that the whole problem goes away in 8.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Oct 2007 11:56:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum goes worse " } ]
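For readers who want the FSM numbers on demand instead of buried in VACUUM VERBOSE output, a sketch of the contrib/pg_freespacemap route Tom mentions. It assumes the contrib module is installed in the database, and the view names and columns are from the 8.2-era module, so treat the exact queries as illustrative:

    -- how much of the free space map is currently occupied
    SELECT count(*) AS fsm_pages_used     FROM pg_freespacemap_pages;
    SELECT count(*) AS fsm_relations_used FROM pg_freespacemap_relations;

    -- the configured ceilings to compare against
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

If the used counts sit near the configured ceilings between vacuums, that is the same "FSM too small" signal the database-wide VACUUM VERBOSE summary would have given.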
[ { "msg_contents": "Hi everybody,\n\nsuppose you have a large table tab and two (or more) queryes like this:\n\nSELECT count(*),A FROM tab WHERE C GROUP BY A;\nSELECT count(*),B FROM tab WHERE C GROUP BY B;\n\nis there any way to get both results in a single query,\neventually through stored procedure?\nThe retrieved [count(*),A] ; [count(*),B)] data couldnt fit\non a single table, of course.\n\nThe main goal would be to get multiple results while scanning the\ntable[s] once only\nthus getting results in a faster way.\n\nThis seems to me quite a common situation but i have no clue whether a neat\nsolution can be implemented through stored procedure.\n\nAny hint?\n\nThank you\n\nStefano\n", "msg_date": "Wed, 17 Oct 2007 14:30:52 +0200", "msg_from": "\"Stefano Dal Pra\" <[email protected]>", "msg_from_op": true, "msg_subject": "two queryes in a single tablescan" }, { "msg_contents": "Stefano Dal Pra wrote:\n> suppose you have a large table tab and two (or more) queryes like this:\n> \n> SELECT count(*),A FROM tab WHERE C GROUP BY A;\n> SELECT count(*),B FROM tab WHERE C GROUP BY B;\n> \n> is there any way to get both results in a single query,\n> eventually through stored procedure?\n> The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n> on a single table, of course.\n> \n> The main goal would be to get multiple results while scanning the\n> table[s] once only\n> thus getting results in a faster way.\n> \n> This seems to me quite a common situation but i have no clue whether a neat\n> solution can be implemented through stored procedure.\n\nWith a temp table:\n\nCREATE TEMPORARY TABLE tmp AS SELECT COUNT(*) as rows, a,b FROM WHERE C\nGROUP BY a,b;\nSELECT SUM(rows), a FROM tmp GROUP BY a;\nSELECT SUM(rows), b FROM tmp GROUP BY b;\nDROP TABLE tmp;\n\n(Using temp tables in plpgsql procedures doesn't quite work until 8.3.\nBut you can use dynamic EXECUTE as a work-around. There used to be a FAQ\nentry about that, but apparently it's been removed because the problem\nhas been fixed in the upcoming release.)\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 17 Oct 2007 14:00:58 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two queryes in a single tablescan" }, { "msg_contents": "On Wed, Oct 17, 2007 at 02:30:52PM +0200, Stefano Dal Pra wrote:\n> The main goal would be to get multiple results while scanning the\n> table[s] once only\n> thus getting results in a faster way.\n\nIn 8.3, Postgres will do this for you itself -- if you already have a\nsequential scan running against a given table, another one starting in\nparallel will simply piggyback it.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 17 Oct 2007 15:15:24 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two queryes in a single tablescan" }, { "msg_contents": "On 10/17/07, Heikki Linnakangas <[email protected]> wrote:\n> Stefano Dal Pra wrote:\n> > suppose you have a large table tab and two (or more) queryes like this:\n> >\n> > SELECT count(*),A FROM tab WHERE C GROUP BY A;\n> > SELECT count(*),B FROM tab WHERE C GROUP BY B;\n> >\n> > is there any way to get both results in a single query,\n> > eventually through stored procedure?\n> > The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n> > on a single table, of course.\n> >\n> > The main goal would be to get multiple results while scanning the\n> > table[s] once only\n> > thus getting results in a faster way.\n> >\n> > This seems to me quite a common situation but i have no clue whether a neat\n> > solution can be implemented through stored procedure.\n>\n> With a temp table:\n>\n> CREATE TEMPORARY TABLE tmp AS SELECT COUNT(*) as rows, a,b FROM WHERE C\n> GROUP BY a,b;\n> SELECT SUM(rows), a FROM tmp GROUP BY a;\n> SELECT SUM(rows), b FROM tmp GROUP BY b;\n> DROP TABLE tmp;\n>\n\nThank You.\n\nI actually already do something like that:\nin a stored procedure i do create a md5 hash using passed parameters\nconverted to TEXT\nand get a unix_like timestamp using now()::abstime::integer.\nThis gets me a string like: 9ffeb60e9e6581726f7f5027b42c7942_1192443215\nwhich i do use to\nEXECUTE\n CREATE TABLE 9ffeb60e9e6581726f7f5027b42c7942_1192443215 AS\n SELECT * FROM\ngetjd('''||param1||''','''||param2||''','||param3||','||param4||')'\n\n\nThe 9ffeb60e9e6581726f7f5027b42c7942_1192443215 is what i called 'tab'\nin my first post,\nand i need to perform about 7 queryes on that. (after a while i will\ndrop the table using the timestamp part of the name, but that's\nanother point).\n\nHere is where i would like to scan once only that table. Depending on\nparameters it may get as big as 50Mb (this actually is the tablespace\nsize growth) or more with about 10^6 tuples.\n\n Stefano\n\n\n> (Using temp tables in plpgsql procedures doesn't quite work until 8.3.\n> But you can use dynamic EXECUTE as a work-around. There used to be a FAQ\n> entry about that, but apparently it's been removed because the problem\n> has been fixed in the upcoming release.)\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n", "msg_date": "Wed, 17 Oct 2007 15:21:55 +0200", "msg_from": "\"Stefano Dal Pra\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: two queryes in a single tablescan" }, { "msg_contents": "Steinar H. 
Gunderson wrote:\n> On Wed, Oct 17, 2007 at 02:30:52PM +0200, Stefano Dal Pra wrote:\n>> The main goal would be to get multiple results while scanning the\n>> table[s] once only\n>> thus getting results in a faster way.\n> \n> In 8.3, Postgres will do this for you itself -- if you already have a\n> sequential scan running against a given table, another one starting in\n> parallel will simply piggyback it.\n\nYou'd have to run the seq scans at the same time, from two different\nbackends, so it's not going to help here.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 17 Oct 2007 14:39:50 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two queryes in a single tablescan" }, { "msg_contents": "I remember when I was using SQL server we did like like that:\n\nSELECT count(CASE WHEN A THEN 1 END) AS cnt_a, count(CASE WHEN B \nTHEN 1 END) AS cnt_b FROM tab WHERE C;\n\nI did a little test with pg_bench data, also works in PostgreSQL:\n\ntest=# select count(*) from history where tid = 1;\n count\n-------\n 574\n(1 行)\n\n时间: 9.553 ms\ntest=# select count(*) from history where tid = 2;\n count\n-------\n 1107\n(1 行)\n\n时间: 8.949 ms\ntest=# select count(CASE WHEN tid = 1 then 1 END) as t1_cont, \ncount(case when tid=2 then 1 end) as t2_cnt from history ;\n t1_cont | t2_cnt\n---------+--------\n 574 | 1107\n(1 行)\n\n时间: 17.182 ms\n\nHope that helps.\n\nRegards\n\nStefano Dal Pra wrote:\n> Hi everybody,\n> \n> suppose you have a large table tab and two (or more) queryes like this:\n> \n> SELECT count(*),A FROM tab WHERE C GROUP BY A;\n> SELECT count(*),B FROM tab WHERE C GROUP BY B;\n> \n> is there any way to get both results in a single query,\n> eventually through stored procedure?\n> The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n> on a single table, of course.\n> \n> The main goal would be to get multiple results while scanning the\n> table[s] once only\n> thus getting results in a faster way.\n> \n> This seems to me quite a common situation but i have no clue whether a neat\n> solution can be implemented through stored procedure.\n> \n> Any hint?\n> \n> Thank you\n> \n> Stefano\n\n", "msg_date": "Thu, 18 Oct 2007 10:24:50 +0800", "msg_from": "=?UTF-8?B?5p2O5b2mIElhbiBMaQ==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: two queryes in a single tablescan" }, { "msg_contents": "Hi, Stefano,\n\n\"Stefano Dal Pra\" <[email protected]> wrote:\n\n> suppose you have a large table tab and two (or more) queryes like this:\n> \n> SELECT count(*),A FROM tab WHERE C GROUP BY A;\n> SELECT count(*),B FROM tab WHERE C GROUP BY B;\n> \n> is there any way to get both results in a single query,\n> eventually through stored procedure?\n> The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n> on a single table, of course.\n> \n> The main goal would be to get multiple results while scanning the\n> table[s] once only\n> thus getting results in a faster way.\n\nPostgreSQL 8.3 contains great improvements in this area, you can simply\nstart the selects from concurrent connections, and the backend will\nsynchronize the scans.\n\n\n\nRegards,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in Europe! 
www.ffii.org\nwww.nosoftwarepatents.org\n", "msg_date": "Sat, 20 Oct 2007 12:58:24 +0200", "msg_from": "Markus Schaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] two queryes in a single tablescan" }, { "msg_contents": "Markus Schaber <[email protected]> schrieb:\n> > is there any way to get both results in a single query,\n> > eventually through stored procedure?\n> > The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n> > on a single table, of course.\n> > \n> > The main goal would be to get multiple results while scanning the\n> > table[s] once only\n> > thus getting results in a faster way.\n> \n> PostgreSQL 8.3 contains great improvements in this area, you can simply\n> start the selects from concurrent connections, and the backend will\n> synchronize the scans.\n\nworks this right across different transactions? I mean, for instance, TX\na insert rows and TX b insert other rows and both clients (with\ndifferent transactions) starts a seq-scan?\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknow)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Sat, 20 Oct 2007 19:19:39 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] two queryes in a single tablescan" }, { "msg_contents": "On Oct 20, 2007, at 12:19 PM, Andreas Kretschmer wrote:\n\n> Markus Schaber <[email protected]> schrieb:\n>>> is there any way to get both results in a single query,\n>>> eventually through stored procedure?\n>>> The retrieved [count(*),A] ; [count(*),B)] data couldnt fit\n>>> on a single table, of course.\n>>>\n>>> The main goal would be to get multiple results while scanning the\n>>> table[s] once only\n>>> thus getting results in a faster way.\n>>\n>> PostgreSQL 8.3 contains great improvements in this area, you can \n>> simply\n>> start the selects from concurrent connections, and the backend will\n>> synchronize the scans.\n>\n> works this right across different transactions? I mean, for \n> instance, TX\n> a insert rows and TX b insert other rows and both clients (with\n> different transactions) starts a seq-scan?\n\nIf you are in read-committed mode and both backends start their scans \nafter the other has made its insert, then yes. Note Markus's point \nthat both queries must be initiated by concurrent connections. Since \nPostgres doesn't have any kind of shared transaction mechanism across \nconnections then this is inherent.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Sat, 20 Oct 2007 17:02:23 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] two queryes in a single tablescan" } ]
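Tying this back to the original two-GROUP-BY question: Heikki's temp-table sketch above lost its table name ("FROM WHERE C"), so here is a corrected, self-contained version. The names tab, a, b and c stand for the poster's table, the two grouping columns and the WHERE condition; the big table is scanned once, while building the temp table, and the trick only pays off when the number of distinct (a, b) combinations is much smaller than the number of rows scanned:

    CREATE TEMPORARY TABLE tmp_counts AS
        SELECT count(*) AS n, a, b
        FROM tab              -- the single scan of the large table
        WHERE c
        GROUP BY a, b;

    SELECT sum(n) AS cnt, a FROM tmp_counts GROUP BY a;   -- first result set
    SELECT sum(n) AS cnt, b FROM tmp_counts GROUP BY b;   -- second result set

    DROP TABLE tmp_counts;

Ian's CASE-based form covers the related case where the outputs are a fixed set of scalar counts (one count(CASE WHEN ... THEN 1 END) per condition) rather than two different grouping keys.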
[ { "msg_contents": "Hello Everyone,\n\nI'm struggling to get postgres to run a particular query quickly. It seems that very early on, the planner seems to mis-estimate the number of rows returned by a join which causes it to assume that there is only 1 row as it goes up the tree. It then picks a nested loop join which seems to cause the whole query to be slow. Or at least if I turn off nestloop, it runs in 216ms.\n\nexplain analyze SELECT 1\nFROM \n rpt_agencyquestioncache_171_0 par \n right outer join namemaster dem on (par.nameid = dem.nameid and dem.programid = 171) \n right join activity_parentid_view ses on (par.activity = ses.activityid and ses.programid=171) \n left join (\n select ct0.inter_agency_id,ct0.nameid \n from rpt_agencyquestioncache_171_0 ct0 \n join rpt_agencyquestioncache_171_2 ct2 on ct2.participantid =ct0.participantid\n ) as par30232 on (dem.nameid=par30232.nameid and par30232.inter_agency_id=30232)\nWHERE \n ( ( (par.provider_lfm) ='Child Guidance Treatment Centers Inc.'))\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=1990.12..5666.92 rows=1 width=0) (actual time=82.185..91511.455 rows=1199 loops=1)\n Join Filter: (dem.nameid = ct0.nameid)\n -> Nested Loop Left Join (cost=45.92..1324.06 rows=1 width=4) (actual time=0.973..74.206 rows=1199 loops=1)\n -> Nested Loop (cost=45.92..1323.33 rows=1 width=8) (actual time=0.964..61.264 rows=1199 loops=1)\n -> Hash Join (cost=45.92..1251.07 rows=21 width=8) (actual time=0.948..10.439 rows=1199 loops=1)\n Hash Cond: (par.activity = a.activityid)\n -> Bitmap Heap Scan on rpt_agencyquestioncache_171_0 par (cost=21.92..1222.19 rows=1247 width=8) (actual time=0.415..3.081 rows=1199 loops=1)\n Recheck Cond: (provider_lfm = 'Child Guidance Treatment Centers Inc.'::text)\n -> Bitmap Index Scan on rpt_aqc_45604_provider_lfm (cost=0.00..21.61 rows=1247 width=0) (actual time=0.394..0.394 rows=1199 loops=1)\n Index Cond: (provider_lfm = 'Child Guidance Treatment Centers Inc.'::text)\n -> Hash (cost=19.21..19.21 rows=383 width=4) (actual time=0.513..0.513 rows=383 loops=1)\n -> Index Scan using activity_programid_idx on activity a (cost=0.00..19.21 rows=383 width=4) (actual time=0.034..0.307 rows=383 loops=1)\n Index Cond: (programid = 171)\n -> Index Scan using nameid_pk on namemaster dem (cost=0.00..3.43 rows=1 width=4) (actual time=0.023..0.036 rows=1 loops=1199)\n Index Cond: (par.nameid = dem.nameid)\n Filter: (programid = 171)\n -> Index Scan using activity_session_session_idx on activity_session s (cost=0.00..0.72 rows=1 width=4) (actual time=0.007..0.007 rows=0 loops=1199)\n Index Cond: (a.activityid = s.\"session\")\n -> Hash Join (cost=1944.20..4292.49 rows=4029 width=4) (actual time=59.732..74.897 rows=4130 loops=1199)\n Hash Cond: (ct2.participantid = ct0.participantid)\n -> Seq Scan on rpt_agencyquestioncache_171_2 ct2 (cost=0.00..1747.00 rows=74800 width=4) (actual time=0.008..28.442 rows=74800 loops=1199)\n -> Hash (cost=1893.84..1893.84 rows=4029 width=8) (actual time=5.578..5.578 rows=4130 loops=1)\n -> Bitmap Heap Scan on rpt_agencyquestioncache_171_0 ct0 (cost=55.48..1893.84 rows=4029 width=8) (actual time=0.625..3.714 rows=4130 loops=1)\n Recheck Cond: (inter_agency_id = 30232)\n -> Bitmap Index Scan on rpt_aqc_45604_inter_agency_id (cost=0.00..54.47 rows=4029 width=0) (actual time=0.609..0.609 rows=4130 loops=1)\n Index Cond: (inter_agency_id = 30232)\n 
Total runtime: 91514.109 ms\n(27 rows)\n\nI've increased statistics to 100 of all pertinent columns in the query to no effect.\n\nI've vacuumed and all analyzed all tables in question. Autovac is on.\n\nSettings of interest in postgresql.conf:\n\nshared_buffers = 1024MB \nwork_mem = 256MB\nmaintenance_work_mem = 256MB \nrandom_page_cost = 2.0\n\nPG version: 8.2.4\nServer Mem: 2G Ram\n\nIf I reduce random_page_cost to 1.0, I get the following query plan.\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=20.87..4377.13 rows=1 width=0) (actual time=146.402..29585.011 rows=1199 loops=1)\n -> Nested Loop Left Join (cost=20.87..4376.62 rows=1 width=4) (actual time=146.287..29572.257 rows=1199 loops=1)\n Join Filter: (dem.nameid = ct0.nameid)\n -> Nested Loop (cost=20.87..857.70 rows=1 width=8) (actual time=1.676..53.423 rows=1199 loops=1)\n -> Hash Join (cost=20.87..818.60 rows=21 width=8) (actual time=1.514..17.276 rows=1199 loops=1)\n Hash Cond: (par.activity = a.activityid)\n -> Index Scan using rpt_aqc_45604_provider_lfm on rpt_agencyquestioncache_171_0 par (cost=0.00..792.85 rows=1247 width=8) (actual time=0.293..9.976 rows=1199 loops=1)\n Index Cond: (provider_lfm = 'Child Guidance Treatment Centers Inc.'::text)\n -> Hash (cost=16.08..16.08 rows=383 width=4) (actual time=0.940..0.940 rows=383 loops=1)\n -> Index Scan using activity_programid_idx on activity a (cost=0.00..16.08 rows=383 width=4) (actual time=0.135..0.676 rows=383 loops=1)\n Index Cond: (programid = 171)\n -> Index Scan using nameid_pk on namemaster dem (cost=0.00..1.85 rows=1 width=4) (actual time=0.024..0.026 rows=1 loops=1199)\n Index Cond: (par.nameid = dem.nameid)\n Filter: (programid = 171)\n -> Nested Loop (cost=0.00..3468.56 rows=4029 width=4) (actual time=0.087..23.199 rows=4130 loops=1199)\n -> Index Scan using rpt_aqc_45604_inter_agency_id on rpt_agencyquestioncache_171_0 ct0 (cost=0.00..1126.10 rows=4029 width=8) (actual time=0.019..2.517 rows=4130 loops=1199)\n Index Cond: (inter_agency_id = 30232)\n -> Index Scan using rpt_aqc_45606_participantid on rpt_agencyquestioncache_171_2 ct2 (cost=0.00..0.57 rows=1 width=4) (actual time=0.003..0.003 rows=1 loops=4951870)\n Index Cond: (ct2.participantid = ct0.participantid)\n -> Index Scan using activity_session_session_idx on activity_session s (cost=0.00..0.49 rows=1 width=4) (actual time=0.007..0.007 rows=0 loops=1199)\n Index Cond: (a.activityid = s.\"session\")\n Total runtime: 29587.932 ms\n(22 rows)\n\nWith nestloop off I get the following query plan:\nset enable_nestloop=false;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=6393.91..7603.11 rows=1 width=0) (actual time=210.324..215.785 rows=1199 loops=1)\n Hash Cond: (a.activityid = s.\"session\")\n -> Hash Left Join (cost=6130.14..7339.01 rows=1 width=4) (actual time=175.072..179.623 rows=1199 loops=1)\n Hash Cond: (dem.nameid = ct0.nameid)\n -> Hash Join (cost=1787.29..2992.69 rows=1 width=8) (actual time=92.812..96.361 rows=1199 loops=1)\n Hash Cond: (par.nameid = dem.nameid)\n -> Hash Join (cost=45.92..1251.07 rows=21 width=8) (actual time=1.046..3.148 rows=1199 loops=1)\n Hash Cond: (par.activity = a.activityid)\n -> Bitmap Heap Scan on 
rpt_agencyquestioncache_171_0 par (cost=21.92..1222.19 rows=1247 width=8) (actual time=0.453..1.126 rows=1199 loops=1)\n Recheck Cond: (provider_lfm = 'Child Guidance Treatment Centers Inc.'::text)\n -> Bitmap Index Scan on rpt_aqc_45604_provider_lfm (cost=0.00..21.61 rows=1247 width=0) (actual time=0.433..0.433 rows=1199 loops=1)\n Index Cond: (provider_lfm = 'Child Guidance Treatment Centers Inc.'::text)\n -> Hash (cost=19.21..19.21 rows=383 width=4) (actual time=0.566..0.566 rows=383 loops=1)\n -> Index Scan using activity_programid_idx on activity a (cost=0.00..19.21 rows=383 width=4) (actual time=0.035..0.303 rows=383 loops=1)\n Index Cond: (programid = 171)\n -> Hash (cost=1551.74..1551.74 rows=15170 width=4) (actual time=91.725..91.725 rows=15575 loops=1)\n -> Index Scan using namemaster_programid_idx on namemaster dem (cost=0.00..1551.74 rows=15170 width=4) (actual time=0.197..81.753 rows=15575 loops=1)\n Index Cond: (programid = 171)\n -> Hash (cost=4292.49..4292.49 rows=4029 width=4) (actual time=82.217..82.217 rows=4130 loops=1)\n -> Hash Join (cost=1944.20..4292.49 rows=4029 width=4) (actual time=65.129..79.879 rows=4130 loops=1)\n Hash Cond: (ct2.participantid = ct0.participantid)\n -> Seq Scan on rpt_agencyquestioncache_171_2 ct2 (cost=0.00..1747.00 rows=74800 width=4) (actual time=0.014..28.093 rows=74800 loops=1)\n -> Hash (cost=1893.84..1893.84 rows=4029 width=8) (actual time=6.238..6.238 rows=4130 loops=1)\n -> Bitmap Heap Scan on rpt_agencyquestioncache_171_0 ct0 (cost=55.48..1893.84 rows=4029 width=8) (actual time=0.726..3.652 rows=4130 loops=1)\n Recheck Cond: (inter_agency_id = 30232)\n -> Bitmap Index Scan on rpt_aqc_45604_inter_agency_id (cost=0.00..54.47 rows=4029 width=0) (actual time=0.702..0.702 rows=4130 loops=1)\n Index Cond: (inter_agency_id = 30232)\n -> Hash (cost=150.01..150.01 rows=9101 width=4) (actual time=35.206..35.206 rows=9101 loops=1)\n -> Seq Scan on activity_session s (cost=0.00..150.01 rows=9101 width=4) (actual time=9.201..29.911 rows=9101 loops=1)\n Total runtime: 216.649 ms\n\n", "msg_date": "Wed, 17 Oct 2007 13:34:24 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect estimates on columns" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> I'm struggling to get postgres to run a particular query quickly.\n\nThe key problem seems to be the join size misestimate here:\n\n> -> Hash Join (cost=45.92..1251.07 rows=21 width=8) (actual time=0.948..10.439 rows=1199 loops=1)\n> Hash Cond: (par.activity = a.activityid)\n> -> Bitmap Heap Scan on rpt_agencyquestioncache_171_0 par (cost=21.92..1222.19 rows=1247 width=8) (actual time=0.415..3.081 rows=1199 loops=1)\n> -> Hash (cost=19.21..19.21 rows=383 width=4) (actual time=0.513..0.513 rows=383 loops=1)\n\nEvidently it's not realizing that every row of par will have a join\npartner, but why not? 
I suppose a.activityid is unique, and in most\ncases that I've seen the code seems to get that case right.\n\nWould you show us the pg_stats rows for par.activity and a.activityid?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Oct 2007 14:49:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on columns " }, { "msg_contents": "On Wednesday 17 October 2007 14:49, Tom Lane wrote:\n> Chris Kratz <[email protected]> writes:\n> > I'm struggling to get postgres to run a particular query quickly.\n>\n> The key problem seems to be the join size misestimate here:\n> > -> Hash Join (cost=45.92..1251.07 rows=21 width=8)\n> > (actual time=0.948..10.439 rows=1199 loops=1) Hash Cond: (par.activity =\n> > a.activityid)\n> > -> Bitmap Heap Scan on\n> > rpt_agencyquestioncache_171_0 par (cost=21.92..1222.19 rows=1247\n> > width=8) (actual time=0.415..3.081 rows=1199 loops=1) -> Hash \n> > (cost=19.21..19.21 rows=383 width=4) (actual time=0.513..0.513 rows=383\n> > loops=1)\n>\n> Evidently it's not realizing that every row of par will have a join\n> partner, but why not? I suppose a.activityid is unique, and in most\n> cases that I've seen the code seems to get that case right.\n>\n> Would you show us the pg_stats rows for par.activity and a.activityid?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n\nThanks for the reply and help Tom, \n\nactivityid is unique on the activity table.\nactivity on par is a child table to activity, with multiple rows per activityid.\n\nHere are the pg_stats rows for par.activity and a.activityid.\n\n# select * from pg_stats where tablename='activity' and attname='activityid';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n------------+-----------+------------+-----------+-----------+------------+------------------+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | activity | activityid | 0 | 4 | -1 | | | {232,2497,3137,3854,4210,5282,9318,11396,12265,12495,12760,13509,13753,15000,15379,15661,16791,17230,17703,18427,18987,19449,19846,20322,20574,20926,21210,21501,21733,22276,22519,23262,24197,24512,24898,25616,25893,26175,26700,27141,27509,27759,29554,29819,30160,30699,32343,32975,33227,33493,33753,33980,34208,34534,34780,35007,35235,35641,35922,36315,36678,37998,38343,38667,39046,39316,39778,40314,40587,40884,41187,41860,42124,42399,42892,43313,43546,43802,45408,45740,46030,46406,46655,47031,47556,47881,48190,48528,48810,49038,49319,49704,49978,50543,50916,51857,52134,52380,52691,53011,53356} | 0.703852\n(1 row)\n\n# select * from pg_stats where tablename='rpt_agencyquestioncache_171_0' and attname='activity';\n schemaname | tablename | attname | null_frac | avg_width | n_distinct 
| most_common_vals | most_common_freqs | histogram_bounds | correlation\n------------+-------------------------------+----------+-----------+-----------+------------+---------------------------------------------------------------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------\n public | rpt_agencyquestioncache_171_0 | activity | 0 | 4 | 248 | {32905,32910,32912,32909,33530,32906,32931,33839,33837,32943,35267,33832,35552,35550,42040,39167} | {0.1471,0.125533,0.1114,0.0935667,0.0903667,0.0538,0.0378,0.0347667,0.0342667,0.0292667,0.0256333,0.0245333,0.0142333,0.0128333,0.0110333,0.00883333} | {32911,32953,32955,33745,33791,33811,33812,33813,33817,33820,33825,33827,33836,33838,33838,33843,33852,33859,33860,33862,33868,33869,33870,33872,33872,33872,33874,33875,33877,33879,33880,33881,33884,33885,33886,33886,33894,33899,33899,33905,33907,33911,33912,33915,33926,35549,35551,35551,35715,35716,35716,35717,35727,35734,39262,42010,42015,42015,42015,42015,42032,42032,42032,42042,42042,42045,43107,43108,43110,43111,43114,44017,44017,44017,44017,45824,46370,46370,46371,46371,46372,46372,46373,46373,46374,46375,46376,46377,46377,46378,46379,46387,52175,52177,52195,52204,52229,52447,52451,52454,53029} | -0.44304\n(1 row)\n\n-Chris\n", "msg_date": "Wed, 17 Oct 2007 15:43:40 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect estimates on columns" }, { "msg_contents": "Chris Kratz <[email protected]> writes:\n> On Wednesday 17 October 2007 14:49, Tom Lane wrote:\n>> Evidently it's not realizing that every row of par will have a join\n>> partner, but why not? I suppose a.activityid is unique, and in most\n>> cases that I've seen the code seems to get that case right.\n>> \n>> Would you show us the pg_stats rows for par.activity and a.activityid?\n\n> Here are the pg_stats rows for par.activity and a.activityid.\n\nHmm, nothing out of the ordinary there.\n\nI poked at this a bit and realized that what seems to be happening is\nthat the a.programid = 171 condition is reducing the selectivity\nestimate --- that is, it knows that that will filter out X percent of\nthe activity rows, and it assumes that *the size of the join result will\nbe reduced by that same percentage*, since join partners would then be\nmissing for some of the par rows. The fact that the join result doesn't\nactually decrease in size at all suggests that there's some hidden\ncorrelation between the programid condition and the condition on\npar.provider_lfm. Is that true? 
Maybe you could eliminate one of the\ntwo conditions from the query?\n\nSince PG doesn't have any cross-table (or even cross-column) statistics\nit's not currently possible for the optimizer to deal very well with\nhidden correlations like this ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 17 Oct 2007 20:23:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on columns " }, { "msg_contents": "Chris Kratz skrev:\n> Hello Everyone,\n> \n> I'm struggling to get postgres to run a particular query quickly. It\n> seems that very early on, the planner seems to mis-estimate the\n> number of rows returned by a join which causes it to assume that\n> there is only 1 row as it goes up the tree. It then picks a nested\n> loop join which seems to cause the whole query to be slow. Or at\n> least if I turn off nestloop, it runs in 216ms.\n> \n> explain analyze SELECT 1 FROM rpt_agencyquestioncache_171_0 par right\n> outer join namemaster dem on (par.nameid = dem.nameid and\n> dem.programid = 171) right join activity_parentid_view ses on\n> (par.activity = ses.activityid and ses.programid=171) left join ( \n> select ct0.inter_agency_id,ct0.nameid from\n> rpt_agencyquestioncache_171_0 ct0 join rpt_agencyquestioncache_171_2\n> ct2 on ct2.participantid =ct0.participantid ) as par30232 on\n> (dem.nameid=par30232.nameid and par30232.inter_agency_id=30232) WHERE\n> ( ( (par.provider_lfm) ='Child Guidance Treatment Centers Inc.'))\n\nThe first two join-conditions seem strange - I think those are the cause\nof the performance problems. The result of the first join, for instance,\nis the return of all rows from dem, and matching rows from par IFF\ndem.program_id =171 (NULLS otherwise).\n\nIn fact, since you are using a condition on the par table, you could\njust as well use inner joins for\nthe first two cases.\n\nHope this helps,\n\nNis\n\n", "msg_date": "Thu, 18 Oct 2007 11:29:08 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on columns" }, { "msg_contents": "On Wednesday 17 October 2007 20:23, Tom Lane wrote:\n> Chris Kratz <[email protected]> writes:\n> > On Wednesday 17 October 2007 14:49, Tom Lane wrote:\n> >> Evidently it's not realizing that every row of par will have a join\n> >> partner, but why not? I suppose a.activityid is unique, and in most\n> >> cases that I've seen the code seems to get that case right.\n> >>\n> >> Would you show us the pg_stats rows for par.activity and a.activityid?\n> >\n> > Here are the pg_stats rows for par.activity and a.activityid.\n>\n> Hmm, nothing out of the ordinary there.\n>\n> I poked at this a bit and realized that what seems to be happening is\n> that the a.programid = 171 condition is reducing the selectivity\n> estimate --- that is, it knows that that will filter out X percent of\n> the activity rows, and it assumes that *the size of the join result will\n> be reduced by that same percentage*, since join partners would then be\n> missing for some of the par rows. The fact that the join result doesn't\n> actually decrease in size at all suggests that there's some hidden\n> correlation between the programid condition and the condition on\n> par.provider_lfm. Is that true? 
Maybe you could eliminate one of the\n> two conditions from the query?\n>\n> Since PG doesn't have any cross-table (or even cross-column) statistics\n> it's not currently possible for the optimizer to deal very well with\n> hidden correlations like this ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\nYes, you are correct. Programid is a \"guard\" condition to make sure a user \ncannot look at rows outside of their program. In this particular case the \npar table only has rows for this agency (at one point in time, all rows were \nin one table), so I was able to remove the check on programid on \"a\". This \ncauses my example query to run in 200ms. That's wonderful.\n\nSo, to recap. We had a filter on the join clause which really didn't in this \ncase affect the selectivity of the join table. But the optimizer assumed \nthat the selectivity would be affected causing it to think the join would \ngenerate only a few rows. Since it thought that there would be relatively \nfew rows returned, it used a nestloop instead of another type of join that \nwould have been faster with larger data sets.\n\nThanks for all your help.\n\n-Chris \n", "msg_date": "Thu, 18 Oct 2007 16:03:50 -0400", "msg_from": "Chris Kratz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect estimates on columns" } ]
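A minimal sketch of the fix Chris describes above, assuming the only change needed is dropping the programid guard from the join to the activity view (the view definition itself is not shown in the thread, so this treats ses.programid as directly droppable); the rest of the query is as in the original post:

  SELECT 1
  FROM rpt_agencyquestioncache_171_0 par
    RIGHT OUTER JOIN namemaster dem
      ON (par.nameid = dem.nameid AND dem.programid = 171)
    RIGHT JOIN activity_parentid_view ses
      ON (par.activity = ses.activityid)   -- "AND ses.programid = 171" removed
    LEFT JOIN (
      SELECT ct0.inter_agency_id, ct0.nameid
      FROM rpt_agencyquestioncache_171_0 ct0
      JOIN rpt_agencyquestioncache_171_2 ct2
        ON ct2.participantid = ct0.participantid
    ) AS par30232
      ON (dem.nameid = par30232.nameid AND par30232.inter_agency_id = 30232)
  WHERE par.provider_lfm = 'Child Guidance Treatment Centers Inc.';

With the guard gone the planner no longer multiplies the join estimate by the programid selectivity, so it stops predicting a one-row result and avoids the nested loops that made the original form run for roughly 90 seconds.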
[ { "msg_contents": "I am trying to join three quite large tables, and the query is \nunbearably slow(meaning I can't get results in more than a day of \nprocessing).\nI've tried the basic optimizations I understand, and nothing has \nimproved the execute speed.... any help with this would be greatly \nappreciated\n\n\nThe three tables are quite large:\n sequence_fragment = 4.5 million rows\n sequence_external_info = 10million rows\n sequence_alignment = 500 million rows\n\n\nThe query I am attempting to run is this:\n\nselect sf.library_id, fio.clip_type , count(distinct(sa.sequence_id))\nfrom sequence_alignment sa, sequence_fragment sf, \nfragment_external_info fio\nwhere sf.seq_frag_id = fio.sequence_frag_id\nand sf.sequence_id = sa.sequence_id\ngroup by sf.library_id, fio.clip_type\n\n\nNOTES:\n~there are indexes on all of the fields being joined (but not on \nlibrary_id or clip_type ). \n~Everything has been re-analyzed post index creation\n~I've tried \"set enable_seqscan=off\" and set (join_table_order or \nsomething) = 1\n\nThe explain plan is as follows:\n\n QUERY \nPLAN \n\n ------------------------------------------------------------------------------------------------------------------------------------------------------- \n\n GroupAggregate (cost=1443436673.93..1480593403.29 rows=54 \nwidth=16) \n\n -> Sort (cost=1443436673.93..1452725856.10 rows=3715672868 \nwidth=16) \n\n Sort Key: sf.library_id, \nfio.clip_type \n\n -> Merge Join (cost=263624049.25..319410068.18 \nrows=3715672868 \nwidth=16) \n\n Merge Cond: (sf.sequence_id = \nsa.sequence_id) \n\n -> Sort (cost=38102888.77..38128373.54 rows=10193906 \nwidth=16) \n\n Sort Key: \nsf.sequence_id \n\n -> Hash Join (cost=5305576.14..36080036.76 \nrows=10193906 \nwidth=16) \n Hash Cond: (fio.sequence_frag_id = \nsf.seq_frag_id) \n\n -> Index Scan using \nfrag_ext_info_seq_frag_id on fragment_external_info fio \n(cost=0.00..30450510.27 rows=10193906 width=12)\n -> Hash (cost=5223807.54..5223807.54 \nrows=4453728 \nwidth=12) \n -> Index Scan using seq_frag_seqid_ind \non sequence_fragment sf (cost=0.00..5223807.54 rows=4453728 \nwidth=12) \n -> Sort (cost=225521160.48..226688766.88 rows=467042560 \nwidth=4) \n\n Sort Key: \nsa.sequence_id \n\n -> Seq Scan on sequence_alignment sa \n(cost=100000000.00..110379294.60 rows=467042560 \nwidth=4) \n\n 15 record(s) selected [Fetch MetaData: 0/ms] [Fetch Data: 0/ms]\n\n\nThanks in advance!\nJohn Major\n\n", "msg_date": "Thu, 18 Oct 2007 13:01:01 -0400", "msg_from": "John Major <[email protected]>", "msg_from_op": true, "msg_subject": "How to improve speed of 3 table join &group (HUGE tables)" }, { "msg_contents": "John Major skrev:\n> I am trying to join three quite large tables, and the query is\n> unbearably slow(meaning I can't get results in more than a day of\n> processing).\n> I've tried the basic optimizations I understand, and nothing has\n> improved the execute speed.... 
any help with this would be greatly\n> appreciated\n> \n> \n> The three tables are quite large:\n> sequence_fragment = 4.5 million rows\n> sequence_external_info = 10million rows\n> sequence_alignment = 500 million rows\n> \n> \n> The query I am attempting to run is this:\n> \n> select sf.library_id, fio.clip_type , count(distinct(sa.sequence_id))\n> from sequence_alignment sa, sequence_fragment sf,\n> fragment_external_info fio\n> where sf.seq_frag_id = fio.sequence_frag_id\n> and sf.sequence_id = sa.sequence_id\n> group by sf.library_id, fio.clip_type\n> \n> \n> NOTES:\n> ~there are indexes on all of the fields being joined (but not on\n> library_id or clip_type ). ~Everything has been re-analyzed post index\n> creation\n\nWhat are the primary (and candidate) keys of the tables? Are any of the\nfields nullable? How many distinct values exist for\nsequence_alignment.sequence_id?\n\n> ~I've tried \"set enable_seqscan=off\" and set (join_table_order or\n> something) = 1\n\n\nIt would help if you turned the settings back to defaults before doing\nthe ANALYZE - or provide the results of that case as well.\n\n> The explain plan is as follows:\n\n[cut]\n\nWithout trying to understand the ANALYZE output, I would suggest two\npossible optimizations:\n\n- Using count(distinct(sf.sequence_id)) instead of\ncount(distinct(sa.sequence_id)).\n\n- Replacing the join to sequence_alignment with \"WHERE sf.sequence_id IN\n(SELECT sequence_id from sequence_alignment)\".\n\nThe first one probably won't help (nor hurt), but the second one might\nbe able to get rid of the table scan, or at least the need do the full\nmerge join (which returns an estimated 3 billion rows).\n\nHope this helps,\n\nNis\n\n", "msg_date": "Thu, 18 Oct 2007 20:58:17 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE tables)" }, { "msg_contents": "John Major wrote:\n> ~there are indexes on all of the fields being joined (but not on\n> library_id or clip_type ). ~Everything has been re-analyzed post index\n> creation\n> ~I've tried \"set enable_seqscan=off\" and set (join_table_order or\n> something) = 1\n\nSeqscanning and sorting a table is generally faster than a full scan of\nthe table using an index scan, unless the heap is roughly in the index\norder. You probably need to CLUSTER the tables to use the indexes\neffectively.\n\nAre you sure you have an index on sequence_alignment.sequence_id? The\nplanner seems to choose a seqscan + sort, even though you've set\nenable_seqscan=false.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 18 Oct 2007 20:06:11 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE tables)" }, { "msg_contents": "Hello Nis-\n\nI did reset the defaults before running the explain.\n\nPrimary keys for the tables.\n sequence_fragment.seq_frag_id\n sequence.sequence_id\n\nCandidate keys.\n fragment_external_info.seq_frag_id (FK to sequence_fragment.seq_frag_id)\n sequence_alignment.sequence_id (FK to sequence_fragment.sequence_id).\n\nNone of the fields are nullable.\n\nsequence is the anchor table.\n seq_frag_id is the primary key (and foreign key to \nfragment_external_info) ~4.5 million unique entries\n sequence_id is an indexed field. 
~3 million distinct IDs\n\nsequence_alignment has 500million entries, but i join on sequence_id \nwhich has ~3million entries.\n\n\nWhen I make the suggested changes, the new query is:\n select sf.library_id, fio.clip_type , count(sf.sequence_id)\n from sequence_fragment sf, fragment_external_info fio\n where sf.seq_frag_id = fio.sequence_frag_id\n and sf.sequence_id IN\n (SELECT sequence_id from sequence_alignment)\n group by sf.library_id, fio.clip_type\n\nAfter making the 2 changes, the cost dropped dramatically... but is \nstill very high.\nOriginal Explain cost:\ncost=1308049564..1345206293 rows=54 width=16\n\nNew Explain cost:\ncost=11831119..11831120 rows=54 width=16\n\nJohn\n\n\n\n\nNis J�rgensen wrote:\n> John Major skrev:\n> \n>> I am trying to join three quite large tables, and the query is\n>> unbearably slow(meaning I can't get results in more than a day of\n>> processing).\n>> I've tried the basic optimizations I understand, and nothing has\n>> improved the execute speed.... any help with this would be greatly\n>> appreciated\n>>\n>>\n>> The three tables are quite large:\n>> sequence_fragment = 4.5 million rows\n>> sequence_external_info = 10million rows\n>> sequence_alignment = 500 million rows\n>>\n>>\n>> The query I am attempting to run is this:\n>>\n>> select sf.library_id, fio.clip_type , count(distinct(sa.sequence_id))\n>> from sequence_alignment sa, sequence_fragment sf,\n>> fragment_external_info fio\n>> where sf.seq_frag_id = fio.sequence_frag_id\n>> and sf.sequence_id = sa.sequence_id\n>> group by sf.library_id, fio.clip_type\n>>\n>>\n>> NOTES:\n>> ~there are indexes on all of the fields being joined (but not on\n>> library_id or clip_type ). ~Everything has been re-analyzed post index\n>> creation\n>> \n>\n> What are the primary (and candidate) keys of the tables? Are any of the\n> fields nullable? How many distinct values exist for\n> sequence_alignment.sequence_id?\n>\n> \n>> ~I've tried \"set enable_seqscan=off\" and set (join_table_order or\n>> something) = 1\n>> \n>\n>\n> It would help if you turned the settings back to defaults before doing\n> the ANALYZE - or provide the results of that case as well.\n>\n> \n>> The explain plan is as follows:\n>> \n>\n> [cut]\n>\n> Without trying to understand the ANALYZE output, I would suggest two\n> possible optimizations:\n>\n> - Using count(distinct(sf.sequence_id)) instead of\n> count(distinct(sa.sequence_id)).\n>\n> - Replacing the join to sequence_alignment with \"WHERE sf.sequence_id IN\n> (SELECT sequence_id from sequence_alignment)\".\n>\n> The first one probably won't help (nor hurt), but the second one might\n> be able to get rid of the table scan, or at least the need do the full\n> merge join (which returns an estimated 3 billion rows).\n>\n> Hope this helps,\n>\n> Nis\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n", "msg_date": "Thu, 18 Oct 2007 15:46:19 -0400", "msg_from": "John Major <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE tables)" }, { "msg_contents": "Hi Hekki-\n\nWhen I turn seq_scan off for the new query:\n\nexplain\nselect sf.library_id, fio.clip_type , count(sf.sequence_id)\n from sequence_fragment sf, fragment_external_info fio\n where sf.seq_frag_id = fio.sequence_frag_id\n and sf.sequence_id IN\n (SELECT sequence_id from sequence_alignment)\n group by sf.library_id, fio.clip_type\n\nThe index is used... 
but the cost gets worse!\nit goes from:\n11831119\n-TO-\n53654888\n\nActually... The new query executes in ~ 15 minutes... which is good \nenough for me for now.\n\nThanks Nis!\n\njohn\n\n\n\nHeikki Linnakangas wrote:\n> John Major wrote:\n> \n>> ~there are indexes on all of the fields being joined (but not on\n>> library_id or clip_type ). ~Everything has been re-analyzed post index\n>> creation\n>> ~I've tried \"set enable_seqscan=off\" and set (join_table_order or\n>> something) = 1\n>> \n>\n> Seqscanning and sorting a table is generally faster than a full scan of\n> the table using an index scan, unless the heap is roughly in the index\n> order. You probably need to CLUSTER the tables to use the indexes\n> effectively.\n>\n> Are you sure you have an index on sequence_alignment.sequence_id? The\n> planner seems to choose a seqscan + sort, even though you've set\n> enable_seqscan=false.\n>\n> \n\n", "msg_date": "Thu, 18 Oct 2007 16:04:57 -0400", "msg_from": "John Major <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE tables)" }, { "msg_contents": "Hi,\n\nhow about:\n\nselect sf.library_id, fio.clip_type , count(sf.sequence_id)\n from sequence_fragment sf, fragment_external_info fio\n ,(SELECT distinct sequence_id from sequence_alignment) sa\n where sf.seq_frag_id = fio.sequence_frag_id\n and sf.sequence_id = sa.sequence_id\n group by sf.library_id, fio.clip_type\n\nI don't know postgres well, but I would put my bet in Oracle in that \nderived table instead of that in clause.\n\nIsmo\n\nOn Thu, 18 Oct 2007, John Major wrote:\n\n> Hi Hekki-\n> \n> When I turn seq_scan off for the new query:\n> \n> explain\n> select sf.library_id, fio.clip_type , count(sf.sequence_id)\n> from sequence_fragment sf, fragment_external_info fio\n> where sf.seq_frag_id = fio.sequence_frag_id\n> and sf.sequence_id IN\n> (SELECT sequence_id from sequence_alignment)\n> group by sf.library_id, fio.clip_type\n> \n> The index is used... but the cost gets worse!\n> it goes from:\n> 11831119\n> -TO-\n> 53654888\n> \n> Actually... The new query executes in ~ 15 minutes... which is good enough for\n> me for now.\n> \n> Thanks Nis!\n> \n> john\n> \n> \n> \n> Heikki Linnakangas wrote:\n> > John Major wrote:\n> > \n> > > ~there are indexes on all of the fields being joined (but not on\n> > > library_id or clip_type ). ~Everything has been re-analyzed post index\n> > > creation\n> > > ~I've tried \"set enable_seqscan=off\" and set (join_table_order or\n> > > something) = 1\n> > > \n> >\n> > Seqscanning and sorting a table is generally faster than a full scan of\n> > the table using an index scan, unless the heap is roughly in the index\n> > order. You probably need to CLUSTER the tables to use the indexes\n> > effectively.\n> >\n> > Are you sure you have an index on sequence_alignment.sequence_id? 
The\n> > planner seems to choose a seqscan + sort, even though you've set\n> > enable_seqscan=false.\n> >\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n> \n\n", "msg_date": "Fri, 19 Oct 2007 07:40:17 +0300 (EEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE\n tables)" }, { "msg_contents": "John Major skrev:\n> Hello Nis-\n> \n> I did reset the defaults before running the explain.\n\nThis line from your original post:\n\n-> Seq Scan on sequence_alignment sa (cost=100000000.00..110379294.60\nrows=467042560 width=4)\n\nIs an indication that you didn't (AFAIK enable_seqscan=off works by\nsetting the cost of starting a seqscan to 100000000).\n\n> Candidate keys.\n> fragment_external_info.seq_frag_id (FK to sequence_fragment.seq_frag_id)\n> sequence_alignment.sequence_id (FK to sequence_fragment.sequence_id). \n\nThose are not candidate keys. A candidate key is \"something which could\nhave been chosen as the primary key\". Anyway, I think I understand your\ntable layout now. It might have been quicker if you just posted the\ndefinition of your tables. This could also have shown us that the\ncorrect indexes are in place, rather than taking your word for it.\n\nYou are absolutely certain that both sides of all FK relationships are\nindexed?\n\n> After making the 2 changes, the cost dropped dramatically... but is still very high.\n> Original Explain cost:\n> cost=1308049564..1345206293 rows=54 width=16\n> \n> New Explain cost:\n> cost=11831119..11831120 rows=54 width=16 \n\nPlease post the full output if you want more help. And preferably use\nEXPLAIN ANALYZE, now that it runs in finite time.\n\n\nNis\n\n", "msg_date": "Tue, 23 Oct 2007 13:16:46 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to improve speed of 3 table join &group (HUGE tables)" } ]
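The two query rewrites (the IN form and the DISTINCT derived table) are already spelled out above; what is only described in prose is Heikki's point about the physical layout. A sketch of that side, assuming the index on sequence_alignment.sequence_id is in fact missing or unusable — the index name below is hypothetical, CLUSTER takes an exclusive lock and rewrites the whole 500M-row table, and the CLUSTER syntax shown is the pre-8.3 form that matches the era of this thread:

  CREATE INDEX sequence_alignment_sequence_id_idx
    ON sequence_alignment (sequence_id);
  CLUSTER sequence_alignment_sequence_id_idx ON sequence_alignment;
  ANALYZE sequence_alignment;

Clustering puts the heap in sequence_id order, which is what makes an index scan over the ~3 million distinct ids competitive with the seqscan-plus-sort the planner chose.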
[ { "msg_contents": "Hi, all\n\n \n\n I am trying to improve the performance of creating index.\n\n I've set shared_buffers = 1024MB\n\n Effective_cache_size = 1024MB\n\n Work_mem = 1GB\n\n Maintenance_work_mem=512MB\n\n (I'm sure that the server process has received the SIGHUP signal)\n\nHowever, when create index, I found that the memory used by Postgres is only\n50MB. And it is very slow. How to make it faster?\n\nAll helps are appreciated.\n\n \n\nThanks.\n\nYinan\n\n\n\n\n\n\n\n\n\n\n\nHi,\nall\n \n         I\nam trying to improve the performance of creating index.\n         I’ve\nset shared_buffers = 1024MB\n                   Effective_cache_size\n= 1024MB\n                   Work_mem\n= 1GB\n                   Maintenance_work_mem=512MB\n         (I’m\nsure that the server process has received the SIGHUP signal)\nHowever, when create\nindex, I found that the memory used by Postgres is only 50MB. And it is very\nslow. How to make it faster?\nAll helps are\nappreciated.\n \nThanks.\nYinan", "msg_date": "Fri, 19 Oct 2007 19:57:32 +0800", "msg_from": "\"Yinan Li\" <[email protected]>", "msg_from_op": true, "msg_subject": "how to improve the performance of creating index" }, { "msg_contents": "Yinan Li wrote:\n> I am trying to improve the performance of creating index.\n> \n> I've set shared_buffers = 1024MB\n> \n> Effective_cache_size = 1024MB\n> \n> Work_mem = 1GB\n> \n> Maintenance_work_mem=512MB\n> \n> (I'm sure that the server process has received the SIGHUP signal)\n> \n> However, when create index, I found that the memory used by Postgres is only\n> 50MB. And it is very slow. How to make it faster?\n\nWhat version of Postgres are you using? How much RAM does the box have?\nHow big is the table? How long does the index build take? What kind of\nan I/O system do you have?\n\nmaintenance_work_mem is the one that controls how much memory is used\nfor the sort.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 19 Oct 2007 13:03:27 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to improve the performance of creating index" }, { "msg_contents": "[email protected] (\"Yinan Li\") writes:\n> �������� I am trying to improve the performance of creating index.:p>\n>\n> �������� I've set shared_buffers = 1024MB:p>\n>\n> ������������������ Effective_cache_size = 1024MB:p>\n>\n> ������������������ Work_mem = 1GB:p>\n>\n> ������������������ Maintenance_work_mem=512MB:p>\n>\n> �������� (I'm sure that the server process has received the SIGHUP signal):p>\n>\n> However, when create index, I found that the memory used by Postgres is only 50MB. And it is very\n> slow. How to make it faster?:p>\n>\n> All helps are appreciated.:p>\n\nThose values seem rather large, with the exception of the effective\ncache size, which I would expect to be somewhat bigger, based on the\nother values.\n\nNote that the values for work_mem and maintenance_work_mem get used\neach time something is sorted or maintained. So if those values get\nset high, this can pretty easily lead to scary amounts of swapping,\nwhich would tend to lead to things getting \"very slow.\"\n\nYou may want to do a census as to how much resources you have on the\nserver. 
Knowing that would help people make more rational evaluations\nof whether your parameters are sensible or not.\n-- \n(reverse (concatenate 'string \"moc.enworbbc\" \"@\" \"enworbbc\"))\nhttp://linuxdatabases.info/info/languages.html\nAll syllogisms have three parts, therefore this is not a syllogism.\n", "msg_date": "Fri, 19 Oct 2007 08:52:05 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to improve the performance of creating index" }, { "msg_contents": "Please keep the list cc'd so that others can help.\n\nYinan Li wrote:\n> What version of Postgres are you using?\n> 8.2.4\n> How much RAM does the box have?\n> 2G\n> How big is the table? \n> 256M tuples, each tuple contains 2 integers.\n> How long does the index build take?\n> About 2 hours\n> What kind of an I/O system do you have?\n> A SATA disk (7200 rpm)\n\n2 hours does sound like a very long time. With a table like that, we're\ntalking about ~7-9 GB worth of data if I did the math right.\n\nYou could try lowering shared_buffers to something like 50 MB while you\nbuild the index, to leave more RAM available for OS caching.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 19 Oct 2007 15:50:07 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to improve the performance of creating index" }, { "msg_contents": "On 10/19/07, Yinan Li <[email protected]> wrote:\n>\n> Hi, all\n> I am trying to improve the performance of creating index.\n> I've set shared_buffers = 1024MB\n> Effective_cache_size = 1024MB\n> Work_mem = 1GB\n> Maintenance_work_mem=512MB\n> (I'm sure that the server process has received the SIGHUP signal)\n>\n> However, when create index, I found that the memory used by Postgres is only\n> 50MB. And it is very slow. How to make it faster?\n\nWhat, exactly is the create index statement? I assume that if there's\nonly two columns then at worst it's a two part index (i.e. column1,\ncolumn2) which can get rather large.\n\n From what you said in your reply to Heikki, these settings are WAY too\nhigh. You shouldn't try to allocate more memory than your machine has\nto the database. with shared buffers at 1G, work mem at 1G and maint\nworkmem at 0.5 gig you could use all your memory plus 0.5G on a single\nquery.\n\nSet them at something more sane. shared_buffers at 128M to 512M,\nwork_mem at 64M, and maintenance_work_mem at 128M to 512M (max)\n\nWhat do top, vmstat, and iostat have to say about your machine while\nthe create query is going on?\n\nIf you want it to go faster, it depends on what's holding you back.\nIf you're CPUs are maxed out, then you might need more CPU. If your\nI/O is maxed, then you might need more I/O bandwidth, and if neither\nseems maxed out, but you've got a lot of time spend waiting / idle,\nthen you might need faster / more memory.\n\nUntil we / you know what's slow, we don't know what to change to make\nyour create index run faster.\n\nOn my reporting server at work, where we have 2 Gigs ram, and\n70Million or so rows using about 45Gigs (tables and indexes) I can\ncreate a new index in about 10-15 minutes. So your machine sounds\nkinda slow. The primary difference between my machine and yours is\nthat I have a 4 disk RAID-10 software array. 
Even if that made my\nmachine twice as fast as yours, that doesn't explain the really slow\nperformance of yours.\n", "msg_date": "Fri, 19 Oct 2007 13:21:26 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to improve the performance of creating index" } ]
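A minimal sketch of putting the memory where Heikki says the sort actually happens, for a one-off build; the table and column names are hypothetical stand-ins for the 256M-row, two-integer table described in the thread, and the 512MB figure simply reuses the poster's own setting rather than being a recommendation:

  SET maintenance_work_mem = '512MB';  -- CREATE INDEX sorts use this, not work_mem
  CREATE INDEX pairs_a_idx ON pairs (a);
  RESET maintenance_work_mem;
  ANALYZE pairs;

Keeping work_mem and shared_buffers at the far more modest values suggested above leaves the rest of the 2GB for the OS cache, which is where a build like this spends most of its time.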
[ { "msg_contents": "Hi,\n\n I am updating a big table (90M records) with data from another rather large table (4M entries). Here is my update query:\n\n update links set target_size = \n ( select size from articles where articles.article_id = links.article_to)\n\n I have built all the indexes one might need, increased shared mem buffers to 400MB, I looked at the query plan and it looks reasonable.\nBut its taking an eternity to run: I've been running the query for 3 hours now on my new Mac laptop, and looking at the activity monitor I see that postrges is spending all of this time in disk IO (average CPU load of postgres process is about 4-5%).\n\n However, just looking at the query, postgres could cache the articles table and do a single pass over the links table...\n\n Please let me know if there is a good solution to this.\n\nThanks!\nPavel Velikhov\nInstitute of Systems Programming\nRussian Academy of Sciences\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi,  I am updating a big table (90M records) with data from another rather large table (4M entries). Here is my update query:    update links set target_size =     ( select size from articles where articles.article_id = links.article_to) I have built all the indexes one might need, increased shared mem buffers to 400MB, I looked at the query plan and it looks reasonable.But its taking an eternity to run: I've been running the query for 3 hours now on my new Mac laptop, and looking at the activity monitor I see that postrges is spending all of this time in disk IO (average CPU load of postgres process is about 4-5%). However, just looking at the query, postgres could cache the articles table and do a single\n pass over the links table... Please let me know if there is a good solution to this.Thanks!Pavel VelikhovInstitute of Systems ProgrammingRussian Academy of Sciences__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Fri, 19 Oct 2007 08:05:15 -0700 (PDT)", "msg_from": "Pavel Velikhov <[email protected]>", "msg_from_op": true, "msg_subject": "need help with a query" }, { "msg_contents": "On 10/19/07, Pavel Velikhov <[email protected]> wrote:\n>\n> Hi,\n>\n> I am updating a big table (90M records) with data from another rather\n> large table (4M entries). Here is my update query:\n>\n> update links set target_size =\n> ( select size from articles where articles.article_id =\n> links.article_to)\n\ntry:\n\nUPDATE links\n SET target_size = size\n FROM articles\n WHERE articles.article_id = links.article_to;\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 19 Oct 2007 11:52:58 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help with a query" } ]
[ { "msg_contents": "Thanks for you help!\n\nGot a very different query plan this time, with a hash join between links and articles. At least now postgres is using both shared memory buffers and working mem, but its still completely IO bound, only getting in 5-6% CPU once in a while. I guess I can't squeeze more out of the laptop, but I also have a machine with 16GB RAM that I'll try this on next. Should I allocate tons of memory into shared buffers or into the working memory?\n\nThanks in advance!\n\n----- Original Message ----\nFrom: Jonah H. Harris <[email protected]>\nTo: Pavel Velikhov <[email protected]>\nCc: [email protected]\nSent: Friday, October 19, 2007 7:52:58 PM\nSubject: Re: [PERFORM] need help with a query\n\nOn 10/19/07, Pavel Velikhov <[email protected]> wrote:\n>\n> Hi,\n>\n> I am updating a big table (90M records) with data from another\n rather\n> large table (4M entries). Here is my update query:\n>\n> update links set target_size =\n> ( select size from articles where articles.article_id =\n> links.article_to)\n\ntry:\n\nUPDATE links\n SET target_size = size\n FROM articles\n WHERE articles.article_id = links.article_to;\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n\n---------------------------(end of\n broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nThanks for you help!Got a very different query plan this time, with a hash join between links and articles. At least now postgres is using both shared memory buffers and working mem, but its still completely IO bound, only getting in 5-6% CPU once in a while. I guess I can't squeeze more out of the laptop, but I also have a machine with 16GB RAM that I'll try this on next. Should I allocate tons of memory into shared buffers or into the working memory?Thanks in advance!----- Original Message ----From: Jonah H. Harris <[email protected]>To: Pavel Velikhov <[email protected]>Cc:\n [email protected]: Friday, October 19, 2007 7:52:58 PMSubject: Re: [PERFORM] need help with a queryOn 10/19/07, Pavel Velikhov <[email protected]> wrote:>> Hi,>>  I am updating a big table (90M records) with data from another\n rather> large table (4M entries). Here is my update query:>>    update links set target_size =>    ( select size from articles where articles.article_id => links.article_to)try:UPDATE links  SET target_size = size  FROM articles WHERE articles.article_id = links.article_to;-- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324EnterpriseDB Corporation                | fax: 732.331.1301499 Thornall Street, 2nd Floor          | [email protected], NJ 08837                        | http://www.enterprisedb.com/---------------------------(end of\n broadcast)---------------------------TIP 6: explain analyze is your friend__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Fri, 19 Oct 2007 09:11:36 -0700 (PDT)", "msg_from": "Pavel Velikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need help with a query" }, { "msg_contents": "Pavel Velikhov <[email protected]> writes:\n> Got a very different query plan this time, with a hash join between links and articles. At least now postgres is using both shared memory buffers and working mem, but its still completely IO bound, only getting in 5-6% CPU once in a while. I guess I can't squeeze more out of the laptop, but I also have a machine with 16GB RAM that I'll try this on next. Should I allocate tons of memory into shared buffers or into the working memory?\n\nFor a hash join, I think you want to raise work_mem as high as you can\n(without driving the system into swapping). It won't read any block of\nthe tables more than once, so there's no point in having lots of\nbuffers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 19 Oct 2007 12:34:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help with a query " } ]
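A sketch combining the two pieces of advice in this thread, Jonah's UPDATE ... FROM rewrite and Tom's suggestion to raise work_mem for the hash join; the 1GB value is only an illustration and should stay below what the machine can spare without swapping, and on 8.0 the value has to be given in kilobytes (e.g. 1048576) because unit suffixes such as '1GB' were only accepted from 8.2 on:

  BEGIN;
  SET LOCAL work_mem = '1GB';
  UPDATE links
     SET target_size = articles.size
    FROM articles
   WHERE articles.article_id = links.article_to;
  COMMIT;

SET LOCAL keeps the larger setting scoped to this one transaction, so ordinary queries afterwards fall back to the normal work_mem. Note that, unlike the original correlated subquery, this form leaves links rows with no matching article untouched instead of setting target_size to NULL, which may or may not matter here.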
[ { "msg_contents": "Thanks a lot folks,\n\nLeft the query running for 10+ hours and had to kill it. I guess there really was no need to have lots of\nshared buffers (the hope was that postgresql will cache the whole table). I ended up doing this step inside\nthe application as a pre-processing step. Can't have postgres running with different fsych options since this\nwill be part of an \"easy to install and run\" app, that should just require a typical PosgreSQL installation.\n\nPavel Velikhov\n\n----- Original Message ----\nFrom: Kenneth Marshall <[email protected]>\nTo: Pavel Velikhov <[email protected]>\nCc: Jonah H. Harris <[email protected]>; [email protected]\nSent: Friday, October 19, 2007 8:17:48 PM\nSubject: Re: [PERFORM] need help with a query\n\nOn Fri, Oct 19, 2007 at 09:11:36AM -0700, Pavel Velikhov wrote:\n> Thanks for you help!\n> \n> Got a very different query plan this time, with a hash join between\n links and articles. At least now postgres is using both shared memory\n buffers and working mem, but its still completely IO bound, only getting\n in 5-6% CPU once in a while. I guess I can't squeeze more out of the\n laptop, but I also have a machine with 16GB RAM that I'll try this on\n next. Should I allocate tons of memory into shared buffers or into the\n working memory?\n\nThis is an extremely I/O intensive query that must rewrite every\nentry in the table. You could speed it up by starting postgresql\nwith fsync disabled, run the update, and then restart it with\nfsync re-enabled.\n\nKen\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nThanks a lot folks,Left the query running for 10+ hours and had to kill it. I guess there really was no need to have lots ofshared buffers (the hope was that postgresql will cache the whole table). I ended up doing this step insidethe application as a pre-processing step. Can't have postgres running with different fsych options since thiswill be part of an \"easy to install and run\" app, that should just require a typical PosgreSQL installation.Pavel Velikhov----- Original Message ----From: Kenneth Marshall <[email protected]>To: Pavel Velikhov <[email protected]>Cc: Jonah H.\n Harris <[email protected]>; [email protected]: Friday, October 19, 2007 8:17:48 PMSubject: Re: [PERFORM] need help with a queryOn Fri, Oct 19, 2007 at 09:11:36AM -0700, Pavel Velikhov wrote:> Thanks for you help!> > Got a very different query plan this time, with a hash join between\n links and articles. At least now postgres is using both shared memory\n buffers and working mem, but its still completely IO bound, only getting\n in 5-6% CPU once in a while. I guess I can't squeeze more out of the\n laptop, but I also have a machine with 16GB RAM that I'll try this on\n next. Should I allocate tons of memory into shared buffers or into the\n working memory?This is an extremely I/O intensive query that must rewrite everyentry in the table. You could speed it up by starting postgresqlwith fsync disabled, run the update, and then restart it withfsync re-enabled.Ken__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! 
Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Sat, 20 Oct 2007 13:11:00 -0700 (PDT)", "msg_from": "Pavel Velikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need help with a query" }, { "msg_contents": "On 10/20/07, Pavel Velikhov <[email protected]> wrote:\n> Left the query running for 10+ hours and had to kill it. I guess there\n> really was no need to have lots of\n> shared buffers (the hope was that postgresql will cache the whole table). I\n> ended up doing this step inside\n> the application as a pre-processing step. Can't have postgres running with\n> different fsych options since this\n> will be part of an \"easy to install and run\" app, that should just require a\n> typical PosgreSQL installation.\n\nIs the size always different? If not, you could limit the updates:\n\nUPDATE links\n SET target_size = size\n FROM articles\n WHERE articles.article_id = links.article_to\n AND links.target_size != articles.size;\n\nSince this is a huge operation, what about trying:\n\nCREATE TABLE links_new AS SELECT l.col1, l.col2, a.size as\ntarget_size, l.col4, ... FROM links l, articles a WHERE a.article_id =\nl.article_to;\n\nThen truncate links, copy the data from links_new. Alternatively, you\ncould drop links, rename links_new to links, and recreate the\nconstraints.\n\nI guess the real question is application design. Why doesn't this app\nstore size at runtime instead of having to batch this huge update?\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sun, 21 Oct 2007 12:44:01 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: need help with a query" } ]
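A sketch of the "rebuild and rename" route Jonah describes above; col1/col2/col4 are his placeholders for whatever the real links columns are, the index name is hypothetical, and a LEFT JOIN is used so that links rows with no matching article are kept (with a NULL target_size) the way the original correlated-subquery update would have left them:

  BEGIN;
  CREATE TABLE links_new AS
  SELECT l.col1, l.col2, a.size AS target_size, l.col4
    FROM links l
    LEFT JOIN articles a ON a.article_id = l.article_to;
  DROP TABLE links;
  ALTER TABLE links_new RENAME TO links;
  -- recreate whatever indexes and constraints the old table carried, e.g.:
  CREATE INDEX links_article_to_idx ON links (article_to);
  COMMIT;
  ANALYZE links;

Writing a fresh table once is usually far cheaper than updating 90M rows in place, since the in-place UPDATE leaves a dead tuple behind for every row it touches and then needs a VACUUM over the whole table afterwards.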
[ { "msg_contents": "I have a client server that is dedicated to being a Postgres 8.2.4 database\nserver for many websites. This server will contain approximately 15\ndatabases each containing between 40-100 tables. Each database will have\napproximately 7 web applications pulling data from it, but there will\nprobably be no more than 50 simultaneous requests. The majority of the\ntables will be very small tables around 1K in total size. However, most of\nthe queries will be going to the other 10-15 tables that are in each\ndatabase that will contain postgis shapes. These tables will range in size\nfrom 50 to 730K rows and each row will range in size from a 2K to 3MB. The\ndata will be truncated and reinserted as part of a nightly process but other\nthan that, there won't be many writes during the day. I am trying to tune\nthis server to its maximum capacity. I would appreciate any advice on any\nof the settings that I should look at. I have not changed any of the\nsettings before because I have never really needed to. And even now, I have\nnot experienced any bad performance, I am simply trying to turn the track\nbefore the train gets here.\n\nServer Specification:\nWindows 2003 Enterprise R2\nDual-Quad Core 2.33GHz\n8GB RAM\n263 GB HD (I am not 100% on drive speed, but I think it is 15K)\n\n\nThanks in advance,\nLee Keel\n\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the sender. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail.\n\n\n\n\n\nMemory Settings....\n\n\nI have a client server that is dedicated to being a Postgres 8.2.4 database server for many websites.  This server will contain approximately 15 databases each containing between 40-100 tables.  Each database will have approximately 7 web applications pulling data from it, but there will probably be no more than 50 simultaneous requests.  The majority of the tables will be very small tables around 1K in total size.  However, most of the queries will be going to the other 10-15 tables that are in each database that will contain postgis shapes.  These tables will range in size from 50 to 730K rows and each row will range in size from a 2K to 3MB.  The data will be truncated and reinserted as part of a nightly process but other than that, there won't be many writes during the day.  I am trying to tune this server to its maximum capacity.  I would appreciate any advice on any of the settings that I should look at.  I have not changed any of the settings before because I have never really needed to.  And even now, I have not experienced any bad performance, I am simply trying to turn the track before the train gets here.\nServer Specification:\nWindows 2003 Enterprise R2\nDual-Quad Core 2.33GHz\n8GB RAM\n263 GB HD (I am not 100% on drive speed, but I think it is 15K)\n\nThanks in advance,\nLee Keel\n\n\n\nThis email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the sender. This message contains confidential information and is intended only for the individual named. 
If you are not the named addressee you should not disseminate, distribute or copy this e-mail.", "msg_date": "Mon, 22 Oct 2007 11:10:11 -0500", "msg_from": "Lee Keel <[email protected]>", "msg_from_op": true, "msg_subject": "Memory Settings...." }, { "msg_contents": "I recently tweaked some configs for performance, so I'll let you in on\nwhat I changed.\n\nFor memory usage, you'll want to look at shared_buffers, work_mem, and\nmaintenance_work_mem. Postgres defaults to very low values of this,\nand to get good performance and not a lot of disk paging, you'll want\nto raise those values (you will need to restart the server and\npossibly tweak some memory config for lots of shared_buffers, I had to\nraise SHMMAX on Linux, but I don't know the Windows analogue). The\nbasic rule of thumb for shared_buffers is 25%-50% of main memory,\nenough to use main memory but leaving some to allow work_mem to do its\nthing and allow any other programs to run smoothly. Tweak this as\nnecessary.\n\nThe other big thing is the free space map, which tracks free space and\nhelps to prevent index bloat. A VACUUM VERBOSE in a database will tell\nyou what these values should be set to.\n\nGo here for full details:\nhttp://www.postgresql.org/docs/8.2/static/runtime-config.html, especially\nhttp://www.postgresql.org/docs/8.2/static/runtime-config-resource.html\n\nPeter\n\nOn 10/22/07, Lee Keel <[email protected]> wrote:\n>\n>\n>\n> I have a client server that is dedicated to being a Postgres 8.2.4 database\n> server for many websites. This server will contain approximately 15\n> databases each containing between 40-100 tables. Each database will have\n> approximately 7 web applications pulling data from it, but there will\n> probably be no more than 50 simultaneous requests. The majority of the\n> tables will be very small tables around 1K in total size. However, most of\n> the queries will be going to the other 10-15 tables that are in each\n> database that will contain postgis shapes. These tables will range in size\n> from 50 to 730K rows and each row will range in size from a 2K to 3MB. The\n> data will be truncated and reinserted as part of a nightly process but other\n> than that, there won't be many writes during the day. I am trying to tune\n> this server to its maximum capacity. I would appreciate any advice on any\n> of the settings that I should look at. I have not changed any of the\n> settings before because I have never really needed to. And even now, I have\n> not experienced any bad performance, I am simply trying to turn the track\n> before the train gets here.\n>\n> Server Specification:\n>\n> Windows 2003 Enterprise R2\n>\n> Dual-Quad Core 2.33GHz\n>\n> 8GB RAM\n>\n> 263 GB HD (I am not 100% on drive speed, but I think it is 15K)\n>\n>\n> Thanks in advance,\n>\n> Lee Keel\n>\n> This email and any files transmitted with it are confidential and intended\n> solely for the use of the individual or entity to whom they are addressed.\n> If you have received this email in error please notify the sender. This\n> message contains confidential information and is intended only for the\n> individual named. If you are not the named addressee you should not\n> disseminate, distribute or copy this e-mail.\n", "msg_date": "Mon, 22 Oct 2007 12:15:22 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Settings...." 
}, { "msg_contents": "You may find this informative:\n\nhttp://www.powerpostgresql.com/Downloads/annotated_conf_80.html\n\nOn Mon, 22 Oct 2007, Lee Keel wrote:\n\n> I have a client server that is dedicated to being a Postgres 8.2.4 database\n> server for many websites. This server will contain approximately 15\n> databases each containing between 40-100 tables. Each database will have\n> approximately 7 web applications pulling data from it, but there will\n> probably be no more than 50 simultaneous requests. The majority of the\n> tables will be very small tables around 1K in total size. However, most of\n> the queries will be going to the other 10-15 tables that are in each\n> database that will contain postgis shapes. These tables will range in size\n> from 50 to 730K rows and each row will range in size from a 2K to 3MB. The\n> data will be truncated and reinserted as part of a nightly process but other\n> than that, there won't be many writes during the day. I am trying to tune\n> this server to its maximum capacity. I would appreciate any advice on any\n> of the settings that I should look at. I have not changed any of the\n> settings before because I have never really needed to. And even now, I have\n> not experienced any bad performance, I am simply trying to turn the track\n> before the train gets here.\n>\n> Server Specification:\n> Windows 2003 Enterprise R2\n> Dual-Quad Core 2.33GHz\n> 8GB RAM\n> 263 GB HD (I am not 100% on drive speed, but I think it is 15K)\n>\n>\n> Thanks in advance,\n> Lee Keel\n>\n>\n>\n> This email and any files transmitted with it are confidential and intended solely for the use of the individual or entity to whom they are addressed. If you have received this email in error please notify the sender. This message contains confidential information and is intended only for the individual named. If you are not the named addressee you should not disseminate, distribute or copy this e-mail.\n>\n", "msg_date": "Mon, 22 Oct 2007 10:23:20 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Settings...." }, { "msg_contents": "Peter Koczan wrote:\n> The\n> basic rule of thumb for shared_buffers is 25%-50% of main memory,\n> enough to use main memory but leaving some to allow work_mem to do its\n> thing and allow any other programs to run smoothly. Tweak this as\n> necessary.\n\nAnother rule of thumb is that on Windows you want only very little\nshared_buffers, because of some performance issues with shared memory on\nWindows.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 22 Oct 2007 18:30:18 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Settings...." }, { "msg_contents": ">>> On Mon, Oct 22, 2007 at 11:10 AM, in message\n<[email protected]>, Lee Keel\n<[email protected]> wrote: \n\n> there will probably be no more than 50 simultaneous requests.\n\n> Dual-Quad Core 2.33GHz\n\nMy benchmarks have indicated that you want to keep the number of\nactive queries at or below four times the number of CPUs. You might\nwant to consider some form of connection pooling which can queue the\nrequests to achieve that. It can boost both the throughput and the\nresponse time.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 22 Oct 2007 12:34:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory Settings...." } ]
[ { "msg_contents": "Hi, \nI think planner should use other plans than seqscan to solve querys like select * from hugetable limit 1, especially when the talbe is very large. Is it solved in newer versions or is there some open issues about it?. \nthanks\nI'm working with postgres 8.0.1, \n \n---------------------------------\n\n�S� un mejor fot�grafo!\nPerfecciona tu t�cnica y encuentra las mejores fotos en:\nhttp://telemundo.yahoo.com/promos/mejorfotografo.html\nHi, I think planner should use other plans than seqscan to solve querys like select * from hugetable limit 1, especially when the talbe is very large. Is it solved in newer versions or is there some open issues about it?. thanksI'm working with postgres 8.0.1, \n�S� un mejor fot�grafo!Perfecciona tu t�cnica y encuentra las mejores fotos en:\nhttp://telemundo.yahoo.com/promos/mejorfotografo.html", "msg_date": "Mon, 22 Oct 2007 19:24:39 -0700 (PDT)", "msg_from": "Adrian Demaestri <[email protected]>", "msg_from_op": true, "msg_subject": "Seqscan" }, { "msg_contents": "On Mon, 2007-10-22 at 19:24 -0700, Adrian Demaestri wrote:\n> Hi, \n> I think planner should use other plans than seqscan to solve querys\n> like select * from hugetable limit 1, especially when the talbe is\n> very large. Is it solved in newer versions or is there some open\n> issues about it?. \n> thanks\n> I'm working with postgres 8.0.1, \n\nFor the query in question, what would be faster than a seqscan? It\ndoesn't read the whole table, it only reads until it satisfies the limit\nclause. \n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Mon, 22 Oct 2007 19:45:11 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan" }, { "msg_contents": "It is not actualy a table, sorry, it is a quite complex view that involve three large tables. When I query the view using a where clause the answer is fast because of the use of some restrictive indexes, but when there is no where clause the \"limit 1\" waits until the entire table is generated and all the joins are made. I can't control the sintax of the problematic query, it is generated autamatically by another layer of our application and it's syntactically and semantically ok. \n\nHere is the view structure \n\nSELECT a.field1\n FROM a\n LEFT JOIN b ON a.f1= b.f1 AND a.f2 = b.f2\n LEFT JOIN c ON a.f3 = c.f3\n\nEach one of the tables a, b and c has about 5 million rows.\nThe relation between the tables using that joins conditions is at most 1 to 1\nthanks!\n\n\n\n\nJeff Davis <[email protected]> escribi�: On Mon, 2007-10-22 at 19:24 -0700, Adrian Demaestri wrote:\n> Hi, \n> I think planner should use other plans than seqscan to solve querys\n> like select * from hugetable limit 1, especially when the talbe is\n> very large. Is it solved in newer versions or is there some open\n> issues about it?. \n> thanks\n> I'm working with postgres 8.0.1, \n\nFor the query in question, what would be faster than a seqscan? It\ndoesn't read the whole table, it only reads until it satisfies the limit\nclause. \n\nRegards,\n Jeff Davis\n\n\n\n \n---------------------------------\n\n�S� un mejor ambientalista!\nEncuentra consejos para cuidar el lugar donde vivimos en:\nhttp://telemundo.yahoo.com/promos/mejorambientalista.html\nIt is not actualy a table, sorry, it is a quite complex view that involve three large tables. 
When I query the view using a where clause the answer is fast because of the use of some restrictive indexes, but when there is no where clause the \"limit 1\" waits until the entire table is generated and all the joins are made. I can't control the sintax of the problematic query, it is generated autamatically by another layer of our application and it's syntactically and semantically ok. Here is the view structure SELECT a.field1   FROM  a   LEFT JOIN b ON a.f1= b.f1 AND a.f2 = b.f2   LEFT JOIN c ON a.f3 = c.f3Each one of the tables a, b and c has about 5 million rows.The relation between the tables using that joins conditions is at most 1 to 1thanks!Jeff Davis <[email protected]> escribi�: On Mon, 2007-10-22 at 19:24 -0700, Adrian Demaestri wrote:> Hi, > I think planner should use other plans than seqscan to solve querys> like select * from hugetable limit 1, especially when the talbe is> very large. Is it solved in newer versions or is there some open> issues about it?. > thanks> I'm working with postgres 8.0.1, For the query in question, what would be faster than a seqscan? Itdoesn't read the whole table, it only reads until it satisfies the limitclause. Regards, Jeff Davis\n�S� un mejor ambientalista!Encuentra consejos para cuidar el lugar donde vivimos en:\nhttp://telemundo.yahoo.com/promos/mejorambientalista.html", "msg_date": "Tue, 23 Oct 2007 06:06:39 -0700 (PDT)", "msg_from": "Adrian Demaestri <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seqscan" }, { "msg_contents": "(Please don't top-post. )\n\nAdrian Demaestri skrev:\n> */Jeff Davis <[email protected]>/* escribi�:\n> \n> On Mon, 2007-10-22 at 19:24 -0700, Adrian Demaestri wrote:\n> > Hi,\n> > I think planner should use other plans than seqscan to solve querys\n> > like select * from hugetable limit 1, especially when the talbe is\n> > very large. Is it solved in newer versions or is there some open\n> > issues about it?.\n> > thanks\n> > I'm working with postgres 8.0.1,\n> \n> For the query in question, what would be faster than a seqscan? It\n> doesn't read the whole table, it only reads until it satisfies the limit\n> clause.\n\n> It is not actualy a table, sorry, it is a quite complex view that\n> involve three large tables.\n\n\nIf hugetable isn't a table, you chose a really bad name for it.\n\nWhat you have here is a specific query performing badly, not a generic\nissue with all queries containing \"LIMIT X\". You might of course have\nfound a construct which the planner has problems with - but the first\nstep is to let us see the result of EXPLAIN ANALYZE.\n\nAnyway, I think you might be hitting this issue:\n\n\"Fix mis-planning of queries with small LIMIT values due to poorly\nthought out \"fuzzy\" cost comparison\"\n(http://www.postgresql.org/docs/8.0/static/release-8-0-4.html)\n\nwhich was fixed in 8.0.4 . You should upgrade.\n\nNis\n\n", "msg_date": "Tue, 23 Oct 2007 15:36:21 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seqscan" } ]
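A sketch of the diagnostic step recommended above, reusing the poster's placeholder tables a, b and c. The view name, the recreated view definition and the literal in the WHERE clause are illustrative only; the point is to compare, with EXPLAIN ANALYZE, the plan the LIMIT 1 query gets with and without a restriction.

    -- Hypothetical re-creation of the problematic view (column names follow
    -- the fragment posted in the thread, not a real schema).
    CREATE VIEW hugeview AS
    SELECT a.field1
      FROM a
      LEFT JOIN b ON a.f1 = b.f1 AND a.f2 = b.f2
      LEFT JOIN c ON a.f3 = c.f3;

    -- If the first plan shows the whole join running before the limit is
    -- applied, the planner is not stopping early on the outer table.
    EXPLAIN ANALYZE SELECT * FROM hugeview LIMIT 1;
    EXPLAIN ANALYZE SELECT * FROM hugeview WHERE field1 = 42 LIMIT 1;

If the two plans differ drastically, the 8.0.4 planner fix mentioned above is the first thing to rule out by upgrading.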
[ { "msg_contents": "On 10/20/07, Pavel Velikhov <[email protected]> wrote:\n> Left the query running for 10+ hours and had to kill it. I guess\n there\n> really was no need to have lots of\n> shared buffers (the hope was that postgresql will cache the whole\n table). I\n> ended up doing this step inside\n> the application as a pre-processing step. Can't have postgres running\n with\n> different fsych options since this\n> will be part of an \"easy to install and run\" app, that should just\n require a\n> typical PosgreSQL installation.\n\n>Is the size always different? If not, you could limit the updates:\n\n>UPDATE links\n> SET target_size = size\n>FROM articles\n>WHERE articles.article_id = links.article_to\n> AND links.target_size != articles.size;\n\nAh, this sounds better for sure! But its probably as good as the scan with an index-scan subquery I was getting before...\n\n>Since this is a huge operation, what about trying:\n\n>CREATE TABLE links_new AS SELECT l.col1, l.col2, a.size as\n>target_size, l.col4, ... FROM links l, articles a WHERE a.article_id =\n>l.article_to;\n\n>Then truncate links, copy the data from links_new. Alternatively, you\n>could drop links, rename links_new to links, and recreate the\n>constraints.\n\n>I guess the real question is application design. Why doesn't this app\n>store size at runtime instead of having to batch this huge update?\n\nThis is a link analysis application, I need to materialize all the sizes for target\narticles in order to have the runtime part (vs. the loading part) run efficiently. I.e.\nI really want to avoid a join with the articles table at runtime.\n\nI have solved the size problem by other means (I compute it in my loader), but\nI still have one query that needs to update a pretty large percentage of the links table...\nI have previously used mysql, and for some reason I didn't have a problem with queries\nlike this (on the other hand mysql was crashing when building an index on article_to in the\nlinks relation, so I had to work without a critical index)...\n\nThank!\nPavel\n\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n\n---------------------------(end of\n broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nOn 10/20/07, Pavel Velikhov <[email protected]> wrote:> Left the query running for 10+ hours and had to kill it. I guess\n there> really was no need to have lots of> shared buffers (the hope was that postgresql will cache the whole\n table). I> ended up doing this step inside> the application as a pre-processing step. Can't have postgres running\n with> different fsych options since this> will be part of an \"easy to install and run\" app, that should just\n require a> typical PosgreSQL installation.>Is the size always different?  If not, you could limit the updates:>UPDATE links>  SET target_size = size>FROM articles>WHERE articles.article_id = links.article_to>        AND links.target_size != articles.size;Ah, this sounds better for sure! But its probably as good as the scan with an index-scan subquery I was getting before...>Since this is a huge operation, what about trying:>CREATE TABLE links_new AS SELECT l.col1, l.col2, a.size as>target_size, l.col4, ... 
FROM links l, articles a WHERE a.article_id =>l.article_to;>Then truncate links, copy the data from links_new.  Alternatively, you>could drop links, rename links_new to links, and recreate the>constraints.>I guess the real question is application design.  Why doesn't this\n app>store size at runtime instead of having to batch this huge update?This is a link analysis application, I need to materialize all the sizes for targetarticles in order to have the runtime part (vs. the loading part) run efficiently. I.e.I really want to avoid a join with the articles table at runtime.I have solved the size problem by other means (I compute it in my loader), butI still have one query that needs to update a pretty large percentage of the links table...I have previously used mysql, and for some reason I didn't have a problem with querieslike this (on the other hand mysql was crashing when building an index on article_to in thelinks relation, so I had to work without a critical index)...Thank!Pavel-- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324EnterpriseDB Corporation                | fax:\n 732.331.1301499 Thornall Street, 2nd Floor          | [email protected], NJ 08837                        | http://www.enterprisedb.com/---------------------------(end of\n broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Tue, 23 Oct 2007 02:54:04 -0700 (PDT)", "msg_from": "Pavel Velikhov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: need help with a query" } ]
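A sketch of the "rebuild instead of update" approach quoted in the message above. Table and column names follow the thread where they are known; the rest of the links column list is abbreviated in the original, so article_from and the index name here are invented placeholders rather than the real schema.

    BEGIN;

    -- Build the new table in one pass instead of updating most of links.
    CREATE TABLE links_new AS
    SELECT l.article_from,            -- placeholder for the remaining columns
           l.article_to,
           a.size AS target_size
      FROM links l
      JOIN articles a ON a.article_id = l.article_to;

    DROP TABLE links;
    ALTER TABLE links_new RENAME TO links;

    -- Recreate whatever indexes and constraints the old table carried,
    -- e.g. the index on article_to mentioned in the thread.
    CREATE INDEX links_article_to_idx ON links (article_to);

    COMMIT;
    ANALYZE links;

The inner join mirrors the suggestion being quoted; a LEFT JOIN would be needed instead if some links rows have no matching articles row and must be preserved.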
[ { "msg_contents": "We vacuum only a few of our tables nightly, this one is the last one \nbecause it takes longer to run. I'll probably re-index it soon, but I \nwould appreciate any advice on how to speed up the vacuum process (and \nthe db in general).\n\nOkay, here's our system:\n postgres 8.1.4\n Linux version 2.4.21\n Red Hat Linux 3.2.3\n 8 GB ram\n Intel(R) Xeon(TM) CPU 3.20GHz\n Raid 5\n autovacuum=off\n serves as the application server and database server\n server is co-located in another city, hardware upgrade is not \ncurrently an option\n\nHere's the table information:\nThe table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes. It \nis probably our 'key' table in the database and gets called by almost \nevery query (usually joined to others). The table gets updated only \nabout 10 times a day. We were running autovacuum but it interfered with \nthe updates to we shut it off. We vacuum this table nightly, and it \ncurrently takes about 12 hours to vacuum it. Not much else is running \nduring this period, nothing that should affect the table.\n\nHere are the current non-default postgresql.conf settings:\nmax_connections = 100\nshared_buffers = 50000\nwork_mem = 9192\nmaintenance_work_mem = 786432\nmax_fsm_pages = 70000\nvacuum_cost_delay = 200\nvacuum_cost_limit = 100\nbgwriter_delay = 10000\nfsync = on\ncheckpoint_segments = 64\ncheckpoint_timeout = 1800\neffective_cache_size = 270000\nrandom_page_cost = 2\nlog_destination = 'stderr'\nredirect_stderr = on\nclient_min_messages = warning\nlog_min_messages = warning\nstats_start_collector = off\nstats_command_string = on\nstats_block_level = on\nstats_row_level = on \nautovacuum = off \nautovacuum_vacuum_threshold = 2000\ndeadlock_timeout = 10000\nmax_locks_per_transaction = 640\nadd_missing_from = on\n\nAs I mentioned, any insights into changing the configuration to optimize \nperformance are most welcome.\n\nThanks\n\nRon\n", "msg_date": "Tue, 23 Oct 2007 08:53:17 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "12 hour table vacuums" }, { "msg_contents": "In response to Ron St-Pierre <[email protected]>:\n\n> We vacuum only a few of our tables nightly, this one is the last one \n> because it takes longer to run. I'll probably re-index it soon, but I \n> would appreciate any advice on how to speed up the vacuum process (and \n> the db in general).\n\nI doubt anyone can provide meaningful advice without the output of\nvacuum verbose.\n\n> \n> Okay, here's our system:\n> postgres 8.1.4\n> Linux version 2.4.21\n> Red Hat Linux 3.2.3\n> 8 GB ram\n> Intel(R) Xeon(TM) CPU 3.20GHz\n> Raid 5\n> autovacuum=off\n> serves as the application server and database server\n> server is co-located in another city, hardware upgrade is not \n> currently an option\n> \n> Here's the table information:\n> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes. It \n> is probably our 'key' table in the database and gets called by almost \n> every query (usually joined to others). The table gets updated only \n> about 10 times a day. We were running autovacuum but it interfered with \n> the updates to we shut it off. We vacuum this table nightly, and it \n> currently takes about 12 hours to vacuum it. 
Not much else is running \n> during this period, nothing that should affect the table.\n> \n> Here are the current non-default postgresql.conf settings:\n> max_connections = 100\n> shared_buffers = 50000\n> work_mem = 9192\n> maintenance_work_mem = 786432\n> max_fsm_pages = 70000\n> vacuum_cost_delay = 200\n> vacuum_cost_limit = 100\n> bgwriter_delay = 10000\n> fsync = on\n> checkpoint_segments = 64\n> checkpoint_timeout = 1800\n> effective_cache_size = 270000\n> random_page_cost = 2\n> log_destination = 'stderr'\n> redirect_stderr = on\n> client_min_messages = warning\n> log_min_messages = warning\n> stats_start_collector = off\n> stats_command_string = on\n> stats_block_level = on\n> stats_row_level = on \n> autovacuum = off \n> autovacuum_vacuum_threshold = 2000\n> deadlock_timeout = 10000\n> max_locks_per_transaction = 640\n> add_missing_from = on\n> \n> As I mentioned, any insights into changing the configuration to optimize \n> performance are most welcome.\n> \n> Thanks\n> \n> Ron\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Tue, 23 Oct 2007 12:07:50 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Ron St-Pierre <[email protected]> writes:\n> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes. It \n> is probably our 'key' table in the database and gets called by almost \n> every query (usually joined to others). The table gets updated only \n> about 10 times a day. We were running autovacuum but it interfered with \n> the updates to we shut it off. We vacuum this table nightly, and it \n> currently takes about 12 hours to vacuum it. Not much else is running \n> during this period, nothing that should affect the table.\n\nHere is your problem:\n\n> vacuum_cost_delay = 200\n\nIf you are only vacuuming when nothing else is happening, you shouldn't\nbe using vacuum_cost_delay at all: set it to 0. In any case this value\nis probably much too high. I would imagine that if you watch the\nmachine while the vacuum is running you'll find both CPU and I/O load\nnear zero ... 
which is nice, unless you would like the vacuum to finish\nsooner.\n\nIn unrelated comments:\n\n> maintenance_work_mem = 786432\n\nThat seems awfully high, too.\n\n> max_fsm_pages = 70000\n\nAnd this possibly too low --- are you sure you are not leaking disk\nspace?\n\n> stats_start_collector = off\n> stats_command_string = on\n> stats_block_level = on\n> stats_row_level = on \n\nThese are not self-consistent.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 23 Oct 2007 12:11:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums " }, { "msg_contents": "Ron St-Pierre wrote:\n\n> Okay, here's our system:\n> postgres 8.1.4\n\nUpgrade to 8.1.10\n\n> Here's the table information:\n> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.\n\n60 indexes? You gotta be kidding. You really have 60 columns on which\nto scan?\n\n> vacuum_cost_delay = 200\n> vacuum_cost_limit = 100\n\nIsn't this a bit high? What happens if you cut the delay to, say, 10?\n(considering you've lowered the limit to half the default)\n\n-- \nAlvaro Herrera Developer, http://www.PostgreSQL.org/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n", "msg_date": "Tue, 23 Oct 2007 13:12:04 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "On Tue, 2007-10-23 at 08:53 -0700, Ron St-Pierre wrote:\n> [snip] We were running autovacuum but it interfered with \n> the updates to we shut it off.\n\nThis is not directly related to your question, but it might be good for\nyour DB: you don't need to turn off autovacuum, you can exclude tables\nindividually from being autovacuumed by inserting the appropriate rows\nin pg_autovacuum. See:\n\nhttp://www.postgresql.org/docs/8.1/static/catalog-pg-autovacuum.html\n\nWe also do have here a few big tables which we don't want autovacuum to\ntouch, so we disable them via pg_autovacuum. There are a few really big\nones which change rarely - those we only vacuum via a DB wide vacuum in\nthe weekend (which for us is a low activity period). If you say your\ntable is only changed rarely, you might be OK too with such a setup...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Tue, 23 Oct 2007 18:21:16 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Bill Moran wrote:\n> In response to Ron St-Pierre <[email protected]>:\n>\n> \n>> We vacuum only a few of our tables nightly, this one is the last one \n>> because it takes longer to run. I'll probably re-index it soon, but I \n>> would appreciate any advice on how to speed up the vacuum process (and \n>> the db in general).\n>> \n>\n> I doubt anyone can provide meaningful advice without the output of\n> vacuum verbose.\n>\n> \nThe cron job is still running\n /usr/local/pgsql/bin/vacuumdb -d imperial -t stock.fdata -v -z > \n/usr/local/pgsql/bin/fdata.txt\nI'll post the output when it's finished.\n\nRon\n\n", "msg_date": "Tue, 23 Oct 2007 09:33:16 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "\"Ron St-Pierre\" <[email protected]> writes:\n\n> We vacuum only a few of our tables nightly, this one is the last one because it\n> takes longer to run. 
I'll probably re-index it soon, but I would appreciate any\n> advice on how to speed up the vacuum process (and the db in general).\n...\n> vacuum_cost_delay = 200\n\nWell speeding up vacuum isn't really useful in itself. In fact you have vacuum\nconfigured to run quite slowly by having vacuum_cost_delay set so high. You\nhave it set to sleep 200ms every few pages. If you lower that it'll run faster\nbut take more bandwidth away from the foreground tasks.\n\n> Here's the table information:\n> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes. \n\nFor what it's worth NUMERIC columns take more space than you might expect.\nFigure a minimum of 12 bytes your rows are at about 1.5k each even if the\nnon-numeric columns aren't large themselves. What are the other columns?\n\n> We were running autovacuum but it interfered with the updates to we shut it\n> off. \n\nWas it just the I/O bandwidth? I'm surprised as your vacuum_cost_delay is\nquite high. Manual vacuum doesn't do anything differently from autovacuum,\nneither should interfere directly with updates except by taking away\nI/O bandwidth.\n\n> We vacuum this table nightly, and it currently takes about 12 hours to\n> vacuum it. Not much else is running during this period, nothing that should\n> affect the table.\n\nIs this time increasing over time? If once a day isn't enough then you may be\naccumulating more and more dead space over time. In which case you may be\nbetter off running it during prime time with a large vacuum_cost_delay (like\nthe 200 you have configured) rather than trying to get to run fast enough to\nfit in the off-peak period.\n\n> deadlock_timeout = 10000\n\nI would not suggest having this quite this high. Raising it from the default\nis fine but having a value larger than your patience is likely to give you the\nfalse impression that something is hung if you should ever get a deadlock.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 23 Oct 2007 17:38:29 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "In response to Ron St-Pierre <[email protected]>:\n\n> Bill Moran wrote:\n> > In response to Ron St-Pierre <[email protected]>:\n> >\n> > \n> >> We vacuum only a few of our tables nightly, this one is the last one \n> >> because it takes longer to run. I'll probably re-index it soon, but I \n> >> would appreciate any advice on how to speed up the vacuum process (and \n> >> the db in general).\n> >> \n> >\n> > I doubt anyone can provide meaningful advice without the output of\n> > vacuum verbose.\n\nUnderstood, however I may have spoken too soon. It appears that Tom\nfound an obvious issue with your config that seems likely to be the\nproblem.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 23 Oct 2007 12:41:48 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Tom Lane wrote:\n> Here is your problem:\n>\n> \n>> vacuum_cost_delay = 200\n>> \n>\n> If you are only vacuuming when nothing else is happening, you shouldn't\n> be using vacuum_cost_delay at all: set it to 0. In any case this value\n> is probably much too high. I would imagine that if you watch the\n> machine while the vacuum is running you'll find both CPU and I/O load\n> near zero ... 
which is nice, unless you would like the vacuum to finish\n> sooner.\n> \nYeah, I've noticed that CPU, mem and I/O load are really low when this \nis running. I'll change that setting.\n> In unrelated comments:\n>\n> \n>> maintenance_work_mem = 786432\n>> \n>\n> That seems awfully high, too.\n>\n> \nAny thoughts on a more reasonable value?\n>> max_fsm_pages = 70000\n>> \n>\n> And this possibly too low --- \nThe default appears to be 20000, so I upped it to 70000. I'll try 160000 \n(max_fsm_relations*16).\n> are you sure you are not leaking disk\n> space?\n>\n> \nWhat do you mean leaking disk space?\n>> stats_start_collector = off\n>> stats_command_string = on\n>> stats_block_level = on\n>> stats_row_level = on \n>> \n>\n> These are not self-consistent.\n>\n> \t\t\tregards, tom lane\n>\n> \n\n", "msg_date": "Tue, 23 Oct 2007 09:43:08 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Gregory Stark wrote:\n> \"Ron St-Pierre\" <[email protected]> writes:\n>\n> \n>> We vacuum only a few of our tables nightly, this one is the last one because it\n>> takes longer to run. I'll probably re-index it soon, but I would appreciate any\n>> advice on how to speed up the vacuum process (and the db in general).\n>> \n> ...\n> \n>> vacuum_cost_delay = 200\n>> \n>\n> Well speeding up vacuum isn't really useful in itself. In fact you have vacuum\n> configured to run quite slowly by having vacuum_cost_delay set so high. You\n> have it set to sleep 200ms every few pages. If you lower that it'll run faster\n> but take more bandwidth away from the foreground tasks.\n> \nIt's okay if it uses a lot of resources, because it's scheduled to run \nduring the night (our slow time). Because most of the important queries \nrunning during the day use this table, I want the vacuum analzye \nfinished ASAP.\n> \n>> Here's the table information:\n>> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes. \n>> \n>\n> For what it's worth NUMERIC columns take more space than you might expect.\n> Figure a minimum of 12 bytes your rows are at about 1.5k each even if the\n> non-numeric columns aren't large themselves. What are the other columns?\n> \nThe NUMERIC columns hold currency related values, with values ranging \nfrom a few cents to the billions, as well as a few negative numbers.\n> \n>> We were running autovacuum but it interfered with the updates to we shut it\n>> off. \n>> \n>\n> Was it just the I/O bandwidth? I'm surprised as your vacuum_cost_delay is\n> quite high. Manual vacuum doesn't do anything differently from autovacuum,\n> neither should interfere directly with updates except by taking away\n> I/O bandwidth.\n>\n> \nI don't know what the problem was. I tried to exclude certain tables \nfrom autovacuuming, but it autovacuumed anyway.\n\n>> We vacuum this table nightly, and it currently takes about 12 hours to\n>> vacuum it. Not much else is running during this period, nothing that should\n>> affect the table.\n>> \n>\n> Is this time increasing over time? If once a day isn't enough then you may be\n> accumulating more and more dead space over time. In which case you may be\n> better off running it during prime time with a large vacuum_cost_delay (like\n> the 200 you have configured) rather than trying to get to run fast enough to\n> fit in the off-peak period.\n>\n> \n>> deadlock_timeout = 10000\n>> \n>\n> I would not suggest having this quite this high. 
Raising it from the default\n> is fine but having a value larger than your patience is likely to give you the\n> false impression that something is hung if you should ever get a deadlock.\n>\n> \nGood point. I'll look into this.\n\nThanks\n\nRon\n\n", "msg_date": "Tue, 23 Oct 2007 09:52:52 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Alvaro Herrera wrote:\n> Ron St-Pierre wrote:\n>\n> \n>> Okay, here's our system:\n>> postgres 8.1.4\n>> \n>\n> Upgrade to 8.1.10\n> \nAny particular fixes in 8.1.10 that would help with this?\n> \n>> Here's the table information:\n>> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.\n>> \n>\n> 60 indexes? You gotta be kidding. You really have 60 columns on which\n> to scan?\n>\n> \nReally. 60 indexes. They're the most commonly requested columns for \ncompany information (we believe). Any ideas on testing our assumptions \nabout that? I would like to know definitively what are the most popular \ncolumns. Do you think that rules would be a good approach for this? \n(Sorry if I'm getting way off topic here)\n>> vacuum_cost_delay = 200\n>> vacuum_cost_limit = 100\n>> \n>\n> Isn't this a bit high? What happens if you cut the delay to, say, 10?\n> (considering you've lowered the limit to half the default)\n>\n> \nYes, Tom pointed this out too. I'll lower it and check out the results.\n\nRon\n\n", "msg_date": "Tue, 23 Oct 2007 10:00:05 -0700", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "On Tue, 23 Oct 2007 10:00:05 -0700\nRon St-Pierre <[email protected]> wrote:\n\n> Alvaro Herrera wrote:\n> > Ron St-Pierre wrote:\n> >\n> > \n> >> Okay, here's our system:\n> >> postgres 8.1.4\n> >> \n> >\n> > Upgrade to 8.1.10\n> > \n> Any particular fixes in 8.1.10 that would help with this?\n> > \n> >> Here's the table information:\n> >> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60\n> >> indexes. \n> >\n> > 60 indexes? You gotta be kidding. You really have 60 columns on\n> > which to scan?\n> >\n> > \n> Really. 60 indexes. They're the most commonly requested columns for \n> company information (we believe). Any ideas on testing our\n> assumptions about that? I would like to know definitively what are\n> the most popular columns. Do you think that rules would be a good\n> approach for this? (Sorry if I'm getting way off topic here)\n\nI suggest you:\n\n1. Turn on stats and start looking in the stats columns to see what\nindexes are actually being used.\n\n2. Strongly review your normalization :)\n\nJoshua D. Drake\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/", "msg_date": "Tue, 23 Oct 2007 10:41:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "On Tue, 2007-10-23 at 08:53 -0700, Ron St-Pierre wrote:\n> The table gets updated only \n> about 10 times a day. \n\nSo why are you VACUUMing it nightly? 
You should do this at the weekend\nevery 3 months...\n\n8.1 is slower at VACUUMing indexes than later releases, so 60 indexes\nare going to hurt quite a lot.\n\nThe default maintenance_work_mem is sufficient for this table.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Tue, 23 Oct 2007 19:03:14 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Ron St-Pierre wrote:\n> Alvaro Herrera wrote:\n>> Ron St-Pierre wrote:\n>>\n>> \n>>> Okay, here's our system:\n>>> postgres 8.1.4\n>>> \n>>\n>> Upgrade to 8.1.10\n>> \n> Any particular fixes in 8.1.10 that would help with this?\n\nI don't think so, but my guess is that you really want to avoid the\nautovacuum bug which makes it vacuum without FREEZE on template0, that\nhas caused so many problems all over the planet.\n\n>>> Here's the table information:\n>>> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.\n>>\n>> 60 indexes? You gotta be kidding. You really have 60 columns on which\n>> to scan?\n>>\n>> \n> Really. 60 indexes. They're the most commonly requested columns for company \n> information (we believe). Any ideas on testing our assumptions about that? \n> I would like to know definitively what are the most popular columns. Do you \n> think that rules would be a good approach for this? (Sorry if I'm getting \n> way off topic here)\n\nAs Josh Drake already said, you can check pg_stat* views to see which\nindexes are not used. Hard to say anything else without seeing the\ndefinition.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 23 Oct 2007 15:22:02 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Ron St-Pierre wrote:\n> Gregory Stark wrote:\n\n>>> We were running autovacuum but it interfered with the updates to we\n>>> shut it off. \n>>\n>> Was it just the I/O bandwidth? I'm surprised as your\n>> vacuum_cost_delay is quite high. Manual vacuum doesn't do anything\n>> differently from autovacuum, neither should interfere directly with\n>> updates except by taking away I/O bandwidth.\n>>\n> I don't know what the problem was. I tried to exclude certain tables\n> from autovacuuming, but it autovacuumed anyway.\n\nProbably because of Xid wraparound issues. Now that you're vacuuming\nweekly it shouldn't be a problem. (It's also much less of a problem in\n8.2).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 23 Oct 2007 15:23:18 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "In article <[email protected]>,\nRon St-Pierre <[email protected]> writes:\n\n>> For what it's worth NUMERIC columns take more space than you might expect.\n>> Figure a minimum of 12 bytes your rows are at about 1.5k each even if the\n>> non-numeric columns aren't large themselves. What are the other columns?\n\n> The NUMERIC columns hold currency related values, with values ranging\n> from a few cents to the billions, as well as a few negative numbers.\n\nWhat's the required precision? 
If it's just cents (or maybe tenths\nthereof), you could use BIGINT to store the amount in this precision.\nThis would give you exact values with much less space.\n\n", "msg_date": "Wed, 24 Oct 2007 11:04:27 +0200", "msg_from": "Harald Fuchs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" }, { "msg_contents": "Ron St-Pierre wrote:\n> We vacuum only a few of our tables nightly, this one is the last one\n> because it takes longer to run. I'll probably re-index it soon, but I\n> would appreciate any advice on how to speed up the vacuum process (and\n> the db in general).\n\nI am a novice to postgreSQL, so I have no answers for you. But for my own\neducation, I am confused by some of your post.\n> \n> Okay, here's our system:\n> postgres 8.1.4\n\nI have postgresql-8.1.9-1.el5\n\n> Linux version 2.4.21\n\nI imagine you mean Linux kernel version; I have 2.6.18-8.1.15.el5PAE\n\n> Red Hat Linux 3.2.3\n\nI have no clue what this means. Red Hat Linux 3 must have been in the early\n1990s. RHL 5 came out about 1998 IIRC.\n\nRed Hat Enterprise Linux 3, on the other hand, was not numbered like that,\nas I recall. I no longer run that, but my current RHEL5 is named like this:\n\nRed Hat Enterprise Linux Server release 5 (Tikanga)\n\nand for my CentOS 4 system, it is\n\nCentOS release 4.5 (Final)\n\nDid RHEL3 go with the second dot in their release numbers? I do not remember\nthat.\n\n> 8 GB ram\n> Intel(R) Xeon(TM) CPU 3.20GHz\n> Raid 5\n> autovacuum=off\n\nWhy would you not have that on?\n\n> serves as the application server and database server\n> server is co-located in another city, hardware upgrade is not\n> currently an option\n> \n> Here's the table information:\n> The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.\n\nI have designed databases, infrequently, but since the late 1970s. In my\nexperience, my tables had relatively few columns, rarely over 10. Are you\nsure this table needs so many? Why not, e.g., 13 tables averaging 10 columns\neach?\n\nOTOH, 140,000 rows is not all that many. I have a 6,000,000 row table in my\nlittle database on my desktop, and I do not even consider that large.\nImagine the size of a database belonging to the IRS, for example. Surely it\nwould have at least one row for each taxpayer and each employer (possibly in\ntwo tables, or two databases).\n\nHere are the last few lines of a VACUUM VERBOSE; command for that little\ndatabase. The 6,000,000 row table is not in the database at the moment, nor\nare some of the other tables, but two relatively (for me) large tables are.\n\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: free space map contains 166 pages in 76 relations\nDETAIL: A total of 1280 page slots are in use (including overhead).\n1280 page slots are required to track all free space.\nCurrent limits are: 40000 page slots, 1000 relations, using 299 KB.\nVACUUM\nstock=> select count(*) from ranks; [table has 10 columns]\n count\n--------\n 981030\n(1 row)\n\nstock=> select count(*) from ibd; [table has 8 columns]\n count\n---------\n 1099789\n(1 row)\n\nAnd this is the time for running that psql process, most of which was\nconsumed by slow typing on my part.\n\nreal 1m40.206s\nuser 0m0.027s\nsys 0m0.019s\n\nMy non-default settings for this are\n\n# - Memory -\n\nshared_buffers = 251000\nwork_mem = 32768\nmax_fsm_pages = 40000\n\nI have 8GBytes RAM on this machine, and postgreSQL is the biggest memory\nuser. 
I set shared_buffers high to try to get some entire (small) tables in\nRAM and to be sure there is room for indices.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:40:01 up 1 day, 58 min, 1 user, load average: 4.08, 4.13, 4.17\n", "msg_date": "Wed, 24 Oct 2007 09:14:42 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 12 hour table vacuums" } ]
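Since several replies above focus on the 60 indexes, here is a minimal sketch of the "check which indexes are actually used" suggestion. It relies on the row- and block-level statistics the poster already collects (and on the stats collector actually running); the schema and table names are taken from the vacuumdb cron command shown earlier in the thread.

    SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
      FROM pg_stat_user_indexes
     WHERE schemaname = 'stock'
       AND relname = 'fdata'
     ORDER BY idx_scan;

Indexes whose idx_scan stays at zero over a representative period are candidates for dropping, which also shortens every VACUUM of the table.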
[ { "msg_contents": "Hello,\n\nI am having a strange latency problem on my instance of Postgres that\nI don't know how to investigate.\n\nI am accessing the db instance using a Java application and the\nCayenne mapping framework. Everything works fine, except when it is\ntime to delete a user account (that is a user of the application, not\nof Postgres).\n\nDeleting an account trigger a (sort-of) cascade delete to remove also\nall the dependent records stored on the db. The cascade constraints\nare managed by the Cayenne library, and the db receives a list of\ndelete statements for all the rows of the different tables that should\nbe deleted; this is probably not optimal from a db point of view, but\nis extremely convenient from an application point of view.\n\nAnyway, the deletion of all the records is executed without much\nproblems (I had a huge slowdown here a few weeks ago, but I was\nmissing an index on a constraint).\n\nObviously, all the delete statements are grouped into a single\ntransaction; and when it is time to commit this transaction, the db\ninstance takes \"forever\".\n\nHere are some logs taken from the db server:\n\n[ lot of delete statements skipped]\n2007-10-24 12:13:17 CEST LOG: statement: EXECUTE <unnamed> [PREPARE:\n DELETE FROM connection.USRCNN WHERE ID_USRCNN = $1]\n2007-10-24 12:13:17 CEST LOG: duration: 0.206 ms\n2007-10-24 12:13:17 CEST LOG: duration: 0.206 ms statement: EXECUTE\n<unnamed> [PREPARE: DELETE FROM connection.USRCNN WHERE ID_USRCNN =\n$1]\n2007-10-24 12:13:17 CEST LOG: statement: PREPARE <unnamed> AS DELETE\nFROM clipperz.USR WHERE ID_USR = $1\n2007-10-24 12:13:17 CEST LOG: statement: <BIND>\n2007-10-24 12:13:17 CEST LOG: statement: EXECUTE <unnamed> [PREPARE:\n DELETE FROM clipperz.USR WHERE ID_USR = $1]\n2007-10-24 12:13:17 CEST LOG: duration: 0.761 ms\n2007-10-24 12:13:17 CEST LOG: duration: 0.761 ms statement: EXECUTE\n<unnamed> [PREPARE: DELETE FROM clipperz.USR WHERE ID_USR = $1]\n2007-10-24 12:13:17 CEST LOG: statement: <BIND>\n2007-10-24 12:13:17 CEST LOG: statement: EXECUTE <unnamed> [PREPARE: COMMIT]\n2007-10-24 12:13:51 CEST LOG: autovacuum: processing database \"clipperz_beta\"\n2007-10-24 12:14:51 CEST LOG: autovacuum: processing database \"clipperz_beta\"\n2007-10-24 12:15:10 CEST LOG: duration: 113300.147 ms\n2007-10-24 12:15:10 CEST LOG: duration: 113300.147 ms statement:\nEXECUTE <unnamed> [PREPARE: COMMIT]\n\n\nAs you may notice, the commit phase takes almost 2 full minutes. :-(\n\nHow can I understand what is going on on that timeframe in order to\ntry to fix it?\n\nThanks for your attention.\n\nBest regards,\n\nGiulio Cesare\n\nPS: I run my development machine on a MacOS, with Postgres running on\na Parallels virtual machine. I don't think this really matters for the\nabove problem, but in case ...\n", "msg_date": "Wed, 24 Oct 2007 12:44:35 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Finalizing commit taking very long" }, { "msg_contents": "\"Giulio Cesare Solaroli\" <[email protected]> writes:\n> As you may notice, the commit phase takes almost 2 full minutes. :-(\n\nYow. It's hard to believe that the actual commit (ie, flushing the\ncommit record to WAL) could take more than a fraction of a second.\nI'm thinking there must be a pile of pre-commit work to do, like a\nlot of deferred triggers. 
Do you use deferred foreign keys?\nIf so, the most likely bet is that the DELETE is triggering a lot\nof deferred FK checks, and these are slow for some reason (maybe\nanother missing index).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Oct 2007 09:15:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finalizing commit taking very long " }, { "msg_contents": "Hello Tom,\n\nOn 10/24/07, Tom Lane <[email protected]> wrote:\n> \"Giulio Cesare Solaroli\" <[email protected]> writes:\n> > As you may notice, the commit phase takes almost 2 full minutes. :-(\n>\n> Yow. It's hard to believe that the actual commit (ie, flushing the\n> commit record to WAL) could take more than a fraction of a second.\n> I'm thinking there must be a pile of pre-commit work to do, like a\n> lot of deferred triggers. Do you use deferred foreign keys?\n> If so, the most likely bet is that the DELETE is triggering a lot\n> of deferred FK checks, and these are slow for some reason (maybe\n> another missing index).\n\nI have most (if not all) of my constraint defined with the DEFERRABLE\nINITIALLY DEFERRED clause.\n\nI have done this as I have not a direct control on the order of the\nSQL statements that the Cayenne library sends to the server, and this\nwill avoid all the constraint violations inside a single transaction.\n\nHow can I try to isolate the trigger taking so long, in oder to\nunderstand which is/are the missing index(es)?\n\nBest regards,\n\nGiulio Cesare\n", "msg_date": "Wed, 24 Oct 2007 15:37:07 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finalizing commit taking very long" }, { "msg_contents": "\"Giulio Cesare Solaroli\" <[email protected]> writes:\n> How can I try to isolate the trigger taking so long, in oder to\n> understand which is/are the missing index(es)?\n\nTry SET CONSTRAINTS ALL IMMEDIATE and then EXPLAIN ANALYZE the\ndelete. This should cause all the triggers to run within the\nscope of the EXPLAIN ANALYZE, and you'll be able to see which\none(s) are slow. (This assumes you're running a recent release\nof PG; I think EXPLAIN shows trigger times since 8.1 or so.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 24 Oct 2007 10:03:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finalizing commit taking very long " }, { "msg_contents": "On 10/24/07, Tom Lane <[email protected]> wrote:\n> \"Giulio Cesare Solaroli\" <[email protected]> writes:\n> > How can I try to isolate the trigger taking so long, in oder to\n> > understand which is/are the missing index(es)?\n>\n> Try SET CONSTRAINTS ALL IMMEDIATE and then EXPLAIN ANALYZE the\n> delete. This should cause all the triggers to run within the\n> scope of the EXPLAIN ANALYZE, and you'll be able to see which\n> one(s) are slow. 
(This assumes you're running a recent release\n> of PG; I think EXPLAIN shows trigger times since 8.1 or so.)\n\nI was thinking about something similar after writing the last message.\n\nThank you very much for your attention!!\n\nGiulio Cesare\n", "msg_date": "Wed, 24 Oct 2007 16:25:53 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finalizing commit taking very long" }, { "msg_contents": "Hello Tom,\n\nI can confirm that adding the indexes used by the deferred constraint\ntriggers solved the issue.\n\nThank you very much for your suggestions.\n\nBest regards,\n\nGiulio Cesare\n\n\nOn 10/24/07, Giulio Cesare Solaroli <[email protected]> wrote:\n> On 10/24/07, Tom Lane <[email protected]> wrote:\n> > \"Giulio Cesare Solaroli\" <[email protected]> writes:\n> > > How can I try to isolate the trigger taking so long, in oder to\n> > > understand which is/are the missing index(es)?\n> >\n> > Try SET CONSTRAINTS ALL IMMEDIATE and then EXPLAIN ANALYZE the\n> > delete. This should cause all the triggers to run within the\n> > scope of the EXPLAIN ANALYZE, and you'll be able to see which\n> > one(s) are slow. (This assumes you're running a recent release\n> > of PG; I think EXPLAIN shows trigger times since 8.1 or so.)\n>\n> I was thinking about something similar after writing the last message.\n>\n> Thank you very much for your attention!!\n>\n> Giulio Cesare\n>\n", "msg_date": "Fri, 26 Oct 2007 14:41:36 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finalizing commit taking very long" } ]
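A sketch of the diagnostic recipe from this thread. The schema and key names come from the posted statement log; the literal id, the assumption that USRCNN references USR(ID_USR), and the index name are illustrative, since the actual foreign-key definitions are not shown in the thread.

    BEGIN;
    SET CONSTRAINTS ALL IMMEDIATE;   -- make the deferred FK triggers fire here
    EXPLAIN ANALYZE
    DELETE FROM clipperz.USR WHERE ID_USR = 12345;
    ROLLBACK;                        -- keep the experiment non-destructive

    -- If the per-trigger times point at a referencing table whose FK column
    -- has no index, adding one is the kind of fix the poster confirmed:
    CREATE INDEX usrcnn_id_usr_idx ON connection.USRCNN (ID_USR);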
[ { "msg_contents": "Hi all,\ni'm looking for correct or at least good enough solution for use of \nmultiple apaches with single postgres database. (apaches are 2.0.x, and \npostgres is 8.1.x)\n\nAt this moment i'm involved in management of a website where we have \nlarge user load on our web servers. Apaches are set up to be able to \nanswer 300 requests at the same time and at the moment we have 4 \napaches. Eaxh of these apaches handles about 100 requests \nsimultaneously at average.We have no connection pooling setup between \napaches and postgresql. Postgres accepts up to 200 connections and \nnormaly there is about 20 used connections (although, there is quite a \nlot of traffic between postgres and apaches, queries are simple enough, \nso postgres handles it nicely)\n\nBut sometimes (i don't know exactly for what reason) some queries gets \nstuck (mostly they are inserts or updates, but realy simple) and \npostgres is unable to answer in time, which starts a \"wave\" because \nqueries from apaches are delayed, which means that there is bigger \nnumber of user request in process, which means more connections to \npostgres, until we reach connection limit. But there is something even \nworse and that is, that i don't know why postmaster process probably \nforks itself ending with lots of (some hunreds) of postmasters running. \nWhen we kill all these postmasters and start postgres again, it ends the \nsame because apaches probably overloads database server with their \nwaiting requests. In this case we first need to stop apaches, start \npostgres, and then apaches and everything works fine ...... until next \nproblem, which can occur in hours, days or weeks.\n\nAnd my questions:\n1. Does someone hes similar experience? or clue what to do with it?\n\n2. What is correct setup of postgresql backend serving data for many \n(4+) apaches? i know that there are connection pooling solutions \n(pgPool, pgBouncer, or apache 2.2) and i'm thinking about them, but it \nseems that we have other problem beside that we didn't implement any \npooling solution yet.\n\n3. is there a way to somehow log what happened to the postgres server \nbefore accident? do you think that logging of all sql statements would \nhelp me? if i enable it, what will be the performance overhead?\n\nI might be asking too much, but i appreciate any help, hint, or \ndirection what to explore.\n\nThanks, i'm looking forward for answers.\n\nHonza\n\n", "msg_date": "Wed, 24 Oct 2007 14:15:14 +0200", "msg_from": "Honza Novak <[email protected]>", "msg_from_op": true, "msg_subject": "multiple apaches against single postgres database" }, { "msg_contents": "Honza Novak napsal(a):\n> And my questions:\n> 1. Does someone hes similar experience? or clue what to do with it?\n\nSure, this is considered \"normal\" behavior for web applications. The \nsolution is to use connection pooling.\n\n> 2. What is correct setup of postgresql backend serving data for many \n> (4+) apaches? i know that there are connection pooling solutions \n> (pgPool, pgBouncer, or apache 2.2) and i'm thinking about them, but it \n> seems that we have other problem beside that we didn't implement any \n> pooling solution yet.\n\nWe use pgpool running on each web server. You can have also the pgpool \nrunning on the database server or even a separate server just for that. \nYou'll have to test to see what's best for you.\n\n> 3. is there a way to somehow log what happened to the postgres server \n> before accident? 
do you think that logging of all sql statements would \n> help me? if i enable it, what will be the performance overhead?\n\nWhat you are seeing is called \"positive feedback\". Once the server \nreaches a certain performance threshold, it starts to delay the queries, \nwhich causes more load, which causes further delay, until everything \ncomes to a halt. Sometimes the system can recover from this, if you have \nproperly setup limits (it will just refuse the requests until it can \ncool off), sometimes it doesn't. The point is never get over the threshold.\n\nAlso, maybe you need better hardware for that kind of load, but since \nyou didn't provide more detail, we can't tell you.\n\nIt's quite meaningless to analyze performance once the system is \noverloaded. You have to analyze before that happens and identify the \nlongest running queries under normal load and try to optimize them. \nUnder heavy load, even the simplest query may seem to be taking long \ntime, but it doesn't necessarily mean there is something wrong with it.\n\n-- \nMichal Tďż˝borskďż˝\nchief systems architect\nInternet Mall, a.s.\n<http://www.MALL.cz>\n", "msg_date": "Wed, 24 Oct 2007 15:08:20 +0200", "msg_from": "Michal Taborsky - Internet Mall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple apaches against single postgres database" }, { "msg_contents": "\"Honza Novak\" <[email protected]> writes:\n\n> Hi all,\n> i'm looking for correct or at least good enough solution for use of multiple\n> apaches with single postgres database. (apaches are 2.0.x, and postgres is\n> 8.1.x)\n>\n> At this moment i'm involved in management of a website where we have large user\n> load on our web servers. Apaches are set up to be able to answer 300 requests\n> at the same time and at the moment we have 4 apaches. \n\nDo you have 300 processors? Are your requests particularly i/o-bound? Why\nwould running 300 processes simultaneously be faster than running a smaller\nnumber sequentially? It doesn't sound like your systems are capable of\nhandling such a large number of requests simultaneously.\n\nThe traditional answer is to separate static content such as images which are\nmore i/o-bound onto a separate apache configuration which has a larger number\nof connections, limit the number of connections for the cpu-bound dynamic\ncontent server, and have a 1-1 ratio between apache dynamic content\nconnections and postgres backends. The alternative is to use connection\npooling. Often a combination of the two is best.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 24 Oct 2007 14:17:22 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple apaches against single postgres database" }, { "msg_contents": "Hi Honza,\n\nas Gregory wrote, let apache do the job.\nThe apache does queue a request if all running workers are busy.\n\n1. Split static content.\nWe have an apache as frontend which serves all static content and\nforwards (reverse-proxy) dynamic content to the \"backends\"\n\n2. Split different types of dynamic content.\nWe have an apache for all interactive requests - where the user expects\nquick responses. We have another apache for non-interactive content such\nas downloads and uploads. Whose request, which doesn't do much cpu work\nat all (and don't fit to 1.).\n\n3. 
Limit each apache to achieve a good work load\nIt is to late if the operation systems tries to schedule the simultan\nwork load because you have more processes in ready state than free CPUs.\nSet MaxClients of the apache for interactive requests that your\nserver(s) doesn't get overloaded.\nYou can set MaxClients just to limit the parallel downloads/uploads.\nSet MaxClients of the frontend higher. A good settings is when the\ninteractive requests are queued at the backend apache without reaching\nthe limit of the request queue.\n\n4. Set max_connection that you don't reach this limit.\nMaximum number of connection from interactive backend + maximum number\nof connections from non-interactive backend + reserve for the database\nadmin.\n\n5. Check all limits that you never reach a memory limit and you box\nstarts to swap.\n\n6. Monitor you application well\n- Count number of open connections to each apache\n- Check the load of the server.\n- Check context switches on the PostgreSQL box.\n\nI understand an apache process group as one apache.\n\nHere is a example from our web application\n\nTwo frontends - each MaxClients = 1024.\nInteractive backend - MaxClients = 35.\nnon-Interactive backend - MaxClients = 65.\nmax_connections = 120 (assuming each backend child process has one\nconnections)\nWith this setting we have even under load normally not more queries\nrunning at the PostgreSQL server as cores are available.\n\nPlease note that example should give you only experience for the scale.\nWe need a long time to find this values for our environment (application\nand hardware).\n\nBTW: This can also be setup on a single box. We have customers where\ndifferent apache are running on the same server.\n\nThere are a number of papers in the web which describe such setups.\nCheckout <http://perl.apache.org/docs/1.0/guide/performance.html> for\nexample.\n\nSven.\n\nGregory Stark schrieb:\n> \"Honza Novak\" <[email protected]> writes:\n> \n>> Hi all,\n>> i'm looking for correct or at least good enough solution for use of multiple\n>> apaches with single postgres database. (apaches are 2.0.x, and postgres is\n>> 8.1.x)\n>>\n>> At this moment i'm involved in management of a website where we have large user\n>> load on our web servers. Apaches are set up to be able to answer 300 requests\n>> at the same time and at the moment we have 4 apaches. \n> \n> Do you have 300 processors? Are your requests particularly i/o-bound? Why\n> would running 300 processes simultaneously be faster than running a smaller\n> number sequentially? It doesn't sound like your systems are capable of\n> handling such a large number of requests simultaneously.\n> \n> The traditional answer is to separate static content such as images which are\n> more i/o-bound onto a separate apache configuration which has a larger number\n> of connections, limit the number of connections for the cpu-bound dynamic\n> content server, and have a 1-1 ratio between apache dynamic content\n> connections and postgres backends. The alternative is to use connection\n> pooling. Often a combination of the two is best.\n> \n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. 
KG Berlin, Germany\n", "msg_date": "Wed, 24 Oct 2007 16:33:10 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple apaches against single postgres database" }, { "msg_contents": ">>> On Wed, Oct 24, 2007 at 7:15 AM, in message\n<[email protected]>, Honza Novak\n<[email protected]> wrote: \n \n> But sometimes (i don't know exactly for what reason) some queries gets \n> stuck (mostly they are inserts or updates, but realy simple) and \n> postgres is unable to answer in time\n \nIn addition to the points made by others, there is a chance that a\ncontributing factor is the tendency of PostgreSQL (prior to the\nupcoming 8.3 release) to hold onto dirty pages for as long as\npossible and throw them all at the disk drives in at checkpoint\ntime. In some such cases the advice from previous emails may not\nbe enough -- you may have to use very aggressive background writer\nsettings, a smaller shared buffers setting, and/or reduce or\neliminate the OS write delays.\n \nIf you find this to be your problem, you may want to be an early\nadopter of the 8.3 release, once it is out.\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 24 Oct 2007 12:22:35 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple apaches against single postgres\n\tdatabase" }, { "msg_contents": "Sorry for an off topic posting...\n\nMichal,\n\n> Honza Novak napsal(a):\n> > And my questions:\n> > 1. Does someone hes similar experience? or clue what to do with it?\n> \n> Sure, this is considered \"normal\" behavior for web applications. The \n> solution is to use connection pooling.\n> \n> > 2. What is correct setup of postgresql backend serving data for many \n> > (4+) apaches? i know that there are connection pooling solutions \n> > (pgPool, pgBouncer, or apache 2.2) and i'm thinking about them, but it \n> > seems that we have other problem beside that we didn't implement any \n> > pooling solution yet.\n> \n> We use pgpool running on each web server. You can have also the pgpool \n> running on the database server or even a separate server just for that. \n> You'll have to test to see what's best for you.\n\nAs a member of pgpool development team, I am always looking for pgpool\nexamples in the real world which could be open to public. Can you\nplese tell me more details the pgpool usage if possible?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\n\n> > 3. is there a way to somehow log what happened to the postgres server \n> > before accident? do you think that logging of all sql statements would \n> > help me? if i enable it, what will be the performance overhead?\n> \n> What you are seeing is called \"positive feedback\". Once the server \n> reaches a certain performance threshold, it starts to delay the queries, \n> which causes more load, which causes further delay, until everything \n> comes to a halt. Sometimes the system can recover from this, if you have \n> properly setup limits (it will just refuse the requests until it can \n> cool off), sometimes it doesn't. The point is never get over the threshold.\n> \n> Also, maybe you need better hardware for that kind of load, but since \n> you didn't provide more detail, we can't tell you.\n> \n> It's quite meaningless to analyze performance once the system is \n> overloaded. You have to analyze before that happens and identify the \n> longest running queries under normal load and try to optimize them. 
\n> Under heavy load, even the simplest query may seem to be taking long \n> time, but it doesn't necessarily mean there is something wrong with it.\n> \n> -- \n> Michal Táborský\n> chief systems architect\n> Internet Mall, a.s.\n> <http://www.MALL.cz>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Thu, 25 Oct 2007 12:26:53 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: multiple apaches against single postgres database" } ]
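A note on sizing, to make the advice in the thread above concrete: the sum of MaxClients across the dynamic-content apaches plus an administrative reserve has to stay below the database's max_connections. The sketch below is only a hypothetical way of watching that headroom; the user name, thresholds and connection options are invented for illustration, not taken from the thread.

#!/bin/sh
# Hypothetical headroom check: compare busy PostgreSQL backends against
# max_connections (assumes a local server and a "postgres" superuser).
LIMIT=$(psql -U postgres -At -c "SHOW max_connections")
USED=$(psql -U postgres -At -c "SELECT count(*) FROM pg_stat_activity")
echo "backends in use: $USED of $LIMIT"
# Warn when fewer than 10 free slots remain (threshold is arbitrary).
[ $((LIMIT - USED)) -lt 10 ] && echo "WARNING: connection headroom is low"

Run periodically on the database host, this gives a rough version of the monitoring Sven recommends without adding any load to the interactive apaches.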
[ { "msg_contents": "I thought I will update this to the Performance alias too about our \ntesting with PG8.3beta1 on Solaris.\n\nRegards,\nJignesh\n\n__Background_:_\nWe were using PostgreSQL 8.3beta1 testing on our latest Sun SPARC \nEnterprise T5220 Server using Solaris 10 8/07. Generally for performance \nbenefits in Solaris we put file systems on forcedirectio we bypass the \nfilesystem cache and go direct to disks.\n\n__Problem_:_\nWhat we were observing that there were lots of reads happening about \n4MB/sec on the file system holding $PGDATA and the database tables \nduring an OLTP Benchmark run. Initially we thought that our bufferpools \nwere not big enough. But thanks to 64-bit builds we could use bigger \nbufferpools. However even with extraordinary bufferpool sizes we still \nsaw lots of reads going to the disks.\n\n__DTrace to the Rescue_:_\n\nI modified iosnoop.d to just snoop on reads. The modified rsnoop.d is as \nfollows:\n $ cat rsnoop.d\n#!/usr/sbin/dtrace -s\nsyscall::read:entry\n/execname==\"postgres\"/\n{\n printf(\"pid %d reading %s\\n\", pid, fds[arg0].fi_pathname);\n}\n\nBased on it I found that most postgresql processes were doing lots of \nreads from pg_clog directory.\nCLOG or commit logs keep track of transactions in flight. Writes of CLOG \ncomes from recording of transaction commits( or when it aborts) or when \nan XLOG is generated. However though I am not clear on reads yet, it \nseems every process constantly reads it to get some status. CLOG data is \nnot cached in any PostgreSQL shared memory segments and hence becomes \nthe bottleneck as it has to constantly go to the filesystem to get the \nread data.\n\n\n__Workaround for the high reads on CLOG on Solaris_ :\n_Start with the cluster $PGDATA on regular UFS (which is buffered and \nlogging is enabled). Always create a new tablespace for your database on \nforcedirectio mounted file system which bypasses the file system cache. \nThis allows all PostgreSQL CLOG files to be cached in UFS greatly \nreducing stress on the underlying storage. For writes to the best of my \nknowledge, PostgreSQL will still do fsync to force the writes the CLOGs \nonto the disks so it is consistent. But the reads are spared from going \nto the disks and returned from the cache.\n\n__Result_:_\nWith rightly sized bufferpool now all database data can be in PostgreSQL \ncache and hence reads are spared from the tablespaces. As for PGDATA \ndata, UFS will do the caching of CLOG files, etc and hence sparring \nreads from going to the disks again. In the end what we achieve is a \nright sized bufferpool where there are no reads required during a high \nOLTP environment and the disks are just busy doing the writes of updates \nand inserts.\n\n\n", "msg_date": "Thu, 25 Oct 2007 10:04:56 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 8.3beta1 on Solaris testing case study" } ]
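For readers who want to reproduce the workaround described in the case study above, here is a minimal sketch of the layout it suggests: the cluster ($PGDATA, including pg_clog) stays on normally buffered, logging UFS, and only the table data is placed on a forcedirectio mount through a tablespace. The device, mount point, database, tablespace and table names below are placeholders for illustration, not values from the report.

#!/bin/sh
# Hypothetical Solaris layout: buffered UFS for the cluster, direct I/O
# only for bulk table data via a dedicated tablespace.
mount -F ufs -o forcedirectio /dev/dsk/c1t1d0s6 /dbdata
mkdir -p /dbdata/pg_ts && chown postgres /dbdata/pg_ts
psql -U postgres -c "CREATE TABLESPACE directio_ts LOCATION '/dbdata/pg_ts'"
# Move (or create) the hot tables on the new tablespace:
psql -U postgres -d mydb -c "ALTER TABLE orders SET TABLESPACE directio_ts"

With this split, pg_clog reads are served from the UFS page cache while table I/O still bypasses the filesystem cache.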
[ { "msg_contents": "I have just changed around some programs that ran too slowly (too much time\nin io-wait) and they speeded up greatly. This was not unexpected, but I\nwonder about the limitations.\n\nBy transaction, I mean a single INSERT or a few related INSERTs.\n\nWhat I used to do is roughly like this:\n\nfor each file {\n for each record {\n BEGIN WORK;\n INSERT stuff in table(s);\n if error {\n\tROLLBACK WORK\n }\n else {\n COMMIT WORK;\n }\n }\n}\n\nThe speedup was the obvious one:\n\nfor each file {\n BEGIN WORK;\n for each record {\n INSERT stuff in table(s);\n }\n if error {\n ROLLBACK WORK\n }\n else {\n COMMIT WORK;\n }\n}\n\nThis means, of course, that the things I think of as transactions have been\nbunched into a much smaller number of what postgreSQL thinks of as large\ntransactions, since there is only one per file rather than one per record.\nNow if a file has several thousand records, this seems to work out just great.\n\nBut what is the limitation on such a thing? In this case, I am just\npopulating the database and there are no other users at such a time. I am\nwilling to lose the whole insert of a file if something goes wrong -- I\nwould fix whatever went wrong and start over anyway.\n\nBut at some point, disk IO would have to be done. Is this just a function of\nhow big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it\nhave to do with wal_buffers and checkpoint_segments?\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:10:01 up 2 days, 3:28, 4 users, load average: 5.76, 5.70, 5.53\n", "msg_date": "Thu, 25 Oct 2007 11:30:08 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Bunching \"transactions\"" }, { "msg_contents": "On Oct 25, 2007, at 10:30 AM, Jean-David Beyer wrote:\n\n> I have just changed around some programs that ran too slowly (too \n> much time\n> in io-wait) and they speeded up greatly. This was not unexpected, \n> but I\n> wonder about the limitations.\n>\n> By transaction, I mean a single INSERT or a few related INSERTs.\n>\n> What I used to do is roughly like this:\n>\n> for each file {\n> for each record {\n> BEGIN WORK;\n> INSERT stuff in table(s);\n> if error {\n> \tROLLBACK WORK\n> }\n> else {\n> COMMIT WORK;\n> }\n> }\n> }\n>\n> The speedup was the obvious one:\n>\n> for each file {\n> BEGIN WORK;\n> for each record {\n> INSERT stuff in table(s);\n> }\n> if error {\n> ROLLBACK WORK\n> }\n> else {\n> COMMIT WORK;\n> }\n> }\n>\n> This means, of course, that the things I think of as transactions \n> have been\n> bunched into a much smaller number of what postgreSQL thinks of as \n> large\n> transactions, since there is only one per file rather than one per \n> record.\n> Now if a file has several thousand records, this seems to work out \n> just great.\n>\n> But what is the limitation on such a thing? In this case, I am just\n> populating the database and there are no other users at such a \n> time. I am\n> willing to lose the whole insert of a file if something goes wrong \n> -- I\n> would fix whatever went wrong and start over anyway.\n>\n> But at some point, disk IO would have to be done. Is this just a \n> function of\n> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or \n> does it\n> have to do with wal_buffers and checkpoint_segments?\n\nYou're reading data from a file and generating inserts? Can you not \nuse COPY? 
That would be the most performant.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Thu, 25 Oct 2007 10:51:57 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "Jean-David Beyer wrote:\n> This means, of course, that the things I think of as transactions have been\n> bunched into a much smaller number of what postgreSQL thinks of as large\n> transactions, since there is only one per file rather than one per record.\n> Now if a file has several thousand records, this seems to work out just great.\n\nUsing the small transactions, you were limited by the speed your hard\ndisk flush the commit WAL records to the disk. With small transactions\nlike that, it's not about the bandwidth, but latency of the hard drive.\nUsing larger transactions helps because you get more work done on each\ndisk operation.\n\nUpcoming 8.3 release will have a feature called \"asynchronous commit\",\nwhich should speed up those small transactions dramatically, if you\ndon't want to batch them into larger transactions like you did:\n\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT\n\n> But what is the limitation on such a thing? In this case, I am just\n> populating the database and there are no other users at such a time. I am\n> willing to lose the whole insert of a file if something goes wrong -- I\n> would fix whatever went wrong and start over anyway.\n> \n> But at some point, disk IO would have to be done. Is this just a function of\n> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it\n> have to do with wal_buffers and checkpoint_segments?\n\nWell, you have to do the I/O eventually, regardless of shared_buffers.\nCommon wisdom is that increasing wal_buffers from the default helps with\nbulk loading like that, up to a point. Increasing checkpoint_segments\nhelps as well. After you've done all that, you're going to be limited by\neither the bandwidth of your I/O system, or the speed of your CPU,\ndepending on your hardware. Using COPY instead of INSERTs will help if\nit's CPU.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 25 Oct 2007 17:06:11 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "Jean-David Beyer <[email protected]> writes:\n> But what is the limitation on such a thing?\n\nAFAIR, the only limit on the size of a transaction is 2^32 commands\n(due to CommandCounter being 32 bits).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Oct 2007 12:21:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\" " }, { "msg_contents": "[email protected] (Jean-David Beyer) writes:\n> But what is the limitation on such a thing? In this case, I am just\n> populating the database and there are no other users at such a time. I am\n> willing to lose the whole insert of a file if something goes wrong -- I\n> would fix whatever went wrong and start over anyway.\n>\n> But at some point, disk IO would have to be done. Is this just a function of\n> how big /pgsql/data/postgresql.conf's shared_buffers is set to? 
Or does it\n> have to do with wal_buffers and checkpoint_segments?\n\nI have done bulk data loads where I was typically loading hundreds of\nthousands of rows in as a single transaction, and it is worth\nobserving that loading in data from a pg_dump will do exactly the same\nthing, where, in general, each table's data is loaded as a single\ntransaction.\n\nIt has tended to be the case that increasing the number of checkpoint\nsegments is helpful, though it's less obvious that this is the case in\n8.2 and later versions, what with the ongoing changes to checkpoint\nflushing.\n\nIn general, this isn't something that typically needs to get tuned\nreally finely; if you tune your DB, in general, \"pretty big\ntransactions\" should generally work fine, up to rather large sizes of\n\"pretty big.\"\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://linuxdatabases.info/info/languages.html\n\"Why use Windows, since there is a door?\"\n-- <[email protected]> Andre Fachat\n", "msg_date": "Thu, 25 Oct 2007 13:55:23 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "Chris Browne wrote:\n> [email protected] (Jean-David Beyer) writes:\n>> But what is the limitation on such a thing? In this case, I am just\n>> populating the database and there are no other users at such a time. I am\n>> willing to lose the whole insert of a file if something goes wrong -- I\n>> would fix whatever went wrong and start over anyway.\n>>\n>> But at some point, disk IO would have to be done. Is this just a function of\n>> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it\n>> have to do with wal_buffers and checkpoint_segments?\n> \n> I have done bulk data loads where I was typically loading hundreds of\n> thousands of rows in as a single transaction, and it is worth\n> observing that loading in data from a pg_dump will do exactly the same\n> thing, where, in general, each table's data is loaded as a single\n> transaction.\n\nI guess a reasonable standard of performance would be that if my initial\npopulation of the database takes only a little longer than a restore of the\ndatabase using pg_restore, I am pretty close, and that is good enough. Of\ncourse, the restore depends on how fast my tape drive can pull the tape --\nit claims up to 12 MB/sec transfer rate, so it looks as though it will be\ntape-limited rather than postgreSQL-limited.\n> \n> It has tended to be the case that increasing the number of checkpoint\n> segments is helpful, though it's less obvious that this is the case in\n> 8.2 and later versions, what with the ongoing changes to checkpoint\n> flushing.\n\nI am running postgresql-8.1.9-1.el5 because that is what comes with RHEL5.\nI probably will not upgrade until a little while after RHEL7 comes out,\nsince I hate upgrading.\n> \n> In general, this isn't something that typically needs to get tuned\n> really finely; if you tune your DB, in general, \"pretty big\n> transactions\" should generally work fine, up to rather large sizes of\n> \"pretty big.\"\n\n\n-- \n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 15:05:01 up 2 days, 7:23, 5 users, load average: 4.11, 4.22, 4.16\n", "msg_date": "Thu, 25 Oct 2007 15:15:31 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "[email protected] (Jean-David Beyer) writes:\n> Chris Browne wrote:\n>> [email protected] (Jean-David Beyer) writes:\n>>> But what is the limitation on such a thing? In this case, I am just\n>>> populating the database and there are no other users at such a time. I am\n>>> willing to lose the whole insert of a file if something goes wrong -- I\n>>> would fix whatever went wrong and start over anyway.\n>>>\n>>> But at some point, disk IO would have to be done. Is this just a function of\n>>> how big /pgsql/data/postgresql.conf's shared_buffers is set to? Or does it\n>>> have to do with wal_buffers and checkpoint_segments?\n>> \n>> I have done bulk data loads where I was typically loading hundreds of\n>> thousands of rows in as a single transaction, and it is worth\n>> observing that loading in data from a pg_dump will do exactly the same\n>> thing, where, in general, each table's data is loaded as a single\n>> transaction.\n>\n> I guess a reasonable standard of performance would be that if my initial\n> population of the database takes only a little longer than a restore of the\n> database using pg_restore, I am pretty close, and that is good enough. Of\n> course, the restore depends on how fast my tape drive can pull the tape --\n> it claims up to 12 MB/sec transfer rate, so it looks as though it will be\n> tape-limited rather than postgreSQL-limited.\n\nThat's quite possible.\n\nThere is a further factor, which is that grouping things into larger\ntransactions has very clearly diminishing returns.\n\nSupposing you have a stream of 50,000 operations updating one tuple\n(those could be UPDATE, DELETE, or INSERT; it is not, at first order,\nmaterial what sort they are), then the effects of grouping are thus...\n\n- With none...\n\n Cost = cost of doing 50,000 updates\n + cost of doing 50,000 COMMITs\n\n- If you COMMIT after every 2 updates\n\n Cost = cost of doing 50,000 updates\n + cost of doing 25,000 COMMITs\n\n- If you COMMIT after every 10 updates\n\n Cost = cost of doing 50,000 updates\n + cost of doing 5,000 COMMITs\n\n- If you COMMIT after every 100 updates\n\n Cost = cost of doing 50,000 updates\n + cost of doing 500 COMMITs\n\nThe amount of work that COMMIT does is fairly much constant,\nregardless of the number of updates in the transaction, so that the\ncost, in that equation, of COMMITs pretty quickly evaporates to\nirrelevancy.\n\nAnd increasing the sizes of the transactions does not give you\n*increasing* performance improvements; the improvements will tend to\ndecline.\n\nI wouldn't worry about trying to strictly minimize the number of\ntransactions COMMITted; once you have grouped \"enough\" data into one\ntransaction, that should be good enough.\n\nFurther, the Right Thing is to group related data together, and come\nup with a policy that is driven primarily by the need for data\nconsistency. 
If things work well enough, then don't go off trying to\noptimize something that doesn't really need optimization, and perhaps\nbreak the logic of the application.\n-- \noutput = (\"cbbrowne\" \"@\" \"acm.org\")\nhttp://cbbrowne.com/info/unix.html\nUsers should cultivate an ability to make the simplest molehill into a\nmountain by finding controversial interpretations of innocuous\nsounding statements that the sender never intended or imagined.\n-- from the Symbolics Guidelines for Sending Mail\n", "msg_date": "Thu, 25 Oct 2007 16:46:46 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Jean-David Beyer wrote:\n> \n>> My IO system has two Ultra/320 LVD SCSI controllers and 6 10,000rpm SCSI\n>> hard drives. The dual SCSI controller is on its own PCI-X bus (the machine\n>> has 5 independent PCI-X busses). Two hard drives are on one SCSI controller\n>> and the other four are on the other. The WAL is on the first controller,\n>> most of the rest is on the other controller. Once in a while, I get 144\n>> Megabytes/sec transfers for a few seconds at a time to the hard drive\n>> system, where I have an advertizing-maximum of 640 Megabytes/second. Each\n>> hard drive claims to take a sustained data rate of about 80\n>> Megabytes/second. When I test it, I can get 55 and sometimes a little more\n>> for a single drive.\n> \n> You might find that you get better performance by just putting all the\n> drives on a single RAID array. Or not :-). I'm not a hardware guy\n> myself, but having read this mailing list for some time, I've seen\n> different people come to different conclusions on that one. I guess it\n> depends on the hardware and the application.\n\nIn the old days, I was a \"hardware guy.\" But not in the last 15 years or so\n(although I did put this machine together from parts). Right now, I do not\nthink I would get more performance with a single RAID array. Certainly not\nif it were software RAID. Right now, I have the WAL on one drive that is not\nheavily used when doing bulk loading of the database, and the main data on\nthe other 4 drives on a different SCSI controller. Measurements revealed\nthat THE bottleneck was the writing to the WAL.\n\nThe indices for any one table are on a different drive from the data itself\nto minimize seek contention (and IO transmission contention, too, but that\ndoes not seem to be an issue). Note that now the machine is only in IO-WAIT\nstate less than 1% of the time, and I no longer notice the main postgres\nserver process in D state. It used to be in D state a lot of the time before\nI started bunching transactions. The IO to the drive with the WAL dropped\nfrom a little over 3000 sectors per second to about 700 sectors per second,\nfor one thing. 
And the IO bandwidth consumed dropped, IIRC, about 50%.\n> \n>> Likewise, I seemto have enough processing power.\n>>\n>> top - 12:47:22 up 2 days, 5:06, 4 users, load average: 1.40, 3.13, 4.20\n>> Tasks: 168 total, 3 running, 165 sleeping, 0 stopped, 0 zombie\n>> Cpu0 : 29.5%us, 3.3%sy, 0.0%ni, 67.0%id, 0.2%wa, 0.0%hi, 0.0%si,\n>> Cpu1 : 21.8%us, 3.1%sy, 0.0%ni, 73.7%id, 1.4%wa, 0.0%hi, 0.0%si,\n>> Cpu2 : 24.6%us, 3.6%sy, 0.0%ni, 71.7%id, 0.1%wa, 0.0%hi, 0.0%si,\n>> Cpu3 : 23.1%us, 2.7%sy, 0.0%ni, 74.0%id, 0.1%wa, 0.1%hi, 0.0%si,\n>> Mem: 8185340k total, 5112656k used, 3072684k free, 32916k buffers\n>> Swap: 4096496k total, 384k used, 4096112k free, 4549536k cached\n>>\n > Actually it looks like you've saturated the CPU.\n\nHow do you figure that? There are two or four (depending on how you count\nthem) CPUs. The CPUs seem to be running at 75% idle. If I let BOINC\nprocesses run (nice 19), I can soak up most of this idle time. I turned them\noff for the purpose of these measurements because they hide the io-wait times.\n\n> Postgres backends are\n> single-threaded, so a single bulk load like that won't use more than one\n> CPU at a time. If you add up the usr percentages above, it's ~100%.\n\nIf you add up the idle percentages, it is about 300%. Recall that there are\ntwo hyperthreaded processors here. That is more than two processors (but\nless than four). If I examine the postgres processes, one of them used to\nget to 100% once in a while when I did things like DELETE FROM tablename;\nbut I do a TRUNCATE now and it is much faster. Now any single process peaks\nat 80% of a CPU and usually runs at less than 50%. The postgres processes\nrun on multiple CPUS. Looking at the top command, normally my client runs at\naround 20% on one CPU, the main postgres server runs on a second at between\n30% and 80% (depends on which tables I am loading), and the writer runs on\nyet another. The two loggers wander around more. But these last three run at\naround 1% each. In fact, the writer is idle much of the time.\n> \n> You should switch to using COPY if you can.\n> \nSomeone else posted that I should not get neurotic about squeezing the last\nlittle bit out of this (not his exact words), and I agree. This is only for\ndoing an initial load of the database after all. And as long as the total\ntime is acceptable, that is good enough. When I first started this (using\nDB2), one of the loads used to take something like 10 hours. Redesigning my\nbasic approach got that time down to about 2 hours without too much\nprogramming effort. As the amount of data has increased, that started\ncreeping up, and one of the tables, that has about 6,000,000 entries at the\nmoment, took overnight to load. That is why I looked into bunching these\ntransactions, with gratifying results.\n\nTo use COPY, I would have to write a bunch of special purpose programs to\nconvert the data as I get them into a form that COPY could handle them. (I\nimagine pg_dump and pg_restore use COPY). And running those would take time\ntoo. There ought to be a law against making huge spreadsheets for data, but\npeople who should be using a relational database for things seem more\ncomfortable with spreadsheets. So that is the form in which I get these data.\n\n From a programming point of view, I hate spreadsheets because the\ncalculations and the data are intermixed, and you cannot see what the\ncalculations are unless you specifically look for them. 
And the way people\ndesign (if that is the proper term) a spreadsheet results in something that\ncould not be considered normalized in any sense of that term.\nOne of these tables has columns from A all the way to KC (I guess that is\nover 300 columns), and I would not be able to use such a table even if\npostgres would accept one. IIRC, DB2 would not take such a wide one, but I\nam not sure about that anymore. Anyhow, I believe in separation of concerns,\nand mixing programs and data as in spreadsheets is a step in the wrong\ndirection.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:15:01 up 3 days, 33 min, 0 users, load average: 4.09, 4.15, 4.20\n", "msg_date": "Fri, 26 Oct 2007 09:04:43 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "Chris Browne wrote:\n\n> Further, the Right Thing is to group related data together, and come\n> up with a policy that is driven primarily by the need for data\n> consistency. If things work well enough, then don't go off trying to\n> optimize something that doesn't really need optimization, and perhaps\n> break the logic of the application.\n\nRight. I think it was Jon Louis Bently who wrote (in his book, \"Writing\nEfficient Programs\") something to the effect, \"Premature optimization is the\nroot of all evil.\" Just because so much of it broke the logic of the\napplication (and did not help anyway). (Gotta profile first, for one thing.)\n\nI had a boss once who insisted we write everyting in assembly language for\nefficiency. We did not even know what algorithms we needed for the\napplication. And at the time (System 360 days), IBM did not even publish the\nexecution times for the instruction set of the machine we were using because\nso many executed in zero-time -- overlapped with other instructions, local\ncaching in the processor, locality of memory reference, and so on. To get\nefficiency, you must first get your algorithms right, including getting the\nbest ones for the problem at hand.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 10:05:01 up 3 days, 2:23, 1 user, load average: 4.10, 4.24, 4.18\n", "msg_date": "Fri, 26 Oct 2007 10:12:53 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bunching \"transactions\"" }, { "msg_contents": "On Fri, 26 Oct 2007, Jean-David Beyer wrote:\n\n> I think it was Jon Louis Bently who wrote (in his book, \"Writing \n> Efficient Programs\") something to the effect, \"Premature optimization is \n> the root of all evil.\"\n\nThat quote originally comes from Tony Hoare, popularized by a paper \nwritten by Donald Knuth in 1974. The full statement is \"We should forget \nabout small efficiencies, say about 97% of the time: premature \noptimization is the root of all evil. Yet we should not pass up our \nopportunities in that critical 3%.\"\n\nMy favorite sound-bite on this topic is from William Wulf: \"More \ncomputing sins are committed in the name of efficiency (without \nnecessarily achieving it) than for any other single reason - including \nblind stupidity.\" That was back in 1972. 
Both his and Knuth's papers \ncentered on abusing GOTO, which typically justified at the time via \nperformance concerns.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 26 Oct 2007 11:08:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bunching \"transactions\"" } ]
[ { "msg_contents": "Update on my testing 8.3beta1 on Solaris.\n\n* CLOG reads\n* Asynchronous Commit benefit\n* Hot CPU Utilization\n\nRegards,\nJignesh\n\n__Background_:_\nWe were using PostgreSQL 8.3beta1 testing on our latest Sun SPARC \nEnterprise T5220 Server using Solaris 10 8/07 and Sun Fire X4200 using \nSolaris 10 8/07. Generally for performance benefits in Solaris we put \nfile systems on forcedirectio we bypass the filesystem cache and go \ndirect to disks.\n\n__Problem_:_\nWhat we were observing that there were lots of reads happening about \n4MB/sec on the file system holding $PGDATA and the database tables \nduring an OLTP Benchmark run. Initially we thought that our bufferpools \nwere not big enough. But thanks to 64-bit builds we could use bigger \nbufferpools. However even with extraordinary bufferpool sizes we still \nsaw lots of reads going to the disks.\n\n__DTrace to the Rescue_:_\n\nI modified iosnoop.d to just snoop on reads. The modified rsnoop.d is as \nfollows:\n$ cat rsnoop.d\n#!/usr/sbin/dtrace -s\nsyscall::read:entry\n/execname==\"postgres\"/\n{\n printf(\"pid %d reading %s\\n\", pid, fds[arg0].fi_pathname);\n}\n\nBased on it I found that most postgresql processes were doing lots of \nreads from pg_clog directory.\nCLOG or commit logs keep track of transactions in flight. Writes of CLOG \ncomes from recording of transaction commits( or when it aborts) or when \nan XLOG is generated. However though I am not clear on reads yet, it \nseems every process constantly reads it to get some status. CLOG data is \nnot cached in any PostgreSQL shared memory segments and hence becomes \nthe bottleneck as it has to constantly go to the filesystem to get the \nread data.\n# ./rsnoop.d\ndtrace: script './rsnoop.d' matched 1 probe\nCPU ID FUNCTION:NAME\n 0 49222 read:entry pid 8739 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 0 49222 read:entry pid 9607 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 0 49222 read:entry pid 9423 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 0 49222 read:entry pid 8731 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 0 49222 read:entry pid 8719 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 0 49222 read:entry pid 9019 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 1 49222 read:entry pid 9255 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n 1 49222 read:entry pid 8867 reading \n/export/home0/igen/pgdata/pg_clog/000C\n\n\nLater on during another run I added ustack() after the printf in the \nabove script to get the function name also:\n\n# ./rsnoop.d\ndtrace: script './rsnoop.d' matched 1 probe\nCPU ID FUNCTION:NAME\n 0 49222 read:entry pid 10956 reading \n/export/home0/igen/pgdata/pg_clog/0011\n libc.so.1`_read+0xa\n postgres`SimpleLruReadPage+0x3e6\n postgres`SimpleLruReadPage_ReadOnly+0x9b\n postgres`TransactionIdGetStatus+0x1f\n postgres`TransactionIdDidCommit+0x42\n postgres`HeapTupleSatisfiesVacuum+0x21a\n postgres`heap_prune_chain+0x14b\n postgres`heap_page_prune_opt+0x1e6\n postgres`index_getnext+0x144\n postgres`IndexNext+0xe1\n postgres`ExecScan+0x189\n postgres`ExecIndexScan+0x43\n postgres`ExecProcNode+0x183\n postgres`ExecutePlan+0x9e\n postgres`ExecutorRun+0xab\n postgres`PortalRunSelect+0x47a\n postgres`PortalRun+0x262\n postgres`exec_execute_message+0x565\n postgres`PostgresMain+0xf45\n postgres`BackendRun+0x3f9\n\n 0 49222 read:entry pid 10414 reading \n/export/home0/igen/pgdata/pg_clog/0011\n libc.so.1`_read+0xa\n postgres`SimpleLruReadPage+0x3e6\n 
postgres`SimpleLruReadPage_ReadOnly+0x9b\n postgres`TransactionIdGetStatus+0x1f\n postgres`TransactionIdDidCommit+0x42\n postgres`HeapTupleSatisfiesVacuum+0x21a\n postgres`heap_prune_chain+0x14b\n postgres`heap_page_prune_opt+0x1e6\n postgres`index_getnext+0x144\n postgres`IndexNext+0xe1\n postgres`ExecScan+0x189\n^C libc.so.1`_read+0xa\n postgres`SimpleLruReadPage+0x3e6\n postgres`SimpleLruReadPage_ReadOnly+0x9b\n postgres`TransactionIdGetStatus+0x1f\n postgres`TransactionIdDidCommit+0x42\n postgres`HeapTupleSatisfiesMVCC+0x34f\n postgres`index_getnext+0x29e\n postgres`IndexNext+0xe1\n postgres`ExecScan+0x189\n postgres`ExecIndexScan+0x43\n postgres`ExecProcNode+0x183\n postgres`ExecutePlan+0x9e\n postgres`ExecutorRun+0xab\n postgres`PortalRunSelect+0x47a\n postgres`PortalRun+0x262\n postgres`exec_execute_message+0x565\n postgres`PostgresMain+0xf45\n postgres`BackendRun+0x3f9\n postgres`BackendStartup+0x271\n postgres`ServerLoop+0x259\n\n 0 49222 read:entry pid 10186 reading \n/export/home0/igen/pgdata/pg_clog/0011\n libc.so.1`_read+0xa\n postgres`SimpleLruReadPage+0x3e6\n postgres`SimpleLruReadPage_ReadOnly+0x9b\n postgres`TransactionIdGetStatus+0x1f\n postgres`TransactionIdDidCommit+0x42\n postgres`HeapTupleSatisfiesVacuum+0x21a\n postgres`heap_prune_chain+0x14b\n postgres`heap_page_prune_opt+0x1e6\n postgres`index_getnext+0x144\n postgres`IndexNext+0xe1\n postgres`ExecScan+0x189\n postgres`ExecIndexScan+0x43\n postgres`ExecProcNode+0x183\n postgres`ExecutePlan+0x9e\n postgres`ExecutorRun+0xab\n postgres`PortalRunSelect+0x47a\n postgres`PortalRun+0x262\n postgres`exec_execute_message+0x565\n postgres`PostgresMain+0xf45\n postgres`BackendRun+0x3f9\n\n\nSo multiple processes are reading the same file. In this case since the \nfile system is told not to cache files, hence all read ios are being \nsent to the disk to read the file again.\n\n\n\n__Workaround for the high reads on CLOG on Solaris_ :\n_Start with the cluster $PGDATA on regular UFS (which is buffered and \nlogging is enabled). Always create a new tablespace for your database on \nforcedirectio mounted file system which bypasses the file system cache. \nThis allows all PostgreSQL CLOG files to be cached in UFS greatly \nreducing stress on the underlying storage. For writes to the best of my \nknowledge, PostgreSQL will still do fsync to force the writes the CLOGs \nonto the disks so it is consistent. But the reads are spared from going \nto the disks and returned from the cache.\n\n__Result_:_\nWith rightly sized bufferpool now all database data can be in PostgreSQL \ncache and hence reads are spared from the tablespaces. As for PGDATA \ndata, UFS will do the caching of CLOG files, etc and hence sparring \nreads from going to the disks again. In the end what we achieve is a \nright sized bufferpool where there are no reads required during a high \nOLTP environment and the disks are just busy doing the writes of updates \nand inserts.\n\n\n_Asynchronous Commit_:\n\nAlso as requested by Josh, I tried out Asynchronous Commit in 8.3beta 1\nI compared four scenarios on internal disks (the prime target)\n1. Default options (commit_delay off, synchronous_commit=true)\n2. With Commit_delay on\n3. With Asynchronous and Commit_delay on\n4. With Asynchronous commit but Commit_delay off\n5. 
With Fsync off\n\n\nIn 8.2 I found compared to (1),( 2) gave me a huge boost (2X) but fsync \nwould be eventually even 2.8X faster than (2)\n\nIn 8.3 hence I did not even test the default option and took (2) as my \nbaseline run and found (3),(4),(5) pretty much gave me the similar boost \n2.55X over my baseline run (2) since eventually I was CPU bound on my \nbox and IO ended up handling well.\n\n(Though I found (5) was better in 8.2 compared to (5) in 8.3beta1 since \nit was getting CPU saturated slightly earlier)\n\n\n_Additional CPU consumption Findings_:\nIn the lightweight OLTP Testing that was performed with about 1000 \nusers, with 8.3 with the above workaround in place for CLOG. I reached a \nscenario where the system was out of CPU resources with about 1000 \nusers. Anyway doing a quick profiling using the \"hotuser\" program \navailable in the DTraceToolkit the top function is postgres`GetSnapshotData\n\n# ./hotuser -p 10559\n\n....\npostgres`hash_seq_term 1 2.1%\npostgres`SearchCatCache 2 4.2%\npostgres`hash_search_with_hash_value 4 8.3%\npostgres`GetSnapshotData 6 12.5%\n\n\n\nAlso Lock Waits during the 1000 User run was as follows:\n\n# ./83_lwlock_wait.d 9231\n\n Lock Id Mode Count\n WALInsertLock Exclusive 1\n ProcArrayLock Exclusive 19\n\n Lock Id Combined Time (ns)\n WALInsertLock 428507\n ProcArrayLock 1009652776\n\n# ./83_lwlock_wait.d 9153\n\n Lock Id Mode Count\n CLogControlLock Exclusive 1\n WALInsertLock Exclusive 1\n ProcArrayLock Exclusive 15\n\n Lock Id Combined Time (ns)\n CLogControlLock 25536\n WALInsertLock 397265\n ProcArrayLock 696894211\n\n#\n\n\nMy Guess is that the ProcArrayLock is coming from the GetSnapShotData \nfunction ... or maybe caused by it.. But I guess I will let the experts \ncomment ..\n\nI am of the opionion that if we tune GetSnapShotData, then we should be \nable to handle more users.\n\n\nSo overall I think I am excited with the 8.3beta1 performance specially \nin terms of asynchronous_commit however to get the CPU performance in \nline with GetSnapshotData() and also fixing the CLOG reading problem \noccuring in SimpleLruRead() will greatly enhance the performance of 8.3 \nfor OLTP benchmarks.\n\n\n\n", "msg_date": "Thu, 25 Oct 2007 16:24:18 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.3beta1 testing on Solaris" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> CLOG data is \n> not cached in any PostgreSQL shared memory segments\n\nThe above statement is utterly false, so your trace seems to indicate\nsomething broken. Are you sure these were the only reads of pg_clog\nfiles? Can you extend the tracing to determine which page of the file\ngot read? I am wondering if your (unspecified) test load was managing\nto touch more pages of the clog than there is room for in shared memory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Oct 2007 16:54:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris " }, { "msg_contents": "\n\"Jignesh K. Shah\" <[email protected]> writes:\n\n> CLOG data is not cached in any PostgreSQL shared memory segments and hence\n> becomes the bottleneck as it has to constantly go to the filesystem to get\n> the read data.\n\nThis is the same bottleneck you discussed earlier. CLOG reads are cached in\nthe Postgres shared memory segment but only NUM_CLOG_BUFFERS are which\ndefaults to 8 buffers of 8kb each. 
With 1,000 clients and the transaction rate\nyou're running you needed a larger number of buffers.\n\nUsing the filesystem buffer cache is also an entirely reasonable solution\nthough. That's surely part of the logic behind not trying to keep more of the\nclog in shared memory. Do you have any measurements of how much time is being\nspent just doing the logical I/O to the buffer cache for the clog pages? 4MB/s\nseems like it's not insignificant but your machine is big enough that perhaps\nI'm thinking at the wrong scale.\n\nI'm really curious whether you see any benefit from the vxid read-only\ntransactions. I'm not sure how to get an apples to apples comparison though.\nIdeally just comparing it to CVS HEAD from immediately prior to the vxid patch\ngoing in. Perhaps calling some function which forces an xid to be allocated\nand seeing how much it slows down the benchmark would be a good substitute.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 25 Oct 2007 23:37:33 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.3beta1 testing on Solaris" }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>> CLOG data is \n>> not cached in any PostgreSQL shared memory segments\n>\n> The above statement is utterly false, so your trace seems to indicate\n> something broken. Are you sure these were the only reads of pg_clog\n> files? Can you extend the tracing to determine which page of the file\n> got read? I am wondering if your (unspecified) test load was managing\n> to touch more pages of the clog than there is room for in shared memory.\n\nDidn't we already go through this? He and Simon were pushing to bump up\nNUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and\nsome other clog.c would have to be reengineered to scale well to larger\nvalues. \n\nAlso it seemed there were only modest improvements from raising the value and\nthere would always be a ceiling to bump into so just raising the number of\nbuffers isn't particularly interesting unless there's some magic numbers we're\ntrying to hit.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 25 Oct 2007 23:43:52 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Didn't we already go through this? He and Simon were pushing to bump up\n> NUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and\n> some other clog.c would have to be reengineered to scale well to larger\n> values. \n\nAFAIR we never did get any clear explanation of what the test case is.\nI guess it must be write-mostly, else lazy XID assignment would have\nhelped this by reducing the rate of XID consumption.\n\nIt's still true that I'm leery of a large increase in the number of\nbuffers without reengineering slru.c. That code was written on the\nassumption that there were few enough buffers that a linear search\nwould be fine. 
I'd hold still for 16, or maybe even 32, but I dunno\nhow much impact that will have for such a test case.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Oct 2007 20:51:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris " }, { "msg_contents": "Tom,\n\n> It's still true that I'm leery of a large increase in the number of\n> buffers without reengineering slru.c. That code was written on the\n> assumption that there were few enough buffers that a linear search\n> would be fine. I'd hold still for 16, or maybe even 32, but I dunno\n> how much impact that will have for such a test case.\n\nActually, 32 made a significant difference as I recall ... do you still have \nthe figures for that, Jignesh?\n\nThe test case is a workload called \"iGen\" which is a \"fixed\" TPCC-like \nworkload. I've been trying to talk Sun into open-sourcing it, but no dice so \nfar. It is heavy on writes, and (like TPCC) consists mostly of one-line \ntransactions.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 25 Oct 2007 19:56:56 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> Actually, 32 made a significant difference as I recall ... do you still have \n> the figures for that, Jignesh?\n\nI'd want to see a new set of test runs backing up any call for a change\nin NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that\nbenchmarks using code from a few months back shouldn't carry a lot of\nweight.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 25 Oct 2007 23:26:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris " }, { "msg_contents": "\"Josh Berkus\" <[email protected]> writes:\n\n> Actually, 32 made a significant difference as I recall ... do you still have \n> the figures for that, Jignesh?\n\nWell it made a difference but it didn't remove the bottleneck, it just moved\nit. IIRC under that benchmark Jignesh was able to run with x sessions\nefficiently with 8 clog buffers, x + 100 or so sessions with 16 clog buffers\nand x + 200 or so sessions with 32 clog buffers.\n\nIt happened that x + 200 was > the number of sessions he wanted to run the\nbenchmark at so it helped the benchmark results quite a bit. But that was just\nan artifact of how many sessions the benchmark needed. A user who needs 1200\nsessions or who has a different transaction load might find he needs more clog\nbuffers to alleviate the bottleneck. And of course most (all?) normal users\nuse far fewer sessions and won't run into this bottleneck at all.\n\nRaising NUM_CLOG_BUFFERS just moves around the arbitrary bottleneck. 
This\nbenchmark is useful in that it gives us an idea where the bottleneck lies for\nvarious values of NUM_CLOG_BUFFERS but it doesn't tell us what value realistic\nusers are likely to bump into.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 26 Oct 2007 09:07:23 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "\nHi George,\n\nI have seen the 4M/sec problem first actually during an EAStress type \nrun with only 150 connections.\n\nI will try to do more testing today that Tom has requested.\n\nRegards,\nJignesh\n\n\nGregory Stark wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n>\n> \n>> CLOG data is not cached in any PostgreSQL shared memory segments and hence\n>> becomes the bottleneck as it has to constantly go to the filesystem to get\n>> the read data.\n>> \n>\n> This is the same bottleneck you discussed earlier. CLOG reads are cached in\n> the Postgres shared memory segment but only NUM_CLOG_BUFFERS are which\n> defaults to 8 buffers of 8kb each. With 1,000 clients and the transaction rate\n> you're running you needed a larger number of buffers.\n>\n> Using the filesystem buffer cache is also an entirely reasonable solution\n> though. That's surely part of the logic behind not trying to keep more of the\n> clog in shared memory. Do you have any measurements of how much time is being\n> spent just doing the logical I/O to the buffer cache for the clog pages? 4MB/s\n> seems like it's not insignificant but your machine is big enough that perhaps\n> I'm thinking at the wrong scale.\n>\n> I'm really curious whether you see any benefit from the vxid read-only\n> transactions. I'm not sure how to get an apples to apples comparison though.\n> Ideally just comparing it to CVS HEAD from immediately prior to the vxid patch\n> going in. Perhaps calling some function which forces an xid to be allocated\n> and seeing how much it slows down the benchmark would be a good substitute.\n>\n> \n", "msg_date": "Fri, 26 Oct 2007 09:20:36 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.3beta1 testing on Solaris" }, { "msg_contents": "The problem I saw was first highlighted by EAStress runs with PostgreSQL \non Solaris with 120-150 users. I just replicated that via my smaller \ninternal benchmark that we use here to recreate that problem.\n\nEAStress should be just fine to highlight it.. Just put pg_clog on \nO_DIRECT or something so that all IOs go to disk making it easier to \nobserve.\n\nIn the meanwhile I will try to get more information.\n\n\nRegards,\nJignesh\n\n\nTom Lane wrote:\n> Gregory Stark <[email protected]> writes:\n> \n>> Didn't we already go through this? He and Simon were pushing to bump up\n>> NUM_CLOG_BUFFERS and you were arguing that the test wasn't representative and\n>> some other clog.c would have to be reengineered to scale well to larger\n>> values. \n>> \n>\n> AFAIR we never did get any clear explanation of what the test case is.\n> I guess it must be write-mostly, else lazy XID assignment would have\n> helped this by reducing the rate of XID consumption.\n>\n> It's still true that I'm leery of a large increase in the number of\n> buffers without reengineering slru.c. That code was written on the\n> assumption that there were few enough buffers that a linear search\n> would be fine. 
I'd hold still for 16, or maybe even 32, but I dunno\n> how much impact that will have for such a test case.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n", "msg_date": "Fri, 26 Oct 2007 09:25:02 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "I agree with Tom.. somehow I think increasing NUM_CLOG_BUFFERS is just \navoiding the symptom to a later value.. I promise to look more into it \nbefore making any recommendations to increase NUM_CLOG_BUFFERs.\n\n\nBecause though \"iGen\" showed improvements in that area by increasing \nnum_clog_buffers , EAStress had shown no improvements.. Plus the reason \nI think this is not the problem in 8.3beta1 since the Lock Output \nclearly does not show CLOGControlFile as to be the issue which I had \nseen in earlier case. So I dont think that increasing NUM_CLOG_BUFFERS \nwill change thing here.\n\nNow I dont understand the code pretty well yet I see three hotspots and \nnot sure if they are related to each other\n* ProcArrayLock waits - causing Waits as reported by \n83_lockwait.d script\n* SimpleLRUReadPage - causing read IOs as reported by \niostat/rsnoop.d\n* GetSnapshotData - causing CPU utilization as reported by hotuser\n\nBut I will shut up and do more testing.\n\nRegards,\nJignesh\n\n\n\nTom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n> \n>> Actually, 32 made a significant difference as I recall ... do you still have \n>> the figures for that, Jignesh?\n>> \n>\n> I'd want to see a new set of test runs backing up any call for a change\n> in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that\n> benchmarks using code from a few months back shouldn't carry a lot of\n> weight.\n>\n> \t\t\tregards, tom lane\n> \n", "msg_date": "Fri, 26 Oct 2007 09:36:53 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "Tom,\n\nHere is what I did:\n\nI started aggregating all read information:\n\nFirst I also had added group by pid (arg0,arg1, pid) and the counts \nwere all coming as 1\n\nThen I just grouped by filename and location (arg0,arg1 of reads) and \nthe counts came back as\n\n# cat read.d\n#!/usr/sbin/dtrace -s\nsyscall::read:entry\n/execname==\"postgres\"/\n{\n @read[fds[arg0].fi_pathname, arg1] = count();\n}\n\n\n# ./read.d\ndtrace: script './read.d' matched 1 probe\n^C\n\n /export/home0/igen/pgdata/pg_clog/0014 \n-2753028293472 1\n /export/home0/igen/pgdata/pg_clog/0014 \n-2753028277088 1\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028244320 2\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028268896 14\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028260704 25\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028252512 27\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028277088 28\n /export/home0/igen/pgdata/pg_clog/0015 \n-2753028293472 37\n\n\nFYI I pressed ctrl-c within like less than a second\n\nSo to me this seems that multiple processes are reading the same page \nfrom different pids. (This was with about 600 suers active.\n\nAparently we do have a problem that we are reading the same buffer \naddress again. 
(Same as not being cached anywhere or not finding it in \ncache anywhere).\n\nI reran lock wait script on couple of processes and did not see \nCLogControlFileLock as a problem..\n\n# ./83_lwlock_wait.d 14341\n\n Lock Id Mode Count\n WALInsertLock Exclusive 1\n ProcArrayLock Exclusive 16\n\n Lock Id Combined Time (ns)\n WALInsertLock 383109\n ProcArrayLock 198866236\n\n# ./83_lwlock_wait.d 14607\n\n Lock Id Mode Count\n WALInsertLock Exclusive 2\n ProcArrayLock Exclusive 15\n\n Lock Id Combined Time (ns)\n WALInsertLock 55243\n ProcArrayLock 69700140\n\n#\n\nWhat will help you find out why it is reading the same page again?\n\n\n-Jignesh\n\n\n\nJignesh K. Shah wrote:\n> I agree with Tom.. somehow I think increasing NUM_CLOG_BUFFERS is \n> just avoiding the symptom to a later value.. I promise to look more \n> into it before making any recommendations to increase NUM_CLOG_BUFFERs.\n>\n>\n> Because though \"iGen\" showed improvements in that area by increasing \n> num_clog_buffers , EAStress had shown no improvements.. Plus the \n> reason I think this is not the problem in 8.3beta1 since the Lock \n> Output clearly does not show CLOGControlFile as to be the issue which \n> I had seen in earlier case. So I dont think that increasing \n> NUM_CLOG_BUFFERS will change thing here.\n>\n> Now I dont understand the code pretty well yet I see three hotspots \n> and not sure if they are related to each other\n> * ProcArrayLock waits - causing Waits as reported by \n> 83_lockwait.d script\n> * SimpleLRUReadPage - causing read IOs as reported by \n> iostat/rsnoop.d\n> * GetSnapshotData - causing CPU utilization as reported by hotuser\n>\n> But I will shut up and do more testing.\n>\n> Regards,\n> Jignesh\n>\n>\n>\n> Tom Lane wrote:\n>> Josh Berkus <[email protected]> writes:\n>> \n>>> Actually, 32 made a significant difference as I recall ... do you \n>>> still have the figures for that, Jignesh?\n>>> \n>>\n>> I'd want to see a new set of test runs backing up any call for a change\n>> in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area that\n>> benchmarks using code from a few months back shouldn't carry a lot of\n>> weight.\n>>\n>> regards, tom lane\n>> \n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Fri, 26 Oct 2007 11:45:26 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "Also to give perspective on the equivalent writes on CLOG\n\nI used the following script which runs for 10 sec to track all writes to \nthe clog directory and here is what it came up with... 
(This is with 500 \nusers running)\n\n# cat write.d\n#!/usr/sbin/dtrace -s\nsyscall::write:entry\n/execname==\"postgres\" && \ndirname(fds[arg0].fi_pathname)==\"/export/home0/igen/pgdata/pg_clog\"/\n{\n @write[fds[arg0].fi_pathname,arg1] = count();\n}\ntick-10sec\n{\nexit(0);\n}\n\n# ./write.d\ndtrace: script './write.d' matched 2 probes\nCPU ID FUNCTION:NAME\n 3 1026 :tick-10sec\n\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028277088 1\n#\nI modified read.d to do a 5sec read\n# ./read.d\ndtrace: script './read.d' matched 3 probes\nCPU ID FUNCTION:NAME\n 0 1 :BEGIN\n 0 1027 :tick-5sec\n\n /export/home0/igen/pgdata/pg_clog/001F \n-2753028268896 1\n /export/home0/igen/pgdata/pg_clog/001F \n-2753028252512 1\n /export/home0/igen/pgdata/pg_clog/001F \n-2753028285280 2\n /export/home0/igen/pgdata/pg_clog/001F \n-2753028277088 3\n /export/home0/igen/pgdata/pg_clog/001F \n-2753028236128 3\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028285280 5\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028236128 9\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028277088 13\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028268896 15\n /export/home0/igen/pgdata/pg_clog/001E \n-2753028252512 27\n#\n\nSo the ratio of reads vs writes to clog files is pretty huge..\n\n\n-Jignesh\n\n\n\nJignesh K. Shah wrote:\n> Tom,\n>\n> Here is what I did:\n>\n> I started aggregating all read information:\n>\n> First I also had added group by pid (arg0,arg1, pid) and the counts \n> were all coming as 1\n>\n> Then I just grouped by filename and location (arg0,arg1 of reads) and \n> the counts came back as\n>\n> # cat read.d\n> #!/usr/sbin/dtrace -s\n> syscall::read:entry\n> /execname==\"postgres\"/\n> {\n> @read[fds[arg0].fi_pathname, arg1] = count();\n> }\n>\n>\n> # ./read.d\n> dtrace: script './read.d' matched 1 probe\n> ^C\n>\n> /export/home0/igen/pgdata/pg_clog/0014 \n> -2753028293472 1\n> /export/home0/igen/pgdata/pg_clog/0014 \n> -2753028277088 1\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028244320 2\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028268896 14\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028260704 25\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028252512 27\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028277088 28\n> /export/home0/igen/pgdata/pg_clog/0015 \n> -2753028293472 37\n>\n>\n> FYI I pressed ctrl-c within like less than a second\n>\n> So to me this seems that multiple processes are reading the same page \n> from different pids. (This was with about 600 suers active.\n>\n> Aparently we do have a problem that we are reading the same buffer \n> address again. (Same as not being cached anywhere or not finding it \n> in cache anywhere).\n>\n> I reran lock wait script on couple of processes and did not see \n> CLogControlFileLock as a problem..\n>\n> # ./83_lwlock_wait.d 14341\n>\n> Lock Id Mode Count\n> WALInsertLock Exclusive 1\n> ProcArrayLock Exclusive 16\n>\n> Lock Id Combined Time (ns)\n> WALInsertLock 383109\n> ProcArrayLock 198866236\n>\n> # ./83_lwlock_wait.d 14607\n>\n> Lock Id Mode Count\n> WALInsertLock Exclusive 2\n> ProcArrayLock Exclusive 15\n>\n> Lock Id Combined Time (ns)\n> WALInsertLock 55243\n> ProcArrayLock 69700140\n>\n> #\n>\n> What will help you find out why it is reading the same page again?\n>\n>\n> -Jignesh\n>\n>\n>\n> Jignesh K. Shah wrote:\n>> I agree with Tom.. somehow I think increasing NUM_CLOG_BUFFERS is \n>> just avoiding the symptom to a later value.. 
I promise to look more \n>> into it before making any recommendations to increase NUM_CLOG_BUFFERs.\n>>\n>>\n>> Because though \"iGen\" showed improvements in that area by increasing \n>> num_clog_buffers , EAStress had shown no improvements.. Plus the \n>> reason I think this is not the problem in 8.3beta1 since the Lock \n>> Output clearly does not show CLOGControlFile as to be the issue which \n>> I had seen in earlier case. So I dont think that increasing \n>> NUM_CLOG_BUFFERS will change thing here.\n>>\n>> Now I dont understand the code pretty well yet I see three hotspots \n>> and not sure if they are related to each other\n>> * ProcArrayLock waits - causing Waits as reported by \n>> 83_lockwait.d script\n>> * SimpleLRUReadPage - causing read IOs as reported by \n>> iostat/rsnoop.d\n>> * GetSnapshotData - causing CPU utilization as reported by hotuser\n>>\n>> But I will shut up and do more testing.\n>>\n>> Regards,\n>> Jignesh\n>>\n>>\n>>\n>> Tom Lane wrote:\n>>> Josh Berkus <[email protected]> writes:\n>>> \n>>>> Actually, 32 made a significant difference as I recall ... do you \n>>>> still have the figures for that, Jignesh?\n>>>> \n>>>\n>>> I'd want to see a new set of test runs backing up any call for a change\n>>> in NUM_CLOG_BUFFERS --- we've changed enough stuff around this area \n>>> that\n>>> benchmarks using code from a few months back shouldn't carry a lot of\n>>> weight.\n>>>\n>>> regards, tom lane\n>>> \n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Fri, 26 Oct 2007 12:46:14 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "\"Jignesh K. Shah\" <[email protected]> writes:\n> So the ratio of reads vs writes to clog files is pretty huge..\n\nIt looks to me that the issue is simply one of not having quite enough\nCLOG buffers. Your first run shows 8 different pages being fetched and\nthe second shows 10. Bearing in mind that we \"pin\" the latest CLOG page\ninto buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for\nolder pages, so what we've got here is thrashing for the available\nslots.\n\nTry increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Oct 2007 15:08:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris " }, { "msg_contents": "\nI changed CLOG Buffers to 16\n\nRunning the test again:\n# ./read.d\ndtrace: script './read.d' matched 2 probes\nCPU ID FUNCTION:NAME\n 0 1027 :tick-5sec\n\n /export/home0/igen/pgdata/pg_clog/0024 \n-2753028219296 1\n /export/home0/igen/pgdata/pg_clog/0025 \n-2753028211104 1\n# ./read.d\ndtrace: script './read.d' matched 2 probes\nCPU ID FUNCTION:NAME\n 1 1027 :tick-5sec\n\n# ./read.d\ndtrace: script './read.d' matched 2 probes\nCPU ID FUNCTION:NAME\n 1 1027 :tick-5sec\n\n# ./read.d\ndtrace: script './read.d' matched 2 probes\nCPU ID FUNCTION:NAME\n 0 1027 :tick-5sec\n\n /export/home0/igen/pgdata/pg_clog/0025 \n-2753028194720 1\n\n\nSo Tom seems to be correct that it is a case of CLOG Buffer thrashing. \nBut since I saw the same problem with two different workloads, I think \npeople hitting this problem is pretty high.\n\nAlso I am bit surprised that CLogControlFile did not show up as being \nhot.. Maybe because not much writes are going on .. 
Or maybe since I did \nnot trace all 500 users to see their hot lock status..\n\n\nDmitri has another workload to test, I might try that out later on to \nsee if it causes similar impact or not.\n\nOf course I havent seen my throughput go up yet since I am already CPU \nbound... But this is good since the number of IOPS to the disk are \nreduced (and hence system calls).\n\n\nIf I take this as my baseline number.. I can then proceed to hunt other \nbottlenecks????\n\n\nWhats the view of the community?\n\nHunt down CPU utilizations or Lock waits next?\n\nYour votes are crucial on where I put my focus.\n\nAnother thing Josh B told me to check out was the wal_writer_delay setting:\n\nI have done two settings with almost equal performance (with the CLOG 16 \nsetting) .. One with 100ms and other default at 200ms.. Based on the \nruns it seemed that the 100ms was slightly better than the default .. \n(Plus the risk of loosing data is reduced from 600ms to 300ms)\n\nThanks.\n\nRegards,\nJignesh\n\n\n\n\nTom Lane wrote:\n> \"Jignesh K. Shah\" <[email protected]> writes:\n> \n>> So the ratio of reads vs writes to clog files is pretty huge..\n>> \n>\n> It looks to me that the issue is simply one of not having quite enough\n> CLOG buffers. Your first run shows 8 different pages being fetched and\n> the second shows 10. Bearing in mind that we \"pin\" the latest CLOG page\n> into buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for\n> older pages, so what we've got here is thrashing for the available\n> slots.\n>\n> Try increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Fri, 26 Oct 2007 17:45:19 -0400", "msg_from": "\"Jignesh K. Shah\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" }, { "msg_contents": "\nThis has been saved for the 8.4 release:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches_hold\n\n---------------------------------------------------------------------------\n\nJignesh K. Shah wrote:\n> \n> I changed CLOG Buffers to 16\n> \n> Running the test again:\n> # ./read.d\n> dtrace: script './read.d' matched 2 probes\n> CPU ID FUNCTION:NAME\n> 0 1027 :tick-5sec\n> \n> /export/home0/igen/pgdata/pg_clog/0024 \n> -2753028219296 1\n> /export/home0/igen/pgdata/pg_clog/0025 \n> -2753028211104 1\n> # ./read.d\n> dtrace: script './read.d' matched 2 probes\n> CPU ID FUNCTION:NAME\n> 1 1027 :tick-5sec\n> \n> # ./read.d\n> dtrace: script './read.d' matched 2 probes\n> CPU ID FUNCTION:NAME\n> 1 1027 :tick-5sec\n> \n> # ./read.d\n> dtrace: script './read.d' matched 2 probes\n> CPU ID FUNCTION:NAME\n> 0 1027 :tick-5sec\n> \n> /export/home0/igen/pgdata/pg_clog/0025 \n> -2753028194720 1\n> \n> \n> So Tom seems to be correct that it is a case of CLOG Buffer thrashing. \n> But since I saw the same problem with two different workloads, I think \n> people hitting this problem is pretty high.\n> \n> Also I am bit surprised that CLogControlFile did not show up as being \n> hot.. Maybe because not much writes are going on .. Or maybe since I did \n> not trace all 500 users to see their hot lock status..\n> \n> \n> Dmitri has another workload to test, I might try that out later on to \n> see if it causes similar impact or not.\n> \n> Of course I havent seen my throughput go up yet since I am already CPU \n> bound... 
But this is good since the number of IOPS to the disk are \n> reduced (and hence system calls).\n> \n> \n> If I take this as my baseline number.. I can then proceed to hunt other \n> bottlenecks????\n> \n> \n> Whats the view of the community?\n> \n> Hunt down CPU utilizations or Lock waits next?\n> \n> Your votes are crucial on where I put my focus.\n> \n> Another thing Josh B told me to check out was the wal_writer_delay setting:\n> \n> I have done two settings with almost equal performance (with the CLOG 16 \n> setting) .. One with 100ms and other default at 200ms.. Based on the \n> runs it seemed that the 100ms was slightly better than the default .. \n> (Plus the risk of loosing data is reduced from 600ms to 300ms)\n> \n> Thanks.\n> \n> Regards,\n> Jignesh\n> \n> \n> \n> \n> Tom Lane wrote:\n> > \"Jignesh K. Shah\" <[email protected]> writes:\n> > \n> >> So the ratio of reads vs writes to clog files is pretty huge..\n> >> \n> >\n> > It looks to me that the issue is simply one of not having quite enough\n> > CLOG buffers. Your first run shows 8 different pages being fetched and\n> > the second shows 10. Bearing in mind that we \"pin\" the latest CLOG page\n> > into buffers, there are only NUM_CLOG_BUFFERS-1 buffers available for\n> > older pages, so what we've got here is thrashing for the available\n> > slots.\n> >\n> > Try increasing NUM_CLOG_BUFFERS to 16 and see how it affects this test.\n> >\n> > \t\t\tregards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Thu, 15 Nov 2007 15:49:27 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] 8.3beta1 testing on Solaris" } ]
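A note for readers following the two knobs discussed in this thread. NUM_CLOG_BUFFERS is a compile-time constant in the server source, not a GUC, so repeating Jignesh's 8-to-16 experiment on 8.3 means rebuilding the server; there is nothing to change for it from SQL or postgresql.conf. The wal_writer_delay trade-off he mentions, on the other hand, is ordinary configuration: with asynchronous commit, the window of transactions that can be lost on a crash (though never corrupted) is documented as roughly three times wal_writer_delay, which is where the 600 ms versus 300 ms figures above come from. A minimal sketch of how to inspect and exercise those settings from psql on 8.3 follows; the 100 ms value is simply the one from the experiment above, not a general recommendation.

-- Asynchronous-commit settings discussed above (PostgreSQL 8.3)
SHOW synchronous_commit;      -- "off" means commits do not wait for the WAL flush
SHOW wal_writer_delay;        -- WAL writer wake-up interval, default 200ms

-- synchronous_commit can be changed per session or per transaction:
SET synchronous_commit = off;

-- wal_writer_delay is server-wide; set it in postgresql.conf and reload:
--   wal_writer_delay = 100ms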
[ { "msg_contents": "PostgreSql version 8.2.4\n\nMemory = 8 Gig\n\nCPUs 1 dual core Zeon running at 3.0\n\n \n\nI have a problem with an update query taking over 10 hours in order to\nrun. I rebooted my server. I ran the SQL command \"analyze\". Could\nyou please help me with any suggestions? I have included the two tables\ninvolved in the update below as well as the indexes I am using. \n\n \n\nThe table result_entry contains 17,767,240 rows and the table\nquestion_number contains 40,787. Each row from the result_entry table\nwill match to one and only one row in the table question_number using\nthe fk_question_id field. Each row from the question_number table\nmatches to an average of 436 rows on the result_entry table.\n\n \n\nCREATE TABLE question_number\n\n(\n\n fk_form_id integer not null,\n\n fk_question_id integer not null,\n\n question_number integer not null,\n\n sequence_id integer not null\n\n);\n\n \n\nALTER TABLE ONLY question_number ADD CONSTRAINT question_number_pkey\nPRIMARY KEY (fk_question_id);\n\nCREATE INDEX question_number_index1 ON question_number USING btree\n(question_number);\n\n \n\n \n\nCREATE TABLE result_entry (\n\n fk_result_submission_id integer NOT NULL,\n\n fk_question_id integer NOT NULL,\n\n fk_option_order_id integer NOT NULL, \n\n value character varying,\n\n order_id integer NOT NULL,\n\n question_number integer\n\n);\n\n \n\nCREATE INDEX result_entery_index1 ON result_entry USING btree\n(fk_question_id);\n\n \n\n \n\nupdate result_entry set question_number=question_number.question_number\n\n\nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n\n \n\n \n\n \n\nexplain update result_entry set\nquestion_number=question_number.question_number \n\nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n\n \n\n QUERY PLAN\n\n\n------------------------------------------------------------------------\n---------\n\n Hash Join (cost=1437.71..1046983.94 rows=17333178 width=32)\n\n Hash Cond: (result_entry.fk_question_id =\nquestion_number.fk_question_id)\n\n -> Seq Scan on result_entry (cost=0.00..612216.78 rows=17333178\nwidth=28)\n\n -> Hash (cost=927.87..927.87 rows=40787 width=8)\n\n -> Seq Scan on question_number (cost=0.00..927.87 rows=40787\nwidth=8)\n\n(5 rows)\n\n \n\n \n\n \n\nPostgresql.conf settings:\n\n \n\nshared_buffers = 1GB\n\nwork_mem = 10MB\n\nmax_fsm_pages = 204800\n\nrandom_page_cost = 1.0\n\neffective_cache_size = 8GB\n\n \n\n \n\nThanks for any help!\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSql version 8.2.4\nMemory = 8 Gig\nCPUs 1 dual core Zeon running at 3.0\n \nI have a problem with an update query taking over 10 hours\nin order to run.   I rebooted my server.  I ran the SQL command “analyze”. \nCould you please help me with any suggestions?  I have included the two\ntables involved in the update below as well as the indexes I am using.  \n \nThe table result_entry contains 17,767,240 rows and the\ntable question_number contains 40,787.  Each row from the result_entry\ntable will match to one and only one row in the table question_number using the\nfk_question_id field.  
Each row from the question_number table matches to\nan average of 436 rows on the result_entry table.\n \nCREATE TABLE question_number\n(\n \nfk_form_id                   \ninteger         not null,\n \nfk_question_id               \ninteger         not null,\n  question_number              \ninteger not null,\n \nsequence_id                  \ninteger not null\n);\n \nALTER TABLE ONLY question_number ADD CONSTRAINT question_number_pkey\nPRIMARY KEY (fk_question_id);\nCREATE INDEX question_number_index1 ON question_number USING\nbtree (question_number);\n \n \nCREATE TABLE result_entry (\n    fk_result_submission_id integer NOT NULL,\n    fk_question_id integer NOT NULL,\n    fk_option_order_id integer NOT\nNULL,      \n    value character varying,\n    order_id integer NOT NULL,\n    question_number integer\n);\n \nCREATE INDEX result_entery_index1 ON result_entry USING\nbtree (fk_question_id);\n \n \nupdate result_entry set question_number=question_number.question_number      \nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n \n \n \nexplain update result_entry set question_number=question_number.question_number      \nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n \n                                  \nQUERY PLAN                                   \n\n---------------------------------------------------------------------------------\n Hash Join  (cost=1437.71..1046983.94\nrows=17333178 width=32)\n   Hash Cond: (result_entry.fk_question_id =\nquestion_number.fk_question_id)\n   ->  Seq Scan on result_entry \n(cost=0.00..612216.78 rows=17333178 width=28)\n   ->  Hash  (cost=927.87..927.87\nrows=40787 width=8)\n         -> \nSeq Scan on question_number  (cost=0.00..927.87 rows=40787 width=8)\n(5 rows)\n \n \n \nPostgresql.conf settings:\n \nshared_buffers = 1GB\nwork_mem = 10MB\nmax_fsm_pages = 204800\nrandom_page_cost = 1.0\neffective_cache_size = 8GB\n \n \nThanks for any help!\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Fri, 26 Oct 2007 15:26:46 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Suggestions on an update query" }, { "msg_contents": "I forgot to include an additional parameter I am using in\nPostgresql.conf: \n\n \n\ncheckpoint_segments = 30\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n________________________________\n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Campbell,\nLance\nSent: Friday, October 26, 2007 3:27 PM\nTo: [email protected]\nSubject: [PERFORM] Suggestions on an update query\n\n \n\nPostgreSql version 8.2.4\n\nMemory = 8 Gig\n\nCPUs 1 dual core Zeon running at 3.0\n\n \n\nI have a problem with an update query taking over 10 hours in order to\nrun. I rebooted my server. I ran the SQL command \"analyze\". Could\nyou please help me with any suggestions? I have included the two tables\ninvolved in the update below as well as the indexes I am using. \n\n \n\nThe table result_entry contains 17,767,240 rows and the table\nquestion_number contains 40,787. Each row from the result_entry table\nwill match to one and only one row in the table question_number using\nthe fk_question_id field. 
Each row from the question_number table\nmatches to an average of 436 rows on the result_entry table.\n\n \n\nCREATE TABLE question_number\n\n(\n\n fk_form_id integer not null,\n\n fk_question_id integer not null,\n\n question_number integer not null,\n\n sequence_id integer not null\n\n);\n\n \n\nALTER TABLE ONLY question_number ADD CONSTRAINT question_number_pkey\nPRIMARY KEY (fk_question_id);\n\nCREATE INDEX question_number_index1 ON question_number USING btree\n(question_number);\n\n \n\n \n\nCREATE TABLE result_entry (\n\n fk_result_submission_id integer NOT NULL,\n\n fk_question_id integer NOT NULL,\n\n fk_option_order_id integer NOT NULL, \n\n value character varying,\n\n order_id integer NOT NULL,\n\n question_number integer\n\n);\n\n \n\nCREATE INDEX result_entery_index1 ON result_entry USING btree\n(fk_question_id);\n\n \n\n \n\nupdate result_entry set question_number=question_number.question_number\n\n\nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n\n \n\n \n\n \n\nexplain update result_entry set\nquestion_number=question_number.question_number \n\nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n\n \n\n QUERY PLAN\n\n\n------------------------------------------------------------------------\n---------\n\n Hash Join (cost=1437.71..1046983.94 rows=17333178 width=32)\n\n Hash Cond: (result_entry.fk_question_id =\nquestion_number.fk_question_id)\n\n -> Seq Scan on result_entry (cost=0.00..612216.78 rows=17333178\nwidth=28)\n\n -> Hash (cost=927.87..927.87 rows=40787 width=8)\n\n -> Seq Scan on question_number (cost=0.00..927.87 rows=40787\nwidth=8)\n\n(5 rows)\n\n \n\n \n\n \n\nPostgresql.conf settings:\n\n \n\nshared_buffers = 1GB\n\nwork_mem = 10MB\n\nmax_fsm_pages = 204800\n\nrandom_page_cost = 1.0\n\neffective_cache_size = 8GB\n\n \n\n \n\nThanks for any help!\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nI forgot to include an additional\nparameter I am using in Postgresql.conf: \n \ncheckpoint_segments = 30\n \nThanks,\n \n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\nFrom:\[email protected]\n[mailto:[email protected]] On Behalf Of Campbell, Lance\nSent: Friday, October 26, 2007\n3:27 PM\nTo:\[email protected]\nSubject: [PERFORM] Suggestions on\nan update query\n\n \nPostgreSql version 8.2.4\nMemory = 8 Gig\nCPUs 1 dual core Zeon running at 3.0\n \nI have a problem with an update query taking over 10 hours\nin order to run.   I rebooted my server.  I ran the SQL command\n“analyze”.  Could you please help me with any\nsuggestions?  I have included the two tables involved in the update below\nas well as the indexes I am using.  \n \nThe table result_entry contains 17,767,240 rows and the\ntable question_number contains 40,787.  Each row from the result_entry\ntable will match to one and only one row in the table question_number using the\nfk_question_id field.  
Each row from the question_number table matches to\nan average of 436 rows on the result_entry table.\n \nCREATE TABLE question_number\n(\n  fk_form_id                   \ninteger         not null,\n \nfk_question_id               \ninteger         not null,\n \nquestion_number              \ninteger not null,\n \nsequence_id                  \ninteger not null\n);\n \nALTER TABLE ONLY question_number ADD CONSTRAINT\nquestion_number_pkey PRIMARY KEY (fk_question_id);\nCREATE INDEX question_number_index1 ON question_number USING\nbtree (question_number);\n \n \nCREATE TABLE result_entry (\n    fk_result_submission_id integer NOT NULL,\n    fk_question_id integer NOT NULL,\n    fk_option_order_id integer NOT\nNULL,      \n    value character varying,\n    order_id integer NOT NULL,\n    question_number integer\n);\n \nCREATE INDEX result_entery_index1 ON result_entry USING\nbtree (fk_question_id);\n \n \nupdate result_entry set\nquestion_number=question_number.question_number      \nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n \n \n \nexplain update result_entry set\nquestion_number=question_number.question_number      \nfrom question_number where\nresult_entry.fk_question_id=question_number.fk_question_id;\n \n                                  \nQUERY PLAN                                   \n\n---------------------------------------------------------------------------------\n Hash Join  (cost=1437.71..1046983.94\nrows=17333178 width=32)\n   Hash Cond: (result_entry.fk_question_id =\nquestion_number.fk_question_id)\n   ->  Seq Scan on result_entry \n(cost=0.00..612216.78 rows=17333178 width=28)\n   ->  Hash  (cost=927.87..927.87\nrows=40787 width=8)\n         -> \nSeq Scan on question_number  (cost=0.00..927.87 rows=40787 width=8)\n(5 rows)\n \n \n \nPostgresql.conf settings:\n \nshared_buffers = 1GB\nwork_mem = 10MB\nmax_fsm_pages = 204800\nrandom_page_cost = 1.0\neffective_cache_size = 8GB\n \n \nThanks for any help!\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Fri, 26 Oct 2007 15:31:44 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "\"Campbell, Lance\" <[email protected]> writes:\n\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------------------\n>\n> Hash Join (cost=1437.71..1046983.94 rows=17333178 width=32)\n> Hash Cond: (result_entry.fk_question_id = question_number.fk_question_id)\n> -> Seq Scan on result_entry (cost=0.00..612216.78 rows=17333178 width=28)\n> -> Hash (cost=927.87..927.87 rows=40787 width=8)\n> -> Seq Scan on question_number (cost=0.00..927.87 rows=40787 width=8)\n>\n> (5 rows)\n\nThat looks like an entirely reasonable plan. Is it possible some other session\nwas blocking this update with a lock on a record? Was there lots of I/O at the\ntime? 
You could peek in pg_locks while the update seems frozen.\n\nThis looks like a one-time administrative job to add a new column, is that it?\nYou might also consider creating a new table with the new data and replacing\nthe old table with the new one with something like:\n\nCREATE TABLE new_result_entry AS \n SELECT fk_result_submission_id, fk_question_id, fk_option_order_id, \n value, order_id, \n question_number.question_number \n FROM result_entry\n JOIN question_number USING (fk_question_id)\n\nCREATE INDEX result_entery_index1n ON new_result_entry USING btree (fk_question_id);\n\nALTER TABLE result_entry RENAME TO old_result_entry\nALTER TABLE newresult_entry RENAME TO result_entry\n\nUnfortunately (for this use case) any views, triggers, etc which reference the\nold table will continue to reference the old table after the renames. You'll\nhave to drop and recreate them.\n\nThat may not be an option if the data is actively being used though. But if it\nis an option there are a few advantages 1) it'll be a bit faster 2) you can\nbuild the indexes on the new data at the end of the creation b) the resulting\ntable and indexes won't have all the old versions taking up space waiting for\na vacuum.\n\n\n> Postgresql.conf settings:\n> shared_buffers = 1GB\n> work_mem = 10MB\n> max_fsm_pages = 204800\n> random_page_cost = 1.0\n> effective_cache_size = 8GB\n\nI would suggest keeping random_page_cost at least slightly above 1.0 and\neffective_cache_size should probably be about 6GB rather than 8 since the\nshared buffers and other things which use memory reduce the memory available\nfor cache. Also, work_mem could be larger at least for large batch queries\nlike this.\n\nNone of this is relevant for this query though. Actually I think a larger\nwork_mem can avoid problems with hash joins so you might try that but I don't\nthink it would be choosing it estimated that might happen -- and the estimates\nall look accurate.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 26 Oct 2007 22:09:20 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "On Fri, 26 Oct 2007 15:31:44 -0500\n\"Campbell, Lance\" <[email protected]> wrote:\n\n> I forgot to include an additional parameter I am using in\n> Postgresql.conf: \n> \n\nO.k. first, just to get it out of the way (and then I will try and\nhelp). Please do not top post, it makes replying contextually very\ndifficult.\n> \n> PostgreSql version 8.2.4\n> \n> Memory = 8 Gig\n> \n> CPUs 1 dual core Zeon running at 3.0\n> \n\nO.k. first you might be grinding through your 20 checkpoint segments\nbut in reality what I think is happening is you are doing foreign key\nchecks against all of it and slowing things down.\n\n\n> \n> The table result_entry contains 17,767,240 rows and the table\n> question_number contains 40,787. Each row from the result_entry table\n> will match to one and only one row in the table question_number using\n> the fk_question_id field. Each row from the question_number table\n> matches to an average of 436 rows on the result_entry table.\n> \n> \n\n\nYou could disable the foreign key for the update and then reapply it.\n\nJoshua D. 
Drake\n\n\n> \n> CREATE TABLE question_number\n> \n> (\n> \n> fk_form_id integer not null,\n> \n> fk_question_id integer not null,\n> \n> question_number integer not null,\n> \n> sequence_id integer not null\n> \n> );\n> \n> \n> \n> ALTER TABLE ONLY question_number ADD CONSTRAINT question_number_pkey\n> PRIMARY KEY (fk_question_id);\n> \n> CREATE INDEX question_number_index1 ON question_number USING btree\n> (question_number);\n> \n> \n> \n> \n> \n> CREATE TABLE result_entry (\n> \n> fk_result_submission_id integer NOT NULL,\n> \n> fk_question_id integer NOT NULL,\n> \n> fk_option_order_id integer NOT NULL, \n> \n> value character varying,\n> \n> order_id integer NOT NULL,\n> \n> question_number integer\n> \n> );\n> \n> \n> \n> CREATE INDEX result_entery_index1 ON result_entry USING btree\n> (fk_question_id);\n> \n> \n> \n> \n> \n> update result_entry set\n> question_number=question_number.question_number\n> \n> \n> from question_number where\n> result_entry.fk_question_id=question_number.fk_question_id;\n> \n> \n> \n> \n> \n> \n> \n> explain update result_entry set\n> question_number=question_number.question_number \n> \n> from question_number where\n> result_entry.fk_question_id=question_number.fk_question_id;\n> \n> \n> \n> QUERY PLAN\n> \n> \n> ------------------------------------------------------------------------\n> ---------\n> \n> Hash Join (cost=1437.71..1046983.94 rows=17333178 width=32)\n> \n> Hash Cond: (result_entry.fk_question_id =\n> question_number.fk_question_id)\n> \n> -> Seq Scan on result_entry (cost=0.00..612216.78 rows=17333178\n> width=28)\n> \n> -> Hash (cost=927.87..927.87 rows=40787 width=8)\n> \n> -> Seq Scan on question_number (cost=0.00..927.87\n> rows=40787 width=8)\n> \n> (5 rows)\n> \n> \n> \n> \n> \n> \n> \n> Postgresql.conf settings:\n> \n> \n> \n> shared_buffers = 1GB\n> \n> work_mem = 10MB\n> \n> max_fsm_pages = 204800\n> \n> random_page_cost = 1.0\n> \n> effective_cache_size = 8GB\n> \n> \n> \n> \n> \n> Thanks for any help!\n> \n> \n> \n> \n> \n> Lance Campbell\n> \n> Project Manager/Software Architect\n> \n> Web Services at Public Affairs\n> \n> University of Illinois\n> \n> 217.333.0382\n> \n> http://webservices.uiuc.edu\n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/", "msg_date": "Fri, 26 Oct 2007 14:15:47 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "\n\"Joshua D. Drake\" <[email protected]> writes:\n\n> On Fri, 26 Oct 2007 15:31:44 -0500\n> \"Campbell, Lance\" <[email protected]> wrote:\n>\n>> I forgot to include an additional parameter I am using in\n>> Postgresql.conf: \n>> \n>\n> O.k. first, just to get it out of the way (and then I will try and\n> help). Please do not top post, it makes replying contextually very\n> difficult.\n>\n>> PostgreSql version 8.2.4\n>> \n>> Memory = 8 Gig\n>> \n>> CPUs 1 dual core Zeon running at 3.0\n>> \n>\n> O.k. 
first you might be grinding through your 20 checkpoint segments\n> but in reality what I think is happening is you are doing foreign key\n> checks against all of it and slowing things down.\n\nIf you're going to berate someone about top-posting perhaps you should attach\nyour own commentary to relevant bits of context :P\n\nBut the original post didn't include any foreign key constraints. I suspect\nyou've guessed it right though. In fact I suspect what's happening is he\ndoesn't have an index on the referencing column so the foreign key checks are\ndoing sequential scans of.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 27 Oct 2007 03:04:47 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "On Sat, 27 Oct 2007 03:04:47 +0100\nGregory Stark <[email protected]> wrote:\n\n\n> > O.k. first you might be grinding through your 20 checkpoint segments\n> > but in reality what I think is happening is you are doing foreign\n> > key checks against all of it and slowing things down.\n> \n> If you're going to berate someone about top-posting perhaps you\n> should attach your own commentary to relevant bits of context :P\n\nIt was hardly berating Greg, I even said please.\n\n\n> I\n> suspect you've guessed it right though. In fact I suspect what's\n> happening is he doesn't have an index on the referencing column so\n> the foreign key checks are doing sequential scans of.\n> \n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/", "msg_date": "Fri, 26 Oct 2007 19:31:11 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "Thanks for all of your help. The problem was that the result_entry table\nhad some constraints that pointed to a third table. When I removed\nthose constraints the performance was amazing. The update took less\nthan seven minutes to execute. I did not even consider the fact that\nconstraints to another table would impact the performance.\n\nThanks again,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Gregory\nStark\nSent: Friday, October 26, 2007 9:05 PM\nTo: Joshua D. Drake\nCc: Campbell, Lance; [email protected]\nSubject: Re: [PERFORM] Suggestions on an update query\n\n\n\"Joshua D. Drake\" <[email protected]> writes:\n\n> On Fri, 26 Oct 2007 15:31:44 -0500\n> \"Campbell, Lance\" <[email protected]> wrote:\n>\n>> I forgot to include an additional parameter I am using in\n>> Postgresql.conf: \n>> \n>\n> O.k. first, just to get it out of the way (and then I will try and\n> help). Please do not top post, it makes replying contextually very\n> difficult.\n>\n>> PostgreSql version 8.2.4\n>> \n>> Memory = 8 Gig\n>> \n>> CPUs 1 dual core Zeon running at 3.0\n>> \n>\n> O.k. 
first you might be grinding through your 20 checkpoint segments\n> but in reality what I think is happening is you are doing foreign key\n> checks against all of it and slowing things down.\n\nIf you're going to berate someone about top-posting perhaps you should\nattach\nyour own commentary to relevant bits of context :P\n\nBut the original post didn't include any foreign key constraints. I\nsuspect\nyou've guessed it right though. In fact I suspect what's happening is he\ndoesn't have an index on the referencing column so the foreign key\nchecks are\ndoing sequential scans of.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n", "msg_date": "Mon, 29 Oct 2007 11:33:57 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Suggestions on an update query" }, { "msg_contents": "On 10/29/07, Campbell, Lance <[email protected]> wrote:\n> Thanks for all of your help. The problem was that the result_entry table\n> had some constraints that pointed to a third table. When I removed\n> those constraints the performance was amazing. The update took less\n> than seven minutes to execute. I did not even consider the fact that\n> constraints to another table would impact the performance.\n\nUsually you can put an index on the refrerenced key in the foreign\ntable to speed things up.\n", "msg_date": "Tue, 30 Oct 2007 00:19:42 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Suggestions on an update query" } ]
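For anyone who hits the same pattern, the fix Lance describes, dropping the foreign-key constraints for the bulk UPDATE and re-adding them afterwards, looks roughly like the sketch below. The constraint name and the referenced "third table" are hypothetical, because the original post never showed them; only the UPDATE statement itself is taken from the thread, and the DROP/ADD pair would be repeated for each such constraint.

BEGIN;

-- Hypothetical constraint: the real FK points at a third table that was
-- not shown in the thread.
ALTER TABLE result_entry
    DROP CONSTRAINT result_entry_fk_option_order_id_fkey;

-- The bulk update from the original post.
UPDATE result_entry
SET question_number = question_number.question_number
FROM question_number
WHERE result_entry.fk_question_id = question_number.fk_question_id;

-- Re-adding the constraint revalidates every existing row in one pass.
ALTER TABLE result_entry
    ADD CONSTRAINT result_entry_fk_option_order_id_fkey
    FOREIGN KEY (fk_option_order_id)
    REFERENCES option_order (id);    -- hypothetical referenced table/column

COMMIT;

Doing it in a single transaction means the ALTER TABLE holds its exclusive lock until COMMIT, so no other session can insert violating rows while the constraint is absent; the cost is that the table is unavailable to other sessions for the duration. Scott's closing advice is also worth keeping as a general rule: an index on the referencing column (here fk_option_order_id) is what keeps later updates and deletes on the referenced table from sequentially scanning the 17.7-million-row result_entry.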
[ { "msg_contents": "Hi List!\n\nI executed 2 equivalents queries. The first one uses a union structure. \nThe second uses a partitioned table. The tables are the same with 30 \nmillions of rows each one and the returned rows are the same.\n\nBut the union query perform faster than the partitioned query.\n\nMy question is: why? :)\n\n[pabloa@igor testeo]$ cat query-union.sql\nselect e, p, sum( c) as c\nfrom (\n select e, p, count( *) as c\n from tt_00003\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00006\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00009\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00012\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00015\n group by e, p\n) as t\ngroup by e, p\norder by e, p desc;\n\n\n\n[pabloa@igor testeo]$ cat query-heritage.sql\nselect e, p, count( *) as c\nfrom tt\ngroup by e, p\norder by e, p desc;\n\n\nThe server is a Athlon 64x2 6000+ 2 Gb RAM PostreSQL 8.2.5\n\nThe structure tables are:\n\nCREATE TABLE tt_00003\n(\n-- Inherited: idtt bigint NOT NULL,\n-- Inherited: idttp bigint NOT NULL,\n-- Inherited: e integer NOT NULL,\n-- Inherited: dmodi timestamp without time zone NOT NULL DEFAULT now(),\n-- Inherited: p integer NOT NULL DEFAULT 0,\n-- Inherited: m text NOT NULL,\n CONSTRAINT tt_00003_pkey PRIMARY KEY (idtt),\n CONSTRAINT tt_00003_idtt_check CHECK (idtt >= 1::bigint AND idtt <= \n30000000::bigint)\n) INHERITS (tt)\nWITHOUT OIDS;\nALTER TABLE tt_00003 ;\n\nCREATE INDEX tt_00003_e\n ON tt_00003\n USING btree\n (e);\n\n\n\n\n\n", "msg_date": "Fri, 26 Oct 2007 16:37:40 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Speed difference between select ... union select ... and select from\n\tpartitioned_table" }, { "msg_contents": "On Fri, 2007-10-26 at 16:37 -0400, Pablo Alcaraz wrote:\n> Hi List!\n> \n> I executed 2 equivalents queries. The first one uses a union structure. \n> The second uses a partitioned table. The tables are the same with 30 \n> millions of rows each one and the returned rows are the same.\n> \n> But the union query perform faster than the partitioned query.\n> \n\nI think you mean to use UNION ALL here. UNION forces a DISTINCT, which\nresults in a sort operation. What surprises me is that the UNION is\nactually faster than the partitioning using inheritance.\n\nI suspect it has something to do with the GROUP BYs, but we won't know\nuntil you post EXPLAIN ANALYZE results.\n\n Regards,\n\tJeff Davis\n\n", "msg_date": "Fri, 26 Oct 2007 14:07:01 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed difference between select ... union select ...\n\tand select from partitioned_table" }, { "msg_contents": "I forgot to post the times:\n\nquery-union: 21:59\nquery-heritage: 1:31:24\n\nRegards\n\nPablo\n\nPablo Alcaraz wrote:\n> Hi List!\n>\n> I executed 2 equivalents queries. The first one uses a union \n> structure. The second uses a partitioned table. The tables are the \n> same with 30 millions of rows each one and the returned rows are the \n> same.\n>\n> But the union query perform faster than the partitioned query.\n>\n> My question is: why? 
:)\n>\n> [pabloa@igor testeo]$ cat query-union.sql\n> select e, p, sum( c) as c\n> from (\n> select e, p, count( *) as c\n> from tt_00003\n> group by e, p\n> union\n> select e, p, count( *) as c\n> from tt_00006\n> group by e, p\n> union\n> select e, p, count( *) as c\n> from tt_00009\n> group by e, p\n> union\n> select e, p, count( *) as c\n> from tt_00012\n> group by e, p\n> union\n> select e, p, count( *) as c\n> from tt_00015\n> group by e, p\n> ) as t\n> group by e, p\n> order by e, p desc;\n>\n>\n>\n> [pabloa@igor testeo]$ cat query-heritage.sql\n> select e, p, count( *) as c\n> from tt\n> group by e, p\n> order by e, p desc;\n>\n>\n> The server is a Athlon 64x2 6000+ 2 Gb RAM PostreSQL 8.2.5\n>\n> The structure tables are:\n>\n> CREATE TABLE tt_00003\n> (\n> -- Inherited: idtt bigint NOT NULL,\n> -- Inherited: idttp bigint NOT NULL,\n> -- Inherited: e integer NOT NULL,\n> -- Inherited: dmodi timestamp without time zone NOT NULL DEFAULT now(),\n> -- Inherited: p integer NOT NULL DEFAULT 0,\n> -- Inherited: m text NOT NULL,\n> CONSTRAINT tt_00003_pkey PRIMARY KEY (idtt),\n> CONSTRAINT tt_00003_idtt_check CHECK (idtt >= 1::bigint AND idtt <= \n> 30000000::bigint)\n> ) INHERITS (tt)\n> WITHOUT OIDS;\n> ALTER TABLE tt_00003 ;\n>\n> CREATE INDEX tt_00003_e\n> ON tt_00003\n> USING btree\n> (e);\n>\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n", "msg_date": "Fri, 26 Oct 2007 17:10:15 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed difference between select ... union select ...\n\tand select from partitioned_table" }, { "msg_contents": "\"Pablo Alcaraz\" <[email protected]> writes:\n\n> Hi List!\n>\n> I executed 2 equivalents queries. The first one uses a union structure. The\n> second uses a partitioned table. The tables are the same with 30 millions of\n> rows each one and the returned rows are the same.\n>\n> But the union query perform faster than the partitioned query.\n>\n> My question is: why? :)\n>\n> [pabloa@igor testeo]$ cat query-union.sql\n> select e, p, sum( c) as c\n> from (\n> select e, p, count( *) as c\n> from tt_00003\n> group by e, p\n> union\n> select e, p, count( *) as c\n> from tt_00006\n> group by e, p\n> union\n...\n\n\nYou should send along the \"explain analyze\" results for both queries,\notherwise we're just guessing.\n\nAlso, you should consider using UNION ALL instead of plain UNION.\n\nFinally you should consider removing all the intermediate GROUP BYs and just\ngroup the entire result. In theory it should be faster but in practice I'm not\nsure it works out that way.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 26 Oct 2007 22:12:37 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed difference between select ... union select ... 
and select\n\tfrom partitioned_table" }, { "msg_contents": "These are the EXPLAIN ANALIZE:\n\nI ran both queries on a CLUSTER and ANALYZEd tables:\n\nUNION QUERY\nexplain analyze\nselect e, p, sum( c) as c\nfrom (\n select e, p, count( *) as c\n from tt_00003\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00006\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00009\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00012\n group by e, p\n union\n select e, p, count( *) as c\n from tt_00015\n group by e, p\n) as t\ngroup by e, p\norder by e, p desc;\n\n\"Sort (cost=2549202.87..2549203.37 rows=200 width=16) (actual \ntime=263593.182..263593.429 rows=207 loops=1)\"\n\" Sort Key: t.e, t.p\"\n\" -> HashAggregate (cost=2549192.73..2549195.23 rows=200 width=16) \n(actual time=263592.469..263592.763 rows=207 loops=1)\"\n\" -> Unique (cost=2549172.54..2549179.88 rows=734 width=8) \n(actual time=263590.481..263591.764 rows=356 loops=1)\"\n\" -> Sort (cost=2549172.54..2549174.38 rows=734 width=8) \n(actual time=263590.479..263590.891 rows=356 loops=1)\"\n\" Sort Key: e, p, c\"\n\" -> Append (cost=1307131.88..2549137.60 rows=734 \nwidth=8) (actual time=132862.176..263589.774 rows=356 loops=1)\"\n\" -> HashAggregate \n(cost=1307131.88..1307133.03 rows=92 width=8) (actual \ntime=132862.173..132862.483 rows=200 loops=1)\"\n\" -> Seq Scan on tt_00003 \n(cost=0.00..1081550.36 rows=30077536 width=8) (actual \ntime=10.135..83957.424 rows=30000000 loops=1)\"\n\" -> HashAggregate \n(cost=1241915.64..1241916.16 rows=42 width=8) (actual \ntime=130726.219..130726.457 rows=156 loops=1)\"\n\" -> Seq Scan on tt_00006 \n(cost=0.00..1028793.22 rows=28416322 width=8) (actual \ntime=11.389..87338.730 rows=28351293 loops=1)\"\n\" -> HashAggregate (cost=24.53..27.03 \nrows=200 width=8) (actual time=0.005..0.005 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00009 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0 \nloops=1)\"\n\" -> HashAggregate (cost=24.53..27.03 \nrows=200 width=8) (actual time=0.004..0.004 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00012 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0 \nloops=1)\"\n\" -> HashAggregate (cost=24.53..27.03 \nrows=200 width=8) (actual time=0.005..0.005 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00015 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.001..0.001 rows=0 \nloops=1)\"\n\"Total runtime: 263594.381 ms\"\n\n\nPARTITIONED QUERY\n\nexplain analyze\nselect e, p, count( *) as c\nfrom tt\ngroup by e, p\norder by e, p desc;\n\n\"GroupAggregate (cost=13256958.67..13842471.95 rows=40000 width=8) \n(actual time=899391.384..1065585.531 rows=207 loops=1)\"\n\" -> Sort (cost=13256958.67..13403211.99 rows=58501328 width=8) \n(actual time=899391.364..989749.914 rows=58351293 loops=1)\"\n\" Sort Key: public.tt.e, public.tt.p\"\n\" -> Append (cost=0.00..2110508.28 rows=58501328 width=8) \n(actual time=14.031..485211.466 rows=58351293 loops=1)\"\n\" -> Seq Scan on tt (cost=0.00..18.30 rows=830 width=8) \n(actual time=0.002..0.002 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00003 tt (cost=0.00..1081550.36 \nrows=30077536 width=8) (actual time=14.024..178657.738 rows=30000000 \nloops=1)\"\n\" -> Seq Scan on tt_00006 tt (cost=0.00..1028793.22 \nrows=28416322 width=8) (actual time=39.852..168307.030 rows=28351293 \nloops=1)\"\n\" -> Seq Scan on tt_00009 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00012 tt (cost=0.00..18.30 rows=830 \nwidth=8) 
(actual time=0.001..0.001 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00015 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00018 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00021 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00024 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00027 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\" -> Seq Scan on tt_00030 tt (cost=0.00..18.30 rows=830 \nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"Total runtime: 1066301.084 ms\"\n\nAny idea?\n\nRegards\n\nPablo\n\n\nJeff Davis wrote:\n> On Fri, 2007-10-26 at 16:37 -0400, Pablo Alcaraz wrote:\n> \n>> Hi List!\n>>\n>> I executed 2 equivalents queries. The first one uses a union structure. \n>> The second uses a partitioned table. The tables are the same with 30 \n>> millions of rows each one and the returned rows are the same.\n>>\n>> But the union query perform faster than the partitioned query.\n>>\n>> \n>\n> I think you mean to use UNION ALL here. UNION forces a DISTINCT, which\n> results in a sort operation. What surprises me is that the UNION is\n> actually faster than the partitioning using inheritance.\n>\n> I suspect it has something to do with the GROUP BYs, but we won't know\n> until you post EXPLAIN ANALYZE results.\n>\n> Regards,\n> \tJeff Davis\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n> \n\n\n\n\n\n\n\n\nThese are the EXPLAIN ANALIZE:\n\nI ran both queries on a CLUSTER and ANALYZEd tables:\n\nUNION QUERY\nexplain analyze \nselect e, p,  sum( c) as c\nfrom (\n        select e, p, count( *) as c\n        from tt_00003\n        group by e, p\n        union\n        select e, p, count( *) as c\n        from tt_00006\n        group by e, p\n        union\n        select e, p, count( *) as c\n        from tt_00009\n        group by e, p\n        union\n        select e, p, count( *) as c\n        from tt_00012\n        group by e, p\n        union\n        select e, p, count( *) as c\n        from tt_00015\n        group by e, p\n) as t\ngroup by e, p\norder by e, p desc;\n\n\"Sort  (cost=2549202.87..2549203.37 rows=200 width=16) (actual\ntime=263593.182..263593.429 rows=207 loops=1)\"\n\"  Sort Key: t.e, t.p\"\n\"  ->  HashAggregate  (cost=2549192.73..2549195.23 rows=200\nwidth=16) (actual time=263592.469..263592.763 rows=207 loops=1)\"\n\"        ->  Unique  (cost=2549172.54..2549179.88 rows=734 width=8)\n(actual time=263590.481..263591.764 rows=356 loops=1)\"\n\"              ->  Sort  (cost=2549172.54..2549174.38 rows=734\nwidth=8) (actual time=263590.479..263590.891 rows=356 loops=1)\"\n\"                    Sort Key: e, p, c\"\n\"                    ->  Append  (cost=1307131.88..2549137.60\nrows=734 width=8) (actual time=132862.176..263589.774 rows=356 loops=1)\"\n\"                          ->  HashAggregate \n(cost=1307131.88..1307133.03 rows=92 width=8) (actual\ntime=132862.173..132862.483 rows=200 loops=1)\"\n\"                                ->  Seq Scan on tt_00003 \n(cost=0.00..1081550.36 rows=30077536 width=8) (actual\ntime=10.135..83957.424 rows=30000000 loops=1)\"\n\"                          ->  HashAggregate \n(cost=1241915.64..1241916.16 rows=42 width=8) (actual\ntime=130726.219..130726.457 rows=156 loops=1)\"\n\" 
                               ->  Seq Scan on tt_00006 \n(cost=0.00..1028793.22 rows=28416322 width=8) (actual\ntime=11.389..87338.730 rows=28351293 loops=1)\"\n\"                          ->  HashAggregate  (cost=24.53..27.03\nrows=200 width=8) (actual time=0.005..0.005 rows=0 loops=1)\"\n\"                                ->  Seq Scan on tt_00009 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\"                          ->  HashAggregate  (cost=24.53..27.03\nrows=200 width=8) (actual time=0.004..0.004 rows=0 loops=1)\"\n\"                                ->  Seq Scan on tt_00012 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.002..0.002 rows=0\nloops=1)\"\n\"                          ->  HashAggregate  (cost=24.53..27.03\nrows=200 width=8) (actual time=0.005..0.005 rows=0 loops=1)\"\n\"                                ->  Seq Scan on tt_00015 \n(cost=0.00..18.30 rows=830 width=8) (actual time=0.001..0.001 rows=0\nloops=1)\"\n\"Total runtime: 263594.381 ms\"\n\n\nPARTITIONED QUERY\n\nexplain analyze \nselect e, p,  count( *) as c\nfrom tt\ngroup by e, p\norder by e, p desc;\n\n\"GroupAggregate  (cost=13256958.67..13842471.95 rows=40000 width=8)\n(actual time=899391.384..1065585.531 rows=207 loops=1)\"\n\"  ->  Sort  (cost=13256958.67..13403211.99 rows=58501328 width=8)\n(actual time=899391.364..989749.914 rows=58351293 loops=1)\"\n\"        Sort Key: public.tt.e, public.tt.p\"\n\"        ->  Append  (cost=0.00..2110508.28 rows=58501328 width=8)\n(actual time=14.031..485211.466 rows=58351293 loops=1)\"\n\"              ->  Seq Scan on tt  (cost=0.00..18.30 rows=830\nwidth=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00003 tt  (cost=0.00..1081550.36\nrows=30077536 width=8) (actual time=14.024..178657.738 rows=30000000\nloops=1)\"\n\"              ->  Seq Scan on tt_00006 tt  (cost=0.00..1028793.22\nrows=28416322 width=8) (actual time=39.852..168307.030 rows=28351293\nloops=1)\"\n\"              ->  Seq Scan on tt_00009 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00012 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00015 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00018 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.001..0.001 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00021 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00024 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00027 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"              ->  Seq Scan on tt_00030 tt  (cost=0.00..18.30\nrows=830 width=8) (actual time=0.002..0.002 rows=0 loops=1)\"\n\"Total runtime: 1066301.084 ms\"\n\nAny idea?\n\nRegards\n\nPablo\n\n\nJeff Davis wrote:\n\nOn Fri, 2007-10-26 at 16:37 -0400, Pablo Alcaraz wrote:\n \n\nHi List!\n\nI executed 2 equivalents queries. The first one uses a union structure. \nThe second uses a partitioned table. The tables are the same with 30 \nmillions of rows each one and the returned rows are the same.\n\nBut the union query perform faster than the partitioned query.\n\n \n\n\nI think you mean to use UNION ALL here. 
UNION forces a DISTINCT, which\nresults in a sort operation. What surprises me is that the UNION is\nactually faster than the partitioning using inheritance.\n\nI suspect it has something to do with the GROUP BYs, but we won't know\nuntil you post EXPLAIN ANALYZE results.\n\n Regards,\n\tJeff Davis\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend", "msg_date": "Sat, 27 Oct 2007 01:14:21 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed difference between select ... union select ...\n\tand select from partitioned_table" }, { "msg_contents": "Pablo Alcaraz <[email protected]> writes:\n> These are the EXPLAIN ANALIZE:\n\nIf you raise work_mem enough to let the second query use a hash\naggregate (probably a few MB would do it), I think it'll be about\nthe same speed as the first one.\n\nThe reason it's not picking that on its own is the overestimate\nof the number of resulting groups. This is because\nget_variable_numdistinct is not smart about append relations.\nWe should try to fix that sometime...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Oct 2007 09:15:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed difference between select ... union select ... and select\n\tfrom partitioned_table" }, { "msg_contents": "On Fri, 2007-10-26 at 16:37 -0400, Pablo Alcaraz wrote:\n\n> I executed 2 equivalents queries. The first one uses a union structure. \n> The second uses a partitioned table. The tables are the same with 30 \n> millions of rows each one and the returned rows are the same.\n> \n> But the union query perform faster than the partitioned query.\n> \n> My question is: why? :)\n\nThe two queries are equivalent but they have different execution plans.\n\nThe UNION query has explicit GROUP BY operations within it. We do not\ncurrently perform a push-down operation onto the individual partitions.\nThis results in more data copying as well as requiring a single very\nlarge sort, rather than lots of small ones. That is probably enough to\nallow it to perform the sort in memory rather than on-disk, thus\nallowing a considerable speed-up.\n\nThis is on my list of requirements for further partitioning improvements\nin 8.4 or beyond.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sat, 27 Oct 2007 22:26:49 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed difference between select ... union select ...\n\tand select from partitioned_table" }, { "msg_contents": "Pablo Alcaraz wrote:\n> These are the EXPLAIN ANALIZE:\n>\n\nIf you raise work_mem enough to let the second query use a hash\naggregate (probably a few MB would do it), I think it'll be about\nthe same speed as the first one.\n\nThe reason it's not picking that on its own is the overestimate\nof the number of resulting groups. This is because\nget_variable_numdistinct is not smart about append relations.\nWe should try to fix that sometime...\n\n\n\nI re run the partitioned-query. it completed in 15996 seconds. It \nbuilded a BIG temp file:\n\n[root@igor xxx]# ls -lh pgsql-data/data/16386/pgsql_tmp/\ntotal 2.2G\n-rw------- 1 postgres postgres 1.0G Oct 27 15:35 pgsql_tmp7004.0\n-rw------- 1 postgres postgres 1.0G Oct 27 15:35 pgsql_tmp7004.1\n-rw------- 1 postgres postgres 175M Oct 27 15:35 pgsql_tmp7004.2\n\nwork_mem=1Mb. How much do I need to raise work_mem variable? 
2.2G?\n\nRegards\n\nPablo\n\n\n\n\n\n\nPablo Alcaraz wrote:\n\n\n\nThese are the EXPLAIN ANALIZE:\n\n\n\nIf you raise work_mem enough to let the second query use a hash\naggregate (probably a few MB would do it), I think it'll be about\nthe same speed as the first one.\n\nThe reason it's not picking that on its own is the overestimate\nof the number of resulting groups. This is because\nget_variable_numdistinct is not smart about append relations.\nWe should try to fix that sometime...\n\n\nI re run the partitioned-query. it completed in 15996 seconds. It\nbuilded a BIG temp file:\n\n[root@igor xxx]# ls -lh pgsql-data/data/16386/pgsql_tmp/\ntotal 2.2G\n-rw------- 1 postgres postgres 1.0G Oct 27 15:35 pgsql_tmp7004.0\n-rw------- 1 postgres postgres 1.0G Oct 27 15:35 pgsql_tmp7004.1\n-rw------- 1 postgres postgres 175M Oct 27 15:35 pgsql_tmp7004.2\n\nwork_mem=1Mb. How much do I need to raise work_mem variable? 2.2G?\n\nRegards\n\nPablo", "msg_date": "Sat, 27 Oct 2007 18:31:18 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed difference between select ... union select ...\n\tand select from partitioned_table" } ]
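Following up on Tom's suggestion with a concrete experiment: the hash aggregate only needs work_mem large enough for one entry per (e, p) group (a few hundred real groups here, about 40,000 by the planner's estimate), not for the 2.2 GB that the on-disk sort of 58 million rows consumed, so the answer to "2.2G?" is no; a few megabytes to a few tens of megabytes should be plenty. A sketch of the session-local test, with 64MB chosen arbitrarily rather than calculated:

-- Per-session only; no postgresql.conf change or restart needed.
SET work_mem = '64MB';    -- arbitrary; Tom estimates "a few MB" may already do it

EXPLAIN ANALYZE
SELECT e, p, count(*) AS c
FROM tt
GROUP BY e, p
ORDER BY e, p DESC;
-- Success looks like a HashAggregate directly above the Append, with only the
-- few hundred result groups sorted at the top for the ORDER BY, instead of the
-- 58M-row Sort feeding a GroupAggregate.

RESET work_mem;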
[ { "msg_contents": "And I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.\n\nThe patch isn't ready for application, but someone can quickly implement it I'd expect.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tHeikki Linnakangas [mailto:[email protected]]\nSent:\tSaturday, October 27, 2007 05:20 AM Eastern Standard Time\nTo:\tAnton\nCc:\[email protected]\nSubject:\tRe: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnton wrote:\n> I repost here my original question \"Why it no uses indexes?\" (on\n> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\n> mean that you miss this discussion.\n\nAs I said back then:\n\nThe planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\nbelow the append node.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\nRe: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\n\n\nAnd I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.\n\nThe patch isn't ready for application, but someone can quickly implement it I'd expect.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Heikki Linnakangas [mailto:[email protected]]\nSent:   Saturday, October 27, 2007 05:20 AM Eastern Standard Time\nTo:     Anton\nCc:     [email protected]\nSubject:        Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnton wrote:\n> I repost here my original question \"Why it no uses indexes?\" (on\n> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\n> mean that you miss this discussion.\n\nAs I said back then:\n\nThe planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\nbelow the append node.\n\n--\n  Heikki Linnakangas\n  EnterpriseDB   http://www.enterprisedb.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings", "msg_date": "Sat, 27 Oct 2007 15:12:06 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "On Sat, 2007-10-27 at 15:12 -0400, Luke Lonergan wrote:\n> And I repeat - 'we fixed that and submitted a patch' - you can find it\n> in the unapplied patches queue.\n\nI got the impression it was a suggestion rather than a tested patch,\nforgive me if that was wrong.\n\nDid the patch work? Do you have timings/different plan?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sat, 27 Oct 2007 22:31:22 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n\tLIMIT 1" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n\n> And I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.\n\nI can't find this. 
Can you point me towards it?\n\nThanks\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask about EnterpriseDB's PostGIS support!\n", "msg_date": "Mon, 29 Oct 2007 13:40:39 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Sure - it's here:\n http://momjian.us/mhonarc/patches_hold/msg00381.html\n\n- Luke\n\n\nOn 10/29/07 6:40 AM, \"Gregory Stark\" <[email protected]> wrote:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n> \n>> And I repeat - 'we fixed that and submitted a patch' - you can find it in the\n>> unapplied patches queue.\n> \n> I can't find this. Can you point me towards it?\n> \n> Thanks\n\n\n", "msg_date": "Mon, 29 Oct 2007 22:06:02 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Luke Lonergan wrote:\n> Sure - it's here:\n> http://momjian.us/mhonarc/patches_hold/msg00381.html\n>\n> \n\nTo clarify - we've fixed this in Greenplum db - the patch as submitted \nis (hopefully) a hint about how to fix it in Postgres, rather than a \nworking patch... as its full of non-postgres functions and macros:\n\nCdbPathLocus_MakeHashed\ncdbpathlocus_pull_above_projection\ncdbpullup_findPathKeyItemInTargetList\ncdbpullup_makeVar\ncdbpullup_expr\n\nCheers\n\nMark\n", "msg_date": "Tue, 30 Oct 2007 18:46:22 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> Sure - it's here:\n> http://momjian.us/mhonarc/patches_hold/msg00381.html\n\nLuke, this is not a patch, and I'm getting pretty dang tired of seeing\nyou refer to it as one. What this is is a very-selective extract from\nGreenplum proprietary code. If you'd like us to think it is a patch,\nyou need to offer the source code to all the GP-specific functions that\nare called in the quoted additions.\n\nHell, the diff is *against* GP-specific code --- it removes calls\nto functions that we've never seen, eg here:\n\n- /* Use constant expr if available. Will be at head of list. */\n- if (CdbPathkeyEqualsConstant(pathkey))\n\nThis is not a patch, and your statements that it's only a minor porting\nmatter to turn it into one are lie^H^H^Hnonsense. Please lift the\nskirts higher than the ankle region if you want us to get excited.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Oct 2007 01:54:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1 " }, { "msg_contents": "\"Mark Kirkwood\" <[email protected]> writes:\n\n> Luke Lonergan wrote:\n>> Sure - it's here:\n>> http://momjian.us/mhonarc/patches_hold/msg00381.html\n>\n> To clarify - we've fixed this in Greenplum db - the patch as submitted is\n> (hopefully) a hint about how to fix it in Postgres, rather than a working\n> patch... as its full of non-postgres functions and macros:\n\nOh, that was the problem with the original patch and I thought Luke had said\nthat was the problem which was fixed.\n\n> cdbpathlocus_pull_above_projection\n\nIn particular this is the function I was hoping to see. 
Anyways as Tom pointed\nout previously there's precedent in Postgres as well for subqueries so I'm\nsure I'll be able to do it.\n\n(But I'm still not entirely convinced putting the append member vars into the\neclasses would be wrong btw...)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Tue, 30 Oct 2007 09:06:21 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "BTW - Mark has volunteered to work a Postgres patch together. Thanks Mark!\n\n- Luke\n\n\nOn 10/29/07 10:46 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Luke Lonergan wrote:\n>> Sure - it's here:\n>> http://momjian.us/mhonarc/patches_hold/msg00381.html\n>> \n>> \n> \n> To clarify - we've fixed this in Greenplum db - the patch as submitted\n> is (hopefully) a hint about how to fix it in Postgres, rather than a\n> working patch... as its full of non-postgres functions and macros:\n> \n> CdbPathLocus_MakeHashed\n> cdbpathlocus_pull_above_projection\n> cdbpullup_findPathKeyItemInTargetList\n> cdbpullup_makeVar\n> cdbpullup_expr\n> \n> Cheers\n> \n> Mark\n\n\n", "msg_date": "Wed, 31 Oct 2007 07:16:13 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "I am trying to build a very Robust DB server that will support 1000+ \nconcurrent users (all ready have seen max of 237 no pooling being \nused). i have read so many articles now that I am just saturated. I \nhave a general idea but would like feedback from others.\n\nI understand query tuning and table design play a large role in \nperformance, but taking that factor away\nand focusing on just hardware, what is the best hardware to get for \nPg to work at the highest level\n(meaning speed at returning results)?\n\nHow does pg utilize multiple processors? The more the better?\nAre queries spread across multiple processors?\nIs Pg 64 bit?\nIf so what processors are recommended?\n\nI read this : http://www.postgresql.org/files/documentation/books/ \naw_pgsql/hw_performance/node12.html\nPOSTGRESQL uses a multi-process model, meaning each database \nconnection has its own Unix process. Because of this, all multi-cpu \noperating systems can spread multiple database connections among the \navailable CPUs. However, if only a single database connection is \nactive, it can only use one CPU. POSTGRESQL does not use multi- \nthreading to allow a single process to use multiple CPUs.\n\nIts pretty old (2003) but is it still accurate? if this statement is \naccurate how would it affect connection pooling software like pg_pool?\n\nRAM? The more the merrier right? Understanding shmmax and the pg \nconfig file parameters for shared mem has to be adjusted to use it.\nDisks? standard Raid rules right? 1 for safety 5 for best mix of \nperformance and safety?\nAny preference of SCSI over SATA? 
What about using a High speed \n(fibre channel) mass storage device?\n\nWho has built the biggest baddest Pg server out there and what do you \nuse?\n\nThanks!\n\n\n\n\n", "msg_date": "Wed, 31 Oct 2007 11:53:17 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": false, "msg_subject": "hardware and For PostgreSQL" }, { "msg_contents": "It would probably help you to spend some time browsing the archives of \nthis list for questions similar to yours - you'll find quite a lot of \nconsistent answers. In general, you'll find that:\n\n- If you can fit your entire database into memory, you'll get the best\n performance.\n\n- If you cannot (and most databases cannot) then you'll want to get the\n fastest disk system you can.\n\n- For reads, RAID5 isn't so bad but for writes it's near the bottom of the\n options. RAID10 is not as efficient in terms of hardware, but if you\n want performance for both reads and writes, you want RAID10.\n\n- Your RAID card also matters. Areca cards are expensive, and a lot of\n people consider them to be worth it.\n\n- More procs tend to be better than faster procs, because more procs let\n you do more at once and databases tend to be i/o bound more than cpu\n bound.\n\n- More or faster procs put more contention on the data, so getting more or\n better cpus just increases the need for faster disks or more ram.\n\n- PG is 64 bit if you compile it to be so, or if you install a 64-bit\n binary package.\n\n....and all that said, application and schema design can play a far more \nimportant role in performance than hardware.\n\n\nOn Wed, 31 Oct 2007, Ketema Harris wrote:\n\n> I am trying to build a very Robust DB server that will support 1000+ \n> concurrent users (all ready have seen max of 237 no pooling being used). i \n> have read so many articles now that I am just saturated. I have a general \n> idea but would like feedback from others.\n>\n> I understand query tuning and table design play a large role in performance, \n> but taking that factor away\n> and focusing on just hardware, what is the best hardware to get for Pg to \n> work at the highest level\n> (meaning speed at returning results)?\n>\n> How does pg utilize multiple processors? The more the better?\n> Are queries spread across multiple processors?\n> Is Pg 64 bit?\n> If so what processors are recommended?\n>\n> I read this : \n> http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html\n> POSTGRESQL uses a multi-process model, meaning each database connection has \n> its own Unix process. Because of this, all multi-cpu operating systems can \n> spread multiple database connections among the available CPUs. However, if \n> only a single database connection is active, it can only use one CPU. \n> POSTGRESQL does not use multi-threading to allow a single process to use \n> multiple CPUs.\n>\n> Its pretty old (2003) but is it still accurate? if this statement is \n> accurate how would it affect connection pooling software like pg_pool?\n>\n> RAM? The more the merrier right? Understanding shmmax and the pg config file \n> parameters for shared mem has to be adjusted to use it.\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of \n> performance and safety?\n> Any preference of SCSI over SATA? 
What about using a High speed (fibre \n> channel) mass storage device?\n>\n> Who has built the biggest baddest Pg server out there and what do you use?\n>\n> Thanks!\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Wed, 31 Oct 2007 10:24:13 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "On 10/31/07, Ketema Harris <[email protected]> wrote:\n> I am trying to build a very Robust DB server that will support 1000+\n> concurrent users (all ready have seen max of 237 no pooling being\n> used). i have read so many articles now that I am just saturated. I\n> have a general idea but would like feedback from others.\n\nSlow down, take a deep breath. It's going to be ok.\n\nYou should definitely be looking at query pooling. pgbouncer from\nskype has gotten some good coverage lately. I've used pgpool and\npgpool II with good luck myself.\n\n> How does pg utilize multiple processors? The more the better?\n> Are queries spread across multiple processors?\n> Is Pg 64 bit?\n> If so what processors are recommended?\n\nGenerally more is better, up to a point. PG runs one query per\nprocessor max. i.e. it doesn't spread a single query out over\nmultiple CPUs.\n\nYes, it's 64 bit, if you use a 64 bit version on a 64 bit OS.\n\nRight now both Intel and AMD CPUs seem pretty good.\n\n> RAM? The more the merrier right?\n\nThat depends. If 16 Gigs runs 33% faster than having 32 Gigs, the 16\nGigs will probably be better, especially if your data set fits in\n16Gig. But all things being equal, more memory = good.\n\n> Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n\nDon't forget all the other paramenters like work_mem and fsm settings.\n and regular vacuuming / autovacuuming\n\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n\nNeither of those is optimal for a transactional database. 5 isn't\nparticularly safe since two disks can kill your whole array. RAID-10\nis generally preferred, and RAID 50 or 6 can be a good choice.\n\n> Any preference of SCSI over SATA? What about using a High speed\n> (fibre channel) mass storage device?\n\nWhat's most important is the quality of your controller. A very high\nquality SATA controller will beat a mediocre SCSI controller. Write\nback cache with battery backed cache is a must. Escalade, Areca, LSI\nand now apparently even Adaptec all have good controllers. Hint: If\nit costs $85 or so, it's likely not a great choice for RAID.\n\nI've seen many <$200 RAID controllers that were much better when you\nturned off the RAID software and used kernel SW mode RAID instead\n(witness Adaptec 14xx series)\n\nMass storage can be useful, especially if you need a lot of storage or\nexpansion ability.\n\n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n\nNot me, but we had a post from somebody with a very very very large\npgsql database on this list a few months ago... 
Search the archives.\n", "msg_date": "Wed, 31 Oct 2007 13:19:49 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "I realize there are people who discourage looking at Dell, but i've been\nvery happy with a larger ball of equipment we ordered recently from\nthem. Our database servers consist of a PowerEdge 2950 connected to a\nPowerVault MD1000 with a 1 meter SAS cable.\n\nThe 2950 tops out at dual quad core cpus, 32 gb ram, and 6 x 3.5\"\ndrives. It has a Perc 5/i as the controller of the in-box disks but\nthen also has room for 2 Perc 5/e controllers that can allow connecting\nup to 2 chains of disk arrays to the thing.\n\nIn our environment we started the boxes off at 8gb ram with 6 15k SAS\ndisks in the server and then connected an MD1000 with 15 SATA disks to\none of the Perc 5/e controllers. Gives tons of flexibility for growth\nand for tablespace usage depending on budget and what you can spend on\nyour disks. We have everything on the SATA disks right now but plan to\nstart moving the most brutalized indexes to the SAS disks very soon.\n\nIf you do use Dell, get connected with a small business account manager\nfor better prices and more attention.\n\nJoe\n\nKetema Harris wrote:\n> I am trying to build a very Robust DB server that will support 1000+\n> concurrent users (all ready have seen max of 237 no pooling being\n> used). i have read so many articles now that I am just saturated. I\n> have a general idea but would like feedback from others.\n>\n> I understand query tuning and table design play a large role in\n> performance, but taking that factor away\n> and focusing on just hardware, what is the best hardware to get for Pg\n> to work at the highest level\n> (meaning speed at returning results)?\n>\n> How does pg utilize multiple processors? The more the better?\n> Are queries spread across multiple processors?\n> Is Pg 64 bit?\n> If so what processors are recommended?\n>\n> I read this :\n> http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html\n>\n> POSTGRESQL uses a multi-process model, meaning each database\n> connection has its own Unix process. Because of this, all multi-cpu\n> operating systems can spread multiple database connections among the\n> available CPUs. However, if only a single database connection is\n> active, it can only use one CPU. POSTGRESQL does not use\n> multi-threading to allow a single process to use multiple CPUs.\n>\n> Its pretty old (2003) but is it still accurate? if this statement is\n> accurate how would it affect connection pooling software like pg_pool?\n>\n> RAM? The more the merrier right? Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n> Any preference of SCSI over SATA? 
What about using a High speed (fibre\n> channel) mass storage device?\n>\n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n>\n> Thanks!\n>\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n", "msg_date": "Wed, 31 Oct 2007 14:54:51 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "Joe Uhl wrote:\n> I realize there are people who discourage looking at Dell, but i've been\n> very happy with a larger ball of equipment we ordered recently from\n> them. Our database servers consist of a PowerEdge 2950 connected to a\n> PowerVault MD1000 with a 1 meter SAS cable.\n>\n> \nWe have a similar piece of equipment from Dell (the PowerEdge), and when \nwe had a problem with it we received excellent service from them. When \nour raid controller went down (machine < 1 year old), Dell helped to \ndiagnose the problem and installed a new one at our hosting facility, \nall within 24 hours.\n\nfyi\n\nRon\n\n\n", "msg_date": "Wed, 31 Oct 2007 14:01:34 -0800", "msg_from": "Ron St-Pierre <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "On Wed, 31 Oct 2007 14:54:51 -0400\nJoe Uhl <[email protected]> wrote:\n\n> I realize there are people who discourage looking at Dell, but i've\n> been very happy with a larger ball of equipment we ordered recently\n> from them. Our database servers consist of a PowerEdge 2950\n> connected to a PowerVault MD1000 with a 1 meter SAS cable.\n> \n> The 2950 tops out at dual quad core cpus, 32 gb ram, and 6 x 3.5\"\n> drives. It has a Perc 5/i as the controller of the in-box disks but\n> then also has room for 2 Perc 5/e controllers that can allow\n> connecting up to 2 chains of disk arrays to the thing.\n\nThe new Dell's based on Woodcrest (which is what you are talking about)\nare a much better product that what Dell used to ship.\n\nJoshua D. Drake\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/", "msg_date": "Wed, 31 Oct 2007 15:30:40 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "Ron St-Pierre wrote:\n> Joe Uhl wrote:\n>> I realize there are people who discourage looking at Dell, but i've been\n>> very happy with a larger ball of equipment we ordered recently from\n>> them. Our database servers consist of a PowerEdge 2950 connected to a\n>> PowerVault MD1000 with a 1 meter SAS cable.\n>>\n>> \n> We have a similar piece of equipment from Dell (the PowerEdge), and when \n> we had a problem with it we received excellent service from them. When \n> our raid controller went down (machine < 1 year old), Dell helped to \n> diagnose the problem and installed a new one at our hosting facility, \n> all within 24 hours.\n> \n> fyi\n> \n> Ron\n> \n\nThis is good to know - I've got a new Dell PowerEdge 2900 quad-core, 4GB \nRAM, 6*146Gb SAN disks on RAID-5 controller arriving in the next week or \ntwo as a new database server. 
Well, more like a database development \nmachine - but it will host some non-production databases as well. Good \nto know Dell can be relied on these days, I was a bit concerned about \nthat when purchasing sent me a copy invoice from Dell - particularly \nsince I was originally told an IBM was on the way.\n\nGood ol' purchasing, can always rely on them to change their minds last \nminute when they find a cheaper system.\n\n-- \nPaul Lambert\nDatabase Administrator\nAutoLedgers\n\n", "msg_date": "Thu, 01 Nov 2007 08:00:51 +0900", "msg_from": "Paul Lambert <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "Ron St-Pierre wrote:\n> Joe Uhl wrote:\n>> I realize there are people who discourage looking at Dell, but i've been\n>> very happy with a larger ball of equipment we ordered recently from\n>> them. Our database servers consist of a PowerEdge 2950 connected to a\n>> PowerVault MD1000 with a 1 meter SAS cable.\n>>\n>> \n> We have a similar piece of equipment from Dell (the PowerEdge), and when\n> we had a problem with it we received excellent service from them. When\n> our raid controller went down (machine < 1 year old), Dell helped to\n> diagnose the problem and installed a new one at our hosting facility,\n> all within 24 hours.\n\n24 hours?! I have a new one for my HP boxes onsite in 4 hours, including\na tech if needed...\n\nBut I assume Dell also has service-agreement deals you can get to get\nthe level of service you'd want. (But you won't get it for a\nnon-brand-name server, most likely)\n\nBottom line - don't underestimate the service you get from the vendor\nwhen something breaks. Because eventually, something *will* break.\n\n\n//Magnus\n", "msg_date": "Thu, 01 Nov 2007 07:57:02 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "Magnus Hagander wrote:\n> Ron St-Pierre wrote:\n> \n>> Joe Uhl wrote:\n>> \n>>> I realize there are people who discourage looking at Dell, but i've been\n>>> very happy with a larger ball of equipment we ordered recently from\n>>> them. Our database servers consist of a PowerEdge 2950 connected to a\n>>> PowerVault MD1000 with a 1 meter SAS cable.\n>>>\n>>> \n>>> \n>> We have a similar piece of equipment from Dell (the PowerEdge), and when\n>> we had a problem with it we received excellent service from them. When\n>> our raid controller went down (machine < 1 year old), Dell helped to\n>> diagnose the problem and installed a new one at our hosting facility,\n>> all within 24 hours.\n>> \n>\n> 24 hours?! I have a new one for my HP boxes onsite in 4 hours, including\n> a tech if needed...\n>\n> But I assume Dell also has service-agreement deals you can get to get\n> the level of service you'd want. (But you won't get it for a\n> non-brand-name server, most likely)\n>\n> Bottom line - don't underestimate the service you get from the vendor\n> when something breaks. Because eventually, something *will* break.\n>\n>\n> //Magnus\n> \nYeah the response time depends on the service level purchased. I\ngenerally go with 24 hour because everything is redundant so a day of\ndowntime isn't going to bring services down (though it could make them\nslow depending on what fails) but you can purchase 4 hr and in some\ncases even 2 hr. 
I had a \"gold\" level support contract on a server that\nfailed awhile back and within 3 net hours they diagnosed and fixed the\nproblem by getting onsite and replacing the motherboard and a cpu. I\nhaven't had any of our 24hr support level devices fail yet so don't have\nanything to compare there.\n\nIf you do go with Dell and want the higher support contracts i'll\nrestate that a small business account is the way to go. Typically the\nprices are better to the point that a support level upgrade appears free\nwhen compared to the best shopping cart combo I can come up with.\n\nJoe\n", "msg_date": "Thu, 01 Nov 2007 07:00:57 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware and For PostgreSQL" }, { "msg_contents": "Gregory Stark wrote:\n> cdbpathlocus_pull_above_projection\n> \n>\n> In particular this is the function I was hoping to see. Anyways as Tom pointed\n> out previously there's precedent in Postgres as well for subqueries so I'm\n> sure I'll be able to do it.\n>\n> (But I'm still not entirely convinced putting the append member vars into the\n> eclasses would be wrong btw...)\n>\n> \nI spent today looking at getting this patch into a self contained state. \nWorking against HEAD I'm getting bogged down in the PathKeyItem to \nPathKey/EquivalenceClass/EquivalenceMember(s) change. So I figured I'd \ndivide and conquer to some extent, and initially provide a patch:\n\n- against 8.2.(5)\n- self contained (i.e no mystery functions)\n\nThe next step would be to update to to HEAD. That would hopefully \nprovide some useful material for others working on this.\n\nThoughts suggestions?\n\nregards\n\nMark\n\n", "msg_date": "Mon, 05 Nov 2007 18:49:26 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "\"Mark Kirkwood\" <[email protected]> writes:\n\n> I spent today looking at getting this patch into a self contained state.\n> Working against HEAD I'm getting bogged down in the PathKeyItem to\n> PathKey/EquivalenceClass/EquivalenceMember(s) change. So I figured I'd divide\n> and conquer to some extent, and initially provide a patch:\n>\n> - against 8.2.(5)\n> - self contained (i.e no mystery functions)\n\nThat would be helpful for me. It would include the bits I'm looking for.\n\n> The next step would be to update to to HEAD. That would hopefully provide some\n> useful material for others working on this.\n\nIf that's not too much work then that would be great but if it's a lot of work\nthen it may not be worth it if I'm planning to only take certain bits. On the\nother hand if it's good then we might just want to take it wholesale and then\nadd to it.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Mon, 05 Nov 2007 12:12:27 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Gregory Stark wrote:\n> \"Mark Kirkwood\" <[email protected]> writes:\n>\n> \n>> I spent today looking at getting this patch into a self contained state.\n>> Working against HEAD I'm getting bogged down in the PathKeyItem to\n>> PathKey/EquivalenceClass/EquivalenceMember(s) change. 
So I figured I'd divide\n>> and conquer to some extent, and initially provide a patch:\n>>\n>> - against 8.2.(5)\n>> - self contained (i.e no mystery functions)\n>> \n>\n> That would be helpful for me. It would include the bits I'm looking for.\n>\n> \n>> The next step would be to update to to HEAD. That would hopefully provide some\n>> useful material for others working on this.\n>> \n>\n> If that's not too much work then that would be great but if it's a lot of work\n> then it may not be worth it if I'm planning to only take certain bits. On the\n> other hand if it's good then we might just want to take it wholesale and then\n> add to it.\n>\n> \n\nHere is a (somewhat hurried) self-contained version of the patch under \ndiscussion. It applies to 8.2.5 and the resultant code compiles and \nruns. I've left in some unneeded parallel stuff (PathLocus struct), \nwhich I can weed out in a subsequent version if desired. I also removed \nthe 'cdb ' from most of the function names and (I hope) any Greenplum \ncopyrights.\n\nI discovered that the patch solves a slightly different problem... it \npulls up index scans as a viable path choice, (but not for the DESC \ncase) but does not push down the LIMIT to the child tables ... so the \nactual performance improvement is zero - however hopefully the patch \nprovides useful raw material to help.\n\ne.g - using the examine schema from the OP email - but removing the DESC \nfrom the query:\n\npart=# set enable_seqscan=off;\nSET\npart=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;\n QUERY \nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=198367.14..198367.15 rows=1 width=20)\n -> Sort (cost=198367.14..200870.92 rows=1001510 width=20)\n Sort Key: public.n_traf.date_time\n -> Result (cost=0.00..57464.92 rows=1001510 width=20)\n -> Append (cost=0.00..57464.92 rows=1001510 width=20)\n -> Index Scan using n_traf_date_time_login_id on \nn_traf (cost=0.00..66.90 rows=1510 width=20)\n -> Index Scan using \nn_traf_y2007m01_date_time_login_id on n_traf_y2007m01 n_traf \n(cost=0.00..4748.38 rows=83043 width=20)\n -> Index Scan using \nn_traf_y2007m02_date_time_login_id on n_traf_y2007m02 n_traf \n(cost=0.00..4772.60 rows=83274 width=20)\n -> Index Scan using \nn_traf_y2007m03_date_time_login_id on n_traf_y2007m03 n_traf \n(cost=0.00..4782.12 rows=83330 width=20)\n -> Index Scan using \nn_traf_y2007m04_date_time_login_id on n_traf_y2007m04 n_traf \n(cost=0.00..4818.29 rows=83609 width=20)\n -> Index Scan using \nn_traf_y2007m05_date_time_login_id on n_traf_y2007m05 n_traf \n(cost=0.00..4721.85 rows=82830 width=20)\n -> Index Scan using \nn_traf_y2007m06_date_time_login_id on n_traf_y2007m06 n_traf \n(cost=0.00..4766.56 rows=83357 width=20)\n -> Index Scan using \nn_traf_y2007m07_date_time_login_id on n_traf_y2007m07 n_traf \n(cost=0.00..4800.44 rows=83548 width=20)\n -> Index Scan using \nn_traf_y2007m08_date_time_login_id on n_traf_y2007m08 n_traf \n(cost=0.00..4787.55 rows=83248 width=20)\n -> Index Scan using \nn_traf_y2007m09_date_time_login_id on n_traf_y2007m09 n_traf \n(cost=0.00..4830.67 rows=83389 width=20)\n -> Index Scan using \nn_traf_y2007m10_date_time_login_id on n_traf_y2007m10 n_traf \n(cost=0.00..4795.78 rows=82993 width=20)\n -> Index Scan using \nn_traf_y2007m11_date_time_login_id on n_traf_y2007m11 n_traf \n(cost=0.00..4754.26 rows=83351 width=20)\n -> Index Scan using \nn_traf_y2007m12_date_time_login_id on 
n_traf_y2007m12 n_traf \n(cost=0.00..4819.51 rows=84028 width=20)\n(18 rows)", "msg_date": "Thu, 08 Nov 2007 15:46:34 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "\"Mark Kirkwood\" <[email protected]> writes:\n\n> Here is a (somewhat hurried) self-contained version of the patch under\n> discussion. It applies to 8.2.5 and the resultant code compiles and runs. I've\n> left in some unneeded parallel stuff (PathLocus struct), which I can weed out\n> in a subsequent version if desired. I also removed the 'cdb ' from most of the\n> function names and (I hope) any Greenplum copyrights.\n\nThanks, I'll take a look at it.\n\n> I discovered that the patch solves a slightly different problem... it pulls up\n> index scans as a viable path choice, (but not for the DESC case) but does not\n> push down the LIMIT to the child tables ... so the actual performance\n> improvement is zero - however hopefully the patch provides useful raw material\n> to help.\n\n\n> SET\n> part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=198367.14..198367.15 rows=1 width=20)\n> -> Sort (cost=198367.14..200870.92 rows=1001510 width=20)\n> Sort Key: public.n_traf.date_time\n> -> Result (cost=0.00..57464.92 rows=1001510 width=20)\n> -> Append (cost=0.00..57464.92 rows=1001510 width=20)\n> -> Index Scan using n_traf_date_time_login_id on n_traf\n> (cost=0.00..66.90 rows=1510 width=20)\n\nThat looks suspicious. There's likely no good reason to be using the index\nscan unless it avoids the sort node above the Append node. That's what I hope\nto do by having the Append executor code do what's necessary to maintain the\norder.\n\n From skimming your patch previously I thought the main point was when there\nwas only one subnode. In that case it was able to pull the subnode entirely\nout of the append node and pull up the paths of the subnode. In Postgres that\nwould never happen because constraint exclusion will never be able to prune\ndown to a single partition because of the parent table problem but I expect\nwe'll change that.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Thu, 08 Nov 2007 06:21:10 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "On 11/7/07 10:21 PM, \"Gregory Stark\" <[email protected]> wrote:\n\n>> part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------\n>> --------------------------------------------------------------------\n>> Limit (cost=198367.14..198367.15 rows=1 width=20)\n>> -> Sort (cost=198367.14..200870.92 rows=1001510 width=20)\n>> Sort Key: public.n_traf.date_time\n>> -> Result (cost=0.00..57464.92 rows=1001510 width=20)\n>> -> Append (cost=0.00..57464.92 rows=1001510 width=20)\n>> -> Index Scan using n_traf_date_time_login_id on n_traf\n>> (cost=0.00..66.90 rows=1510 width=20)\n> \n> That looks suspicious. There's likely no good reason to be using the index\n> scan unless it avoids the sort node above the Append node. 
That's what I hope\n> to do by having the Append executor code do what's necessary to maintain the\n> order.\n\nYah - the way it works in GPDB is that you get a non-sorting plan with an\nindex scan below the parent - that was the point of the fix. Hmm.\n\n- Luke\n\n\n", "msg_date": "Wed, 07 Nov 2007 22:40:20 -0800", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "Luke Lonergan wrote:\n> On 11/7/07 10:21 PM, \"Gregory Stark\" <[email protected]> wrote:\n>\n> \n>>> part=# explain SELECT * FROM n_traf ORDER BY date_time LIMIT 1;\n>>> QUERY PLAN\n>>> -----------------------------------------------------------------------------\n>>> --------------------------------------------------------------------\n>>> Limit (cost=198367.14..198367.15 rows=1 width=20)\n>>> -> Sort (cost=198367.14..200870.92 rows=1001510 width=20)\n>>> Sort Key: public.n_traf.date_time\n>>> -> Result (cost=0.00..57464.92 rows=1001510 width=20)\n>>> -> Append (cost=0.00..57464.92 rows=1001510 width=20)\n>>> -> Index Scan using n_traf_date_time_login_id on n_traf\n>>> (cost=0.00..66.90 rows=1510 width=20)\n>>> \n>> That looks suspicious. There's likely no good reason to be using the index\n>> scan unless it avoids the sort node above the Append node. That's what I hope\n>> to do by having the Append executor code do what's necessary to maintain the\n>> order.\n>> \n>\n> Yah - the way it works in GPDB is that you get a non-sorting plan with an\n> index scan below the parent - that was the point of the fix. Hmm.\n>\n> \n\nUnfortunately our plan in GPDB looks exactly the same in this case - so \nwe have a bit of work to do as well! Initially I wondered if I have got \nsomething wrong in the patch... and checked on GPDB - only to see the \nsame behaviour! (see prev comment about LIMIT).\n\nCheers\n\nMark\n", "msg_date": "Thu, 08 Nov 2007 20:18:47 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "This is driving me crazy. I have some Postgres C function extensions in a shared library. They've been working fine. I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets this message:\n\n *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8\n\nand the backend process won't die. Every single connection that executes one of my functions leaves an idle process, like this:\n\n $ ps -ef | grep postgres\n postgres 12938 12920 0 23:24 ? 00:00:00 postgres: mydb mydb [local] idle\n\nThis error only happens on disconnect. As long as I keep the connection open, I can \n\nWorse, these zombie Postgres processes won't die, which means I can't shut down and restart Postgres unless I \"kill -9\" all of them, and I can't use this at all because I get zillions of these dead processes.\n\nI've used valgrind on a test application that runs all of my functions outside of the Postgres environment, and not a single problem shows up even after hours of processing. I tried setting MALLOC_CHECK_ to various values, so that I could trap the abort() call using gdb, but once MALLOC_CHECK_ is set, the double-free error never occurs. (But malloc slows down too much this way.)\n\nI even read through the documentation for C functions again, and carefully examined my code. 
Nothing is amiss, some of the functions are quite simple yet still exhibit this problem.\n\nAnyone seen this before? It's driving me nuts.\n\n Postgres 8.1.4\n Linux kernel 2.6.22\n gcc 4.1.1\n\nThanks,\nCraig\n", "msg_date": "Mon, 10 Dec 2007 23:50:11 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "libgcc double-free, backend won't die" }, { "msg_contents": "Craig James wrote:\n> This is driving me crazy. I have some Postgres C function extensions\n> in a shared library. They've been working fine. I upgraded to Fedora\n> Core 6 and gcc4, and now every time psql(1) disconnects from the\n> server, the serverlog gets this message:\n>\n> *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8\n\nDo you have any Perl or Python functions or stuff like that?\n\n> Postgres 8.1.4\n\nPlease upgrade to 8.1.10 and try again. If it still fails we will be\nmuch more interested in tracking it down.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n", "msg_date": "Tue, 11 Dec 2007 08:21:28 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Alvaro Herrera wrote:\n> Craig James wrote:\n>> This is driving me crazy. I have some Postgres C function extensions\n>> in a shared library. They've been working fine. I upgraded to Fedora\n>> Core 6 and gcc4, and now every time psql(1) disconnects from the\n>> server, the serverlog gets this message:\n>>\n>> *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8\n> \n> Do you have any Perl or Python functions or stuff like that?\n\nThere is one Perl function, but it is never invoked during this test. I connect to Postgres, issue one \"select myfunc()\", and disconnect.\n\n>> Postgres 8.1.4\n> \n> Please upgrade to 8.1.10 and try again. If it still fails we will be\n> much more interested in tracking it down.\n\nGood idea, but alas, no difference. I get the same \"double free or corruption!\" mesage. I compiled 8.1.10 from source and installed, then rebuilt all of my code from scratch and reinstalled the shared object. Same message as before.\n\nHere is my guess -- and this is just a guess. My functions use a third-party library which, of necessity, uses malloc/free in the ordinary way. I suspect that there's a bug in the Postgres palloc() code that's walking over memory that regular malloc() allocates. The third-party library (OpenBabel) has been tested pretty thoroughly by me an others and has no memory corruption problems. All malloc's are freed properly. Does that seem like a possibility?\n\nI can't figure out how to use ordinary tools like valgrind with a Postgres backend process to track this down.\n\nThanks,\nCraig\n\n", "msg_date": "Tue, 11 Dec 2007 07:05:56 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Craig James <[email protected]> writes:\n> This is driving me crazy. I have some Postgres C function extensions in a shared library. They've been working fine. I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets this message:\n> *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! 
(!prev): 0x08bfcde8\n\nHave you tried attaching to one of these processes with gdb to see where\nit ends up? Have you checked to see if the processes are becoming\nmulti-threaded?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 10:07:13 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "Tom Lane wrote:\n> Craig James <[email protected]> writes:\n>> This is driving me crazy. I have some Postgres C function extensions in a shared library. They've been working fine. I upgraded to Fedora Core 6 and gcc4, and now every time psql(1) disconnects from the server, the serverlog gets this message:\n>> *** glibc detected *** postgres: mydb mydb [local] idle: double free or corruption! (!prev): 0x08bfcde8\n> \n> Have you tried attaching to one of these processes with gdb to see where\n> it ends up? Have you checked to see if the processes are becoming\n> multi-threaded?\n> \n> \t\t\tregards, tom lane\n> \n\n\n# ps -ef | grep postgres\npostgres 31362 1 0 06:53 ? 00:00:00 /usr/local/pgsql/bin/postmaster -D /postgres/main\npostgres 31364 31362 0 06:53 ? 00:00:00 postgres: writer process \npostgres 31365 31362 0 06:53 ? 00:00:00 postgres: stats buffer process \npostgres 31366 31365 0 06:53 ? 00:00:00 postgres: stats collector process \npostgres 31442 31362 0 06:54 ? 00:00:00 postgres: craig_test craig_test [local] idle \nroot 31518 31500 0 07:06 pts/6 00:00:00 grep postgres\n# gdb -p 31442\nGNU gdb Red Hat Linux (6.5-15.fc6rh)\nCopyright (C) 2006 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\n\n[snip - a bunch of symbol table stuff]\n\n0x00110402 in __kernel_vsyscall ()\n(gdb) bt\n#0 0x00110402 in __kernel_vsyscall ()\n#1 0x0082fb8e in __lll_mutex_lock_wait () from /lib/libc.so.6\n#2 0x007bfce8 in _L_lock_14096 () from /lib/libc.so.6\n#3 0x007befa4 in free () from /lib/libc.so.6\n#4 0x00744f93 in _dl_map_object_deps () from /lib/ld-linux.so.2\n#5 0x0074989d in dl_open_worker () from /lib/ld-linux.so.2\n#6 0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2\n#7 0x00749222 in _dl_open () from /lib/ld-linux.so.2\n#8 0x00858712 in do_dlopen () from /lib/libc.so.6\n#9 0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2\n#10 0x008588c5 in __libc_dlopen_mode () from /lib/libc.so.6\n#11 0x00836139 in init () from /lib/libc.so.6\n#12 0x008362d3 in backtrace () from /lib/libc.so.6\n#13 0x007b3e11 in __libc_message () from /lib/libc.so.6\n#14 0x007bba96 in _int_free () from /lib/libc.so.6\n#15 0x007befb0 in free () from /lib/libc.so.6\n#16 0x001f943a in DeleteByteCode (node=0x890ff4) at chains.cpp:477\n#17 0x00780859 in exit () from /lib/libc.so.6\n#18 0x081a6064 in proc_exit ()\n#19 0x081b5b9d in PostgresMain ()\n#20 0x0818e34b in ServerLoop ()\n#21 0x0818f1de in PostmasterMain ()\n#22 0x08152369 in main ()\n(gdb) \n\n", "msg_date": "Tue, 11 Dec 2007 07:12:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Craig James wrote:\n\n> Here is my guess -- and this is just a guess. My functions use a\n> third-party library which, of necessity, uses malloc/free in the\n> ordinary way. I suspect that there's a bug in the Postgres palloc()\n> code that's walking over memory that regular malloc() allocates. 
The\n> third-party library (OpenBabel) has been tested pretty thoroughly by\n> me an others and has no memory corruption problems. All malloc's are\n> freed properly. Does that seem like a possibility?\n\nNot really. palloc uses malloc underneath.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\n\"La vida es para el que se aventura\"\n", "msg_date": "Tue, 11 Dec 2007 12:14:17 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Alvaro Herrera wrote:\n> Craig James wrote:\n> \n>> Here is my guess -- and this is just a guess. My functions use a\n>> third-party library which, of necessity, uses malloc/free in the\n>> ordinary way. I suspect that there's a bug in the Postgres palloc()\n>> code that's walking over memory that regular malloc() allocates. The\n>> third-party library (OpenBabel) has been tested pretty thoroughly by\n>> me an others and has no memory corruption problems. All malloc's are\n>> freed properly. Does that seem like a possibility?\n> \n> Not really. palloc uses malloc underneath.\n\nBut some Postgres code could be walking off the end of a malloc'ed block, even if palloc() is allocating and deallocating correctly. Which is why I was hoping to use valgrind to see what's going on.\n\nThanks,\nCraig\n\n\n", "msg_date": "Tue, 11 Dec 2007 07:17:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Craig James wrote:\n> Alvaro Herrera wrote:\n>> Craig James wrote:\n>>\n>>> Here is my guess -- and this is just a guess. My functions use a\n>>> third-party library which, of necessity, uses malloc/free in the\n>>> ordinary way. I suspect that there's a bug in the Postgres palloc()\n>>> code that's walking over memory that regular malloc() allocates. The\n>>> third-party library (OpenBabel) has been tested pretty thoroughly by\n>>> me an others and has no memory corruption problems. All malloc's are\n>>> freed properly. Does that seem like a possibility?\n>>\n>> Not really. palloc uses malloc underneath.\n>\n> But some Postgres code could be walking off the end of a malloc'ed\n> block, even if palloc() is allocating and deallocating correctly.\n> Which is why I was hoping to use valgrind to see what's going on.\n\nI very much doubt it. Since you've now shown that OpenBabel is\nmultithreaded, then that's a much more likely cause.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\n\"When the proper man does nothing (wu-wei),\nhis thought is felt ten thousand miles.\" (Lao Tse)\n", "msg_date": "Tue, 11 Dec 2007 12:20:10 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Alvaro Herrera wrote:\n> Craig James wrote:\n>> Alvaro Herrera wrote:\n>>> Craig James wrote:\n>>>\n>>>> Here is my guess -- and this is just a guess. My functions use a\n>>>> third-party library which, of necessity, uses malloc/free in the\n>>>> ordinary way. I suspect that there's a bug in the Postgres palloc()\n>>>> code that's walking over memory that regular malloc() allocates. The\n>>>> third-party library (OpenBabel) has been tested pretty thoroughly by\n>>>> me an others and has no memory corruption problems. All malloc's are\n>>>> freed properly. Does that seem like a possibility?\n>>> Not really. 
palloc uses malloc underneath.\n>> But some Postgres code could be walking off the end of a malloc'ed\n>> block, even if palloc() is allocating and deallocating correctly.\n>> Which is why I was hoping to use valgrind to see what's going on.\n> \n> I very much doubt it. Since you've now shown that OpenBabel is\n> multithreaded, then that's a much more likely cause.\n\nCan you elaborate? Are multithreaded libraries not allowed to be linked to Postgres?\n\nThanks,\nCraig\n", "msg_date": "Tue, 11 Dec 2007 07:25:10 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Craig James wrote:\n> Alvaro Herrera wrote:\n>> Craig James wrote:\n>>> Alvaro Herrera wrote:\n>>>> Craig James wrote:\n>>>>\n>>>>> Here is my guess -- and this is just a guess. My functions use a\n>>>>> third-party library which, of necessity, uses malloc/free in the\n>>>>> ordinary way. I suspect that there's a bug in the Postgres palloc()\n>>>>> code that's walking over memory that regular malloc() allocates. The\n>>>>> third-party library (OpenBabel) has been tested pretty thoroughly by\n>>>>> me an others and has no memory corruption problems. All malloc's are\n>>>>> freed properly. Does that seem like a possibility?\n>>>> Not really. palloc uses malloc underneath.\n>>> But some Postgres code could be walking off the end of a malloc'ed\n>>> block, even if palloc() is allocating and deallocating correctly.\n>>> Which is why I was hoping to use valgrind to see what's going on.\n>>\n>> I very much doubt it. Since you've now shown that OpenBabel is\n>> multithreaded, then that's a much more likely cause.\n>\n> Can you elaborate? Are multithreaded libraries not allowed to be\n> linked to Postgres?\n\nAbsolutely not.\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"La gente vulgar solo piensa en pasar el tiempo;\nel que tiene talento, en aprovecharlo\"\n", "msg_date": "Tue, 11 Dec 2007 12:28:39 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Craig James <[email protected]> writes:\n> GNU gdb Red Hat Linux (6.5-15.fc6rh)\n> Copyright (C) 2006 Free Software Foundation, Inc.\n> GDB is free software, covered by the GNU General Public License, and you are\n> welcome to change it and/or distribute copies of it under certain conditions.\n\n> [snip - a bunch of symbol table stuff]\n\nPlease show that stuff you snipped --- it might have some relevant\ninformation. The stack trace looks a bit like a threading problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 10:43:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "Alvaro Herrera wrote:\n>>> ...Since you've now shown that OpenBabel is\n>>> multithreaded, then that's a much more likely cause.\n>> Can you elaborate? Are multithreaded libraries not allowed to be\n>> linked to Postgres?\n> \n> Absolutely not.\n\nOk, thanks, I'll work on recompiling OpenBabel without thread support.\n\nSince I'm not a Postgres developer, perhaps one of the maintainers could update the Postgres manual. 
In chapter 32.9.6, it says,\n\n \"To be precise, a shared library needs to be created.\"\n\nThis should be amended to say,\n\n \"To be precise, a non-threaded, shared library needs to be created.\"\n\nCheers,\nCraig\n\n\n", "msg_date": "Tue, 11 Dec 2007 07:50:17 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Tom Lane wrote:\n> Craig James <[email protected]> writes:\n>> GNU gdb Red Hat Linux (6.5-15.fc6rh)\n>> Copyright (C) 2006 Free Software Foundation, Inc.\n>> GDB is free software, covered by the GNU General Public License, and you are\n>> welcome to change it and/or distribute copies of it under certain conditions.\n> \n>> [snip - a bunch of symbol table stuff]\n> \n> Please show that stuff you snipped --- it might have some relevant\n> information. The stack trace looks a bit like a threading problem...\n\n# gdb -p 31442\nGNU gdb Red Hat Linux (6.5-15.fc6rh)\nCopyright (C) 2006 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain conditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"i386-redhat-linux-gnu\".\nAttaching to process 31442\nReading symbols from /usr/local/pgsql/bin/postgres...(no debugging symbols found)...done.\nUsing host libthread_db library \"/lib/libthread_db.so.1\".\nReading symbols from /usr/lib/libz.so.1...(no debugging symbols found)...done.\nLoaded symbols for /usr/lib/libz.so.1\nReading symbols from /usr/lib/libreadline.so.5...(no debugging symbols found)...done.\nLoaded symbols for /usr/lib/libreadline.so.5\nReading symbols from /lib/libtermcap.so.2...(no debugging symbols found)...done.\nLoaded symbols for /lib/libtermcap.so.2\nReading symbols from /lib/libcrypt.so.1...\n(no debugging symbols found)...done.\nLoaded symbols for /lib/libcrypt.so.1\nReading symbols from /lib/libresolv.so.2...(no debugging symbols found)...done.\nLoaded symbols for /lib/libresolv.so.2\nReading symbols from /lib/libnsl.so.1...(no debugging symbols found)...done.\nLoaded symbols for /lib/libnsl.so.1\nReading symbols from /lib/libdl.so.2...(no debugging symbols found)...done.\nLoaded symbols for /lib/libdl.so.2\nReading symbols from /lib/libm.so.6...\n(no debugging symbols found)...done.\nLoaded symbols for /lib/libm.so.6\nReading symbols from /lib/libc.so.6...(no debugging symbols found)...done.\nLoaded symbols for /lib/libc.so.6\nReading symbols from /lib/ld-linux.so.2...(no debugging symbols found)...done.\nLoaded symbols for /lib/ld-linux.so.2\nReading symbols from /lib/libnss_files.so.2...(no debugging symbols found)...done.\nLoaded symbols for /lib/libnss_files.so.2\nReading symbols from /usr/local/pgsql/lib/libchmoogle.so...done.\nLoaded symbols for /usr/local/pgsql/lib/libchmoogle.so\nReading symbols from /lib/libgcc_s.so.1...done.\nLoaded symbols for /lib/libgcc_s.so.1\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/jaguarformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/jaguarformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/libopenbabel.so.2...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/libopenbabel.so.2\nReading symbols from /usr/lib/libstdc++.so.6...done.\nLoaded symbols for 
/usr/lib/libstdc++.so.6\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastaformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastaformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cansmilesformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cansmilesformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/APIInterface.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/APIInterface.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmodformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmodformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/molreportformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/molreportformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fhformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fhformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemkinformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemkinformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmcifformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mmcifformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/thermoformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/thermoformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/carformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/carformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/ghemicalformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/ghemicalformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/turbomoleformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/turbomoleformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xmlformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xmlformat.so\nReading symbols from /usr/lib/libxml2.so.2...done.\nLoaded symbols for /usr/lib/libxml2.so.2\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/rxnformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/rxnformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/reportformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/reportformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/acrformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/acrformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/nwchemformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/nwchemformat.so\nReading symbols 
from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/hinformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/hinformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/bgfformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/bgfformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/shelxformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/shelxformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/yasaraformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/yasaraformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/viewmolformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/viewmolformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mdlformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mdlformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/CSRformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/CSRformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacaoformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacaoformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gaussformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gaussformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/titleformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/titleformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gamessformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gamessformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/zindoformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/zindoformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fingerprintformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fingerprintformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/balstformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/balstformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cssrformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cssrformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cdxmlformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cdxmlformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/crkformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/crkformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xedformat.so...done.\nLoaded symbols for 
/usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xedformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawcdxformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawcdxformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cmlformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cmlformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpdformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpdformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/amberformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/amberformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/smilesformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/smilesformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemtoolformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemtoolformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pubchem.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pubchem.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fchkformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fchkformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/qchemformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/qchemformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mopacformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mopacformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/PQSformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/PQSformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastsearchformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/fastsearchformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/freefracformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/freefracformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chem3dformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chem3dformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/inchiformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/inchiformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cccformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cccformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpqcformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mpqcformat.so\nReading symbols from 
/usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/copyformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/copyformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cifformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cifformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/unichemformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/unichemformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/boxformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/boxformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mol2format.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/mol2format.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/tinkerformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/tinkerformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/featformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/featformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/alchemyformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/alchemyformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pngformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pngformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pcmodelformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pcmodelformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/dmolformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/dmolformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gausscubeformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gausscubeformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/povrayformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/povrayformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xyzformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/xyzformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacheformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/cacheformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawctformat.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/chemdrawctformat.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gromos96format.so...done.\nLoaded symbols for /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/gromos96format.so\nReading symbols from /usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pdbformat.so...done.\nLoaded symbols for 
/usr/local/openbabel-inst/openbabel-dev-2-1-x/lib/openbabel/pdbformat.so\n\n0x00110402 in __kernel_vsyscall ()\n(gdb) bt\n#0 0x00110402 in __kernel_vsyscall ()\n#1 0x0082fb8e in __lll_mutex_lock_wait () from /lib/libc.so.6\n#2 0x007bfce8 in _L_lock_14096 () from /lib/libc.so.6\n#3 0x007befa4 in free () from /lib/libc.so.6\n#4 0x00744f93 in _dl_map_object_deps () from /lib/ld-linux.so.2\n#5 0x0074989d in dl_open_worker () from /lib/ld-linux.so.2\n#6 0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2\n#7 0x00749222 in _dl_open () from /lib/ld-linux.so.2\n#8 0x00858712 in do_dlopen () from /lib/libc.so.6\n#9 0x00745c36 in _dl_catch_error () from /lib/ld-linux.so.2\n#10 0x008588c5 in __libc_dlopen_mode () from /lib/libc.so.6\n#11 0x00836139 in init () from /lib/libc.so.6\n#12 0x008362d3 in backtrace () from /lib/libc.so.6\n#13 0x007b3e11 in __libc_message () from /lib/libc.so.6\n#14 0x007bba96 in _int_free () from /lib/libc.so.6\n#15 0x007befb0 in free () from /lib/libc.so.6\n#16 0x001f943a in DeleteByteCode (node=0x890ff4) at chains.cpp:477\n#17 0x00780859 in exit () from /lib/libc.so.6\n#18 0x081a6064 in proc_exit ()\n#19 0x081b5b9d in PostgresMain ()\n#20 0x0818e34b in ServerLoop ()\n#21 0x0818f1de in PostmasterMain ()\n#22 0x08152369 in main ()\n(gdb) \n", "msg_date": "Tue, 11 Dec 2007 07:53:15 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:\n> Alvaro Herrera wrote:\n> >>>...Since you've now shown that OpenBabel is\n> >>>multithreaded, then that's a much more likely cause.\n> >>Can you elaborate? Are multithreaded libraries not allowed to be\n> >>linked to Postgres?\n> >\n> >Absolutely not.\n> \n> Ok, thanks, I'll work on recompiling OpenBabel without thread support.\n> \n> Since I'm not a Postgres developer, perhaps one of the maintainers could \n> update the Postgres manual. In chapter 32.9.6, it says,\n> \n> \"To be precise, a shared library needs to be created.\"\n> \n> This should be amended to say,\n> \n> \"To be precise, a non-threaded, shared library needs to be created.\"\n> \n\nJust before someone goes ahead and writes it (which is probably a good idea\nin general), don't write it just like taht - because it's platform\ndependent. On win32, you can certainly stick a threaded library to it -\nwhich is good, because most (if not all) win32 libs are threaded... Now, if\nthey actually *use* threads explicitly things might break (but most likely\nnot from that specifically), but you can link with them without the\nproblem. I'm sure there are other platforms with similar situations.\n\n\n//Magnus\n", "msg_date": "Tue, 11 Dec 2007 16:57:03 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Craig James wrote:\n>> Can you elaborate? 
Are multithreaded libraries not allowed to be\n>> linked to Postgres?\n\n> Absolutely not.\n\nThe problem is that you get into library-interaction bugs like the\none discussed here:\nhttp://archives.postgresql.org/pgsql-general/2007-11/msg00580.php\nhttp://archives.postgresql.org/pgsql-general/2007-11/msg00610.php\n\nI suspect what you're seeing is the exact same problem on a different\nglibc internal mutex: the mutex is left uninitialized on the first trip\nthrough the code because the process is not multithreaded, and then\nafter OpenBabel gets loaded the process becomes multithreaded, and then\nit starts trying to use the mutex :-(.\n\nSince the glibc boys considered the other problem to be their bug,\nthey'd probably be interested in fixing this one too. Unfortunately,\nyou picked a Fedora version that reached EOL last week. Update to\nFC7 or FC8, and if you still see the problem, file a bugzilla entry\nagainst glibc.\n\nBut having said all that, that still only addresses the question of\nwhy the process hangs up during exit(). Why the double-free report is\nbeing made at all is less clear, but I kinda think that unexpected\nmultithread behavior may be at bottom there too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 11:10:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "Craig James <[email protected]> writes:\n>> Please show that stuff you snipped --- it might have some relevant\n>> information. The stack trace looks a bit like a threading problem...\n\n> Using host libthread_db library \"/lib/libthread_db.so.1\".\n\nThat's pretty suspicious, but not quite a smoking gun. Does \"info\nthreads\" report more than 1 thread?\n\n> Reading symbols from /usr/lib/libstdc++.so.6...done.\n> Loaded symbols for /usr/lib/libstdc++.so.6\n\nHmm, I wonder whether *this* is the problem, rather than OpenBabel\nper se. Trying to use C++ inside the PG backend is another minefield\nof things that don't work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 11:19:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:\n>> Since I'm not a Postgres developer, perhaps one of the maintainers could \n>> update the Postgres manual. In chapter 32.9.6, it says,\n>> \n>> \"To be precise, a shared library needs to be created.\"\n>> \n>> This should be amended to say,\n>> \n>> \"To be precise, a non-threaded, shared library needs to be created.\"\n\n> Just before someone goes ahead and writes it (which is probably a good idea\n> in general), don't write it just like taht - because it's platform\n> dependent.\n\nI can find no such text in our documentation at all, nor any reference\nto OpenBabel. 
I think Craig must be looking at someone else's\ndocumentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 11:25:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Tue, 11 Dec 2007 11:25:08 -0500\r\nTom Lane <[email protected]> wrote:\r\n\r\n> Magnus Hagander <[email protected]> writes:\r\n> > On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:\r\n> >> Since I'm not a Postgres developer, perhaps one of the maintainers\r\n> >> could update the Postgres manual. In chapter 32.9.6, it says,\r\n> >> \r\n> >> \"To be precise, a shared library needs to be created.\"\r\n> >> \r\n> >> This should be amended to say,\r\n> >> \r\n> >> \"To be precise, a non-threaded, shared library needs to be\r\n> >> created.\"\r\n> \r\n> > Just before someone goes ahead and writes it (which is probably a\r\n> > good idea in general), don't write it just like taht - because it's\r\n> > platform dependent.\r\n> \r\n> I can find no such text in our documentation at all, nor any reference\r\n> to OpenBabel. I think Craig must be looking at someone else's\r\n> documentation.\r\n\r\nIt's actually 33.9.6 and it is in:\r\n\r\nhttp://www.postgresql.org/docs/8.2/static/xfunc-c.html#DFUNC\r\n\r\nHe is looking directly at our documentation :)\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n\r\n\r\n- -- \r\nThe PostgreSQL Company: Since 1997, http://www.commandprompt.com/ \r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nSELECT 'Training', 'Consulting' FROM vendor WHERE name = 'CMD'\r\n\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHXrxJATb/zqfZUUQRAmSTAJwO0kdDovLB7kFGaPL9OPna3rm8ZwCfVaNo\r\nXKtTfT7He9rNEvMBs5e+O94=\r\n=qmOr\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Tue, 11 Dec 2007 08:35:21 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Tom Lane wrote:\n> Magnus Hagander <[email protected]> writes:\n>> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:\n>>> Since I'm not a Postgres developer, perhaps one of the maintainers could \n>>> update the Postgres manual. In chapter 32.9.6, it says,\n>>>\n>>> \"To be precise, a shared library needs to be created.\"\n>>>\n>>> This should be amended to say,\n>>>\n>>> \"To be precise, a non-threaded, shared library needs to be created.\"\n> \n>> Just before someone goes ahead and writes it (which is probably a good idea\n>> in general), don't write it just like taht - because it's platform\n>> dependent.\n> \n> I can find no such text in our documentation at all, nor any reference\n> to OpenBabel. 
I think Craig must be looking at someone else's\n> documentation.\n\nhttp://www.postgresql.org/docs/8.1/static/xfunc-c.html#DFUNChttp://www.postgresql.org/docs/8.1/static/xfunc-c.html#DFUNC\n\nCraig\n", "msg_date": "Tue, 11 Dec 2007 08:40:14 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "\"Magnus Hagander\" <[email protected]> writes:\n\n> On Tue, Dec 11, 2007 at 07:50:17AM -0800, Craig James wrote:\n>\n>> This should be amended to say,\n>> \n>> \"To be precise, a non-threaded, shared library needs to be created.\"\n>\n> Just before someone goes ahead and writes it (which is probably a good idea\n> in general), don't write it just like taht - because it's platform\n> dependent. On win32, you can certainly stick a threaded library to it -\n> which is good, because most (if not all) win32 libs are threaded... Now, if\n> they actually *use* threads explicitly things might break (but most likely\n> not from that specifically), but you can link with them without the\n> problem. I'm sure there are other platforms with similar situations.\n\nEven on Unix there's nothing theoretically wrong with loading a shared library\nwhich uses threads. It's just that there are a whole lot of practical problems\nwhich can crop up.\n\n1) No Postgres function is guaranteed to be thread-safe so you better protect\n against concurrent calls to Postgres API functions. Also Postgres functions\n use longjmp which can restore the stack pointer to a value which may have\n been set earlier, possibly by another thread which wouldn't work.\n\nSo you're pretty much restricted to calling Postgres API functions from the\nmain stack which means from the original thread Postgres loaded you with.\n\nThen there's\n\n2) Some OSes have bugs (notably glibc for a specific narrow set of versions)\n and don't expect to have standard library functions called before\n pthread_init() then called again after pthread_init(). If they expect the\n system to be either \"threaded\" or \"not threaded\" then they may be surprised\n to see that state change.\n\nThat just means you have to use a non-buggy version of your OS. Unfortunately\ntracking down bugs in your OS to figure out what's causing them and whether\nit's a particular known bug can be kind of tricky.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Tue, 11 Dec 2007 17:02:56 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n>> I can find no such text in our documentation at all, nor any reference\n>> to OpenBabel. I think Craig must be looking at someone else's\n>> documentation.\n\n> It's actually 33.9.6 and it is in:\n> http://www.postgresql.org/docs/8.2/static/xfunc-c.html#DFUNC\n\n[ shrug... ] That documentation is not intended to address how to\nconfigure OpenBabel. 
It's talking about setting up linker commands,\nand \"threaded\" is not a relevant concept at that level.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Dec 2007 12:06:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "Gregory Stark wrote:\n> 1) No Postgres function is guaranteed to be thread-safe so you better protect\n> against concurrent calls to Postgres API functions. Also Postgres functions\n> use longjmp which can restore the stack pointer to a value which may have\n> been set earlier, possibly by another thread which wouldn't work.\n>\n> \nThat's a whole different thing to saying that you can't use a threaded \nsubsystem under a Postgres\nprocess.\n\n> 2) Some OSes have bugs (notably glibc for a specific narrow set of versions)\n> and don't expect to have standard library functions called before\n> pthread_init() then called again after pthread_init(). If they expect the\n> system to be either \"threaded\" or \"not threaded\" then they may be surprised\n> to see that state change.\n>\n> \nIs there any particular reason not to ensure that any low-level \nthreading support in libc is enabled right\nfrom the get-go, as a build-time option? Does it do anything that's not \nwell defined in a threaded\nprocess? Signal handling and atfork (and posix_ exec) are tyical areas \nI guess. While this can potentially\nmake malloc slower, Postgres already wraps malloc so using a caching \nthread-aware malloc\nsubstitute such as nedmalloc should be no problem.\n\nI don't see any issue with the setjmp usage - so long as only one thread \nuses any internal API. Which\ncan be checked rather easily at runtime with low cost in a debug build.\n> That just means you have to use a non-buggy version of your OS. Unfortunately\n> tracking down bugs in your OS to figure out what's causing them and whether\n> it's a particular known bug can be kind of tricky.\n> \nIs that really much of an issue an the current version of any major OS \nthough? Its reaonable to\nlimit the use of a threaded library (in particular, the runtimes for \nmost embeddable languages, or\nlibraries for RPC runtimes, etc) to 'modern' platforms that support \nthreads effectively. 
On\nmany such platforms these will already implicitly link libpthread anyway.\n\nJames\n\n\n", "msg_date": "Sun, 16 Dec 2007 17:48:06 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "James Mansion <[email protected]> writes:\n> Is there any particular reason not to ensure that any low-level \n> threading support in libc is enabled right\n> from the get-go, as a build-time option?\n\nYes.\n\n1) It's of no value to us\n\n2) On many platforms there is a nonzero performance penalty\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Dec 2007 13:01:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die " }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> James Mansion <[email protected]> writes:\n>> Is there any particular reason not to ensure that any low-level \n>> threading support in libc is enabled right\n>> from the get-go, as a build-time option?\n>\n> Yes.\n> 1) It's of no value to us\n> 2) On many platforms there is a nonzero performance penalty\n\nAnd the only reason to do that would be to work around one bug in one small\nrange of glibc versions. If you're going to use a multi-threaded library\n(which isn't very common since it's hard to do safely for all those other\nreasons) surely using a version of your OS without any thread related bugs is\na better idea.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Sun, 16 Dec 2007 18:16:18 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Gregory Stark wrote:\n> \"Tom Lane\" <[email protected]> writes:\n> \n>> James Mansion <[email protected]> writes:\n>>> Is there any particular reason not to ensure that any low-level \n>>> threading support in libc is enabled right\n>>> from the get-go, as a build-time option?\n>> Yes.\n>> 1) It's of no value to us\n\nWho is \"us\"? Some of us would like to use the system for advanced scientific work, and scientific libraries are usually written in C++.\n\n>> 2) On many platforms there is a nonzero performance penalty\n\nI'm surprised you say this, given that you're usually the voice of reason when it comes to rejecting hypothetical statements in favor of tested facts. If building Postgres using thread-safe technology is really a performance burden, that could be easily verified. A \"nonzero performance penalty\", what does that mean, a 0.0001% slowdown? I find it hard to believe that the performance penalty of thread-safe version would even be measurable.\n\nIf nobody has the time to do such a test, or other priorities take precedence, that's understandable. But the results aren't in yet.\n\n> And the only reason to do that would be to work around one bug in one small\n> range of glibc versions. If you're going to use a multi-threaded library\n> (which isn't very common since it's hard to do safely for all those other\n> reasons) surely using a version of your OS without any thread related bugs is\n> a better idea.\n\nYou're jumping ahead. This problem has not been accurately diagnosed yet. It could be that the pthreads issue is completely misleading everyone, and in fact there is a genuine memory corruption going on here. Or not. We don't know yet. 
I have made zero progress fixing this problem.\n\nThe \"one small range of glibc versions\" is a giveaway. I've seen this problem in FC3, 5, and 6 (I went through this series of upgrades all in one week trying to fix this problem). With each version, I recompiled Postgres and OpenBabel from scratch. I'm going to try FC7 next since it's now the only \"official\" supported version, but I don't believe glibc is the problem.\n\nAndrew Dalke, a regular contributor to the OpenBabel forum, suggests another problem: It could be a result of linking the wrong libraries together. The gcc/ld system has a byzantine set of rules and hacks that if I understand Andrew's posting) select different versions of the same library depending on what it thinks you might need. It's possible that the wrong version of some system library is getting linked in.\n\nCraig\n", "msg_date": "Sun, 16 Dec 2007 11:03:11 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "\"Craig James\" <[email protected]> writes:\n\n> Gregory Stark wrote:\n>\n>> And the only reason to do that would be to work around one bug in one small\n>> range of glibc versions. If you're going to use a multi-threaded library\n>> (which isn't very common since it's hard to do safely for all those other\n>> reasons) surely using a version of your OS without any thread related bugs is\n>> a better idea.\n>\n> You're jumping ahead. This problem has not been accurately diagnosed yet. It\n> could be that the pthreads issue is completely misleading everyone, and in fact\n> there is a genuine memory corruption going on here. Or not. We don't know yet.\n> I have made zero progress fixing this problem.\n\nWell, no that would be you jumping ahead then... You proposed Postgres\nchanging the way it handles threaded libraries based on Tom's suggestion that\nyour problem was something like the glibc problem previously found. My comment\nwas based on the known glibc problem. From what you're saying it's far from\ncertain that the problem would be fixed by changing Postgres's behaviour in\nthe way you proposed.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Sun, 16 Dec 2007 20:06:47 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Tom Lane wrote:\n> Yes.\n>\n> 1) It's of no value to us\n>\n> 2) On many platforms there is a nonzero performance penalty\n>\n> \nI think you have your head in the ground, but its your perogative. *You*\nmight not care, but anyone wanting to use thread-aware libraries (and I'm\n*not* talking about threading in any Postgres code) will certainly value \nit if\nthey can do so with some stability.\n\nThere's a clear benefit to being able to use such code. I suggested a\nbuild option but you reject it out of hand. And in doing so, you also \nlock out\nthe benefits that you *could* have as well, in future.. It seems religious,\nwhich is unfortunate.\n\nAre you suggesting that the performance penalty, apart from the\nmalloc performance (which is easily dealt with) is *material*?\nAn extra indirection in access to errno will hurt so much? 
Non-zero I can\naccept, but clinging to 'non-zero' religiously isn't smart, especially \nif its a\nbuild-time choice.\n\nWe'll clearly move to multiple cores, and the clock speed enhancements will\nslow (at best). In many cases, the number of available cores will \nexceed the\nnumber of instantaneously active connections. Don't you want to be able\nto use all the horsepower?\n\nCertainly on the sort of systems I work in my day job (big derivative \ntrading\nsystems) its the norm that the cache hit rate on Sybase is well over \n99%, and\nsuch systems are typically CPU bound. Parallelism matters, and will matter\nmore and more in future.\n\nSo, an ability to start incrementally adding parallel operation of some \nactions\n(whether scanning or updating indices or pushing data to the peer) is \nvaluable,\nas is the ability to use threaded libraries - and the (potential?) \nability to use\nembedded languages and more advanced libraries in Postgres procs is one \nof the\nadvantages of the system itself. (I'd like to discount the use of a \nruntime in a\nseperate process - the latency is a problem for row triggers and functions)\n\nJames\n\n", "msg_date": "Sun, 16 Dec 2007 22:24:01 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "James Mansion wrote:\n> I think you have your head in the ground, but its your perogative.\n> *You* might not care, but anyone wanting to use thread-aware libraries\n> (and I'm *not* talking about threading in any Postgres code) will\n> certainly value it if they can do so with some stability.\n\nI suggest you find out the cause of your problem and then we can do more\nresearch. Talking about us changing the Postgres behavior from the\nreport of one user who doesn't even have the full details isn't\nproductive.\n\n--\n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 17 Dec 2007 03:52:16 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: libgcc double-free, backend won't die" }, { "msg_contents": "Bruce Momjian wrote:\n> James Mansion wrote:\n>> I think you have your head in the ground, but its your perogative.\n>> *You* might not care, but anyone wanting to use thread-aware libraries\n>> (and I'm *not* talking about threading in any Postgres code) will\n>> certainly value it if they can do so with some stability.\n> \n> I suggest you find out the cause of your problem and then we can do more\n> research. Talking about us changing the Postgres behavior from the\n> report of one user who doesn't even have the full details isn't\n> productive.\n\nI think you're confusing James Mansion with me (Craig James). I'm the one with the unresolved problem.\n\nJames is suggesting, completely independently of whether or not there's a bug in my system, that a thread-friendly option for Postgres would be very useful.\n\nDon't confuse thread-friendly with a threaded implemetation of Postgres itself. These are two separate questions. 
Thread-friendly involves compile/link options that don't affect the Postgres source code at all.\n\nCraig\n", "msg_date": "Mon, 17 Dec 2007 11:35:18 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Multi-threading friendliness (was: libgcc double-free, backend won't\n\tdie)" }, { "msg_contents": "Craig James wrote:\n> Don't confuse thread-friendly with a threaded implemetation of \n> Postgres itself. These are two separate questions. Thread-friendly \n> involves compile/link options that don't affect the Postgres source \n> code at all.\nIndeed. I'm specifically not suggesting that Postgres should offer an \nAPI that can be called from\nanything except the initial thread of its process - just that library \nsubsystems might want to use\nthreads internally and that should be OK. Or rather, it should be \npossible to build Postgres\nso that its OK. Even if there's a small slowdown, the benefit of \nrunning the full JVM or CLR\nmight outweigh that quite easily *in some circumstances*.\n\nI've also hinted that at some stage you might want to thread some parts \nof the implementation,\nbut I'm not suggesting that would be an early target. It seems to me \nsensible to make it\nstraightforward to take baby steps in that direction in future would be \na reasonable thing to\ndo. As would being friendly to dynamically loaded C++ code. If you \ncreate the framework,\n(and we're talking the barest of scaffolding) then others can work to \nshow the cost/benefit.\n\nI fail to see why this would be a controversial engineering approach.\n\nJames\n\n", "msg_date": "Mon, 17 Dec 2007 21:05:41 +0000", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multi-threading friendliness" } ]
[ { "msg_contents": "I just read the lead ups to this post - didn't see Tom and Greg's comments.\n\nThe approach we took was to recognize the ordering of child nodes and propagate that to the append in the special case of only one child (after CE). This is the most common use-case in 'partitioning', and so is an easy, high payoff low amount of code fix.\n\nI'd suggest we take this approach while also considering a more powerful set of append merge capabilities.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tLuke Lonergan [mailto:[email protected]]\nSent:\tSaturday, October 27, 2007 03:14 PM Eastern Standard Time\nTo:\tHeikki Linnakangas; Anton\nCc:\[email protected]\nSubject:\tRe: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnd I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.\n\nThe patch isn't ready for application, but someone can quickly implement it I'd expect.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tHeikki Linnakangas [mailto:[email protected]]\nSent:\tSaturday, October 27, 2007 05:20 AM Eastern Standard Time\nTo:\tAnton\nCc:\[email protected]\nSubject:\tRe: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnton wrote:\n> I repost here my original question \"Why it no uses indexes?\" (on\n> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\n> mean that you miss this discussion.\n\nAs I said back then:\n\nThe planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\nbelow the append node.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\nRe: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\n\n\nI just read the lead ups to this post - didn't see Tom and Greg's comments.\n\nThe approach we took was to recognize the ordering of child nodes and propagate that to the append in the special case of only one child (after CE).  This is the most common use-case in 'partitioning', and so is an easy, high payoff low amount of code fix.\n\nI'd suggest we take this approach while also considering a more powerful set of append merge capabilities.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Luke Lonergan [mailto:[email protected]]\nSent:   Saturday, October 27, 2007 03:14 PM Eastern Standard Time\nTo:     Heikki Linnakangas; Anton\nCc:     [email protected]\nSubject:        Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnd I repeat - 'we fixed that and submitted a patch' - you can find it in the unapplied patches queue.\n\nThe patch isn't ready for application, but someone can quickly implement it I'd expect.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Heikki Linnakangas [mailto:[email protected]]\nSent:   Saturday, October 27, 2007 05:20 AM Eastern Standard Time\nTo:     Anton\nCc:     [email protected]\nSubject:        Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC LIMIT 1\n\nAnton wrote:\n> I repost here my original question \"Why it no uses indexes?\" (on\n> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\n> mean that you miss this discussion.\n\nAs I said back then:\n\nThe planner isn't smart enough to push the \"ORDER BY ... 
LIMIT ...\"\nbelow the append node.\n\n--\n  Heikki Linnakangas\n  EnterpriseDB   http://www.enterprisedb.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings", "msg_date": "Sat, 27 Oct 2007 15:28:04 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "\n\"Luke Lonergan\" <[email protected]> writes:\n\n> The approach we took was to recognize the ordering of child nodes and\n> propagate that to the append in the special case of only one child (after\n> CE). This is the most common use-case in 'partitioning', and so is an easy,\n> high payoff low amount of code fix.\n\nAh yes, we should definitely try to prune singleton append nodes. On a lark I\nhad tried to do precisely that to see what would happen but ran into precisely\nthe problem you had to solve here with your pullup_vars function. That's one\nof the functions which wasn't included in the original patch so I'll look at\nthe patch from the queue to see what's involved.\n\nActually currently it's not a common case because we can't eliminate the\nparent partition. I have some ideas for how to deal with that but haven't\nwritten them up yet.\n\nIn theory if we can preserve ordering across append nodes there's no good\nreason to prune them. But generally I think simplifying the plan is good if\nonly to present simpler plans to the user. \n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 28 Oct 2007 01:20:26 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" } ]
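A concrete sketch of the workaround implied by this thread may help. On planners of this era the ORDER BY ... LIMIT is not pushed below the Append node, so the whole partitioned set gets sorted before the LIMIT is applied. The schema names below (logs, logs_y2007m10d26, logs_y2007m10d27, and the ts column) are hypothetical and not taken from the thread; the idea is simply to give every branch its own ORDER BY ... LIMIT so each child's index can supply its single candidate row, and then pick the overall winner from those few rows:

-- Hypothetical layout: parent "logs" with one child table per day, each child indexed on (ts).
SELECT *
FROM (
      (SELECT * FROM ONLY logs        ORDER BY ts DESC LIMIT 1)
      UNION ALL
      (SELECT * FROM logs_y2007m10d26 ORDER BY ts DESC LIMIT 1)
      UNION ALL
      (SELECT * FROM logs_y2007m10d27 ORDER BY ts DESC LIMIT 1)
     ) AS newest
ORDER BY ts DESC
LIMIT 1;

-- If the newest row is known to live in the newest partition, querying that child directly
-- also lets its (ts) index be scanned backwards instead of sorting the whole Append output.
SELECT * FROM logs_y2007m10d27 ORDER BY ts DESC LIMIT 1;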
[ { "msg_contents": "Works great - plans no longer sort, but rather use indices as expected. It's in use in Greenplum now.\n\nIt's a simple approach, should easily extend from gpdb to postgres. The patch is against gpdb so someone needs to 'port' it.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom: \tSimon Riggs [mailto:[email protected]]\nSent:\tSaturday, October 27, 2007 05:34 PM Eastern Standard Time\nTo:\tLuke Lonergan\nCc:\tHeikki Linnakangas; Anton; [email protected]\nSubject:\tRe: [PERFORM] partitioned table and ORDER BY indexed_field DESCLIMIT 1\n\nOn Sat, 2007-10-27 at 15:12 -0400, Luke Lonergan wrote:\n> And I repeat - 'we fixed that and submitted a patch' - you can find it\n> in the unapplied patches queue.\n\nI got the impression it was a suggestion rather than a tested patch,\nforgive me if that was wrong.\n\nDid the patch work? Do you have timings/different plan?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n\n\n\nRe: [PERFORM] partitioned table and ORDER BY indexed_field DESCLIMIT 1\n\n\n\nWorks great - plans no longer sort, but rather use indices as expected.  It's in use in Greenplum now.\n\nIt's a simple approach, should easily extend from gpdb to postgres. The patch is against gpdb so someone needs to 'port' it.\n\n- Luke\n\nMsg is shrt cuz m on ma treo\n\n -----Original Message-----\nFrom:   Simon Riggs [mailto:[email protected]]\nSent:   Saturday, October 27, 2007 05:34 PM Eastern Standard Time\nTo:     Luke Lonergan\nCc:     Heikki Linnakangas; Anton; [email protected]\nSubject:        Re: [PERFORM] partitioned table and ORDER BY indexed_field DESCLIMIT 1\n\nOn Sat, 2007-10-27 at 15:12 -0400, Luke Lonergan wrote:\n> And I repeat - 'we fixed that and submitted a patch' - you can find it\n> in the unapplied patches queue.\n\nI got the impression it was a suggestion rather than a tested patch,\nforgive me if that was wrong.\n\nDid the patch work? Do you have timings/different plan?\n\n--\n  Simon Riggs\n  2ndQuadrant  http://www.2ndQuadrant.com", "msg_date": "Sat, 27 Oct 2007 17:48:16 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field\n DESCLIMIT 1" }, { "msg_contents": "On Sat, 2007-10-27 at 17:48 -0400, Luke Lonergan wrote:\n> Works great - plans no longer sort, but rather use indices as\n> expected. It's in use in Greenplum now.\n> \n> It's a simple approach, should easily extend from gpdb to postgres.\n> The patch is against gpdb so someone needs to 'port' it.\n\nThe part of the patch that didn't work for me was the nrels==1 bit. The\nway it currently works there is only ever 0 or 2+ rels. 
The normal\nPostgres code has to cater for the possibility of a non-empty parent\ntable, which seems to destroy the possibility of using this technique.\n\nI agree its annoying and I have a way of doing this, but that's an 8.4\nthing now.\n\nAnybody think different?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sun, 28 Oct 2007 09:13:53 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field\n\tDESCLIMIT 1" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> I agree its annoying and I have a way of doing this, but that's an 8.4\n> thing now.\n\nIt was an 8.4 thing quite some time ago, since no working patch was ever\nsubmitted.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Oct 2007 12:53:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESCLIMIT 1 " }, { "msg_contents": "On Sun, 2007-10-28 at 12:53 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > I agree its annoying and I have a way of doing this, but that's an 8.4\n> > thing now.\n> \n> It was an 8.4 thing quite some time ago, since no working patch was ever\n> submitted.\n\nSorry, I meant that the problem itself is annoying, not anything to do\nwith patches. I'm very happy with all that we've done.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sun, 28 Oct 2007 17:27:24 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field\n\tDESCLIMIT 1" } ]
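The nrels==1 situation Luke and Simon are discussing is easier to follow with a small, self-contained layout. The names below (measurement, its two monthly children, and the logdate column) are hypothetical rather than taken from the thread; the sketch only shows the shape of the case: a predicate that matches exactly one child's CHECK constraint, which is when propagating that child's index order up through the Append would pay off. As Simon points out, stock PostgreSQL keeps the (possibly empty) parent in the Append even after constraint exclusion, so the child list never collapses to a single member there:

SET constraint_exclusion = on;

CREATE TABLE measurement (logdate timestamp NOT NULL, reading numeric);
CREATE TABLE measurement_y2007m09 (
    CHECK (logdate >= '2007-09-01' AND logdate < '2007-10-01')
) INHERITS (measurement);
CREATE TABLE measurement_y2007m10 (
    CHECK (logdate >= '2007-10-01' AND logdate < '2007-11-01')
) INHERITS (measurement);
CREATE INDEX measurement_y2007m10_logdate ON measurement_y2007m10 (logdate);

-- The predicate contradicts the September CHECK, so constraint exclusion leaves only the
-- October child plus the never-excluded parent under the Append, which is why Simon sees
-- 0 or 2+ rels rather than the single-child case the patch keys on.
EXPLAIN
SELECT * FROM measurement
WHERE logdate >= '2007-10-15' AND logdate < '2007-10-16'
ORDER BY logdate DESC
LIMIT 1;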
[ { "msg_contents": "All,\n\nWe are trying to implement partition on one tables on date basis. the\noverall cost and timming and cost of the query is increasing on the Append\nof the child table output. As shown below:\n\n*-> Append (cost=0.00..112217.92 rows=2752906 width=52) (actual time=\n2454.207..20712.021 rows=2752905 loops=1)\n -> Seq Scan on trm (cost=0.00..28570.35 rows=1 width=52) (actual time=\n2423.374..2423.374 rows=0 loops=1)\n Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\nAND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n -> Seq Scan on trm_d20070601 trm (cost=0.00..29203.41 rows=961094\nwidth=52) (actual time=30.825..3027.217 rows=961094 loops=1)\n Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\nAND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n -> Seq Scan on trm_d20070602 trm (cost=0.00..27442.52 rows=903168\nwidth=52) (actual time=11.142..2687.422 rows=903168 loops=1)\n Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\nAND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n -> Seq Scan on trm_d20070603 trm (cost=0.00..27001.64 rows=888643\nwidth=52) (actual time=13.697..2568.012 rows=888643 loops=1)\n Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\nAND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))*\n\n\nCan someone let me know, how we can reduce the overall cost and time of the\nappend operation, and what parameters in the confirguration needs to be\nchanged?\n\n\nLet me know if you need any further information to improve the query plan.\n\n\nRegards,\nNimesh.\n\nAll,\n \nWe are trying to implement partition on one tables on date basis. the overall cost and timming and cost of the query is increasing on the Append of the child table output. 
As shown below:\n \n->  Append  (cost=0.00..112217.92 rows=2752906 width=52) (actual time=2454.207..20712.021 rows=2752905 loops=1) ->  Seq Scan on trm  (cost=0.00..28570.35 rows=1 width=52) (actual time=2423.374..2423.374\n rows=0 loops=1)  Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone)) ->  Seq Scan on trm_d20070601 trm  (cost=\n0.00..29203.41 rows=961094 width=52) (actual time=30.825..3027.217 rows=961094 loops=1)  Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n ->  Seq Scan on trm_d20070602 trm  (cost=0.00..27442.52 rows=903168 width=52) (actual time=11.142..2687.422 rows=903168 loops=1)  Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n ->  Seq Scan on trm_d20070603 trm  (cost=0.00..27001.64 rows=888643 width=52) (actual time=13.697..2568.012 rows=888643 loops=1)  Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n \n \nCan someone let me know, how we can reduce the overall cost and time of the append operation, and what parameters in the confirguration needs to be changed?\n \n \nLet me know if you need any further information to improve the query plan.\n \n \nRegards,\nNimesh.", "msg_date": "Sun, 28 Oct 2007 20:52:53 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Append Cost in query planners" }, { "msg_contents": "Nimesh Satam wrote:\n> We are trying to implement partition on one tables on date basis. the\n> overall cost and timming and cost of the query is increasing on the Append\n> of the child table output. As shown below:\n> \n> *-> Append (cost=0.00..112217.92 rows=2752906 width=52) (actual time=\n> 2454.207..20712.021 rows=2752905 loops=1)\n> -> Seq Scan on trm (cost=0.00..28570.35 rows=1 width=52) (actual time=\n> 2423.374..2423.374 rows=0 loops=1)\n> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n> -> Seq Scan on trm_d20070601 trm (cost=0.00..29203.41 rows=961094\n> width=52) (actual time=30.825..3027.217 rows=961094 loops=1)\n> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n> -> Seq Scan on trm_d20070602 trm (cost=0.00..27442.52 rows=903168\n> width=52) (actual time=11.142..2687.422 rows=903168 loops=1)\n> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))\n> -> Seq Scan on trm_d20070603 trm (cost=0.00..27001.64 rows=888643\n> width=52) (actual time=13.697..2568.012 rows=888643 loops=1)\n> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND (sqldate <= '2007-06-03 00:00:00'::timestamp without time zone))*\n> \n> Can someone let me know, how we can reduce the overall cost and time of the\n> append operation, and what parameters in the confirguration needs to be\n> changed?\n\nDoes the query really return almost 3 million rows? If that's the case,\nI'm afraid there isn't much you can do, software-wise. 
If not, show us\nthe complete query and EXPLAIN ANALYZE output.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 28 Oct 2007 15:38:15 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append Cost in query planners" }, { "msg_contents": "Nimesh Satam wrote:\n> Following is the full plan of the query using partition. Let me know if you\n> need any further information.\n\nWhat indexes are there on the table partitions? You didn't post the\nquery, but it looks like your doing a join between rpt_network and the\npartitioned table. An index on the join key might help...\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 28 Oct 2007 18:22:04 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append Cost in query planners" }, { "msg_contents": "Heikki,\n\n\nThanks for your reply. Will try to do the changes and rivert back. I had one\nmore requirement for partitioning.\n\nI wanted to inherit two different tables for partition. Below is the query\nused to create the table, an crete the inheritance.\n\n\nCREATE TABLE metrics_d20070601 (CHECK (sqldate = '20070601')) INHERITS\n(metrics, date);\n\nFurther more we are using the below mentioned query:\n\nSELECT rs.id AS sid, rs.name AS sname, rc.id AS cid, rc.name AS cname,\nrc.type AS rtype, rc.act_type AS acttype, ra.id AS adid, ra.name AS avname,\nrch.id AS chid, rch.name AS chname, rcr.dim AS dim, SUM(metrics.imp_del) AS\nimp, SUM(metrics.clidel) AS cli, date.sqldate AS date, rg.id AS gid\nFROM metrics, rn CROSS JOIN date, ra, rs, rc, rch, rcr, rg\nWHERE metrics.netkey = rn.key\nAND rn.id = 10\nAND metrics.advkey = ra.key\nAND metrics.campkey = rc.key\nAND metrics.skey = rs.key\nAND metrics.chkey = rch.key\nAND metrics.cr_key = rcr.key\nAND date.sqldate BETWEEN '6/01/2007' AND '6/01/2007'\nAND metrics.gkey = rg.key\nGROUP BY date.sqldate, rs.id, rs.name, ra.id, ra.name, rc.id, rc.name,\nrc.rev_type, rc.act_type, rch.id, rch.name, rcr.dim, rg.id;\n\nAnd the query execution plan is as below\n\n\nQUERY\nPLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=589766.28..651315.41 rows=1119075 width=127)\n -> Sort (cost=589766.28..592563.97 rows=1119075 width=127)\n Sort Key: public.date.sqldate, rs.id, rs.name, ra.id, ra.name,\nrc.id, rc.name, rc.rtype, rc.act_type, rch.id, rch.name, rcr.dim, rg.id\n -> Hash Join (cost=64914.87..433619.51 rows=1119075 width=127)\n Hash Cond: (\"outer\".adv_key = \"inner\".\"key\")\n -> Hash Join (cost=64419.08..402349.16 rows=1119075\nwidth=111)\n Hash Cond: (\"outer\".s_key = \"inner\".\"key\")\n -> Hash Join (cost=63827.54..368185.38 rows=1119075\nwidth=96)\n Hash Cond: (\"outer\".campkey = \"inner\".\"key\")\n -> Hash Join\n(cost=61339.00..323731.53rows=1119075 width=66)\n Hash Cond: (\"outer\".chkey = \"inner\".\"key\")\n -> Hash Join\n(cost=59480.62..293896.26rows=1119075 width=46)\n Hash Cond: (\"outer\".cr_key =\n\"inner\".\"key\")\n -> Hash Join (cost=\n51298.73..243749.06 rows=1119075 width=48)\n Hash Cond: (\"outer\".gkey =\n\"inner\".\"key\")\n -> Hash Join (cost=\n51051.50..204334.21 rows=1119075 width=48)\n Hash Cond:\n((\"outer\".netkey = \"inner\".\"key\") AND 
(\"outer\".date_key = \"inner\".\"key\"))\n -> Append (cost=\n0.00..51795.56 rows=1901256 width=48)\n -> Seq Scan on\nmetrics (cost=0.00..25614.71 rows=940271 width=48)\n -> Seq Scan on\nmetrics_d20070601 metrics (cost=0.00..26180.85 rows=960985 width=48)\n -> Hash (cost=\n40615.57..40615.57 rows=960986 width=16)\n -> Nested Loop\n(cost=0.00..40615.57 rows=960986 width=16)\n -> Index\nScan using rpt_netw_key_idx on rn (cost=0.00..16.92 rows=1 width=4)\n Filter:\n(id = 10)\n -> Append\n(cost=0.00..30988.79 rows=960986 width=12)\n ->\nIndex Scan using rpt_dt_sqldt_idx on date (cost=0.00..3.02 rows=1 width=12)\n\nIndex Cond: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\nAND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n -> Seq\nScan on metrics_d20070601 rpt_date (cost=0.00..30985.78 rows=960985\nwidth=12)\n\nFilter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND\n(sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n -> Hash\n(cost=223.18..223.18rows=9618 width=8)\n -> Seq Scan on rg\n(cost=0.00..223.18 rows=9618 width=8)\n -> Hash\n(cost=7367.71..7367.71rows=325671 width=6)\n -> Seq Scan on rc (cost=\n0.00..7367.71 rows=325671 width=6)\n -> Hash (cost=1652.51..1652.51 rows=82351\nwidth=28)\n -> Seq Scan on rch (cost=\n0.00..1652.51 rows=82351 width=28)\n -> Hash (cost=2283.83..2283.83 rows=81883\nwidth=38)\n -> Seq Scan on rc\n(cost=0.00..2283.83rows=81883 width=38)\n -> Hash (cost=520.63..520.63 rows=28363 width=23)\n -> Seq Scan on rs (cost=0.00..520.63 rows=28363\nwidth=23)\n -> Hash (cost=435.63..435.63 rows=24063 width=24)\n -> Seq Scan on radv (cost=0.00..435.63 rows=24063\nwidth=24)\n(41 rows)\n\nCan you let me know how we can avoid the double looping on the metrics\ntable. This been a big table causes the queries to slowdown.\n\n\nRegards & Thanks,\nNimesh.\n\n\nOn 10/28/07, Heikki Linnakangas <[email protected]> wrote:\n>\n> Nimesh Satam wrote:\n> > Following is the full plan of the query using partition. Let me know if\n> you\n> > need any further information.\n>\n> What indexes are there on the table partitions? You didn't post the\n> query, but it looks like your doing a join between rpt_network and the\n> partitioned table. An index on the join key might help...\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nHeikki,Thanks for your reply. Will try to do the changes and rivert back. I had one more requirement for partitioning. I wanted to inherit two different tables for partition. 
Below is the query used to create the table, an crete the inheritance.\nCREATE TABLE metrics_d20070601 (CHECK (sqldate = '20070601')) INHERITS (metrics, date);Further more we are using the below mentioned query:SELECT rs.id AS sid, \nrs.name AS sname, rc.id AS cid, rc.name AS cname, rc.type AS rtype, rc.act_type AS acttype, ra.id AS adid, \nra.name AS avname, rch.id AS chid, rch.name AS chname, rcr.dim AS dim, SUM(metrics.imp_del) AS imp, SUM(metrics.clidel) AS cli, date.sqldate AS date, \nrg.id AS gid  FROM metrics, rn CROSS JOIN date, ra, rs, rc, rch, rcr, rg WHERE metrics.netkey = rn.key AND rn.id = 10AND metrics.advkey = ra.key AND metrics.campkey = rc.key\n AND metrics.skey = rs.key AND metrics.chkey = rch.key AND metrics.cr_key = rcr.key  AND date.sqldate BETWEEN '6/01/2007' AND '6/01/2007' AND metrics.gkey = rg.key  GROUP BY date.sqldate\n, rs.id, rs.name, ra.id, ra.name, rc.id, rc.name, rc.rev_type\n, rc.act_type, rch.id, rch.name, rcr.dim, rg.id;And the query execution plan is as below                                                                                                                                  QUERY PLAN                                                                                                              \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=589766.28..651315.41 rows=1119075 width=127)   ->  Sort  (cost=589766.28..592563.97 rows=1119075 width=127)         Sort Key: public.date.sqldate, rs.id, \nrs.name, ra.id, ra.name, rc.id, rc.name, rc.rtype, rc.act_type, rch.id, \nrch.name, rcr.dim, rg.id         ->  Hash Join  (cost=64914.87..433619.51 rows=1119075 width=127)               Hash Cond: (\"outer\".adv_key = \"inner\".\"key\")\n               ->  Hash Join  (cost=64419.08..402349.16 rows=1119075 width=111)                     Hash Cond: (\"outer\".s_key = \"inner\".\"key\")                     ->  Hash Join  (cost=\n63827.54..368185.38 rows=1119075 width=96)                           Hash Cond: (\"outer\".campkey = \"inner\".\"key\")                           ->  Hash Join  (cost=61339.00..323731.53\n rows=1119075 width=66)                                 Hash Cond: (\"outer\".chkey = \"inner\".\"key\")                                 ->  Hash Join  (cost=59480.62..293896.26 rows=1119075 width=46)\n                                       Hash Cond: (\"outer\".cr_key = \"inner\".\"key\")                                       ->  Hash Join  (cost=51298.73..243749.06 rows=1119075 width=48)\n                                             Hash Cond: (\"outer\".gkey = \"inner\".\"key\")                                             ->  Hash Join  (cost=51051.50..204334.21 rows=1119075 width=48)\n                                                   Hash Cond: ((\"outer\".netkey = \"inner\".\"key\") AND (\"outer\".date_key = \"inner\".\"key\"))                                                  \n ->  Append  (cost=0.00..51795.56 rows=1901256 width=48)                                                         ->  Seq Scan on metrics  (cost=\n0.00..25614.71 rows=940271 width=48)                                                         ->  Seq Scan on metrics_d20070601 metrics  (cost=0.00..26180.85\n rows=960985 width=48)                                                   ->  Hash  (cost=40615.57..40615.57 rows=960986 
width=16)\n                                                         ->  Nested Loop  (cost=0.00..40615.57 rows=960986 width=16)\n                                                               ->  Index Scan using rpt_netw_key_idx on rn  (cost=0.00..16.92 rows=1 width=4)                                                                     Filter: (id = 10)\n                                                               ->  Append  (cost=0.00..30988.79 rows=960986 width=12)\n                                                                     ->  Index Scan using rpt_dt_sqldt_idx on date  (cost=0.00..3.02 rows=1 width=12)\n                                                                           Index Cond: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n                                                                     ->  Seq Scan on metrics_d20070601 rpt_date  (cost=0.00..30985.78 rows=960985 width=12)\n                                                                           Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n                                             ->  Hash  (cost=223.18..223.18 rows=9618 width=8)                                                   ->  Seq Scan on rg  (cost=0.00..223.18 rows=9618 width=8)\n                                       ->  Hash  (cost=7367.71..7367.71 rows=325671 width=6)                                             ->  Seq Scan on rc  (cost=0.00..7367.71 rows=325671 width=6)                                 ->  Hash  (cost=\n1652.51..1652.51 rows=82351 width=28)                                       ->  Seq Scan on rch  (cost=0.00..1652.51 rows=82351 width=28)                           ->  Hash  (cost=2283.83..2283.83 rows=81883 width=38)\n                                 ->  Seq Scan on rc  (cost=0.00..2283.83 rows=81883 width=38)                     ->  Hash  (cost=520.63..520.63 rows=28363 width=23)                           ->  Seq Scan on rs  (cost=\n0.00..520.63 rows=28363 width=23)               ->  Hash  (cost=435.63..435.63 rows=24063 width=24)                     ->  Seq Scan on radv  (cost=0.00..435.63 rows=24063 width=24)(41 rows)Can you let me know how we can avoid the double looping on the metrics table. This been a big table causes the queries to slowdown.\nRegards & Thanks,Nimesh.On 10/28/07, Heikki Linnakangas <[email protected]\n> wrote:Nimesh Satam wrote:> Following is the full plan of the query using partition. Let me know if you\n> need any further information.What indexes are there on the table partitions? You didn't post thequery, but it looks like your doing a join between rpt_network and thepartitioned table. An index on the join key might help...\n--  Heikki Linnakangas  EnterpriseDB   http://www.enterprisedb.com", "msg_date": "Mon, 29 Oct 2007 16:39:16 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Append Cost in query planners" }, { "msg_contents": "Nimesh Satam wrote:\n> Heikki,\n> \n> \n> Thanks for your reply. Will try to do the changes and rivert back. I had one\n> more requirement for partitioning.\n> \n> I wanted to inherit two different tables for partition. 
Below is the query\n> used to create the table, an crete the inheritance.\n> \n> \n> CREATE TABLE metrics_d20070601 (CHECK (sqldate = '20070601')) INHERITS\n> (metrics, date);\n> \n> Further more we are using the below mentioned query:\n> \n> SELECT rs.id AS sid, rs.name AS sname, rc.id AS cid, rc.name AS cname,\n> rc.type AS rtype, rc.act_type AS acttype, ra.id AS adid, ra.name AS avname,\n> rch.id AS chid, rch.name AS chname, rcr.dim AS dim, SUM(metrics.imp_del) AS\n> imp, SUM(metrics.clidel) AS cli, date.sqldate AS date, rg.id AS gid\n> FROM metrics, rn CROSS JOIN date, ra, rs, rc, rch, rcr, rg\n> WHERE metrics.netkey = rn.key\n> AND rn.id = 10\n> AND metrics.advkey = ra.key\n> AND metrics.campkey = rc.key\n> AND metrics.skey = rs.key\n> AND metrics.chkey = rch.key\n> AND metrics.cr_key = rcr.key\n> AND date.sqldate BETWEEN '6/01/2007' AND '6/01/2007'\n> AND metrics.gkey = rg.key\n> GROUP BY date.sqldate, rs.id, rs.name, ra.id, ra.name, rc.id, rc.name,\n> rc.rev_type, rc.act_type, rch.id, rch.name, rcr.dim, rg.id;\n> \n> And the query execution plan is as below\n> \n> \n> QUERY\n> PLAN\n> \n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=589766.28..651315.41 rows=1119075 width=127)\n> -> Sort (cost=589766.28..592563.97 rows=1119075 width=127)\n> Sort Key: public.date.sqldate, rs.id, rs.name, ra.id, ra.name,\n> rc.id, rc.name, rc.rtype, rc.act_type, rch.id, rch.name, rcr.dim, rg.id\n> -> Hash Join (cost=64914.87..433619.51 rows=1119075 width=127)\n> Hash Cond: (\"outer\".adv_key = \"inner\".\"key\")\n> -> Hash Join (cost=64419.08..402349.16 rows=1119075\n> width=111)\n> Hash Cond: (\"outer\".s_key = \"inner\".\"key\")\n> -> Hash Join (cost=63827.54..368185.38 rows=1119075\n> width=96)\n> Hash Cond: (\"outer\".campkey = \"inner\".\"key\")\n> -> Hash Join\n> (cost=61339.00..323731.53rows=1119075 width=66)\n> Hash Cond: (\"outer\".chkey = \"inner\".\"key\")\n> -> Hash Join\n> (cost=59480.62..293896.26rows=1119075 width=46)\n> Hash Cond: (\"outer\".cr_key =\n> \"inner\".\"key\")\n> -> Hash Join (cost=\n> 51298.73..243749.06 rows=1119075 width=48)\n> Hash Cond: (\"outer\".gkey =\n> \"inner\".\"key\")\n> -> Hash Join (cost=\n> 51051.50..204334.21 rows=1119075 width=48)\n> Hash Cond:\n> ((\"outer\".netkey = \"inner\".\"key\") AND (\"outer\".date_key = \"inner\".\"key\"))\n> -> Append (cost=\n> 0.00..51795.56 rows=1901256 width=48)\n> -> Seq Scan on\n> metrics (cost=0.00..25614.71 rows=940271 width=48)\n> -> Seq Scan on\n> metrics_d20070601 metrics (cost=0.00..26180.85 rows=960985 width=48)\n> -> Hash (cost=\n> 40615.57..40615.57 rows=960986 width=16)\n> -> Nested Loop\n> (cost=0.00..40615.57 rows=960986 width=16)\n> -> Index\n> Scan using rpt_netw_key_idx on rn (cost=0.00..16.92 rows=1 width=4)\n> Filter:\n> (id = 10)\n> -> Append\n> (cost=0.00..30988.79 rows=960986 width=12)\n> ->\n> Index Scan using rpt_dt_sqldt_idx on date (cost=0.00..3.02 rows=1 width=12)\n> \n> Index Cond: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n> -> Seq\n> Scan on metrics_d20070601 rpt_date (cost=0.00..30985.78 rows=960985\n> width=12)\n> \n> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND\n> (sqldate <= '2007-06-01 00:00:00'::timestamp without time 
zone))\n> -> Hash\n> (cost=223.18..223.18rows=9618 width=8)\n> -> Seq Scan on rg\n> (cost=0.00..223.18 rows=9618 width=8)\n> -> Hash\n> (cost=7367.71..7367.71rows=325671 width=6)\n> -> Seq Scan on rc (cost=\n> 0.00..7367.71 rows=325671 width=6)\n> -> Hash (cost=1652.51..1652.51 rows=82351\n> width=28)\n> -> Seq Scan on rch (cost=\n> 0.00..1652.51 rows=82351 width=28)\n> -> Hash (cost=2283.83..2283.83 rows=81883\n> width=38)\n> -> Seq Scan on rc\n> (cost=0.00..2283.83rows=81883 width=38)\n> -> Hash (cost=520.63..520.63 rows=28363 width=23)\n> -> Seq Scan on rs (cost=0.00..520.63 rows=28363\n> width=23)\n> -> Hash (cost=435.63..435.63 rows=24063 width=24)\n> -> Seq Scan on radv (cost=0.00..435.63 rows=24063\n> width=24)\n> (41 rows)\n> \n> Can you let me know how we can avoid the double looping on the metrics\n> table. This been a big table causes the queries to slowdown.\n\nWell, if the index on metrics.netkey helps, it doesn't matter if it's\nscanned twice.\n\nOn a query with that many tables involved, you should try raising\njoin_collapse_limit from the default. That query accesses 9 tables,\nwhich is just above the default join_collapse_limit of 8, so the planner\nis not considering all possible join orders.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 29 Oct 2007 11:40:05 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append Cost in query planners" }, { "msg_contents": "Heikki,\n\n\nThanks for the information. join_collapse_limit = 1 is already set before\nsending the query plan.\n\nWill a index scan on metrics.netkey help in improving the performance what\nother configuration parameters should we consider while opting for\npartition?\n\n\nRegards,\nNimesh.\n\nOn 10/29/07, Heikki Linnakangas <[email protected]> wrote:\n>\n> Nimesh Satam wrote:\n> > Heikki,\n> >\n> >\n> > Thanks for your reply. Will try to do the changes and rivert back. I had\n> one\n> > more requirement for partitioning.\n> >\n> > I wanted to inherit two different tables for partition. 
Below is the\n> query\n> > used to create the table, an crete the inheritance.\n> >\n> >\n> > CREATE TABLE metrics_d20070601 (CHECK (sqldate = '20070601')) INHERITS\n> > (metrics, date);\n> >\n> > Further more we are using the below mentioned query:\n> >\n> > SELECT rs.id AS sid, rs.name AS sname, rc.id AS cid, rc.name AS cname,\n> > rc.type AS rtype, rc.act_type AS acttype, ra.id AS adid, ra.name AS\n> avname,\n> > rch.id AS chid, rch.name AS chname, rcr.dim AS dim, SUM(metrics.imp_del)\n> AS\n> > imp, SUM(metrics.clidel) AS cli, date.sqldate AS date, rg.id AS gid\n> > FROM metrics, rn CROSS JOIN date, ra, rs, rc, rch, rcr, rg\n> > WHERE metrics.netkey = rn.key\n> > AND rn.id = 10\n> > AND metrics.advkey = ra.key\n> > AND metrics.campkey = rc.key\n> > AND metrics.skey = rs.key\n> > AND metrics.chkey = rch.key\n> > AND metrics.cr_key = rcr.key\n> > AND date.sqldate BETWEEN '6/01/2007' AND '6/01/2007'\n> > AND metrics.gkey = rg.key\n> > GROUP BY date.sqldate, rs.id, rs.name, ra.id, ra.name, rc.id, rc.name,\n> > rc.rev_type, rc.act_type, rch.id, rch.name, rcr.dim, rg.id;\n> >\n> > And the query execution plan is as below\n> >\n> >\n> > QUERY\n> > PLAN\n> >\n> >\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=589766.28..651315.41 rows=1119075 width=127)\n> > -> Sort (cost=589766.28..592563.97 rows=1119075 width=127)\n> > Sort Key: public.date.sqldate, rs.id, rs.name, ra.id, ra.name,\n> > rc.id, rc.name, rc.rtype, rc.act_type, rch.id, rch.name, rcr.dim, rg.id\n> > -> Hash Join (cost=64914.87..433619.51 rows=1119075\n> width=127)\n> > Hash Cond: (\"outer\".adv_key = \"inner\".\"key\")\n> > -> Hash Join (cost=64419.08..402349.16 rows=1119075\n> > width=111)\n> > Hash Cond: (\"outer\".s_key = \"inner\".\"key\")\n> > -> Hash Join (cost=63827.54..368185.38rows=1119075\n> > width=96)\n> > Hash Cond: (\"outer\".campkey = \"inner\".\"key\")\n> > -> Hash Join\n> > (cost=61339.00..323731.53rows=1119075 width=66)\n> > Hash Cond: (\"outer\".chkey =\n> \"inner\".\"key\")\n> > -> Hash Join\n> > (cost=59480.62..293896.26rows=1119075 width=46)\n> > Hash Cond: (\"outer\".cr_key =\n> > \"inner\".\"key\")\n> > -> Hash Join (cost=\n> > 51298.73..243749.06 rows=1119075 width=48)\n> > Hash Cond: (\"outer\".gkey =\n> > \"inner\".\"key\")\n> > -> Hash Join (cost=\n> > 51051.50..204334.21 rows=1119075 width=48)\n> > Hash Cond:\n> > ((\"outer\".netkey = \"inner\".\"key\") AND (\"outer\".date_key =\n> \"inner\".\"key\"))\n> > -> Append (cost=\n> > 0.00..51795.56 rows=1901256 width=48)\n> > -> Seq Scan on\n> > metrics (cost=0.00..25614.71 rows=940271 width=48)\n> > -> Seq Scan on\n> > metrics_d20070601 metrics (cost=0.00..26180.85 rows=960985 width=48)\n> > -> Hash (cost=\n> > 40615.57..40615.57 rows=960986 width=16)\n> > -> Nested Loop\n> > (cost=0.00..40615.57 rows=960986 width=16)\n> > -> Index\n> > Scan using rpt_netw_key_idx on rn (cost=0.00..16.92 rows=1 width=4)\n>\n> > Filter:\n> > (id = 10)\n>\n> > -> Append\n> > (cost=0.00..30988.79 rows=960986 width=12)\n> > ->\n> > Index Scan using rpt_dt_sqldt_idx on date (cost=0.00..3.02 rows=1\n> width=12)\n> >\n> > Index Cond: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time\n> zone)\n> > AND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n>\n> > -> Seq\n> > Scan on metrics_d20070601 rpt_date 
(cost=0.00..30985.78 rows=960985\n> > width=12)\n> >\n> > Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)\n> AND\n> > (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n> > -> Hash\n> > (cost=223.18..223.18rows=9618 width=8)\n> > -> Seq Scan on rg\n> > (cost=0.00..223.18 rows=9618 width=8)\n> > -> Hash\n> > (cost=7367.71..7367.71rows=325671 width=6)\n> > -> Seq Scan on rc (cost=\n> > 0.00..7367.71 rows=325671 width=6)\n> > -> Hash (cost=1652.51..1652.51rows=82351\n> > width=28)\n> > -> Seq Scan on rch (cost=\n> > 0.00..1652.51 rows=82351 width=28)\n> > -> Hash (cost=2283.83..2283.83 rows=81883\n> > width=38)\n> > -> Seq Scan on rc\n> > (cost=0.00..2283.83rows=81883 width=38)\n> > -> Hash (cost=520.63..520.63 rows=28363 width=23)\n> > -> Seq Scan on rs (cost=0.00..520.63rows=28363\n> > width=23)\n> > -> Hash (cost=435.63..435.63 rows=24063 width=24)\n> > -> Seq Scan on radv (cost=0.00..435.63 rows=24063\n> > width=24)\n> > (41 rows)\n> >\n> > Can you let me know how we can avoid the double looping on the metrics\n> > table. This been a big table causes the queries to slowdown.\n>\n> Well, if the index on metrics.netkey helps, it doesn't matter if it's\n> scanned twice.\n>\n> On a query with that many tables involved, you should try raising\n> join_collapse_limit from the default. That query accesses 9 tables,\n> which is just above the default join_collapse_limit of 8, so the planner\n> is not considering all possible join orders.\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n\nHeikki,Thanks for the information. join_collapse_limit = 1 is already set before sending the query plan. Will a index scan on metrics.netkey help in improving the performance what other configuration parameters should we consider while opting for partition?\nRegards,Nimesh.On 10/29/07, Heikki Linnakangas <[email protected]> wrote:\nNimesh Satam wrote:> Heikki,>>> Thanks for your reply. Will try to do the changes and rivert back. I had one\n> more requirement for partitioning.>> I wanted to inherit two different tables for partition. 
Below is the query> used to create the table, an crete the inheritance.>>> CREATE TABLE metrics_d20070601 (CHECK (sqldate = '20070601')) INHERITS\n> (metrics, date);>> Further more we are using the below mentioned query:>> SELECT rs.id AS sid, rs.name AS sname, \nrc.id AS cid, rc.name AS cname,> rc.type AS rtype, rc.act_type AS acttype, ra.id AS adid, ra.name AS avname,> \nrch.id AS chid, rch.name AS chname, rcr.dim AS dim, SUM(metrics.imp_del) AS> imp, SUM(metrics.clidel) AS cli, date.sqldate AS date, rg.id AS gid> FROM metrics, rn CROSS JOIN date, ra, rs, rc, rch, rcr, rg\n> WHERE metrics.netkey = rn.key> AND rn.id = 10> AND metrics.advkey = ra.key> AND metrics.campkey = rc.key> AND metrics.skey = rs.key> AND metrics.chkey\n = rch.key> AND metrics.cr_key = rcr.key> AND date.sqldate BETWEEN '6/01/2007' AND '6/01/2007'> AND metrics.gkey = rg.key> GROUP BY date.sqldate, rs.id\n, rs.name, ra.id, ra.name, rc.id, rc.name,> rc.rev_type, rc.act_type, \nrch.id, rch.name, rcr.dim, rg.id;>> And the query execution plan is as below>>> QUERY> PLAN>> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  GroupAggregate  (cost=589766.28..651315.41 rows=1119075 width=127)>    ->  Sort  (cost=589766.28..592563.97 rows=1119075 width=127)>          Sort Key: public.date.sqldate, \nrs.id, rs.name, ra.id, ra.name,> rc.id, rc.name, rc.rtype, rc.act_type\n, rch.id, rch.name, rcr.dim, rg.id>          ->  Hash Join  (cost=64914.87..433619.51 rows=1119075 width=127)>                Hash Cond: (\"outer\".adv_key = \"inner\".\"key\")\n>                ->  Hash Join  (cost=64419.08..402349.16 rows=1119075> width=111)>                      Hash Cond: (\"outer\".s_key = \"inner\".\"key\")>                      ->  Hash Join  (cost=\n63827.54..368185.38 rows=1119075> width=96)>                            Hash Cond: (\"outer\".campkey = \"inner\".\"key\")>                            ->  Hash Join> (cost=\n61339.00..323731.53rows=1119075 width=66)>                                  Hash Cond: (\"outer\".chkey = \"inner\".\"key\")>                                  ->  Hash Join> (cost=\n59480.62..293896.26rows=1119075 width=46)>                                        Hash Cond: (\"outer\".cr_key => \"inner\".\"key\")>                                        ->  Hash Join  (cost=\n> 51298.73..243749.06 rows=1119075 width=48)>                                              Hash Cond: (\"outer\".gkey => \"inner\".\"key\")>                                              ->  Hash Join  (cost=\n> 51051.50..204334.21 rows=1119075 width=48)>                                                    Hash Cond:> ((\"outer\".netkey = \"inner\".\"key\") AND (\"outer\".date_key = \"inner\".\"key\"))\n>                                                    ->  Append  (cost=> 0.00..51795.56 rows=1901256 width=48)>                                                          ->  Seq Scan on> metrics  (cost=\n0.00..25614.71 rows=940271 width=48)>                                                          ->  Seq Scan on> metrics_d20070601 metrics  (cost=0.00..26180.85 rows=960985 width=48)>                                                    ->  Hash  (cost=\n> 40615.57..40615.57 rows=960986 width=16)>                                                          ->  Nested Loop> (cost=0.00..40615.57 rows=960986 width=16)>                            
                                    ->  Index\n> Scan using rpt_netw_key_idx on rn  (cost=0.00..16.92 rows=1 width=4)>                                                                      Filter:> (id = 10)>                                                                ->  Append\n> (cost=0.00..30988.79 rows=960986 width=12)>                                                                      ->> Index Scan using rpt_dt_sqldt_idx on date  (cost=0.00..3.02 rows=1 width=12)\n>> Index Cond: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone)> AND (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))>                                                                      ->  Seq\n> Scan on metrics_d20070601 rpt_date  (cost=0.00..30985.78 rows=960985> width=12)>> Filter: ((sqldate >= '2007-06-01 00:00:00'::timestamp without time zone) AND> (sqldate <= '2007-06-01 00:00:00'::timestamp without time zone))\n>                                              ->  Hash> (cost=223.18..223.18rows=9618 width=8)>                                                    ->  Seq Scan on rg> (cost=0.00..223.18 rows=9618 width=8)\n>                                        ->  Hash> (cost=7367.71..7367.71rows=325671 width=6)>                                              ->  Seq Scan on rc  (cost=> 0.00..7367.71 rows=325671 width=6)\n>                                  ->  Hash  (cost=1652.51..1652.51 rows=82351> width=28)>                                        ->  Seq Scan on rch  (cost=> 0.00..1652.51 rows=82351 width=28)\n>                            ->  Hash  (cost=2283.83..2283.83 rows=81883> width=38)>                                  ->  Seq Scan on rc> (cost=0.00..2283.83rows=81883 width=38)>                      ->  Hash  (cost=\n520.63..520.63 rows=28363 width=23)>                            ->  Seq Scan on rs  (cost=0.00..520.63 rows=28363> width=23)>                ->  Hash  (cost=435.63..435.63 rows=24063 width=24)\n>                      ->  Seq Scan on radv  (cost=0.00..435.63 rows=24063> width=24)> (41 rows)>> Can you let me know how we can avoid the double looping on the metrics> table. This been a big table causes the queries to slowdown.\nWell, if the index on metrics.netkey helps, it doesn't matter if it'sscanned twice.On a query with that many tables involved, you should try raisingjoin_collapse_limit from the default. That query accesses 9 tables,\nwhich is just above the default join_collapse_limit of 8, so the planneris not considering all possible join orders.--  Heikki Linnakangas  EnterpriseDB   http://www.enterprisedb.com", "msg_date": "Mon, 29 Oct 2007 17:25:51 +0530", "msg_from": "\"Nimesh Satam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Append Cost in query planners" }, { "msg_contents": "Nimesh Satam wrote:\n> Thanks for the information. join_collapse_limit = 1 is already set before\n> sending the query plan.\n\nI assume that was a typo, and you really set it to 10 or higher...\n\n> Will a index scan on metrics.netkey help in improving the performance what\n> other configuration parameters should we consider while opting for\n> partition?\n\nWhether it helps or not depends on many things, like the distribution of\nthe data, but it's worth trying. 
I'd try that first, before worrying\nabout config parameters.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 29 Oct 2007 15:22:07 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Append Cost in query planners" } ]
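A minimal sketch of the single-parent arrangement the advice in this thread points toward, for illustration only: the child table, constraint, and index names are hypothetical, the CHECK constraint is written as a half-open date range rather than the equality check used above, and it assumes sqldate is carried on the metrics hierarchy itself so that constraint exclusion can prune partitions when the query compares sqldate to constants instead of joining to a separate date table.

-- One child table per day, inheriting from a single parent, with a
-- CHECK constraint the planner can use for constraint exclusion and
-- indexes on the columns used for joining and filtering.
CREATE TABLE metrics_d20070601 (
    CHECK (sqldate >= DATE '2007-06-01' AND sqldate < DATE '2007-06-02')
) INHERITS (metrics);

CREATE INDEX metrics_d20070601_netkey_idx  ON metrics_d20070601 (netkey);
CREATE INDEX metrics_d20070601_sqldate_idx ON metrics_d20070601 (sqldate);

ANALYZE metrics_d20070601;

-- Constraint exclusion only prunes child tables when the partitioned
-- column is compared to constants in the query itself.
SET constraint_exclusion = on;

EXPLAIN SELECT sum(imp_del)
FROM metrics
WHERE sqldate >= DATE '2007-06-01'
  AND sqldate <  DATE '2007-06-02';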
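The join_collapse_limit suggestion can also be tried per session before changing postgresql.conf. A small sketch, with 10 chosen only because it exceeds the nine relations in the reporting query; any value at or above the number of FROM items lets the planner consider every join order:

SET join_collapse_limit = 10;   -- default is 8
SET from_collapse_limit = 10;
SHOW join_collapse_limit;       -- confirm the session-level setting
-- then re-run EXPLAIN ANALYZE on the reporting query above and compare
-- the chosen join order and the estimated row counts.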
[ { "msg_contents": "Hi,\n\nI have a query that uses left outer join, and this seems to prevent the\nindex on the right column to be used.\n\nI couldn't really trim down the query without having the index used\nnormally.. \n\nSo, I have the following tables that join :\n\nOffer -> AdCreatedEvent -> Account -> ContactInformation -> City ->\nGisFeature\nand\nOffer -> ResidenceDescription -> City -> GisFeature\n\nThe query is at the end of the email.\n\nWhat happens is that the \"ContactInformation -> City \" outer join link\nprevents postgres from using the index on City. \n\nIf I only join offer -> ResidenceDescription -> City -> GisFeature, the\nindex is used;\nIf I join everything in the query without the GIS condition, the index\nis used :\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 limit 10 offset 10\n\nAlso, if I replace the ContactInformation -> City link by an inner\njoin, the index is used.\n\n\nWith the outer join, \nexplain analyze tells me :\n Hash (cost=37037.86..37037.86 rows=2331986 width=16)\n -> Seq Scan on city city1_\n(cost=0.00..37037.86 rows=2331986 width=16)\n\nWhereas the inner join tells me :\n-> Index Scan using cityid on city city8_ (cost=0.00..8.52 rows=1\nwidth=16)\n\n\nSo, what could prevent postgrs from using the index ? I ran all the\nvacuum analyze stuff, and the stats cannot possibly tell postgres that\nit's not worth using the index (2 million entries in the city table, way\nmore than in any other table).\n\nWhich options do I have to force postgres to use an index here ?\n\nThanks for your help,\nSami Dalouche\n\n\n---------------\nselect * from Offer this_ inner join AdCreatedEvent ace3_ on\nthis_.adCreatedEvent_id=ace3_.id left outer join FunalaEvent ace3_1_ on\nace3_.id=ace3_1_.id left outer join Account account6_ on\nace3_.eventInitiator_id=account6_.id left outer join ContactInformation\ncontactinf7_ on account6_.contactInformation_id=contactinf7_.id left\nouter join City city8_ on contactinf7_.city_id=city8_.id inner join\nResidenceDescription residenced19_ on\nthis_.residenceDescription_id=residenced19_.id inner join City city1_ on\nresidenced19_.city_id=city1_.id inner join GisFeature gf2_ on\ncity1_.associatedGisFeature_id=gf2_.id left outer join ResidenceType\nresidencet22_ on residenced19_.residenceType_id=residencet22_.id where\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 limit 10 offset 10 \n\n", "msg_date": "Sun, 28 Oct 2007 22:25:10 +0100", "msg_from": "Sami Dalouche <[email protected]>", "msg_from_op": true, "msg_subject": "Outer joins and Seq scans" }, { "msg_contents": "Sami Dalouche <[email protected]> writes:\n> So, what could prevent postgrs from using the index ?\n\nYou've carefully withheld all the details that might let us guess.\nIf I had to guess anyway, I'd guess this is a pre-8.2 PG release\nthat doesn't know how to rearrange outer joins, but there are any\nnumber of other possibilities.\n\nIf you want useful help on a query planning issue, you generally need\nto provide\n\t- the exact Postgres version\n\t- full schema declaration of the relevant tables\n\t- exact queries tested\n\t- full EXPLAIN ANALYZE output\nWith less info than that, people are just shooting in the 
dark.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Oct 2007 18:08:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and Seq scans " }, { "msg_contents": "Hi,\n\nSorry for not giving enough information.. I didn't want to pollute you\nwith too much detail... \n\nSo, the version of postgres I use is :\nsamokk@samlaptop:~/Desktop $ dpkg -l | grep postgres\nii postgresql-8.2 8.2.5-1.1\nobject-relational SQL database, version 8.2 \nii postgresql-8.2-postgis 1.2.1-2\ngeographic objects support for PostgreSQL 8.\nii postgresql-client-8.2 8.2.5-1.1\nfront-end programs for PostgreSQL 8.2\nii postgresql-client-common 78\nmanager for multiple PostgreSQL client versi\nii postgresql-common 78\nmanager for PostgreSQL database clusters\nii postgresql-contrib-8.2 8.2.5-1.1\nadditional facilities for PostgreSQL\n\nsamokk@samlaptop:~/Desktop $ uname -a\nLinux samlaptop 2.6.22-14-generic #1 SMP Sun Oct 14 23:05:12 GMT 2007\ni686 GNU/Linux\n\nThe exact query that is run is :\n\nselect * from RoommateResidenceOffer this_ inner join AdCreatedEvent\nace3_ on this_.adCreatedEvent_id=ace3_.id left outer join FunalaEvent\nace3_1_ on ace3_.id=ace3_1_.id left outer join Account account6_ on\nace3_.eventInitiator_id=account6_.id left outer join ContactInformation\ncontactinf7_ on account6_.contactInformation_id=contactinf7_.id left\nouter join City city8_ on contactinf7_.city_id=city8_.id inner join\nResidenceDescription residenced19_ on\nthis_.residenceDescription_id=residenced19_.id inner join City city1_ on\nresidenced19_.city_id=city1_.id inner join GisFeature gf2_ on\ncity1_.associatedGisFeature_id=gf2_.id left outer join ResidenceType\nresidencet22_ on residenced19_.residenceType_id=residencet22_.id where\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 limit 10 offset 10\n\nThe full Explain Analyze for this query is attached in \"exp1.txt\".\n\n---- \n\nThe slightly modified version of the query above, using inner join\ninstead of outer join for outer join City city8_ on\ncontactinf7_.city_id=city8_.id\n\nselect * from RoommateResidenceOffer this_ inner join AdCreatedEvent\nace3_ on this_.adCreatedEvent_id=ace3_.id left outer join FunalaEvent\nace3_1_ on ace3_.id=ace3_1_.id left outer join Account account6_ on\nace3_.eventInitiator_id=account6_.id left outer join ContactInformation\ncontactinf7_ on account6_.contactInformation_id=contactinf7_.id inner\njoin City city8_ on contactinf7_.city_id=city8_.id inner join\nResidenceDescription residenced19_ on\nthis_.residenceDescription_id=residenced19_.id inner join City city1_ on\nresidenced19_.city_id=city1_.id inner join GisFeature gf2_ on\ncity1_.associatedGisFeature_id=gf2_.id left outer join ResidenceType\nresidencet22_ on residenced19_.residenceType_id=residencet22_.id where\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 limit 10 offset 10\n\nThe full explain analyze is included in exp2.txt.\n\nSo, now the part of the schema that is relevant :\n\n Table « public.roommateresidenceoffer »\n Colonne | Type |\nModificateurs \n---------------------------------------+------------------------+---------------\n id | bigint | not\nnull\n 
endofavailabilitydate | date | \n minimumleasedurationinmonths | integer | not\nnull\n brokerfees | numeric(19,2) | not\nnull\n currencycode | character varying(255) | not\nnull\n monthlyadditionalchargesapproximation | numeric(19,2) | not\nnull\n monthlybaseprice | numeric(19,2) | not\nnull\n pricingperiod | character varying(255) | not\nnull\n securitydeposit | numeric(19,2) | not\nnull\n startofavailabilitydate | date | not\nnull\n sublease | boolean | not\nnull\n cabletv | boolean | not\nnull\n electricity | boolean | not\nnull\n heat | boolean | not\nnull\n highspeedinternetaccess | boolean | not\nnull\n phoneline | boolean | not\nnull\n satellitetv | boolean | not\nnull\n securitysystem | boolean | not\nnull\n trashpickup | boolean | not\nnull\n unlimitedphoneplan | boolean | not\nnull\n water | boolean | not\nnull\n offerdescriptiontext | text | \n totalnumberofroommates | integer | not\nnull\n willhaveseparateroom | boolean | not\nnull\n adcreatedevent_id | bigint | \n residencedescription_id | bigint | \nIndex :\n « roommateresidenceoffer_pkey » PRIMARY KEY, btree (id)\n « roommateresidenceofferadcreatedevent » btree (adcreatedevent_id)\nContraintes de clés étrangères :\n « fk27b7359611df9610 » FOREIGN KEY (adcreatedevent_id) REFERENCES\nadcreatedevent(id)\n « fk27b73596364f1d0 » FOREIGN KEY (residencedescription_id)\nREFERENCES residencedescription(id)\n\nsirika_development=# \\d adcreatedevent\n Table « public.adcreatedevent »\n Colonne | Type | Modificateurs \n-------------------+--------+---------------\n id | bigint | not null\n eventinitiator_id | bigint | \nIndex :\n « adcreatedevent_pkey » PRIMARY KEY, btree (id)\nContraintes de clés étrangères :\n « fk4422475278f7361 » FOREIGN KEY (id) REFERENCES funalaevent(id)\n « fk4422475e7e1a3f5 » FOREIGN KEY (eventinitiator_id) REFERENCES\naccount(id)\n\n\nsirika_development=# \\d funalaevent\n Table « public.funalaevent »\n Colonne | Type | Modificateurs \n--------------+-----------------------------+---------------\n id | bigint | not null\n utceventdate | timestamp without time zone | not null\nIndex :\n « funalaevent_pkey » PRIMARY KEY, btree (id)\n « funalaeventdate » btree (utceventdate)\n\n\n\\d Account;\n Table « public.account »\n Colonne | Type | Modificateurs \n-------------------------+------------------------+---------------\n id | bigint | not null\n login | character varying(255) | not null\n password | character varying(255) | not null\n settings_id | bigint | \n profile_id | bigint | \n declaredasadultevent_id | bigint | \n contactinformation_id | bigint | \nIndex :\n « account_pkey » PRIMARY KEY, btree (id)\n « account_login_key » UNIQUE, btree (\"login\")\nContraintes de clés étrangères :\n « fk1d0c220d3f80c97 » FOREIGN KEY (declaredasadultevent_id)\nREFERENCES declaredasadultevent(id)\n « fk1d0c220d7c918a2a » FOREIGN KEY (settings_id) REFERENCES\naccountsettings(id)\n « fk1d0c220d95133e52 » FOREIGN KEY (profile_id) REFERENCES\nuserprofile(id)\n « fk1d0c220ddfd5cd37 » FOREIGN KEY (contactinformation_id)\nREFERENCES contactinformation(id)\n\n\n \\d ContactInformation\n Table « public.contactinformation »\n Colonne | Type | Modificateurs \n-----------------------------+------------------------+---------------\n id | bigint | not null\n street | text | \n zipcode | text | \n name | character varying(255) | \n currentemailchangedevent_id | bigint | \n city_id | bigint | \nIndex :\n « contactinformation_pkey » PRIMARY KEY, btree (id)\n « contactinformationcity » btree (city_id)\n « 
contactinformationcurrentemailchangedevent » btree\n(currentemailchangedevent_id)\nContraintes de clés étrangères :\n « fk36e2e10c6412f2ff » FOREIGN KEY (city_id) REFERENCES city(id)\n « fk36e2e10cb79b5056 » FOREIGN KEY (currentemailchangedevent_id)\nREFERENCES emailchangedevent(id)\n\n\n\\d City\n Table « public.city »\n Colonne | Type | Modificateurs \n-------------------------+--------+---------------\n id | bigint | not null\n associatedgisfeature_id | bigint | \nIndex :\n « city_pkey » PRIMARY KEY, btree (id)\n « cityassociatedgisfeatureid » btree (associatedgisfeature_id)\n « cityid » btree (id)\nContraintes de clés étrangères :\n « fk200d8b1020e199 » FOREIGN KEY (associatedgisfeature_id)\nREFERENCES gisfeature(id)\n\n\n Table « public.residencedescription »\n Colonne | Type | Modificateurs \n-------------------------------+------------------+---------------\n id | bigint | not null\n street | text | \n zipcode | text | \n barbecueandpicnicarea | boolean | not null\n basketballcourt | boolean | not null\n bikeshelter | boolean | not null\n billiards | boolean | not null\n clubhouse | boolean | not null\n conferenceroom | boolean | not null\n doorman | boolean | not null\n drycleaner | boolean | not null\n fitnesscenter | boolean | not null\n gatedentrance | boolean | not null\n laundryfacility | boolean | not null\n onsitemanagement | boolean | not null\n pool | boolean | not null\n sauna | boolean | not null\n spa | boolean | not null\n tenniscourt | boolean | not null\n residencedescriptiontext | text | \n extrastorage | boolean | not null\n privatepool | boolean | not null\n privatesauna | boolean | not null\n privatespa | boolean | not null\n airconditioning | boolean | not null\n areainsquaremeters | double precision | \n balcony | boolean | not null\n basefloornumber | integer | \n buzzer | boolean | not null\n ceilingfan | boolean | not null\n dishwasher | boolean | not null\n disposal | boolean | not null\n dryer | boolean | not null\n dvd | boolean | not null\n elevator | boolean | not null\n firedetector | boolean | not null\n fireplace | boolean | not null\n fullkitchen | boolean | not null\n furnished | boolean | not null\n hifi | boolean | not null\n housewaresinkitchen | boolean | not null\n microwave | boolean | not null\n numberofbathrooms | integer | \n numberofbedrooms | integer | \n numberoflivingrooms | integer | \n numberofseparatedtoilets | integer | \n oven | boolean | not null\n patio | boolean | not null\n refrigerator | boolean | not null\n stove | boolean | not null\n totalnumberoffloorsinbuilding | integer | \n tv | boolean | not null\n vcr | boolean | not null\n washer | boolean | not null\n wheelchairaccess | boolean | not null\n yard | boolean | not null\n coveredparkingspaces | integer | not null\n garage | boolean | not null\n streetparkingavailability | bigint | \n uncoveredparkingspaces | integer | not null\n petsallowed | boolean | not null\n smokingallowed | boolean | not null\n city_id | bigint | \n residencetype_id | bigint | \nIndex :\n « residencedescription_pkey » PRIMARY KEY, btree (id)\n « residencedescriptioncity » btree (city_id)\nContraintes de clés étrangères :\n « fk997d05366412f2ff » FOREIGN KEY (city_id) REFERENCES city(id)\n « fk997d0536a3749aa4 » FOREIGN KEY (residencetype_id) REFERENCES\nresidencetype(id)\n\n\\d gisfeature\n Table « public.gisfeature »\n Colonne | Type | Modificateurs \n-------------------------+-----------------------------+---------------\n id | bigint | not null\n asciiname | character varying(255) | \n 
elevation | bigint | \n featureclass | character varying(255) | \n featurecode | character varying(255) | \n featureid | bigint | not null\n featuresource | character varying(255) | not null\n gtopo30averageelevation | bigint | \n location | geometry | \n modificationdate | timestamp without time zone | \n name | character varying(255) | \n population | bigint | \n timezone | character varying(255) | \n parententity_id | bigint | \nIndex :\n « gisfeature_pkey » PRIMARY KEY, btree (id)\n « gisfeatureasciiname » btree (asciiname)\n « gisfeaturefeatureid » btree (featureid)\n « gisfeaturefeaturesource » btree (featuresource)\n « gisfeatureid » btree (id)\n « gisfeaturelocation » gist (\"location\")\n « gisfeaturenamestartswith » btree (lower(name::text)\nvarchar_pattern_ops)\n « gisfeatureparententityid » btree (parententity_id)\n « gisfeaturepopulation » btree (population)\nContraintes de clés étrangères :\n « fk6372220511a389a5 » FOREIGN KEY (parententity_id) REFERENCES\nabstractadministrativeentity(id)\n\n \\d residencetype\n Table « public.residencetype »\n Colonne | Type | Modificateurs \n---------------------+------------------------+---------------\n id | bigint | not null\n code | character varying(255) | not null\n residenceattachment | integer | \nIndex :\n « residencetype_pkey » PRIMARY KEY, btree (id)\n « residencetype_code_key » UNIQUE, btree (code)\n\nIf you need more information for your diagnostic, do not hesitate to\nrequest ;-) \nThanks a lot for your help,\nSami Dalouche\n\n\nLe dimanche 28 octobre 2007 à 18:08 -0400, Tom Lane a écrit :\n> Sami Dalouche <[email protected]> writes:\n> > So, what could prevent postgrs from using the index ?\n> \n> You've carefully withheld all the details that might let us guess.\n> If I had to guess anyway, I'd guess this is a pre-8.2 PG release\n> that doesn't know how to rearrange outer joins, but there are any\n> number of other possibilities.\n> \n> If you want useful help on a query planning issue, you generally need\n> to provide\n> \t- the exact Postgres version\n> \t- full schema declaration of the relevant tables\n> \t- exact queries tested\n> \t- full EXPLAIN ANALYZE output\n> With less info than that, people are just shooting in the dark.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match", "msg_date": "Sun, 28 Oct 2007 23:55:46 +0100", "msg_from": "Sami Dalouche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Outer joins and Seq scans" }, { "msg_contents": "Sami Dalouche <[email protected]> writes:\n> So, the version of postgres I use is :\n> samokk@samlaptop:~/Desktop $ dpkg -l | grep postgres\n> ii postgresql-8.2 8.2.5-1.1\n\nOK. I think you have run afoul of a bug that was introduced in 8.2.5\nthat causes it not to realize that it can interchange the ordering of\ncertain outer joins. 
Is there any chance you can apply the one-line\npatch shown here:\nhttp://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n\nIf rebuilding packages is not to your taste, possibly a down-rev to\n8.2.4 would be the easiest solution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 28 Oct 2007 19:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and Seq scans " }, { "msg_contents": "Hi,\n\nLe lundi 29 octobre 2007, Tom Lane a écrit :\n> Is there any chance you can apply the one-line\n> patch shown here:\n> http://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n>\n> If rebuilding packages is not to your taste, possibly a down-rev to\n> 8.2.4 would be the easiest solution.\n\nThe debian package for PostgreSQL uses a .tar.gz of the upstream code along \nwith a debian/patches/ directory with ordered patches files \n(##-whatever.patch). Just adding the given file into this directory before to \nbuilding the package should do. \n\nThe operations to issue should look like this:\n $ apt-get source postgresql-8.2\n $ tar xzf postgresql-8.2_8.2.5.orig.tar.gz\n $ cd postgresql-8.2-8.2.5 \n $ zcat ../postgresql-8.2_8.2.5-1.diff.gz |patch -p1\n $ cp make_outerjoininfo.patch debian/patches/60-make_outerjoininfo.patch\n $ debuild -us -uc\n\nThis will give you a new package for postgresql, which you can even tweak the \nversion number to your taste by editing debian/changelog and adding (for \nexample) a postgresql-8.2 (8.2.5.1-1) entry, just before the debuild step.\n\nHope this helps,\n-- \ndim", "msg_date": "Mon, 29 Oct 2007 10:48:14 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and Seq scans" }, { "msg_contents": "Dimitri Fontaine wrote:\n> Hi,\n> \n> Le lundi 29 octobre 2007, Tom Lane a écrit :\n>> Is there any chance you can apply the one-line\n>> patch shown here:\n>> http://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n>>\n>> If rebuilding packages is not to your taste, possibly a down-rev to\n>> 8.2.4 would be the easiest solution.\n> \n> The debian package for PostgreSQL uses a .tar.gz of the upstream code along \n> with a debian/patches/ directory with ordered patches files \n> (##-whatever.patch). Just adding the given file into this directory before to \n> building the package should do. \n\n[snip detailed example]\n\nIt never occurred to me that you could do that. Thanks Dmitri, I learn \nsomething here all the time.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 29 Oct 2007 11:11:51 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and Seq scans" }, { "msg_contents": "Richard Huxton a �crit :\n> Dimitri Fontaine wrote:\n>> Hi,\n>>\n>> Le lundi 29 octobre 2007, Tom Lane a �crit :\n>>> Is there any chance you can apply the one-line\n>>> patch shown here:\n>>> http://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n>>>\n>>> If rebuilding packages is not to your taste, possibly a down-rev to\n>>> 8.2.4 would be the easiest solution.\n>>\n>> The debian package for PostgreSQL uses a .tar.gz of the upstream code \n>> along with a debian/patches/ directory with ordered patches files \n>> (##-whatever.patch). Just adding the given file into this directory \n>> before to building the package should do. \n>\n> [snip detailed example]\n>\n> It never occurred to me that you could do that. 
Thanks Dmitri, I learn \n> something here all the time.\n<more>\nYou can consider using apt-build(like gentoo users like to do) to patch \non the fly (but it does not support tar.gz inside package, wich is \nunfortunely the case for postgresql package ....)\n</more>\n", "msg_date": "Mon, 29 Oct 2007 12:28:02 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Outer joins and Seq scans" } ]
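Once the patched 8.2.5 build (or the 8.2.4 downgrade) is installed, a quick sanity check is to confirm which server actually answers on the connection and then repeat the earlier EXPLAIN ANALYZE of the full query. A sketch, with the expectation hedged: with outer-join reordering restored, the plan should reach city through its index, as in the inner-join variant quoted above, rather than hashing a full seq scan of the table.

-- Confirm the running build before re-testing the plan.
SELECT version();        -- expect 8.2.4, or 8.2.5 rebuilt with the patch
SHOW server_version;

-- Re-run the original query (full text earlier in the thread) under
-- EXPLAIN ANALYZE and look for an index scan on city, e.g.
--   Index Scan using cityid on city ...
-- in place of
--   Hash
--     ->  Seq Scan on city ...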
[ { "msg_contents": "\n\nHi,\n\nI need to implement a Load Balancing solution in the PostgresSQL\ndatabase. Actually we are working with postgres 8.1 but postgres 8.2 is\nplanified to be upgrade in short.\n\nI read about it, and would like to know any experience with this.\n\nDid you ever need Load Balancing?\nWhat did you implement?\nIs it working fine? Any big trouble?\n\nThanks in advance,\n\n\n\n", "msg_date": "Tue, 30 Oct 2007 09:20:24 +0100", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "High Availability and Load Balancing" } ]
[ { "msg_contents": "I have the following query that is a part of a function:\n\nselect order_details.tpv_success_id, order_details.tpv_id,\norder_details.ver_code, order_details.app_id,\n order_details.acct_id, order_details.first_name || ' ' ||\norder_details.last_name as customer_name,\n order_details.order_date as creation_date,\nverification_success.checked_out, order_details.csr_name,\n order_details.products,\torder_details.promotion,\norder_details.division_id, order_details.has_dish_billing_info,\n (select\narray_upper(acct_product_data_requirements_details.acct_prod_ids, 1))\nas num_prods,\n\t\tcount(distinct case when provision_issues.account_product_id is not\nnull and provision_issues.resolved_date is null then\nprovision_issues.account_product_id end ) as num_open_issues,\n\t\tcount(case when reconciliations.rec_id is not null and\nreconciliation_cancels.rec_cancel_id is null then\nreconciliations.rec_id end ) as num_provisioned,\n\t\tcount(case when reconciliation_cancels.rec_cancel_id is not null\nthen reconciliation_cancels.rec_cancel_id end ) as num_canceled\n from frontier.order_details\n inner join frontier.verification_success\n on order_details.tpv_success_id =\nverification_success.tpv_success_id\n inner join frontier.acct_product_data_requirements_details\n left outer join frontier.provision_issues\n on provision_issues.account_product_id =\nany(acct_product_data_requirements_details.acct_prod_ids) and\nprovision_issues.resolved_date is null\n left outer join frontier.reconciliations\n left outer join frontier.reconciliation_cancels\n on reconciliations.rec_id =\nreconciliation_cancels.rec_id\n on reconciliations.account_product_id =\nany(acct_product_data_requirements_details.acct_prod_ids)\n on order_details.acct_id =\nacct_product_data_requirements_details.acct_id\n where verification_success.checked_out is null\n group by order_details.tpv_success_id, order_details.tpv_id,\norder_details.ver_code, order_details.app_id,\n order_details.acct_id, order_details.first_name || ' ' ||\norder_details.last_name, order_details.order_date,\nverification_success.checked_out, order_details.csr_name,\n order_details.products,\torder_details.promotion, num_prods,\norder_details.division_id, order_details.has_dish_billing_info\nhaving ( count(case when reconciliations.rec_id is not null and\nreconciliation_cancels.rec_cancel_id is null then\nreconciliations.rec_id end ) < (select\narray_upper(acct_product_data_requirements_details.acct_prod_ids,\n1)) ) and\n\t\t(\n\t\tcount(distinct case when provision_issues.account_product_id is not\nnull and provision_issues.resolved_date is null then\nprovision_issues.account_product_id end ) +\n\t\tcount(case when reconciliations.rec_id is not null and\nreconciliation_cancels.rec_cancel_id is null then\nreconciliations.rec_id end ) +\n\t\tcount(case when reconciliation_cancels.rec_cancel_id is not null\nthen reconciliation_cancels.rec_cancel_id end )\n\t\t) < (select\narray_upper(acct_product_data_requirements_details.acct_prod_ids, 1))\n\tand order_details.division_id =\nany('{79,38,70,66,35,40,37,36,67,41,65,39}') --this array here\nvaries. 
indexes are present for the different variations\norder by order_details.order_date\n\nhere is the execution plan:\n\nSort (cost=1350779962.18..1350969875.28 rows=75965240 width=290)\n(actual time=16591.711..16591.723 rows=110 loops=1)\n Sort Key: frontier.order_details.order_date\n -> GroupAggregate (cost=1235567017.53..1295217399.34 rows=75965240\nwidth=290) (actual time=16583.383..16591.420 rows=110 loops=1)\n Filter: ((count(CASE WHEN ((rec_id IS NOT NULL) AND\n(rec_cancel_id IS NULL)) THEN rec_id ELSE NULL::integer END) <\n(subplan)) AND (((count(DISTINCT CASE WHEN ((account_product_id IS NOT\nNULL) AND (resolved_date IS NULL)) THEN account_product_id ELSE\nNULL::integer END) + count(CASE WHEN ((rec_id IS NOT NULL) AND\n(rec_cancel_id IS NULL)) THEN rec_id ELSE NULL::integer END)) +\ncount(CASE WHEN (rec_cancel_id IS NOT NULL) THEN rec_cancel_id ELSE\nNULL::integer END)) < (subplan)))\n -> Sort (cost=1235567017.53..1238002161.29 rows=974057502\nwidth=290) (actual time=16576.997..16577.513 rows=3366 loops=1)\n Sort Key: frontier.order_details.tpv_success_id,\nfrontier.order_details.tpv_id, frontier.order_details.ver_code,\nfrontier.order_details.app_id, frontier.order_details.acct_id,\n(((frontier.order_details.first_name)::text || ' '::text) ||\n(frontier.order_details.last_name)::text),\nfrontier.order_details.order_date, verification_success.checked_out,\nfrontier.order_details.csr_name, frontier.order_details.products,\nfrontier.order_details.promotion, (subplan),\nfrontier.order_details.division_id,\nfrontier.order_details.has_dish_billing_info\n -> Merge Join (cost=100001383.41..310142000.26\nrows=974057502 width=290) (actual time=1055.584..16560.634 rows=3366\nloops=1)\n Merge Cond: (\"outer\".acct_id = \"inner\".acct_id)\n -> Nested Loop Left Join\n(cost=328.94..173666048.48 rows=1928122302 width=53) (actual\ntime=0.236..16499.771 rows=7192 loops=1)\n Join Filter: (\"inner\".account_product_id =\nANY (\"outer\".acct_prod_ids))\n -> Nested Loop Left Join\n(cost=2.21..134714.57 rows=564852 width=45) (actual\ntime=0.215..1021.209 rows=5523 loops=1)\n Join Filter:\n(\"inner\".account_product_id = ANY (\"outer\".acct_prod_ids))\n -> Index Scan using \"FKI_acct_id\" on\nacct_product_data_requirements_details (cost=0.00..488.19 rows=5484\nwidth=33) (actual time=0.011..15.502 rows=5484 loops=1)\n -> Bitmap Heap Scan on\nprovision_issues (cost=2.21..17.27 rows=206 width=12) (actual\ntime=0.035..0.106 rows=206 loops=5484)\n Recheck Cond: (resolved_date IS\nNULL)\n -> Bitmap Index Scan on\n\"IX_resolved_date_null\" (cost=0.00..2.21 rows=206 width=0) (actual\ntime=0.032..0.032 rows=206 loops=5484)\n -> Materialize (cost=326.74..395.01\nrows=6827 width=12) (actual time=0.000..0.852 rows=6827 loops=5523)\n -> Merge Left Join\n(cost=0.00..319.91 rows=6827 width=12) (actual time=0.016..13.426\nrows=6827 loops=1)\n Merge Cond: (\"outer\".rec_id =\n\"inner\".rec_id)\n -> Index Scan using\nreconciliation_pkey on reconciliations (cost=0.00..215.59 rows=6827\nwidth=8) (actual time=0.004..4.209 rows=6827 loops=1)\n -> Index Scan using\n\"FKI_rec_id\" on reconciliation_cancels (cost=0.00..56.80 rows=2436\nwidth=8) (actual time=0.004..1.534 rows=2436 loops=1)\n -> Sort (cost=100001054.46..100001061.39\nrows=2770 width=241) (actual time=18.984..19.937 rows=3366 loops=1)\n Sort Key: frontier.order_details.acct_id\n -> Hash Join\n(cost=100000131.90..100000896.08 rows=2770 width=241) (actual\ntime=6.525..13.644 rows=2459 loops=1)\n Hash Cond: (\"outer\".tpv_success_id =\n\"inner\".tpv_success_id)\n -> 
Append\n(cost=100000000.00..100000694.84 rows=2776 width=199) (actual\ntime=0.092..4.627 rows=2459 loops=1)\n -> Seq Scan on order_details\n(cost=100000000.00..100000012.45 rows=35 width=199) (actual\ntime=0.001..0.001 rows=0 loops=1)\n Filter: (division_id = ANY\n('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Heap Scan on\norder_details_august_2007 order_details (cost=2.33..88.63 rows=380\nwidth=158) (actual time=0.089..0.557 rows=330 loops=1)\n Recheck Cond: (division_id\n= ANY ('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Index Scan on\n\"IX_august_2007_provision_list2\" (cost=0.00..2.33 rows=380 width=0)\n(actual time=0.075..0.075 rows=330 loops=1)\n -> Bitmap Heap Scan on\norder_details_july_2007 order_details (cost=2.31..71.39 rows=288\nwidth=159) (actual time=0.082..0.521 rows=314 loops=1)\n Recheck Cond: (division_id\n= ANY ('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Index Scan on\n\"IX_july_2007_provision_list2\" (cost=0.00..2.31 rows=288 width=0)\n(actual time=0.069..0.069 rows=314 loops=1)\n -> Bitmap Heap Scan on\norder_details_june_2007 order_details (cost=2.05..71.82 rows=279\nwidth=148) (actual time=0.029..0.106 rows=51 loops=1)\n Recheck Cond: (division_id\n= ANY ('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Index Scan on\n\"IX_june_2007_provision_list2\" (cost=0.00..2.05 rows=279 width=0)\n(actual time=0.022..0.022 rows=51 loops=1)\n -> Bitmap Heap Scan on\norder_details_october_2007 order_details (cost=7.24..279.34 rows=1060\nwidth=159) (actual time=0.285..2.035 rows=1244 loops=1)\n Recheck Cond: (division_id\n= ANY ('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Index Scan on\n\"IX_october_2007_provision_list2\" (cost=0.00..7.24 rows=1060 width=0)\n(actual time=0.239..0.239 rows=1244 loops=1)\n -> Bitmap Heap Scan on\norder_details_september_2007 order_details (cost=4.52..171.21\nrows=734 width=150) (actual time=0.130..0.856 rows=520 loops=1)\n Recheck Cond: (division_id\n= ANY ('{79,38,70,66,35,40,37,36,67,41,65,39}'::integer[]))\n -> Bitmap Index Scan on\n\"IX_september_2007_provision_list2\" (cost=0.00..4.52 rows=734\nwidth=0) (actual time=0.110..0.110 rows=520 loops=1)\n -> Hash (cost=118.21..118.21\nrows=5473 width=8) (actual time=6.414..6.414 rows=5484 loops=1)\n -> Bitmap Heap Scan on\nverification_success (cost=22.48..118.21 rows=5473 width=8) (actual\ntime=0.731..3.311 rows=5484 loops=1)\n Recheck Cond: (checked_out\nIS NULL)\n -> Bitmap Index Scan on\n\"IX_checked_out_isnull\" (cost=0.00..22.48 rows=5473 width=0) (actual\ntime=0.722..0.722 rows=5485 loops=1)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0)\n(actual time=0.001..0.001 rows=1 loops=3366)\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.000..0.000 rows=1 loops=780)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual\ntime=0.000..0.000 rows=1 loops=2459)\nTotal runtime: 16593.353 ms\n\n\nI have attached an erd of the tables used in this query. 
If it is\nstripped out it can be viewed here: http://www.ketema.net/provision_list_tables_erd.jpg\n\nMy concern is with the sort step that takes 15 seconds by itself:\n\n-> Sort (cost=1235567017.53..1238002161.29 rows=974057502 width=290)\n(actual time=16576.997..16577.513 rows=3366 loops=1)\n Sort Key: frontier.order_details.tpv_success_id,\nfrontier.order_details.tpv_id, frontier.order_details.ver_code,\nfrontier.order_details.app_id, frontier.order_details.acct_id,\n(((frontier.order_details.first_name)::text || ' '::text) ||\n(frontier.order_details.last_name)::text),\nfrontier.order_details.order_date, verification_success.checked_out,\nfrontier.order_details.csr_name, frontier.order_details.products,\nfrontier.order_details.promotion, (subplan),\nfrontier.order_details.division_id,\nfrontier.order_details.has_dish_billing_info\n\nI believe this is due to the aggregate done in the select clause as\nwell as the sub select:\n\n(select\narray_upper(acct_product_data_requirements_details.acct_prod_ids, 1))\nas num_prods,\n\t\tcount(distinct case when provision_issues.account_product_id is not\nnull and provision_issues.resolved_date is null then\nprovision_issues.account_product_id end ) as num_open_issues,\n\t\tcount(case when reconciliations.rec_id is not null and\nreconciliation_cancels.rec_cancel_id is null then\nreconciliations.rec_id end ) as num_provisioned,\n\t\tcount(case when reconciliation_cancels.rec_cancel_id is not null\nthen reconciliation_cancels.rec_cancel_id end ) as num_canceled\nI believe the counts and sub select cause the sort to be performed\nmultiple times?\n\nHow can I improve this step?\n\nThings I have thought about:\n1)Creating indexes on the aggregates...Found out this can't be done.\n2)Create Views of the counts and the sub select...is this any faster\nas the view is executed at run time anyway?\n3)Create actual tables of the sub select and aggregates...How would\nthis be maintained to ensure it was always accurate?\n4)Increasing hardware resources. Currently box is on a single\nprocessor amd64 with 8Gb of RAM. below are the settings for resource\nusage.\nshared_buffers = 65536\ntemp_buffers = 5000\nmax_prepared_transactions = 2000\nwork_mem = 131072\nmaintenance_work_mem = 512000\nmax_stack_depth = 7168\nmax_fsm_pages = 160000\nmax_fsm_relations = 4000\nThe only function of this box if for Pg, so I do not mind it using\nevery last drop of ram and resources that it can.\n5)Upgrade version of pg..currently is running 8.1.4\n\nWould appreciate any suggestions.\n\nThanks\n\nhttp://pgsql.privatepaste.com/7ffDdPQvIN\n\n", "msg_date": "Tue, 30 Oct 2007 12:18:57 -0000", "msg_from": "Ketema <[email protected]>", "msg_from_op": true, "msg_subject": "Improving Query" }, { "msg_contents": "Ketema wrote:\n> I have the following query that is a part of a function:\n\nYikes! Difficult to get a clear view of what this query is doing.\n\nOK, I'm assuming you're vacuumed and analysed on all these tables...\n\n\n> My concern is with the sort step that takes 15 seconds by itself:\n> \n> -> Sort (cost=1235567017.53..1238002161.29 rows=974057502 width=290)\n> (actual time=16576.997..16577.513 rows=3366 loops=1)\n\nThat's taking hardly any time, the startup time is 16576.997 already. 
Of \ncourse, the row estimate is *way* out of line.\n\nIf you look here (where the explain is a bit easier to see)\nhttp://explain-analyze.info/query_plans/1258-query-plan-224\n\nThe two main things to look at seem to be the nested loops near the top \nand a few lines down the materialise (cost=326...\n\nThese two nested loops seem to be pushing the row estimates wildly out \nof reality. They also consume much of the time.\n\nThe immediate thing that leaps out here is that you are trying to join \nan int to an array of ints. Why are you using this setup rather than a \nseparate table?\n\n> How can I improve this step?\n> \n> Things I have thought about:\n> 1)Creating indexes on the aggregates...Found out this can't be done.\n\nNope - not sure what it would mean in any case.\n\n> 2)Create Views of the counts and the sub select...is this any faster\n> as the view is executed at run time anyway?\n\nMight make the query easier to write, won't make it faster. Not without \nmaterialised views which are the fancy name for #3...\n\n> 3)Create actual tables of the sub select and aggregates...How would\n> this be maintained to ensure it was always accurate?\n\nTriggers.\n\n> 4)Increasing hardware resources. Currently box is on a single\n> processor amd64 with 8Gb of RAM. below are the settings for resource\n> usage.\n> shared_buffers = 65536\n> temp_buffers = 5000\n> max_prepared_transactions = 2000\n\n????\n\n> work_mem = 131072\n> maintenance_work_mem = 512000\n\nCan't say about these without knowing whether you've got only one \nconnection or 100.\n\n> max_stack_depth = 7168\n> max_fsm_pages = 160000\n> max_fsm_relations = 4000\n> The only function of this box if for Pg, so I do not mind it using\n> every last drop of ram and resources that it can.\n> 5)Upgrade version of pg..currently is running 8.1.4\n\nWell every version gets better at planning, so it can't hurt.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 30 Oct 2007 13:23:39 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query" }, { "msg_contents": "\nOn Oct 30, 2007, at 7:18 , Ketema wrote:\n\n> here is the execution plan:\n\nI've put this online here:\n\nhttp://explain-analyze.info/query_plans/1259-ketema-2007-10-30\n\n> I have attached an erd of the tables used in this query. If it is\n> stripped out it can be viewed here: http://www.ketema.net/ \n> provision_list_tables_erd.jpg\n>\n> My concern is with the sort step that takes 15 seconds by itself:\n>\n> -> Sort (cost=1235567017.53..1238002161.29 rows=974057502 width=290)\n> (actual time=16576.997..16577.513 rows=3366 loops=1)\n\nWhat jumps out at me is the huge difference in estimated and returned \nrows, and the huge cost estimates. Have you analyzed recently?\n\nDo you have enable_seqscan disabled? It appears so, due to the high \ncost here:\n\n-> Seq Scan on order_details (cost=100000000.0..100000012.45 rows=35 \nwidth=199) (actual time=0.001..0.001 rows=0 loops=1)\n\nhttp://explain-analyze.info/query_plans/1259-ketema-2007-10-30#node-3594\n\nWhat does it look like with seqscan enabled?\n\n\n> 2)Create Views of the counts and the sub select...is this any faster\n> as the view is executed at run time anyway?\n\nViews aren't materialized: it's like inlining the definition of the \nview itself in the query.\n\n> 3)Create actual tables of the sub select and aggregates...How would\n> this be maintained to ensure it was always accurate?\n\nOne way would be to update the summaries using triggers. 
Hopefully \nyou won't need to do this after analyzing and perhaps tweaking your \nserver configuration.\n\nUnfortunately I don't have the time to look at the query plan in more \ndetail, but I suspect there's a better way to get the results you're \nlooking for.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n", "msg_date": "Tue, 30 Oct 2007 08:31:46 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query" }, { "msg_contents": "\nOn Oct 30, 2007, at 9:23 AM, Richard Huxton wrote:\n\n> Ketema wrote:\n>> I have the following query that is a part of a function:\n>\n> Yikes! Difficult to get a clear view of what this query is doing.\nIt seems complicated because you only have a small subset of the \nschema these tables tie into.\nBe happy to share the whole thing, if it is needed.\n>\n> OK, I'm assuming you're vacuumed and analysed on all these tables...\nYes. Auto-vacuum is on and do a Full vacuuum every 2 days.\n>\n>\n>> My concern is with the sort step that takes 15 seconds by itself:\n>> -> Sort (cost=1235567017.53..1238002161.29 rows=974057502 \n>> width=290)\n>> (actual time=16576.997..16577.513 rows=3366 loops=1)\n>\n> That's taking hardly any time, the startup time is 16576.997 \n> already. Of course, the row estimate is *way* out of line.\nOK. I misread the plan and took start up time as the time it took to \nperform operation. Thanks for the link to explain analyze.\n>\n> If you look here (where the explain is a bit easier to see)\n> http://explain-analyze.info/query_plans/1258-query-plan-224\n>\n> The two main things to look at seem to be the nested loops near the \n> top and a few lines down the materialise (cost=326...\n>\n> These two nested loops seem to be pushing the row estimates wildly \n> out of reality. They also consume much of the time.\n>\n> The immediate thing that leaps out here is that you are trying to \n> join an int to an array of ints. Why are you using this setup \n> rather than a separate table?\nI see what you are talking about. When I initially used this set up \nit was because I wanted to avoid a table that had a ton of rows in it \nthat I knew I would have to join to often. So I made a column that \nholds on average 4 or 5 ints representing \"products\" on a particular \n\"order\". I did not realize that using a function in the join would be \nworse that simply having a large table.\n>\n>> How can I improve this step?\n>> Things I have thought about:\n>> 1)Creating indexes on the aggregates...Found out this can't be done.\n>\n> Nope - not sure what it would mean in any case.\nMy initial thought was the counts were causing the slow up. THis is \nnot the issue though as you have shown.\n>\n>> 2)Create Views of the counts and the sub select...is this any faster\n>> as the view is executed at run time anyway?\n>\n> Might make the query easier to write, won't make it faster. Not \n> without materialised views which are the fancy name for #3...\n>\n>> 3)Create actual tables of the sub select and aggregates...How would\n>> this be maintained to ensure it was always accurate?\n>\n> Triggers.\nBecause of the use of this system I may take this route as I think it \nwill be less changes.\n>\n>> 4)Increasing hardware resources. Currently box is on a single\n>> processor amd64 with 8Gb of RAM. 
below are the settings for resource\n>> usage.\n>> shared_buffers = 65536\n>> temp_buffers = 5000\n>> max_prepared_transactions = 2000\n>\n> ????\nThese are settings out of postgresql.conf Currently systctl.conf is \nset to kernel.shmmax = 805306368\nconnections are at 300 and I usually have about 200 connections open.\n>\n>> work_mem = 131072\n>> maintenance_work_mem = 512000\n>\n> Can't say about these without knowing whether you've got only one \n> connection or 100.\n>\n>> max_stack_depth = 7168\n>> max_fsm_pages = 160000\n>> max_fsm_relations = 4000\n>> The only function of this box if for Pg, so I do not mind it using\n>> every last drop of ram and resources that it can.\n>> 5)Upgrade version of pg..currently is running 8.1.4\n>\n> Well every version gets better at planning, so it can't hurt.\nAt one point I did go to 8.2.3 on a dev box and performance was \nhorrible. Have not had opportunity to see how to make \npostgresql.conf file in 8.2 match settings in 8.1 as some things have \nchanged.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n", "msg_date": "Tue, 30 Oct 2007 09:42:02 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query" }, { "msg_contents": "\nOn Oct 30, 2007, at 9:31 AM, Michael Glaesemann wrote:\n\n>\n> On Oct 30, 2007, at 7:18 , Ketema wrote:\n>\n>> here is the execution plan:\n>\n> I've put this online here:\n>\n> http://explain-analyze.info/query_plans/1259-ketema-2007-10-30\n>\n>> I have attached an erd of the tables used in this query. If it is\n>> stripped out it can be viewed here: http://www.ketema.net/ \n>> provision_list_tables_erd.jpg\n>>\n>> My concern is with the sort step that takes 15 seconds by itself:\n>>\n>> -> Sort (cost=1235567017.53..1238002161.29 rows=974057502 \n>> width=290)\n>> (actual time=16576.997..16577.513 rows=3366 loops=1)\n>\n> What jumps out at me is the huge difference in estimated and \n> returned rows, and the huge cost estimates. Have you analyzed \n> recently?\nYes. I run vacuum FULL ANALYZE VERBOSE every two days with a cron job.\n\nI am running again now any way.\n>\n> Do you have enable_seqscan disabled? It appears so, due to the high \n> cost here:\n>\n> -> Seq Scan on order_details (cost=100000000.0..100000012.45 \n> rows=35 width=199) (actual time=0.001..0.001 rows=0 loops=1)\n>\n> http://explain-analyze.info/query_plans/1259- \n> ketema-2007-10-30#node-3594\n>\n> What does it look like with seqscan enabled?\nit was disabled. new plan posted here:\n\nhttp://explain-analyze.info/query_plans/1261-provision-list-seq-scan- \nenabled\n>\n>\n>> 2)Create Views of the counts and the sub select...is this any faster\n>> as the view is executed at run time anyway?\n>\n> Views aren't materialized: it's like inlining the definition of the \n> view itself in the query.\n>\n>> 3)Create actual tables of the sub select and aggregates...How would\n>> this be maintained to ensure it was always accurate?\n>\n> One way would be to update the summaries using triggers. 
Hopefully \n> you won't need to do this after analyzing and perhaps tweaking your \n> server configuration.\n>\n> Unfortunately I don't have the time to look at the query plan in \n> more detail, but I suspect there's a better way to get the results \n> you're looking for.\n>\n> Michael Glaesemann\n> grzm seespotcode net\n>\n>\n\n", "msg_date": "Tue, 30 Oct 2007 10:15:34 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Ketema wrote:\n>> 5)Upgrade version of pg..currently is running 8.1.4\n\n> Well every version gets better at planning, so it can't hurt.\n\n+1 ... there are at least two things about this query that 8.2 could be\nexpected to be a great deal smarter about:\n* mixed outer and inner joins\n* something = ANY(array)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 30 Oct 2007 11:39:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query " }, { "msg_contents": "I am definitely interested in upgrading.\n\nIs there a guide out there that perhaps was created to explain the \nchanges in the config files from 8.1 to 8.2 ?\n\nMigration guide I guess?\n\n\nOn Oct 30, 2007, at 11:39 AM, Tom Lane wrote:\n\n> Richard Huxton <[email protected]> writes:\n>> Ketema wrote:\n>>> 5)Upgrade version of pg..currently is running 8.1.4\n>\n>> Well every version gets better at planning, so it can't hurt.\n>\n> +1 ... there are at least two things about this query that 8.2 \n> could be\n> expected to be a great deal smarter about:\n> * mixed outer and inner joins\n> * something = ANY(array)\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Tue, 30 Oct 2007 12:41:00 -0400", "msg_from": "Ketema Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Improving Query " } ]
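A hedged sketch of the separate table Richard asks about, replacing the int-to-array join; the table name acct_products and the assumption that acct_id is the key of acct_product_data_requirements_details are invented here, so treat this as an illustration rather than a drop-in migration:

CREATE TABLE acct_products (
    acct_id            bigint NOT NULL,
    account_product_id bigint NOT NULL,
    PRIMARY KEY (acct_id, account_product_id)  -- assumes no duplicate product ids per account
);

-- one-time backfill, expanding each array element into its own row
-- (a set-returning function in the target list works on 8.1/8.2)
INSERT INTO acct_products (acct_id, account_product_id)
SELECT acct_id,
       acct_prod_ids[generate_series(array_lower(acct_prod_ids, 1),
                                     array_upper(acct_prod_ids, 1))]
FROM acct_product_data_requirements_details;

With a table like this, the join against account_product_id becomes an ordinary equality join the planner can estimate, and num_prods can be computed as count(*) per acct_id instead of array_upper().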
[ { "msg_contents": "Hi list,\n\nI have the following query:\nselect t.a1, t.a2 from table1 t inner join table2 s \nusing(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;\n\nWith the following output from analyze:\n\"Sort (cost=35075.03..35077.51 rows=991 width=14) (actual\ntime=33313.718..33321.935 rows=22599 loops=1)\"\n\" Sort Key: s.pos\"\n\" -> Hash Join (cost=7851.48..35025.71 rows=991 width=14) (actual\ntime=256.513..33249.701 rows=22599 loops=1)\"\n\" Hash Cond: ((t.id)::text = (s.id)::text)\"\n\" -> Bitmap Heap Scan on table1 t (cost=388.25..27357.57\nrows=22286 width=23) (actual time=112.595..32989.663 rows=22864\nloops=1)\"\n\" Recheck Cond: ((pid)::text = 'xyz'::text)\"\n\" -> Bitmap Index Scan on idx_table1 (cost=0.00..382.67\nrows=22286 width=0) (actual time=103.790..103.790 rows=22864 loops=1)\"\n\" Index Cond: ((pid)::text = 'xyz'::text)\"\n\" -> Hash (cost=7180.62..7180.62 rows=22609 width=17) (actual\ntime=143.867..143.867 rows=22864 loops=1)\"\n\" -> Bitmap Heap Scan on table2 s (cost=333.00..7180.62\nrows=22609 width=17) (actual time=108.715..126.637 rows=22864 loops=1)\"\n\" Recheck Cond: ((chromosome)::text = '9'::text)\"\n\" -> Bitmap Index Scan on idx_table2 \n(cost=0.00..327.35 rows=22609 width=0) (actual time=108.608..108.608\nrows=22864 loops=1)\"\n\" Index Cond: ((chromosome)::text =\n'9'::text)\"\n\nMy OS is Windows 2003 with 4GB Ram and Xeon Duo with 3.2 GHz;\nshared_buffers is set to 32MB (as I read it should be fairly low on\nWindows) and work_mem is set to 2500MB, but nevertheless the query takes\nabout 38 seconds to finish. The table \"table1\" contains approx. 3\nmillion tuples and table2 approx. 500.000 tuples. If anyone could give\nan advice on either how to optimize the settings in postgresql.conf or\nanything else to make this query run faster, I really would appreciate.\n\n\n\n\nChristian Rengstl M.A.\nKlinik und Poliklinik für Innere Medizin II\nKardiologie - Forschung\nUniversitätsklinikum Regensburg\nB3 1.388\nFranz-Josef-Strauss-Allee 11\n93053 Regensburg\nTel.: +49-941-944-7230\n\n\n\n", "msg_date": "Tue, 30 Oct 2007 14:28:27 +0100", "msg_from": "\"Christian Rengstl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing PostgreSQL for Windows" }, { "msg_contents": "Although I'm not an expert on this stuff, but 32 MB of shared buffers\nseems quite low to me, even for a windows machine. I'm running postgres\n8.2 on my workstation with 2GB of ram and an AMD x64 3500+ with\nshared_buffer set to 256MB without any trouble an it's running fine,\neven on large datasets and other applications running. In my experience, \nshared_buffers are more important than work_mem. 
\n\nHave you tried increasing default_statistic_targets (eg to 200 or more) and after that\nrunning \"analyze\" on your tables or the entire database?\n\n\nMarc\n\nChristian Rengstl wrote:\n> Hi list,\n>\n> I have the following query:\n> select t.a1, t.a2 from table1 t inner join table2 s \n> using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;\n>\n> With the following output from analyze:\n> \"Sort (cost=35075.03..35077.51 rows=991 width=14) (actual\n> time=33313.718..33321.935 rows=22599 loops=1)\"\n> \" Sort Key: s.pos\"\n> \" -> Hash Join (cost=7851.48..35025.71 rows=991 width=14) (actual\n> time=256.513..33249.701 rows=22599 loops=1)\"\n> \" Hash Cond: ((t.id)::text = (s.id)::text)\"\n> \" -> Bitmap Heap Scan on table1 t (cost=388.25..27357.57\n> rows=22286 width=23) (actual time=112.595..32989.663 rows=22864\n> loops=1)\"\n> \" Recheck Cond: ((pid)::text = 'xyz'::text)\"\n> \" -> Bitmap Index Scan on idx_table1 (cost=0.00..382.67\n> rows=22286 width=0) (actual time=103.790..103.790 rows=22864 loops=1)\"\n> \" Index Cond: ((pid)::text = 'xyz'::text)\"\n> \" -> Hash (cost=7180.62..7180.62 rows=22609 width=17) (actual\n> time=143.867..143.867 rows=22864 loops=1)\"\n> \" -> Bitmap Heap Scan on table2 s (cost=333.00..7180.62\n> rows=22609 width=17) (actual time=108.715..126.637 rows=22864 loops=1)\"\n> \" Recheck Cond: ((chromosome)::text = '9'::text)\"\n> \" -> Bitmap Index Scan on idx_table2 \n> (cost=0.00..327.35 rows=22609 width=0) (actual time=108.608..108.608\n> rows=22864 loops=1)\"\n> \" Index Cond: ((chromosome)::text =\n> '9'::text)\"\n>\n> My OS is Windows 2003 with 4GB Ram and Xeon Duo with 3.2 GHz;\n> shared_buffers is set to 32MB (as I read it should be fairly low on\n> Windows) and work_mem is set to 2500MB, but nevertheless the query takes\n> about 38 seconds to finish. The table \"table1\" contains approx. 3\n> million tuples and table2 approx. 500.000 tuples. If anyone could give\n> an advice on either how to optimize the settings in postgresql.conf or\n> anything else to make this query run faster, I really would appreciate.\n>\n>\n>\n>\n> Christian Rengstl M.A.\n> Klinik und Poliklinik für Innere Medizin II\n> Kardiologie - Forschung\n> Universitätsklinikum Regensburg\n> B3 1.388\n> Franz-Josef-Strauss-Allee 11\n> 93053 Regensburg\n> Tel.: +49-941-944-7230\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n> \n\n-- \n\nMarc Schablewski\nclick:ware Informationstechnik GmbH\n\n", "msg_date": "Tue, 30 Oct 2007 15:21:03 +0100", "msg_from": "Marc Schablewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL for Windows" }, { "msg_contents": ">From: Christian Rengstl\n>Subject: [PERFORM] Optimizing PostgreSQL for Windows\n>\n>Hi list,\n>\n>I have the following query:\n>select t.a1, t.a2 from table1 t inner join table2 s\n>using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;\n>\n>\" -> Bitmap Heap Scan on table1 t (cost=388.25..27357.57\n>rows=22286 width=23) (actual time=112.595..32989.663 rows=22864 loops=1)\"\n>\" Recheck Cond: ((pid)::text = 'xyz'::text)\"\n>\" -> Bitmap Index Scan on idx_table1 (cost=0.00..382.67\n>rows=22286 width=0) (actual time=103.790..103.790 rows=22864 loops=1)\"\n>\" Index Cond: ((pid)::text = 'xyz'::text)\"\n\n\nThe bitmap heap scan on table1 seems very slow. What version of Postgres\nare you using? There were performance enhancements in 8.1 and 8.2. What\nkind of a hard drive are you using? 
I would guess a single SATA drive would\ngive you better performance than that, but I don't know for sure. Do you\nregularly vacuum the table? Not enough vacuuming can lead to tables filled\nwith dead rows, which can increase the amount of data needing to be scanned\nconsiderably.\n\nDave\n\n\n\n", "msg_date": "Tue, 30 Oct 2007 10:13:15 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL for Windows" }, { "msg_contents": "Christian Rengstl a écrit :\n> My OS is Windows 2003 with 4GB Ram and Xeon Duo with 3.2 GHz;\n> shared_buffers is set to 32MB (as I read it should be fairly low on\n> Windows) and work_mem is set to 2500MB, but nevertheless the query takes\n> about 38 seconds to finish. The table \"table1\" contains approx. 3\n> million tuples and table2 approx. 500.000 tuples. If anyone could give\n> an advice on either how to optimize the settings in postgresql.conf or\n> anything else to make this query run faster, I really would appreciate.\n> \n\n32MB for shared_buffers seems really low to me but 2500MB for work_mem\nseems awfully high. The highest I've seen for work_mem was something\nlike 128MB. I think the first thing you have to do is to really lower\nwork_mem. Something like 64MB seems a better bet at first.\n\nRegards.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n", "msg_date": "Tue, 30 Oct 2007 20:21:05 +0100", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing PostgreSQL for Windows" }, { "msg_contents": "Now the execution time for my query is down to ~10 - 13 seconds, which\nis already a big step ahead. Thanks!\nAre there any other settings that might be necessary to tweak on\nwindows in order to reduce execution time even a little bit more?\nOne thing i don't understand very well though is that if I execute the\nquery on table 1 with some conditions for the first time it is still\nslow, but when i execute it more often with changing the conditions it\ngets faster. Even when i query table 1 then query table 3 (with the same\ntable definition) and then query table 1 again, the query on table 1\ngets faster again.\n\n\nChristian Rengstl M.A.\nKlinik und Poliklinik für Innere Medizin II\nKardiologie - Forschung\nUniversitätsklinikum Regensburg\nB3 1.388\nFranz-Josef-Strauss-Allee 11\n93053 Regensburg\nTel.: +49-941-944-7230\n\n\n\n\n>>> On Tue, Oct 30, 2007 at 8:21 PM, in message\n<[email protected]>,\nGuillaume Lelarge <[email protected]> wrote: \n> Christian Rengstl a écrit :\n>> My OS is Windows 2003 with 4GB Ram and Xeon Duo with 3.2 GHz;\n>> shared_buffers is set to 32MB (as I read it should be fairly low on\n>> Windows) and work_mem is set to 2500MB, but nevertheless the query\ntakes\n>> about 38 seconds to finish. The table \"table1\" contains approx. 3\n>> million tuples and table2 approx. 500.000 tuples. If anyone could\ngive\n>> an advice on either how to optimize the settings in postgresql.conf\nor\n>> anything else to make this query run faster, I really would\nappreciate.\n>> \n> \n> 32MB for shared_buffers seems really low to me but 2500MB for\nwork_mem\n> seems awfully high. The highest I've seen for work_mem was something\n> like 128MB. I think the first thing you have to do is to really\nlower\n> work_mem. 
Something like 64MB seems a better bet at first.\n> \n> Regards.\n> \n> \n> -- \n> Guillaume.\n> http://www.postgresqlfr.org\n> http://dalibo.com\n\n", "msg_date": "Wed, 31 Oct 2007 09:43:57 +0100", "msg_from": "\"Christian Rengstl\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing PostgreSQL for Windows" } ]
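A hedged starting point for the settings discussed in this thread, assuming a dedicated 4GB Windows box running 8.2 or later (8.1 and older take these values in kB or buffer pages rather than with MB/GB units); the numbers are rough guesses to be verified with EXPLAIN ANALYZE, not tuned values:

shared_buffers = 128MB            # Windows prefers smaller values than Unix, but 32MB is very low
work_mem = 64MB                   # allocated per sort/hash in each backend, so keep it modest
effective_cache_size = 2GB        # rough estimate of how much of the database the OS can cache
default_statistics_target = 200   # the GUC Marc refers to; run ANALYZE on the tables afterwards

The improvement Christian sees on repeated executions is most likely just the OS file cache and shared buffers warming up, which is also why the first query against a large table after a restart is the slowest.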
[ { "msg_contents": "Hi All!\n\nI had a big big big table. I tried to divide it in 300 partitions with \n30M rows each one. The problem was when I used the table to insert \ninformation: the perfomance was LOW.\n\nI did some testing. I created a 300 partitioned empty table. Then, I \ninserted some rows on it and the perfomance was SLOW too.\n\nSLOW = 1% perfomance compared with a non partitioned table. That is too \nmuch.\n\nThen, I did a 10 partitioned table version with 30M rows each one and I \ninserted rows there. The performance was the same that the no \npartitioned table version.\n\nI suspect there is a lock problem there. I think every SQL command do a \nlock to ALL the partitions so the perfomance with concurrent inserts and \nupdates are far worst than the no partitioned version.\n\nThe perfomace degrade with the number of partitions. And it degrade \nfast: I have troubles with 40 partitions.\n\nAm I right? is there a workaround? Can I replace the partitioned version \nwith another schema? any suggestion? I prefer to use something \ntransparent for the program because it uses EJB3 = deep changes and \ntesting on any change to the database layer.\n\n\nRegards\n\nPablo Alcaraz\n\n", "msg_date": "Tue, 30 Oct 2007 14:09:05 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "tables with 300+ partitions" }, { "msg_contents": "Pablo Alcaraz wrote:\n> I had a big big big table. I tried to divide it in 300 partitions with \n> 30M rows each one. The problem was when I used the table to insert \n> information: the perfomance was LOW.\n\nThat's very vague. What exactly did you do? Just inserted a few rows, or \nperhaps a large bulk load of millions of rows? What was the bottleneck, \ndisk I/O or CPU usage? How long did the operation take, and how long did \nyou expect it to take?\n\n> I did some testing. I created a 300 partitioned empty table. Then, I \n> inserted some rows on it and the perfomance was SLOW too.\n> \n> SLOW = 1% perfomance compared with a non partitioned table. That is too \n> much.\n> \n> Then, I did a 10 partitioned table version with 30M rows each one and I \n> inserted rows there. The performance was the same that the no \n> partitioned table version.\n\nThat suggests that the CPU time is spent in planning the query, possibly \nin constraint exclusion. But that's a very different scenario from \nhaving millions of rows in each partition.\n\n\n> I suspect there is a lock problem there. I think every SQL command do a \n> lock to ALL the partitions so the perfomance with concurrent inserts and \n> updates are far worst than the no partitioned version.\n\nEvery query takes an AccessShareLock on each partition, but that doesn't \nprevent concurrent inserts or updates, and acquiring the locks isn't \nvery expensive. In other words: no, that's not it.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 30 Oct 2007 18:56:50 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": "On 10/30/07, Pablo Alcaraz <[email protected]> wrote:\n>\n> I did some testing. I created a 300 partitioned empty table. Then, I\n> inserted some rows on it and the perfomance was SLOW too.\n\n\nIs the problem with inserting to the partitioned table or selecting from\nit? It sounds like inserting is the problem in which case I ask: how are\nyou redirecting inserts to the appropriate partition? 
If you're using\nrules, then insert performance will quickly degrade with number of\npartitions as *every* rule needs to be evaluated for *every* row inserted to\nthe base table. Using a trigger which you can modify according to some\nschedule is much faster, or better yet, use some application-level logic to\ninsert directly to the desired partition.\n\nSteve\n\nOn 10/30/07, Pablo Alcaraz <[email protected]> wrote:\nI did some testing. I created a 300 partitioned empty table. Then, Iinserted some rows on it and the perfomance was SLOW too.\n\n \nIs the problem with inserting to the partitioned table or selecting from it?  It sounds like inserting is the problem in which case I ask: how are you redirecting inserts to the appropriate partition?  If you're using rules, then insert performance will quickly degrade with number of partitions as *every* rule needs to be evaluated for *every* row inserted to the base table.  Using a trigger which you can modify according to some schedule is much faster, or better yet, use some application-level logic to insert directly to the desired partition.\n\n \nSteve", "msg_date": "Tue, 30 Oct 2007 17:00:10 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": "Steven Flatt wrote:\n> On 10/30/07, *Pablo Alcaraz* <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> I did some testing. I created a 300 partitioned empty table. Then, I\n> inserted some rows on it and the perfomance was SLOW too. \n>\n> \n> Is the problem with inserting to the partitioned table or selecting \n> from it? It sounds like inserting is the problem in which case I ask: \n> how are you redirecting inserts to the appropriate partition? If \n> you're using rules, then insert performance will quickly degrade with \n> number of partitions as *every* rule needs to be evaluated for *every* \n> row inserted to the base table. Using a trigger which you can modify \n> according to some schedule is much faster, or better yet, use some \n> application-level logic to insert directly to the desired partition.\n> \n> Steve\nI was a program inserting into the base table. The program ran in 200+ \nthreads and every thread insert data on it. Every thread inserts a row \nevery 3 seconds aprox.(or they used to do it), but when I put more \npartitions the insert speed went to 1 insert every 2 minutes.\n\nThe selects that need to evaluate all partitions were slow too, but I \nthink I can wait for them. :D\n\nI wonder if the update are slow too. I do not know that.\n\nDo I need to do a trigger for insert only or I need a trigger to update \nand delete too?\n\nTo modify the appilication logic for this is not an options because they \nneed to open the program, modify it and retest. All because an \nimplementation problem. I prefer to try to solve it at the database \nlevel because the database need this table partitioned.\n\nThanks for your help\n\nRegards.\n\nPablo\n> \n\n\n\n\n\n\n\n\nSteven Flatt wrote:\n\nOn 10/30/07, Pablo Alcaraz <[email protected]>\nwrote:\nI\ndid some testing. I created a 300 partitioned empty table. Then, I\ninserted some rows on it and the perfomance was SLOW too.\n \n \nIs the problem with inserting to the partitioned table or\nselecting from it?  It sounds like inserting is the problem in which\ncase I ask: how are you redirecting inserts to the appropriate\npartition?  
If you're using rules, then insert performance will quickly\ndegrade with number of partitions as *every* rule needs to be evaluated\nfor *every* row inserted to the base table.  Using a trigger which you\ncan modify according to some schedule is much faster, or better yet,\nuse some application-level logic to insert directly to the desired\npartition.\n \n \nSteve\n\n\nI was a program inserting into the base table. The program ran in 200+\nthreads and every thread insert data on it. Every thread inserts a row\nevery 3 seconds aprox.(or they used to do it), but when I put more\npartitions the insert speed went to 1 insert every 2 minutes.\n\nThe selects that need to evaluate all partitions were slow too, but I\nthink I can wait for them. :D\n\nI wonder if the update are slow too. I do not know that.\n\nDo I need to do a trigger for insert only or I need a trigger to update\nand delete too?\n\nTo modify the appilication logic for this is not an options because\nthey need to open the program, modify it and retest. All because an\nimplementation problem. I prefer to try to solve it at the database\nlevel because the database need this table partitioned.\n\nThanks for your help\n\nRegards.\n\nPablo", "msg_date": "Wed, 31 Oct 2007 11:27:42 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": "On 10/31/07, Pablo Alcaraz <[email protected]> wrote:\n>\n> I was a program inserting into the base table. The program ran in 200+\n> threads and every thread insert data on it. Every thread inserts a row every\n> 3 seconds aprox.(or they used to do it), but when I put more partitions the\n> insert speed went to 1 insert every 2 minutes.\n>\n\nWe still need to know how you're redirecting inserts to the appropriate\npartition. My guess is that you have rules on the base table which say \"if\nan incoming row matches a certain criteria, then insert into partition x\ninstead\". There are several discussions on this newsgroup already about why\nusing rules for partitioning hurts insert performance.\n\nNext question is *how* are you partitioning the table? If you're\npartitioning based on some sort of log time field (i.e. today's data goes in\nthis partition, tomorrow's data goes in that partition, etc.), then it is\nreasonably easy to use a single trigger (as oppose to many rules) on the\nbase table which you can modify on some schedule (using cron, for example).\nIf you're partitioning based on some other id field, then using a trigger\nwon't work as nicely.\n\n\n Do I need to do a trigger for insert only or I need a trigger to update and\n> delete too?\n>\n\nFor delete, probably not. For update, depends on how you're partitioning.\nCould the update cause the row to belong to a different partition? Do you\ncare about moving it at that point?\n\nSteve\n\nOn 10/31/07, Pablo Alcaraz <[email protected]> wrote:\n\nI was a program inserting into the base table. The program ran in 200+ threads and every thread insert data on it. Every thread inserts a row every 3 seconds aprox.(or they used to do it), but when I put more partitions the insert speed went to 1 insert every 2 minutes.\n\n \nWe still need to know how you're redirecting inserts to the appropriate partition.  My guess is that you have rules on the base table which say \"if an incoming row matches a certain criteria, then insert into partition x instead\".  
There are several discussions on this newsgroup already about why using rules for partitioning hurts insert performance.\n\n \nNext question is *how* are you partitioning the table?  If you're partitioning based on some sort of log time field (i.e. today's data goes in this partition, tomorrow's data goes in that partition, etc.), then it is reasonably easy to use a single trigger (as oppose to many rules) on the base table which you can modify on some schedule (using cron, for example).  If you're partitioning based on some other id field, then using a trigger won't work as nicely.\n\n \n\nDo I need to do a trigger for insert only or I need a trigger to update and delete too?\n \nFor delete, probably not.  For update, depends on how you're partitioning.  Could the update cause the row to belong to a different partition?  Do you care about moving it at that point?\n \nSteve", "msg_date": "Wed, 31 Oct 2007 13:03:21 -0400", "msg_from": "\"Steven Flatt\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": "On 10/31/07, Pablo Alcaraz <[email protected]> wrote:\n>\n> Steven Flatt wrote:\n>\n> On 10/30/07, Pablo Alcaraz <[email protected]> wrote:\n> > I did some testing. I created a 300 partitioned empty table. Then, I\n> > inserted some rows on it and the perfomance was SLOW too.\n>\n>\n> Is the problem with inserting to the partitioned table or selecting from it?\n> It sounds like inserting is the problem in which case I ask: how are you\n> redirecting inserts to the appropriate partition? If you're using rules,\n> then insert performance will quickly degrade with number of partitions as\n> *every* rule needs to be evaluated for *every* row inserted to the base\n> table. Using a trigger which you can modify according to some schedule is\n> much faster, or better yet, use some application-level logic to insert\n> directly to the desired partition.\n>\n> Steve I was a program inserting into the base table. The program ran in 200+\n> threads and every thread insert data on it. Every thread inserts a row every\n> 3 seconds aprox.(or they used to do it), but when I put more partitions the\n> insert speed went to 1 insert every 2 minutes.\n>\n> The selects that need to evaluate all partitions were slow too, but I think\n> I can wait for them. :D\n>\n> I wonder if the update are slow too. I do not know that.\n>\n> Do I need to do a trigger for insert only or I need a trigger to update and\n> delete too?\n\nYou need a trigger for any update / delete / insert you don't want to\nbe really slow. Basically, if a rule is doing it now, you need a\ntrigger to do it to speed it up.\n\nMy experience has been that at 200 to 1000 partitions, the speed of\nthe smaller tables still makes selects faster than with one big table\nfor certain kinds of access. At some point, the cost of planning a\nlookup against thousands of tables will be more than the savings of\nlooking in a really small table.\n\nThe nice thing about triggers is that you can use maths to figure out\nthe name of the table you'll be writing to instead of a static table\nlike most rules use. 
So, all you have to do is make sure the new\ntables get added under the parent and poof, you're ready to go, no\nneed for a new trigger.\n", "msg_date": "Wed, 31 Oct 2007 12:57:17 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": " > Steven Flatt wrote:\n >> On 10/30/07, *Pablo Alcaraz* <[email protected]\n >> <mailto:[email protected]>> wrote:\n >>\n >> I did some testing. I created a 300 partitioned empty table. Then, I\n >> inserted some rows on it and the perfomance was SLOW too.\n >>\n >> Is the problem with inserting to the partitioned table or selecting\n >> from it? It sounds like inserting is the problem in which case I\n >> ask: how are you redirecting inserts to the appropriate partition?\n >> If you're using rules, then insert performance will quickly degrade\n >> with number of partitions as *every* rule needs to be evaluated for\n >> *every* row inserted to the base table. Using a trigger which you\n >> can modify according to some schedule is much faster, or better yet,\n >> use some application-level logic to insert directly to the desired\n >> partition.\n >>\n >> Steve\n > I was a program inserting into the base table. The program ran in 200+\n > threads and every thread insert data on it. Every thread inserts a row\n > every 3 seconds aprox.(or they used to do it), but when I put more\n > partitions the insert speed went to 1 insert every 2 minutes.\n\nOK, that gives about 70 inserts per second - depending on the amount of\ndata inserted this may or may not be manageable. What is the size of the\ndata the threads are writing with each insert, or what is the size of\nthe whole table (not the number of rows, but size in MB / GB). What is\nthe table structure - what indices are defined on it, etc.?\n\nWhat kind of SELECT queries do you execute on the table / partitions?\nAggregations or simple queries? Have you executed ANALYZE on all the\npartitions after loading the data? What are the EXPLAIN plan for the\nslow SELECT queries?\n\nAnyway 300 partitions for 200 threads seems a little bit too much to me.\nI'd use something like 10 partitions or something like that. What\nstrategy have you chosen to redirect the inserts into the partitions,\ni.e. how do you determine the partition the insert should be written to?\n\nMaybe I missed something, but what is the CPU and I/O load? In other\nwords, is the system CPU bound or I/O bound?\n\n > The selects that need to evaluate all partitions were slow too, but I\n > think I can wait for them. :D\n >\n > I wonder if the update are slow too. I do not know that.\n >\n > Do I need to do a trigger for insert only or I need a trigger to\n > update and delete too?\n\nIf you have created the queries using \"INHERITS\" then all you need to do\nis redirect inserts - either using a RULE, a BEFORE INSERT trigger, or a\nstored procedure. Each of these options has advandages / disadvantages:\n\nRules are quite easy to maintain (once you create a new partition you\njust need to create a new rule), but may have serious overhead in case\nof many partitions as you have to evaluate all rules .\n\nTriggers are not as easy to maintain as all the tables have to be in a\nsingle procedure, and adding / removing a partition means modifying the\nprocedure. On the other side the performance may be better in case of\nmany partitions.\n\nBoth the solutions mentioned above have the advantage of transparency,\ni.e. the clients don't need to know about them. 
Stored procedures have\nthe advantages and disadvanteges of a trigger, plus they have to be\ninvoked by the client.\n\n Tomas\n", "msg_date": "Wed, 31 Oct 2007 18:57:28 +0100", "msg_from": "=?windows-1252?Q?Tom=E1=9A_Vondra?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tables with 300+ partitions" }, { "msg_contents": "Scott Marlowe wrote:\n> On 10/31/07, Pablo Alcaraz <[email protected]> wrote:\n> \n>> Steven Flatt wrote:\n>>\n>> On 10/30/07, Pablo Alcaraz <[email protected]> wrote:\n>> \n>>> I did some testing. I created a 300 partitioned empty table. Then, I\n>>> inserted some rows on it and the perfomance was SLOW too.\n>>> \n>> Is the problem with inserting to the partitioned table or selecting from it?\n>> It sounds like inserting is the problem in which case I ask: how are you\n>> redirecting inserts to the appropriate partition? If you're using rules,\n>> then insert performance will quickly degrade with number of partitions as\n>> *every* rule needs to be evaluated for *every* row inserted to the base\n>> table. Using a trigger which you can modify according to some schedule is\n>> much faster, or better yet, use some application-level logic to insert\n>> directly to the desired partition.\n>>\n>> Steve I was a program inserting into the base table. The program ran in 200+\n>> threads and every thread insert data on it. Every thread inserts a row every\n>> 3 seconds aprox.(or they used to do it), but when I put more partitions the\n>> insert speed went to 1 insert every 2 minutes.\n>>\n>> The selects that need to evaluate all partitions were slow too, but I think\n>> I can wait for them. :D\n>>\n>> I wonder if the update are slow too. I do not know that.\n>>\n>> Do I need to do a trigger for insert only or I need a trigger to update and\n>> delete too?\n>> \n>\n> You need a trigger for any update / delete / insert you don't want to\n> be really slow. Basically, if a rule is doing it now, you need a\n> trigger to do it to speed it up.\n>\n> My experience has been that at 200 to 1000 partitions, the speed of\n> the smaller tables still makes selects faster than with one big table\n> for certain kinds of access. At some point, the cost of planning a\n> lookup against thousands of tables will be more than the savings of\n> looking in a really small table.\n>\n> The nice thing about triggers is that you can use maths to figure out\n> the name of the table you'll be writing to instead of a static table\n> like most rules use. So, all you have to do is make sure the new\n> tables get added under the parent and poof, you're ready to go, no\n> need for a new trigger.\n>\n> \n\nCurrently I have a insert rule only and the updates are right solved. 
I \nthink the UPDATEs use the constraint because the program use the base \ntable everywhere.\n\nThis is the base table structure:\n\n-- Table: t\n\n-- DROP TABLE t;\n\nCREATE TABLE t\n(\n idt bigint NOT NULL,\n idtpadre bigint NOT NULL,\n e integer NOT NULL,\n dmodi timestamp without time zone NOT NULL DEFAULT now(),\n p integer NOT NULL DEFAULT 0,\n m text NOT NULL\n)\nWITHOUT OIDS;\nALTER TABLE t OWNER TO e;\n\n\n\n-- Rule: \"t_update_00003 ON t\"\n\n-- DROP RULE t_update_00003 ON t;\n\nCREATE OR REPLACE RULE t_update_00003 AS\n ON INSERT TO t\n WHERE new.idt >= 1::bigint AND new.idt <= 30000000::bigint DO \nINSTEAD INSERT INTO t_00003 (idt, idtpadre, e, dmodi, p, m)\n VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00006 ON t\"\n\n-- DROP RULE t_update_00006 ON t;\n\nCREATE OR REPLACE RULE t_update_00006 AS\n ON INSERT TO t\n WHERE new.idt >= 30000001::bigint AND new.idt <= 60000000::bigint DO \nINSTEAD INSERT INTO t_00006 (idt, idtpadre, e, dmodi, p, m)\n VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00009 ON t\"\n\n-- DROP RULE t_update_00009 ON t;\n\nCREATE OR REPLACE RULE t_update_00009 AS\n ON INSERT TO t\n WHERE new.idt >= 60000001::bigint AND new.idt <= 90000000::bigint DO \nINSTEAD INSERT INTO t_00009 (idt, idtpadre, e, dmodi, p, m)\n VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00012 ON t\"\n\n-- DROP RULE t_update_00012 ON t;\n\nCREATE OR REPLACE RULE t_update_00012 AS\n ON INSERT TO t\n WHERE new.idt >= 90000001::bigint AND new.idt <= 120000000::bigint DO \nINSTEAD INSERT INTO t_00012 (idt, idtpadre, e, dmodi, p, m)\n VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\netc ... 300 hundred partitions\n\n\nThe partitions are created like:\n\nCREATE TABLE t_00003\n(\n CONSTRAINT t_00003_pkey PRIMARY KEY (idt),\n CONSTRAINT t_00003_idt_check CHECK (idt >= 1::bigint AND idt <= \n30000000::bigint)\n) INHERITS (t)\nWITHOUT OIDS;\nALTER TABLE t_00003 OWNER TO e;\n\nCREATE INDEX t_00003_e\n ON t_00003\n USING btree\n (e);\n\netc\n\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn 10/31/07, Pablo Alcaraz <[email protected]> wrote:\n \n\n Steven Flatt wrote:\n\nOn 10/30/07, Pablo Alcaraz <[email protected]> wrote:\n \n\nI did some testing. I created a 300 partitioned empty table. Then, I\ninserted some rows on it and the perfomance was SLOW too.\n \n\n\nIs the problem with inserting to the partitioned table or selecting from it?\n It sounds like inserting is the problem in which case I ask: how are you\nredirecting inserts to the appropriate partition? If you're using rules,\nthen insert performance will quickly degrade with number of partitions as\n*every* rule needs to be evaluated for *every* row inserted to the base\ntable. Using a trigger which you can modify according to some schedule is\nmuch faster, or better yet, use some application-level logic to insert\ndirectly to the desired partition.\n\nSteve I was a program inserting into the base table. The program ran in 200+\nthreads and every thread insert data on it. Every thread inserts a row every\n3 seconds aprox.(or they used to do it), but when I put more partitions the\ninsert speed went to 1 insert every 2 minutes.\n\n The selects that need to evaluate all partitions were slow too, but I think\nI can wait for them. :D\n\n I wonder if the update are slow too. 
I do not know that.\n\n Do I need to do a trigger for insert only or I need a trigger to update and\ndelete too?\n \n\n\nYou need a trigger for any update / delete / insert you don't want to\nbe really slow. Basically, if a rule is doing it now, you need a\ntrigger to do it to speed it up.\n\nMy experience has been that at 200 to 1000 partitions, the speed of\nthe smaller tables still makes selects faster than with one big table\nfor certain kinds of access. At some point, the cost of planning a\nlookup against thousands of tables will be more than the savings of\nlooking in a really small table.\n\nThe nice thing about triggers is that you can use maths to figure out\nthe name of the table you'll be writing to instead of a static table\nlike most rules use. So, all you have to do is make sure the new\ntables get added under the parent and poof, you're ready to go, no\nneed for a new trigger.\n\n \n\n\nCurrently I have a insert rule only and the updates are right solved. I\nthink the UPDATEs use the constraint because the program use the base\ntable everywhere.\n\nThis is the base table structure:\n\n-- Table: t\n\n-- DROP TABLE t;\n\nCREATE TABLE t\n(\n  idt bigint NOT NULL,\n  idtpadre bigint NOT NULL,\n  e integer NOT NULL,\n  dmodi timestamp without time zone NOT NULL DEFAULT now(),\n  p integer NOT NULL DEFAULT 0,\n  m text NOT NULL\n) \nWITHOUT OIDS;\nALTER TABLE t OWNER TO e;\n\n\n\n-- Rule: \"t_update_00003 ON t\"\n\n-- DROP RULE t_update_00003 ON t;\n\nCREATE OR REPLACE RULE t_update_00003 AS\n    ON INSERT TO t\n   WHERE new.idt >= 1::bigint AND new.idt <= 30000000::bigint DO\nINSTEAD  INSERT INTO t_00003 (idt, idtpadre, e, dmodi, p, m) \n  VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00006 ON t\"\n\n-- DROP RULE t_update_00006 ON t;\n\nCREATE OR REPLACE RULE t_update_00006 AS\n    ON INSERT TO t\n   WHERE new.idt >= 30000001::bigint AND new.idt <=\n60000000::bigint DO INSTEAD  INSERT INTO t_00006 (idt, idtpadre, e,\ndmodi, p, m) \n  VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00009 ON t\"\n\n-- DROP RULE t_update_00009 ON t;\n\nCREATE OR REPLACE RULE t_update_00009 AS\n    ON INSERT TO t\n   WHERE new.idt >= 60000001::bigint AND new.idt <=\n90000000::bigint DO INSTEAD  INSERT INTO t_00009 (idt, idtpadre, e,\ndmodi, p, m) \n  VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\n-- Rule: \"t_update_00012 ON t\"\n\n-- DROP RULE t_update_00012 ON t;\n\nCREATE OR REPLACE RULE t_update_00012 AS\n    ON INSERT TO t\n   WHERE new.idt >= 90000001::bigint AND new.idt <=\n120000000::bigint DO INSTEAD  INSERT INTO t_00012 (idt, idtpadre, e,\ndmodi, p, m) \n  VALUES (new.idt, new.idtpadre, new.e, new.dmodi, new.p, new.m);\n\netc ... 300 hundred partitions\n\n\nThe partitions are created like:\n\nCREATE TABLE t_00003\n(\n  CONSTRAINT t_00003_pkey PRIMARY KEY (idt),\n  CONSTRAINT t_00003_idt_check CHECK (idt >= 1::bigint AND idt <=\n30000000::bigint)\n) INHERITS (t) \nWITHOUT OIDS;\nALTER TABLE t_00003 OWNER TO e;\n\nCREATE INDEX t_00003_e\n  ON t_00003\n  USING btree\n  (e);\n\netc", "msg_date": "Wed, 31 Oct 2007 16:15:34 -0400", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tables with 300+ partitions" } ]
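A sketch of the single maths-based trigger Scott describes, written against the t_00003 / t_00006 naming and the 30,000,000-wide idt ranges shown above; the function and trigger names are invented, the existing t_update_* rules would have to be dropped first, and the code is untested, so treat it as a starting point:

CREATE OR REPLACE FUNCTION t_insert_trigger() RETURNS trigger AS $$
DECLARE
    part text;
BEGIN
    -- t_00003 covers idt 1..30M, t_00006 covers 30M+1..60M and so on,
    -- so the suffix is 3 * (number of the 30M-wide bucket), padded to five digits
    part := 't_' || to_char((((NEW.idt - 1) / 30000000 + 1) * 3)::int, 'FM00000');
    EXECUTE 'INSERT INTO ' || part
         || ' (idt, idtpadre, e, dmodi, p, m) VALUES ('
         || NEW.idt || ', ' || NEW.idtpadre || ', ' || NEW.e || ', '
         || quote_literal(NEW.dmodi::text) || ', ' || NEW.p || ', '
         || quote_literal(NEW.m) || ')';
    RETURN NULL;  -- the row is already in the child table, so skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_partition_insert
    BEFORE INSERT ON t
    FOR EACH ROW EXECUTE PROCEDURE t_insert_trigger();

Unlike 300 separate rules, this does a constant amount of work per inserted row no matter how many partitions exist, and adding a new partition only requires creating the child table with its CHECK constraint.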
[ { "msg_contents": "I have two small queries which are both very fast to evaluate \nseparately. The first query, \"Query 1\", calculates some statistics and \nthe the second query, \"Query 2\", finds a subset of relevant keys.\nWhen combined into a single query which calculates statistics from only \nthe subset of relevant keys the evaluation plan explodes and uses both \nseq scans and bitmap heap scans.\nHow can I improve the performance of the combined query?\n\nQueries and output from EXPLAIN ANALYZE can be seen here with some \nsyntax highlighting:\n http://rafb.net/p/BJIW4p69.html\n\nI will also paste it here:\n\n=============================================================================\nQUERY 1 (very *fast*):\n=============================================================================\nSELECT keyId, count(1) as num_matches\nFROM stats\nGROUP BY keyId\nLIMIT 50\n\n Limit (cost=0.00..23.65 rows=50 width=8) (actual time=0.090..2.312 \nrows=50 loops=1)\n -> GroupAggregate (cost=0.00..4687.46 rows=9912 width=8) (actual \ntime=0.085..2.145 rows=50 loops=1)\n -> Index Scan using stats_keyId on stats (cost=0.00..3820.19 \nrows=99116 width=8) (actual time=0.031..1.016 rows=481 loops=1)\n Total runtime: 2.451 ms\n(4 rows)\n\n\n=============================================================================\nQUERY 2 (very *fast*):\n=============================================================================\nSELECT keyId, sortNum\nFROM items i\nWHERE sortNum > 123\nORDER BY sortNum\nLIMIT 50\n\n Limit (cost=0.01..9.87 rows=50 width=8) (actual time=0.068..0.610 \nrows=50 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.009..0.025 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.006..0.007 rows=1 loops=1)\n -> Index Scan using items_sortNum on items i (cost=0.00..1053.67 \nrows=5344 width=8) (actual time=0.063..0.455 rows=50 loops=1)\n Index Cond: (sortNum >= $0)\n Total runtime: 0.749 ms\n(7 rows)\n\n\n\n=============================================================================\nCOMBINED QUERY (very *slow*):\n=============================================================================\n SELECT keyId, sortNum, count(1)\n FROM stats s, items i\n WHERE s.keyId = i.keyId AND i.sortNum > 123\n GROUP BY i.keyId, i.sortNum\n ORDER BY i.sortNum\n LIMIT 50\n\nLimit (cost=3281.72..3281.84 rows=50 width=16) (actual \ntime=435.838..436.043 rows=50 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.016..0.021 rows=1 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual \ntime=0.012..0.013 rows=1 loops=1)\n -> Sort (cost=3281.71..3289.97 rows=3304 width=16) (actual \ntime=435.833..435.897 rows=50 loops=1)\n Sort Key: i.sortNum\n -> Hash Join (cost=2745.80..3088.59 rows=3304 width=16) \n(actual time=364.247..413.164 rows=8490 loops=1)\n Hash Cond: (s.keyId = i.keyId)\n -> HashAggregate (cost=2270.53..2394.43 rows=9912 \nwidth=8) (actual time=337.869..356.533 rows=9911 loops=1)\n -> Seq Scan on items (cost=0.00..1527.16 \nrows=99116 width=8) (actual time=0.016..148.118 rows=99116 loops=1)\n -> Hash (cost=408.47..408.47 rows=5344 width=12) \n(actual time=26.342..26.342 rows=4491 loops=1)\n -> Bitmap Heap Scan on items i \n(cost=121.67..408.47 rows=5344 width=12) (actual time=5.007..16.898 \nrows=4491 loops=1)\n Recheck Cond: (sortNum >= $0)\n -> Bitmap Index Scan on items_sortNum \n(cost=0.00..120.33 rows=5344 width=0) (actual time=4.273..4.273 \nrows=13375 loops=1)\n Index Cond: (sortNum >= $0)\nTotal 
runtime: 436.421 ms\n(16 rows)\n", "msg_date": "Tue, 30 Oct 2007 21:48:16 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Two fast queries get slow when combined" }, { "msg_contents": "cluster wrote:\n> SELECT keyId, sortNum, count(1)\n> FROM stats s, items i\n> WHERE s.keyId = i.keyId AND i.sortNum > 123\n> GROUP BY i.keyId, i.sortNum\n> ORDER BY i.sortNum\n> LIMIT 50\n> \n> Limit (cost=3281.72..3281.84 rows=50 width=16) (actual \n> time=435.838..436.043 rows=50 loops=1)\n> InitPlan\n> -> Limit (cost=0.00..0.01 rows=1 width=0) (actual \n> time=0.016..0.021 rows=1 loops=1)\n> -> Result (cost=0.00..0.01 rows=1 width=0) (actual \n> time=0.012..0.013 rows=1 loops=1)\n> -> Sort (cost=3281.71..3289.97 rows=3304 width=16) (actual \n> time=435.833..435.897 rows=50 loops=1)\n> Sort Key: i.sortNum\n> -> Hash Join (cost=2745.80..3088.59 rows=3304 width=16) \n> (actual time=364.247..413.164 rows=8490 loops=1)\n> Hash Cond: (s.keyId = i.keyId)\n> -> HashAggregate (cost=2270.53..2394.43 rows=9912 \n> width=8) (actual time=337.869..356.533 rows=9911 loops=1)\n> -> Seq Scan on items (cost=0.00..1527.16 \n> rows=99116 width=8) (actual time=0.016..148.118 rows=99116 loops=1)\n> -> Hash (cost=408.47..408.47 rows=5344 width=12) \n> (actual time=26.342..26.342 rows=4491 loops=1)\n> -> Bitmap Heap Scan on items i \n> (cost=121.67..408.47 rows=5344 width=12) (actual time=5.007..16.898 \n> rows=4491 loops=1)\n> Recheck Cond: (sortNum >= $0)\n> -> Bitmap Index Scan on items_sortNum \n> (cost=0.00..120.33 rows=5344 width=0) (actual time=4.273..4.273 \n> rows=13375 loops=1)\n> Index Cond: (sortNum >= $0)\n> Total runtime: 436.421 ms\n> (16 rows)\n\nThere's something odd about that plan. It's doing both a seq scan and a \nbitmap scan on \"items\", but I can't see stats table being mentioned \nanywhere. Looking at the row count, I believe that seq scan is actually \non the stats table, not items like it says above. Is that really a \nverbatim copy of the output you got?\n\nWhich version of Postgres is this?\n\nYou could try rewriting the query like this:\n\nSELECT keyId, sortNum,\n (SELECT count(*) FROM stats s WHERE s.keyId = i.keyId) AS stats_cnt\nFROM items i\nWHERE i.sortNum > 123\nORDER BY sortNum\nLIMIT 50\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 30 Oct 2007 21:46:45 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two fast queries get slow when combined" }, { "msg_contents": "cluster <[email protected]> writes:\n> Queries and output from EXPLAIN ANALYZE can be seen here with some \n> syntax highlighting:\n> http://rafb.net/p/BJIW4p69.html\n\nYou are lying to us about how those queries were posed to Postgres\n(and no I don't feel a need to explain how I know). In future please\npresent the full truth about what you are doing, not a simplification\nthat you think is sufficient.\n\nBut I think the short answer to your question is that query 1 is fast\nbecause it need only select the first 50 rows in some ordering, and\nquery 2 is fast because it need only select the first 50 rows in\nsome ordering, but they are not the same ordering so the join query\ndoesn't get to exploit that shortcut.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Oct 2007 00:05:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two fast queries get slow when combined " }, { "msg_contents": "> There's something odd about that plan. 
It's doing both a seq scan and a \n> bitmap scan on \"items\", but I can't see stats table being mentioned \n> anywhere.\n\nHuh? Aaah, sorry. I made a major search/replace-refactoring (that \nobviously went wrong) on all open files in the editor before posting to \nthis newsgroup, and one of these files was my preparation of the queries \nand planner output. Damn.\nSorry for wasting your time! :-(\n\nHowever, your suggestion worked perfectly. Thanks a lot!\n", "msg_date": "Wed, 31 Oct 2007 10:37:24 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two fast queries get slow when combined" }, { "msg_contents": "> You are lying to us about how those queries were posed to Postgres\n> (and no I don't feel a need to explain how I know).\n\nSorry. The \"lying\" was not intended as explained in my reply to Heikku.\n\nThanks for the tips anyways.\n", "msg_date": "Wed, 31 Oct 2007 10:39:24 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two fast queries get slow when combined" } ]
[ { "msg_contents": "I am trying to build a very Robust DB server that will support 1000+\nconcurrent users (all ready have seen max of 237 no pooling being\nused). I have read so many articles now that I am just saturated. I\nhave a general idea but would like feedback from others.\n\nI understand query tuning and table design play a large role in\nperformance, but taking that factor away\nand focusing on just hardware, what is the best hardware to get for Pg\nto work at the highest level\n(meaning speed at returning results)?\n\nHow does pg utilize multiple processors? The more the better?\nAre queries spread across multiple processors?\nIs Pg 64 bit?\nIf so what processors are recommended?\n\nI read this : http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html\nPOSTGRESQL uses a multi-process model, meaning each database\nconnection has its own Unix process. Because of this, all multi-cpu\noperating systems can spread multiple database connections among the\navailable CPUs. However, if only a single database connection is\nactive, it can only use one CPU. POSTGRESQL does not use multi-\nthreading to allow a single process to use multiple CPUs.\n\nIts pretty old (2003) but is it still accurate? if this statement is\naccurate how would it affect connection pooling software like pg_pool?\n\nRAM? The more the merrier right? Understanding shmmax and the pg\nconfig file parameters for shared mem has to be adjusted to use it.\nDisks? standard Raid rules right? 1 for safety 5 for best mix of\nperformance and safety?\nAny preference of SCSI over SATA? What about using a High speed (fibre\nchannel) mass storage device?\n\nWho has built the biggest baddest Pg server out there and what do you\nuse?\n\nThanks!\n\n", "msg_date": "Wed, 31 Oct 2007 09:45:08 -0700", "msg_from": "Ketema <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware for PostgreSQL" }, { "msg_contents": "On 31-10-2007 17:45 Ketema wrote:\n> I understand query tuning and table design play a large role in\n> performance, but taking that factor away\n> and focusing on just hardware, what is the best hardware to get for Pg\n> to work at the highest level\n> (meaning speed at returning results)?\n\nIt really depends on your budget and workload. Will it be read-heavy or \nwrite-heavy? How large will the database be? Are those concurrent users \nactively executing queries or is the actual concurrent query load lower \n(it normally is)?\nYou should probably also try to estimate the amount of concurrently \nexecuted queries and how heavy those queries are, as that is normally \nmore important as a performance measure. And normally its much less than \nthe amount of concurrently connected users.\n\n> How does pg utilize multiple processors? The more the better?\n> Are queries spread across multiple processors?\n\nIt forks a process for a new connection and leaves the multi-cpu \nscheduling to the OS. It does not spread a single query across multiple \ncpu's. But with many concurrent users, you normally don't want or need \nthat anyway, it would mainly add extra stress to the scheduling of your \noperating system.\n\n> Is Pg 64 bit?\nIt can be compiled 64-bit and is available pre-compiled as 64-bits as well.\n\n> If so what processors are recommended?\n\nI think the x86-class cpu's deliver the most bang for buck and are the \nbest tested with postgres. 
Both AMD and Intel cpu's are pretty good, but \nI think currently a system with two intel quad core cpus is in a very \ngood price/performance-point. Obviously you'll need to match the cpus to \nyour load, you may need more cpu-cores.\n\n> Its pretty old (2003) but is it still accurate? if this statement is\n> accurate how would it affect connection pooling software like pg_pool?\n\nIt just keeps the process alive as long as the connection isn't closed, \nnothing fancy or worrisome going on there. That's just the behavior I'd \nexpect at the connection pool-level.\n\n> RAM? The more the merrier right? Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n\nMore is better, but don't waste your money on it if you don't need it, \nif your (the active part of your) database is smaller than the RAM, \nincreasing it doesn't do that much. I would be especially careful with \nconfigurations that require those very expensive 4GB-modules.\n\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n\nMake sure you have a battery backed controller (or multiple), but you \nshould consider raid 10 if you have many writes and raid 5 or 50 if you \nhave a read-heavy environment. There are also people reporting that it's \nfaster to actually build several raid 1's and use the OS to combine them \nto a raid 10.\nBe careful with the amount of disks, in performance terms you're likely \nbetter off with 16x 73GB than with 8x 146GB\n\n> Any preference of SCSI over SATA? What about using a High speed (fibre\n> channel) mass storage device?\n\nI'd consider only SAS (serial attached scsi, the successor of scsi) for \na relatively small high performance storage array. Fibre channel is so \nmuch more expensive, that you'll likely get much less performance for \nthe same amount of money. And I'd only use sata in such an environment \nif the amount of storage, not its performance, is the main metric. I.e. \nfor file storage and backups.\n\nBest regards,\n\nArjen\n", "msg_date": "Wed, 31 Oct 2007 20:15:52 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Ketema wrote:\n> I am trying to build a very Robust DB server that will support 1000+\n> concurrent users (all ready have seen max of 237 no pooling being\n> used). I have read so many articles now that I am just saturated. I\n> have a general idea but would like feedback from others.\n> \n> I understand query tuning and table design play a large role in\n> performance, but taking that factor away\n> and focusing on just hardware, what is the best hardware to get for Pg\n> to work at the highest level\n> (meaning speed at returning results)?\n> \n> How does pg utilize multiple processors? The more the better?\n\nIf you have many simultaneous queries, it will use more processors. If\nyou run just a single query at a time, it'll only use one CPU.\n\n> Are queries spread across multiple processors?\n\nNo, not a single query. Max one CPU per query.\n\n\n> Is Pg 64 bit?\n\nYes, if your OS and platform is.\n\n> If so what processors are recommended?\n\nAFAIK, the latest intels and AMDs are all good, and fairly equal. Make\nsure you turn hyperthreading off. 
Multicore is fine, but not HT.\n\n\n> I read this : http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html\n> POSTGRESQL uses a multi-process model, meaning each database\n> connection has its own Unix process. Because of this, all multi-cpu\n> operating systems can spread multiple database connections among the\n> available CPUs. However, if only a single database connection is\n> active, it can only use one CPU. POSTGRESQL does not use multi-\n> threading to allow a single process to use multiple CPUs.\n> \n> Its pretty old (2003) but is it still accurate? \n\nYes.\n\n\n> if this statement is\n> accurate how would it affect connection pooling software like pg_pool?\n\nNot at all, really. It's only interesting how many running queries you\nhave, not how many connections. There are other advantages to the\npg_pool and friends, such as not having to fork new processes so often,\nbut it doesn't affect the spread over CPUs.\n\n\n> RAM? The more the merrier right? Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n\nYes. As long as your database doesn't fit entirely in RAM with room over\nfor sorting and such, more RAM will make things faster in just about\nevery case.\n\n\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n\nRAID-10 for best mix of performance and safety. RAID-5 can give you a\ndecent compromise between cost and performance/safety.\n\nAnd get a RAID controller with lots of cache memory with battery backup.\nThis is *very* important.\n\nAnd remember - lots of spindles (disks) if you want good write\nperformance. Regardless of which RAID you use.\n\n\n> Any preference of SCSI over SATA? What about using a High speed (fibre\n> channel) mass storage device?\n\nAbsolutely SCSI or SAS, and not SATA. I see no point with plain FC\ndisks, but if you get a high end SAN solution with FC between the host\nand the controllers, that's what you're going to be using. There are\nthings to be said both for using DAS and SAN - they both ahve their\nadvantages.\n\n\n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n\nProbably not me :-) The biggest one I've set up is 16 cores, 32Gb RAM\nand no more than 800Gb disk... But it's very fast :-)\n\nOh, and I'd absolutely recommend you go for brandname hardware, like IBM\nor HP (or Sun or something if you don't want to go down the intel path).\n\n//Magnus\n", "msg_date": "Wed, 31 Oct 2007 20:27:07 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": ">>> On Wed, Oct 31, 2007 at 11:45 AM, in message\n<[email protected]>, Ketema\n<[email protected]> wrote: \n \n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n \nI don't think that would be us, but I can give you an example of\nwhat can work. We have a 220 GB database which is a replication\ntarget for OLTP and supports a web site with about 2 million web\nhits per day. Daily, we have about 1 million database transactions\nwhich modify data and 10 million database transactions which are\nread-only. The box has 8 4 GHz Xeon processors, 12 GB RAM, and RAID\n5 with 13 live spindles and two hot spares. The RAID controller has\n256 MB RAM with battery backup. 
This gives very good performance,\nproperly tuned.\n \nBesides the PostgreSQL database, the box also runs middle tier\nsoftware written in Java.\n \nI'll second the opinions that connection pooling and a high-\nquality RAID controller with battery backed RAM cache are crucial.\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 31 Oct 2007 16:54:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "> I understand query tuning and table design play a large role in\n> performance, but taking that factor away\n> and focusing on just hardware, what is the best hardware to get for Pg\n> to work at the highest level\n> (meaning speed at returning results)?\n\nDepends heavily on the particular application, but most performance \nproblems were caused by I/O (some of them because of bad table or \napplication design, some of them by slow drives).\n\n> How does pg utilize multiple processors? The more the better?\n\nLinux version uses processes, so it's able to use multiple processors. \n(Not sure about Windows version, but I guess it uses threads.)\n\n> Are queries spread across multiple processors?\n> Is Pg 64 bit?\n> If so what processors are recommended?\n\nHard to tell, as for example I've seen several benchmarks about Xeons \nfrom Intel, half of them saying that's the right CPU for PostgreSQL, the \nother half saying there are better CPUs. But as I've said before - in \nmost cases the performance problems are caused by slow drives - take \nyour money and put them in more RAM / better drives (SCSI).\n\n> I read this : http://www.postgresql.org/files/documentation/books/aw_pgsql/hw_performance/node12.html\n> POSTGRESQL uses a multi-process model, meaning each database\n> connection has its own Unix process. Because of this, all multi-cpu\n> operating systems can spread multiple database connections among the\n> available CPUs. However, if only a single database connection is\n> active, it can only use one CPU. POSTGRESQL does not use multi-\n> threading to allow a single process to use multiple CPUs.\n> \n> Its pretty old (2003) but is it still accurate? if this statement is\n> accurate how would it affect connection pooling software like pg_pool?\n\nYes, it's quite accurate. But see this (plus the rest of the documents \nin the \"Docs\" section on that site)\n\nhttp://www.powerpostgresql.com/PerfList\n\n> RAM? The more the merrier right? Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n\nYes, the more RAM you can get, the better the performance (usually). The \nproblem is you've forgotten to mention the size of the database and the \nusage patterns. 
If the whole database fits into the RAM, the performance \ncan only increase in most cases.\n\nIn case of clients that mostly write data into the database, the amount \nof RAM won't help too much as the data need to be written to the disk \nanyway (unless you turn off 'fsync' which is a really stupid thing to do \nin case of important data).\n\nRAID - a quite difficult question and I'm not quite a master in this \nfield, so I'll just quote some simple truths from the article mentioned \nabove:\n\n1) more spindles == better\n\n So buy multiple small disks rather than one large one, and spread the\n reads / writes across all of them using RAID 0, tablespaces or\n partitioning.\n\n2) separate the transaction log from the database\n\n It's mostly written, and it's the most valuable data you have. And in\n case you use PITR, this is the only thing that really needs to be\n backed up.\n\n3) RAID 0+1/1+0 > RAID 5\n\n> Any preference of SCSI over SATA? What about using a High speed (fibre\n> channel) mass storage device?\n\nSCSI is definitely better than SATA - the SATA are consumer level \ngenerally - the main criteria in it's development is capacity, and it \ndefinitely can't compete with SCSI 10k drives when it comes to transfer \nrates, seek times, CPU utilization, etc. (and that's what really matters \nwith databases). And you can even buy 15k SAS drives for reasonable \namount of money today ...\n\nTomas\n", "msg_date": "Wed, 31 Oct 2007 22:58:57 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "> > Who has built the biggest baddest Pg server out there and what do you\n> > use?\n\nIn my last job we had a 360GB database running on a 8 way opteron with\n32 Gigs of ram. Two of those beasts connected to a SAN for hot\nfailover purposes.\n\nWe did not have much web traffic, but tons of update/insert traffic,\nmillions per day actually on pretty hefty tables (column wise).\n\n- Ericson Smith\nhttp://www.funadvice.com\n", "msg_date": "Wed, 31 Oct 2007 19:03:43 -0400", "msg_from": "\"Ericson Smith\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "\nOn Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:\n\n> 2) separate the transaction log from the database\n> \n> It's mostly written, and it's the most valuable data you have. And in\n> case you use PITR, this is the only thing that really needs to be\n> backed up.\n\nMy main DB datastore is in a raid1 array and the xlog is still\nmaintained in a single OS drive. Is this considered OK?\n\n", "msg_date": "Thu, 01 Nov 2007 11:53:02 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Tomas Vondra wrote:\n>> How does pg utilize multiple processors? The more the better?\n> \n> Linux version uses processes, so it's able to use multiple processors.\n> (Not sure about Windows version, but I guess it uses threads.)\n\nNo, the Windows version also uses processes.\n\n\n//Magnus\n\n", "msg_date": "Thu, 01 Nov 2007 07:52:41 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Ow Mun Heng wrote:\n> On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:\n> \n>> 2) separate the transaction log from the database\n>>\n>> It's mostly written, and it's the most valuable data you have. 
And in\n>> case you use PITR, this is the only thing that really needs to be\n>> backed up.\n> \n> My main DB datastore is in a raid1 array and the xlog is still\n> maintained in a single OS drive. Is this considered OK?\n\nIs your OS not RAIDed? I'd keep everything RAIDed one way or another -\notherwise you are certain to get downtime if the disk fails.\n\nAlso, if you don't have a *dedicated* disk for the xlog (putting it on\nthe OS disk doesn't make it dedicated), you miss out on most of the\nperformance advantage of doing it. The advantage is in that the writes\nwill be sequential so the disks don't have to seek, but if you have\nother access on the same disk, that's not true anymore.\n\nYou're likely better off (performance-wise) putting it on the same disk\nas the database itself if that one has better RAID, for example.\n\n//Magnus\n", "msg_date": "Thu, 01 Nov 2007 07:54:50 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "\nOn Thu, 2007-11-01 at 07:54 +0100, Magnus Hagander wrote:\n> Ow Mun Heng wrote:\n> > On Wed, 2007-10-31 at 22:58 +0100, Tomas Vondra wrote:\n> > \n> >> 2) separate the transaction log from the database\n> >>\n> >> It's mostly written, and it's the most valuable data you have. And in\n> >> case you use PITR, this is the only thing that really needs to be\n> >> backed up.\n> > \n> > My main DB datastore is in a raid1 array and the xlog is still\n> > maintained in a single OS drive. Is this considered OK?\n> \n> Is your OS not RAIDed? I'd keep everything RAIDed one way or another -\n> otherwise you are certain to get downtime if the disk fails.\n\nNope it's not raided. It's a very low end \"server\" running on IDE, max 4\ndrives. 1x80G system and 3x500G Raid1+1 hot spare\n\n> \n> Also, if you don't have a *dedicated* disk for the xlog (putting it on\n> the OS disk doesn't make it dedicated), you miss out on most of the\n> performance advantage of doing it. The advantage is in that the writes\n> will be sequential so the disks don't have to seek, but if you have\n> other access on the same disk, that's not true anymore.\n\nAs of right now, budget constraints is making me make do with that I've\ngot/(and it's not a whole lot)\n\n> \n> You're likely better off (performance-wise) putting it on the same disk\n> as the database itself if that one has better RAID, for example.\n\nI'm thinking along the lines of since nothing much writes to the OS\nDisk, I should(keyword) be safe.\n\nThanks for the food for thought. Now.. time to find some dough to throw\naround. 
:-)\n\n", "msg_date": "Thu, 01 Nov 2007 15:46:01 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Ow Mun Heng wrote:\n>> You're likely better off (performance-wise) putting it on the same disk\n>> as the database itself if that one has better RAID, for example.\n> \n> I'm thinking along the lines of since nothing much writes to the OS\n> Disk, I should(keyword) be safe.\n\nUnless it's *always* in the cache (not so likely), reads will also move\nthe heads...\n\nIn the situation you have, I'd put the xlog on the same disk as the data\n- mainly because it gives you RAID on it in case the disk breaks.\n\n//Magnus\n", "msg_date": "Thu, 01 Nov 2007 08:52:21 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "> > You're likely better off (performance-wise) putting it on the same disk\n> > as the database itself if that one has better RAID, for example.\n> I'm thinking along the lines of since nothing much writes to the OS\n> Disk, I should(keyword) be safe.\n\nYou are almost certainly wrong about this; think \"syslog\"\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Thu, 01 Nov 2007 08:56:46 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Ketema wrote:\n> I am trying to build a very Robust DB server that will support 1000+\n> concurrent users (all ready have seen max of 237 no pooling being\n> used). I have read so many articles now that I am just saturated. I\n> have a general idea but would like feedback from others.\n\nDescribe a bit better. 1,000 users or 1,000 simultaneous connections?\nIe, do you have a front-end where someone logs on, gets a connection,\nand keeps it for the duration or is it a web-type app where each request\nmight connect-query-disconnect? If the latter, are your connections\npersistent? How many queries/second do you expect?\n\nHow complex are the queries (retrieve single record or data-mining)?\nRead-only or lots of updates? Do the read-queries need to be done every\ntime or are they candidates for caching?\n\n> RAM? The more the merrier right?\n\nGenerally, true. But once you reach the point that everything can fit in\nRAM, more is just wasted $$$. And, according to my reading, there are\ncases where more RAM can hurt - basically if you manage to create a\nsituation where your queries are large enough to just flush cache so you\ndon't benefit from caching but are hurt by spending time checking cache\nfor the data.\n\n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n\nNot me.\n\nSomeone just showed me live system monitoring data on one of his several\nPG machines. That one was clocking multi-thousand TPS on a server\n(Sun??) with 128GB RAM. That much RAM makes \"top\" look amusing.\n\nSeveral of the social-networking sites are using PG - generally\nspreading load over several (dozens) of servers. They also make heavy\nuse of pooling and caching - think dedicated memcached servers offering\na combined pool of several TB RAM.\n\nFor pooling, pgbouncer seems to have a good reputation. Tests on my\ncurrent production server show it shaving a few ms off every\nconnect-query-disconnect cycle. 
Connects are fairly fast in PG but that\ndelay becomes a significant issue under heavy load.\n\nTest pooling carefully, though. If you blindly run everything through\nyour pooler instead of just selected apps, you can end up with\nunexpected problems when one client changes a backend setting like \"set\nstatement_timeout to 5\". If the next client assigned to that backend\nconnection runs a long-duration analysis query, it is likely to fail.\n\nCheers,\nSteve\n", "msg_date": "Thu, 01 Nov 2007 09:20:46 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "Magnus Hagander wrote:\n> Ow Mun Heng wrote:\n>>> You're likely better off (performance-wise) putting it on the same disk\n>>> as the database itself if that one has better RAID, for example.\n>> I'm thinking along the lines of since nothing much writes to the OS\n>> Disk, I should(keyword) be safe.\n> \n> Unless it's *always* in the cache (not so likely), reads will also move\n> the heads...\n\nAnd if you aren't mounted noatime, reads will also cause a write.\n\nCheers,\nSteve\n", "msg_date": "Thu, 01 Nov 2007 11:16:13 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "\nOn Thu, 2007-11-01 at 11:16 -0700, Steve Crawford wrote:\n> Magnus Hagander wrote:\n> > Ow Mun Heng wrote:\n> >>> You're likely better off (performance-wise) putting it on the same disk\n> >>> as the database itself if that one has better RAID, for example.\n> >> I'm thinking along the lines of since nothing much writes to the OS\n> >> Disk, I should(keyword) be safe.\n> > \n> > Unless it's *always* in the cache (not so likely), reads will also move\n> > the heads...\n> \n> And if you aren't mounted noatime, reads will also cause a write.\n\n\n/dev/VolGroup00/LogVol01 / ext3 defaults,noatime 1 1\n/dev/md0 /raid1_array ext3 noatime,data=writeback 1 1\n\nYep..yep..\n", "msg_date": "Fri, 02 Nov 2007 08:26:54 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "\n\nKetema wrote:\n\n> RAM? The more the merrier right? Understanding shmmax and the pg\n> config file parameters for shared mem has to be adjusted to use it.\n> Disks? standard Raid rules right? 1 for safety 5 for best mix of\n> performance and safety?\n> Any preference of SCSI over SATA? What about using a High speed (fibre\n> channel) mass storage device?\n\nYou might also check out NETAPP NAS arrays.\nDue to clever use of a big memory buffer and a custom filesystem on a\nRAID-4 array it has performance comparable to a big RAM disk connected\nthrough NFS.\n\nWorks quite well with OLTP (at least for us).\n\n> \n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n> \n> Thanks!\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n", "msg_date": "Fri, 02 Nov 2007 15:05:13 +0100", "msg_from": "Jurgen Haan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" }, { "msg_contents": "On Wednesday 31 October 2007 12:45, Ketema wrote:\n> I am trying to build a very Robust DB server that will support 1000+\n> concurrent users (all ready have seen max of 237 no pooling being\n> used). I have read so many articles now that I am just saturated. 
I\n> have a general idea but would like feedback from others.\n>\n\nMost of the other answers you've gotten have been pretty good, but I had some \nquestions on the above; specifically is there a reason you're avoid pooling? \n(something like pgbouncer can work wonders). Are your 1000+ concurrent users \nworking in something like a web environment, where they won't need a 1:1 \nuser:connection map to service them all, or are these going to be more \npermanent connections into the system? FWIW I'd done 1000 connections \nsimultaneous on pretty basic hardware, but you need to have the right kind of \nworkloads to be able to do it. \n\n>\n> Who has built the biggest baddest Pg server out there and what do you\n> use?\n>\n\nWhile I'm not sure this will be that much help, I'd feel remisce if I didn't \npoint you to it... \nhttp://www.lethargy.org/~jesus/archives/66-Big-Bad-PostgreSQL.html\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Thu, 08 Nov 2007 12:14:35 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware for PostgreSQL" } ]
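A concrete illustration of the pooling caveat raised in the thread above: a plain SET changes the setting for the rest of the backend session, so under connection pooling the next client handed that server connection inherits it, whereas SET LOCAL only lasts until the end of the current transaction. The snippet below is a minimal sketch of that difference and is not something posted in the thread; the 5000 ms value and the table name are illustrative assumptions.

    -- Session-level: persists on this backend until RESET or disconnect,
    -- so a pooled connection can pass it on to an unrelated client.
    SET statement_timeout = 5000;            -- value is in milliseconds

    -- Transaction-level: reverts automatically at COMMIT or ROLLBACK,
    -- which is usually the safer form behind a pooler.
    BEGIN;
    SET LOCAL statement_timeout = 5000;
    SELECT count(*) FROM some_large_table;   -- hypothetical long-running query
    COMMIT;

    -- If a session-level SET was used, put the default back explicitly.
    RESET statement_timeout;

If the pooler cannot be configured to reset session state between clients, restricting pooled applications to SET LOCAL (or to no SET at all) avoids the surprise timeouts described above.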
[ { "msg_contents": "-- For some reason, my message doesn't seem to go through the mailing\nlist, so I am trying without any attachment\n\nHi,\n\nThank you Tom and Dimitri for your precious help.\n\nSo, I applied the patch at\nhttp://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n\nThe good news is that with the patch applied, the query is ~3 times\nfaster. The bad news is that it is still WAYYY slower than when using an\ninner join (~10 sec vs 300 ms)\n\nThe outer join query is\n select * from RoommateResidenceOffer this_ inner join AdCreatedEvent\nace3_ on this_.adCreatedEvent_id=ace3_.id left outer join FunalaEvent\nace3_1_ on ace3_.id=ace3_1_.id left outer join Account account6_ on\nace3_.eventInitiator_id=account6_.id left outer join ContactInformation\ncontactinf7_ on account6_.contactInformation_id=contactinf7_.id left\nouter join City city8_ on contactinf7_.city_id=city8_.id left outer join\nGisFeature gisfeature9_ on\ncity8_.associatedGisFeature_id=gisfeature9_.id left outer join\nEmailChangedEvent emailchang10_ on\ncontactinf7_.currentEmailChangedEvent_id=emailchang10_.id left outer\njoin FunalaEvent emailchang10_1_ on emailchang10_.id=emailchang10_1_.id\nleft outer join ContactInformation contactinf11_ on\nemailchang10_.contactInformation_id=contactinf11_.id left outer join\nEmailCheckedEvent emailcheck12_ on\nemailchang10_.emailCheckedEvent_id=emailcheck12_.id left outer join\nFunalaEvent emailcheck12_1_ on emailcheck12_.id=emailcheck12_1_.id left\nouter join DeclaredAsAdultEvent declaredas13_ on\naccount6_.declaredAsAdultEvent_id=declaredas13_.id left outer join\nFunalaEvent declaredas13_1_ on declaredas13_.id=declaredas13_1_.id left\nouter join UserProfile userprofil14_ on\naccount6_.profile_id=userprofil14_.id left outer join AccountSettings\naccountset15_ on account6_.settings_id=accountset15_.id left outer join\nAccountCreatedEvent accountcre16_ on\naccount6_.id=accountcre16_.createdAccount_id left outer join FunalaEvent\naccountcre16_1_ on accountcre16_.id=accountcre16_1_.id left outer join\nIpAddress ipaddress17_ on\naccountcre16_.requesterAddress_id=ipaddress17_.id left outer join\nAccountCancelledEvent accountcan18_ on\naccountcre16_.id=accountcan18_.accountCreatedEvent_id left outer join\nFunalaEvent accountcan18_1_ on accountcan18_.id=accountcan18_1_.id inner\njoin ResidenceDescription residenced19_ on\nthis_.residenceDescription_id=residenced19_.id inner join City city1_ on\nresidenced19_.city_id=city1_.id inner join GisFeature gf2_ on\ncity1_.associatedGisFeature_id=gf2_.id left outer join ResidenceType\nresidencet22_ on residenced19_.residenceType_id=residencet22_.id where\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 and ace3_1_.utcEventDate>='2007-09-29\n00:00:00' order by ace3_1_.utcEventDate asc limit 10;\n\nand the full explain analyze output is in exp3.txt (12794,919 ms)\n\nthe inner join query is \n select * from RoommateResidenceOffer this_ inner join AdCreatedEvent\nace3_ on this_.adCreatedEvent_id=ace3_.id left outer join FunalaEvent\nace3_1_ on ace3_.id=ace3_1_.id left outer join Account account6_ on\nace3_.eventInitiator_id=account6_.id left outer join ContactInformation\ncontactinf7_ on account6_.contactInformation_id=contactinf7_.id inner\njoin City city8_ on contactinf7_.city_id=city8_.id left outer join\nGisFeature gisfeature9_ 
on\ncity8_.associatedGisFeature_id=gisfeature9_.id left outer join\nEmailChangedEvent emailchang10_ on\ncontactinf7_.currentEmailChangedEvent_id=emailchang10_.id left outer\njoin FunalaEvent emailchang10_1_ on emailchang10_.id=emailchang10_1_.id\nleft outer join ContactInformation contactinf11_ on\nemailchang10_.contactInformation_id=contactinf11_.id left outer join\nEmailCheckedEvent emailcheck12_ on\nemailchang10_.emailCheckedEvent_id=emailcheck12_.id left outer join\nFunalaEvent emailcheck12_1_ on emailcheck12_.id=emailcheck12_1_.id left\nouter join DeclaredAsAdultEvent declaredas13_ on\naccount6_.declaredAsAdultEvent_id=declaredas13_.id left outer join\nFunalaEvent declaredas13_1_ on declaredas13_.id=declaredas13_1_.id left\nouter join UserProfile userprofil14_ on\naccount6_.profile_id=userprofil14_.id left outer join AccountSettings\naccountset15_ on account6_.settings_id=accountset15_.id left outer join\nAccountCreatedEvent accountcre16_ on\naccount6_.id=accountcre16_.createdAccount_id left outer join FunalaEvent\naccountcre16_1_ on accountcre16_.id=accountcre16_1_.id left outer join\nIpAddress ipaddress17_ on\naccountcre16_.requesterAddress_id=ipaddress17_.id left outer join\nAccountCancelledEvent accountcan18_ on\naccountcre16_.id=accountcan18_.accountCreatedEvent_id left outer join\nFunalaEvent accountcan18_1_ on accountcan18_.id=accountcan18_1_.id inner\njoin ResidenceDescription residenced19_ on\nthis_.residenceDescription_id=residenced19_.id inner join City city1_ on\nresidenced19_.city_id=city1_.id inner join GisFeature gf2_ on\ncity1_.associatedGisFeature_id=gf2_.id left outer join ResidenceType\nresidencet22_ on residenced19_.residenceType_id=residencet22_.id where\ngf2_.location && setSRID(cast ('BOX3D(1.5450494105576016\n48.73176862850233,3.1216171894423983 49.00156477149768)'as box3d), 4326)\nAND distance_sphere(gf2_.location, GeomFromText('POINT(2.3333333\n48.8666667)',4326)) <= 15000 and ace3_1_.utcEventDate>='2007-09-29\n00:00:00' order by ace3_1_.utcEventDate asc limit 10;\n\n\nand the full explain analyze output is in exp4.txt (153,220 ms)\n\n\nWhen comparing the outputs, we can see for instance that \n Seq Scan on funalaevent ace3_1_ (cost=0.00..2763.78 rows=149653\nwidth=16) (actual time=0.033..271.267 rows=149662 loops=1) (exp3)\n\nvs Index Scan using funalaevent_pkey on funalaevent ace3_1_ (exp4)\n\nSo, there is still something that prevents the indexes from being used\n(the funalaevent table contains ~ 50 K entries, as much as\nadcreatedevent. City contains 2 million entries). So any seq scan is\nawful....\n\nSo, is it possible that there is still a similar bug somewhere else ?\n\nThanks\nSami Dalouche\n\n\nLe dimanche 28 octobre 2007 à 19:45 -0400, Tom Lane a écrit :\n> Sami Dalouche <[email protected]> writes:\n> > So, the version of postgres I use is :\n> > samokk@samlaptop:~/Desktop $ dpkg -l | grep postgres\n> > ii postgresql-8.2 8.2.5-1.1\n> \n> OK. I think you have run afoul of a bug that was introduced in 8.2.5\n> that causes it not to realize that it can interchange the ordering of\n> certain outer joins. 
Is there any chance you can apply the one-line\n> patch shown here:\n> http://archives.postgresql.org/pgsql-committers/2007-10/msg00374.php\n> \n> If rebuilding packages is not to your taste, possibly a down-rev to\n> 8.2.4 would be the easiest solution.\n> \n> regards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n\n\n\n\n=========================\nexp3.txt\n\n\nQUERY\nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=67357.88..67357.89 rows=2 width=3355) (actual\ntime=11686.334..11686.369 rows=10 loops=1)\n -> Sort (cost=67357.88..67357.89 rows=2 width=3355) (actual\ntime=11686.328..11686.343 rows=10 loops=1)\n Sort Key: ace3_1_.utceventdate\n -> Hash Left Join (cost=28098.27..67357.87 rows=2 width=3355)\n(actual time=4127.714..7944.527 rows=50000 loops=1)\n Hash Cond: (residenced19_.residencetype_id =\nresidencet22_.id)\n -> Hash Join (cost=28097.07..67356.64 rows=2\nwidth=3330) (actual time=4127.623..7641.190 rows=50000 loops=1)\n Hash Cond: (residenced19_.city_id = city1_.id)\n -> Hash Left Join (cost=23922.95..62995.02\nrows=49997 width=3157) (actual time=4064.232..7239.816 rows=50000\nloops=1)\n Hash Cond: (account6_.settings_id =\naccountset15_.id)\n -> Hash Left Join (cost=23921.84..62306.45\nrows=49997 width=2633) (actual time=4064.157..6956.697 rows=50000\nloops=1)\n Hash Cond: (account6_.id =\naccountcre16_.createdaccount_id)\n -> Hash Join (cost=23866.11..61563.26\nrows=49997 width=1496) (actual time=4063.758..6664.260 rows=50000\nloops=1)\n Hash Cond:\n(this_.residencedescription_id = residenced19_.id)\n -> Hash Left Join\n(cost=17145.46..35788.66 rows=49997 width=1274) (actual\ntime=3656.249..4750.195 rows=50000 loops=1)\n Hash Cond:\n(emailchang10_.contactinformation_id = contactinf11_.id)\n -> Hash Left Join\n(cost=17144.35..35100.09 rows=49997 width=1235) (actual\ntime=3656.212..4535.661 rows=50000 loops=1)\n Hash Cond:\n(account6_.profile_id = userprofil14_.id)\n -> Hash Left Join\n(cost=17143.24..34411.51 rows=49997 width=618) (actual\ntime=3656.143..4284.698 rows=50000 loops=1)\n Hash Cond:\n(emailchang10_.emailcheckedevent_id = emailcheck12_.id)\n -> Hash Left\nJoin (cost=17133.92..34194.71 rows=49997 width=594) (actual\ntime=3656.062..4087.233 rows=50000 loops=1)\n Hash\nCond: (account6_.declaredasadultevent_id = declaredas13_.id)\n -> Hash\nLeft Join (cost=17131.39..33504.72 rows=49997 width=570) (actual\ntime=3655.699..3895.024 rows=50000 loops=1)\n\nHash Cond: (emailchang10_.id = emailchang10_1_.id)\n ->\nHash Left Join (cost=12067.00..19568.88 rows=49997 width=554) (actual\ntime=1778.375..2762.537 rows=50000 loops=1)\n\nHash Cond: (ace3_.eventinitiator_id = account6_.id)\n\n-> Hash Join (cost=11976.67..19029.74 rows=49997 width=192) (actual\ntime=1777.954..2518.385 rows=50000 loops=1)\n\nHash Cond: (ace3_1_.id = this_.adcreatedevent_id)\n\n-> Seq Scan on funalaevent ace3_1_ (cost=0.00..2763.78 rows=149653\nwidth=16) (actual time=0.033..271.267 rows=149662 loops=1)\n\nFilter: (utceventdate >= '2007-09-29 00:00:00'::timestamp without time\nzone)\n\n-> Hash 
(cost=10105.67..10105.67 rows=50000 width=176) (actual\ntime=1411.204..1411.204 rows=50000 loops=1)\n\n-> Hash Join (cost=3591.00..10105.67 rows=50000 width=176) (actual\ntime=645.641..1183.975 rows=50000 loops=1)\n\nHash Cond: (ace3_.id = this_.adcreatedevent_id)\n\n-> Seq Scan on adcreatedevent ace3_ (cost=0.00..2323.41 rows=149641\nwidth=16) (actual time=0.019..234.812 rows=149641 loops=1)\n\n-> Hash (cost=1818.00..1818.00 rows=50000 width=160) (actual\ntime=295.946..295.946 rows=50000 loops=1)\n\n-> Seq Scan on roommateresidenceoffer this_ (cost=0.00..1818.00\nrows=50000 width=160) (actual time=0.037..101.503 rows=50000 loops=1)\n\n-> Hash (cost=90.27..90.27 rows=5 width=362) (actual time=0.369..0.369\nrows=5 loops=1)\n\n-> Nested Loop Left Join (cost=2.11..90.27 rows=5 width=362) (actual\ntime=0.141..0.344 rows=5 loops=1)\n\n-> Nested Loop Left Join (cost=2.11..46.96 rows=5 width=205) (actual\ntime=0.131..0.313 rows=5 loops=1)\n\n-> Nested Loop Left Join (cost=2.11..4.29 rows=5 width=189) (actual\ntime=0.115..0.275 rows=5 loops=1)\n\nJoin Filter: (contactinf7_.currentemailchangedevent_id =\nemailchang10_.id)\n\n-> Nested Loop Left Join (cost=1.05..2.67 rows=5 width=100) (actual\ntime=0.056..0.152 rows=5 loops=1)\n\nJoin Filter: (account6_.contactinformation_id = contactinf7_.id)\n\n-> Seq Scan on account account6_ (cost=0.00..1.05 rows=5 width=61)\n(actual time=0.030..0.035 rows=5 loops=1)\n\n-> Materialize (cost=1.05..1.10 rows=5 width=39) (actual\ntime=0.004..0.011 rows=5 loops=5)\n\n-> Seq Scan on contactinformation contactinf7_ (cost=0.00..1.05 rows=5\nwidth=39) (actual time=0.006..0.012 rows=5 loops=1)\n\n-> Materialize (cost=1.05..1.10 rows=5 width=89) (actual\ntime=0.006..0.014 rows=5 loops=5)\n\n-> Seq Scan on emailchangedevent emailchang10_ (cost=0.00..1.05 rows=5\nwidth=89) (actual time=0.021..0.028 rows=5 loops=1)\n\n-> Index Scan using cityid on city city8_ (cost=0.00..8.52 rows=1\nwidth=16) (actual time=0.002..0.002 rows=0 loops=5)\n\nIndex Cond: (contactinf7_.city_id = city8_.id)\n\n-> Index Scan using gisfeatureid on gisfeature gisfeature9_\n(cost=0.00..8.65 rows=1 width=157) (actual time=0.001..0.001 rows=0\nloops=5)\n\nIndex Cond: (city8_.associatedgisfeature_id = gisfeature9_.id)\n ->\nHash (cost=2389.62..2389.62 rows=149662 width=16) (actual\ntime=496.535..496.535 rows=149662 loops=1)\n\n-> Seq Scan on funalaevent emailchang10_1_ (cost=0.00..2389.62\nrows=149662 width=16) (actual time=0.016..216.256 rows=149662 loops=1)\n -> Hash\n(cost=2.46..2.46 rows=5 width=24) (actual time=0.300..0.300 rows=5\nloops=1)\n ->\nMerge Right Join (cost=1.11..2.46 rows=5 width=24) (actual\ntime=0.215..0.285 rows=5 loops=1)\n\nMerge Cond: (declaredas13_1_.id = declaredas13_.id)\n\n-> Index Scan using funalaevent_pkey on funalaevent declaredas13_1_\n(cost=0.00..4809.35 rows=149662 width=16) (actual time=0.121..0.149\nrows=22 loops=1)\n\n-> Sort (cost=1.11..1.12 rows=5 width=8) (actual time=0.071..0.076\nrows=5 loops=1)\n\nSort Key: declaredas13_.id\n\n-> Seq Scan on declaredasadultevent declaredas13_ (cost=0.00..1.05\nrows=5 width=8) (actual time=0.034..0.041 rows=5 loops=1)\n -> Hash\n(cost=9.31..9.31 rows=1 width=24) (actual time=0.065..0.065 rows=1\nloops=1)\n ->\nNested Loop Left Join (cost=0.00..9.31 rows=1 width=24) (actual\ntime=0.054..0.060 rows=1 loops=1)\n ->\nSeq Scan on emailcheckedevent emailcheck12_ (cost=0.00..1.01 rows=1\nwidth=8) (actual time=0.025..0.027 rows=1 loops=1)\n ->\nIndex Scan using funalaevent_pkey on funalaevent emailcheck12_1_\n(cost=0.00..8.28 rows=1 
width=16) (actual time=0.016..0.018 rows=1\nloops=1)\n\nIndex Cond: (emailcheck12_.id = emailcheck12_1_.id)\n -> Hash\n(cost=1.05..1.05 rows=5 width=617) (actual time=0.047..0.047 rows=5\nloops=1)\n -> Seq Scan on\nuserprofile userprofil14_ (cost=0.00..1.05 rows=5 width=617) (actual\ntime=0.027..0.033 rows=5 loops=1)\n -> Hash (cost=1.05..1.05\nrows=5 width=39) (actual time=0.021..0.021 rows=5 loops=1)\n -> Seq Scan on\ncontactinformation contactinf11_ (cost=0.00..1.05 rows=5 width=39)\n(actual time=0.005..0.011 rows=5 loops=1)\n -> Hash (cost=3365.40..3365.40\nrows=77540 width=222) (actual time=405.248..405.248 rows=77540 loops=1)\n -> Seq Scan on\nresidencedescription residenced19_ (cost=0.00..3365.40 rows=77540\nwidth=222) (actual time=0.048..157.678 rows=77540 loops=1)\n -> Hash (cost=55.66..55.66 rows=5\nwidth=1137) (actual time=0.367..0.367 rows=5 loops=1)\n -> Nested Loop Left Join\n(cost=13.80..55.66 rows=5 width=1137) (actual time=0.228..0.347 rows=5\nloops=1)\n -> Hash Left Join\n(cost=13.80..15.32 rows=5 width=1121) (actual time=0.215..0.312 rows=5\nloops=1)\n Hash Cond:\n(accountcre16_.id = accountcan18_.accountcreatedevent_id)\n -> Hash Left Join\n(cost=2.22..3.68 rows=5 width=73) (actual time=0.183..0.269 rows=5\nloops=1)\n Hash Cond:\n(accountcre16_.requesteraddress_id = ipaddress17_.id)\n -> Merge Right\nJoin (cost=1.11..2.50 rows=5 width=40) (actual time=0.117..0.189 rows=5\nloops=1)\n Merge\nCond: (accountcre16_1_.id = accountcre16_.id)\n -> Index\nScan using funalaevent_pkey on funalaevent accountcre16_1_\n(cost=0.00..4809.35 rows=149662 width=16) (actual time=0.029..0.059\nrows=24 loops=1)\n -> Sort\n(cost=1.11..1.12 rows=5 width=24) (actual time=0.068..0.073 rows=5\nloops=1)\n\nSort Key: accountcre16_.id\n ->\nSeq Scan on accountcreatedevent accountcre16_ (cost=0.00..1.05 rows=5\nwidth=24) (actual time=0.039..0.044 rows=5 loops=1)\n -> Hash\n(cost=1.05..1.05 rows=5 width=33) (actual time=0.044..0.044 rows=5\nloops=1)\n -> Seq\nScan on ipaddress ipaddress17_ (cost=0.00..1.05 rows=5 width=33)\n(actual time=0.024..0.030 rows=5 loops=1)\n -> Hash\n(cost=10.70..10.70 rows=70 width=1048) (actual time=0.004..0.004 rows=0\nloops=1)\n -> Seq Scan on\naccountcancelledevent accountcan18_ (cost=0.00..10.70 rows=70\nwidth=1048) (actual time=0.002..0.002 rows=0 loops=1)\n -> Index Scan using\nfunalaevent_pkey on funalaevent accountcan18_1_ (cost=0.00..8.06 rows=1\nwidth=16) (actual time=0.002..0.002 rows=0 loops=5)\n Index Cond:\n(accountcan18_.id = accountcan18_1_.id)\n -> Hash (cost=1.05..1.05 rows=5 width=524)\n(actual time=0.050..0.050 rows=5 loops=1)\n -> Seq Scan on accountsettings\naccountset15_ (cost=0.00..1.05 rows=5 width=524) (actual\ntime=0.028..0.034 rows=5 loops=1)\n -> Hash (cost=4173.19..4173.19 rows=74 width=173)\n(actual time=63.358..63.358 rows=137 loops=1)\n -> Nested Loop (cost=22.54..4173.19 rows=74\nwidth=173) (actual time=4.611..62.608 rows=137 loops=1)\n -> Bitmap Heap Scan on gisfeature gf2_\n(cost=22.54..2413.73 rows=208 width=157) (actual time=4.517..32.198\nrows=1697 loops=1)\n Filter: ((\"location\" &&\n'0103000020E610000001000000050000009BC810BB85B8F83F01D42B98AA5D48409BC810BB85B8F83F4134414633804840D44ADA6E12F908404134414633804840D44ADA6E12F9084001D42B98AA5D48409BC810BB85B8F83F01D42B98AA5D4840'::geometry) AND (distance_sphere(\"location\", '0101000020E6100000915731A6AAAA0240218436EFEE6E4840'::geometry) <= 15000::double precision))\n -> Bitmap Index Scan on\ngisfeaturelocation (cost=0.00..22.49 rows=625 width=0) (actual\ntime=4.109..4.109 rows=2761 
loops=1)\n Index Cond: (\"location\" &&\n'0103000020E610000001000000050000009BC810BB85B8F83F01D42B98AA5D48409BC810BB85B8F83F4134414633804840D44ADA6E12F908404134414633804840D44ADA6E12F9084001D42B98AA5D48409BC810BB85B8F83F01D42B98AA5D4840'::geometry)\n -> Index Scan using\ncityassociatedgisfeatureid on city city1_ (cost=0.00..8.45 rows=1\nwidth=16) (actual time=0.013..0.013 rows=0 loops=1697)\n Index Cond:\n(city1_.associatedgisfeature_id = gf2_.id)\n -> Hash (cost=1.09..1.09 rows=9 width=25) (actual\ntime=0.065..0.065 rows=9 loops=1)\n -> Seq Scan on residencetype residencet22_\n(cost=0.00..1.09 rows=9 width=25) (actual time=0.030..0.042 rows=9\nloops=1)\n Total runtime: 12366.311 ms\n(102 lignes)\n\n\n\n===========================\nexp4.txt:\n\n\nQUERY\nPLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3198.68..3198.69 rows=1 width=3355) (actual\ntime=0.189..0.189 rows=0 loops=1)\n -> Sort (cost=3198.68..3198.69 rows=1 width=3355) (actual\ntime=0.184..0.184 rows=0 loops=1)\n Sort Key: ace3_1_.utceventdate\n -> Nested Loop (cost=44.85..3198.67 rows=1 width=3355)\n(actual time=0.153..0.153 rows=0 loops=1)\n -> Nested Loop (cost=44.85..3190.00 rows=1 width=3198)\n(actual time=0.149..0.149 rows=0 loops=1)\n -> Nested Loop Left Join (cost=44.85..3181.47\nrows=1 width=3182) (actual time=0.147..0.147 rows=0 loops=1)\n Join Filter: (residenced19_.residencetype_id\n= residencet22_.id)\n -> Nested Loop Left Join\n(cost=44.85..3180.26 rows=1 width=3157) (actual time=0.145..0.145 rows=0\nloops=1)\n Join Filter: (account6_.settings_id =\naccountset15_.id)\n -> Nested Loop Left Join\n(cost=44.85..3179.15 rows=1 width=2633) (actual time=0.143..0.143 rows=0\nloops=1)\n -> Nested Loop Left Join\n(cost=44.85..3170.85 rows=1 width=2617) (actual time=0.140..0.140 rows=0\nloops=1)\n Join Filter:\n(accountcre16_.requesteraddress_id = ipaddress17_.id)\n -> Nested Loop Left Join\n(cost=44.85..3169.74 rows=1 width=2584) (actual time=0.139..0.139 rows=0\nloops=1)\n -> Nested Loop Left\nJoin (cost=44.85..3161.67 rows=1 width=2568) (actual time=0.136..0.136\nrows=0 loops=1)\n Join Filter:\n(accountcre16_.id = accountcan18_.accountcreatedevent_id)\n -> Nested Loop\nLeft Join (cost=44.85..3150.10 rows=1 width=1520) (actual\ntime=0.133..0.133 rows=0 loops=1)\n Join\nFilter: (account6_.id = accountcre16_.createdaccount_id)\n ->\nNested Loop (cost=44.85..3148.99 rows=1 width=1496) (actual\ntime=0.132..0.132 rows=0 loops=1)\n ->\nNested Loop Left Join (cost=44.85..3140.70 rows=1 width=1274) (actual\ntime=0.129..0.129 rows=0 loops=1)\n\n-> Nested Loop Left Join (cost=44.85..3132.40 rows=1 width=1258)\n(actual time=0.126..0.126 rows=0 loops=1)\n\nJoin Filter: (emailchang10_.contactinformation_id = contactinf11_.id)\n\n-> Nested Loop Left Join (cost=44.85..3131.29 rows=1 width=1219)\n(actual time=0.125..0.125 rows=0 loops=1)\n\n-> Nested Loop Left Join (cost=44.85..3122.99 rows=1 width=1203)\n(actual time=0.122..0.122 rows=0 loops=1)\n\nJoin Filter: (emailchang10_.emailcheckedevent_id = emailcheck12_.id)\n\n-> Nested Loop Left Join (cost=44.85..3121.97 rows=1 width=1195)\n(actual time=0.120..0.120 rows=0 loops=1)\n\n-> Nested Loop Left Join (cost=44.85..3113.67 rows=1 
width=1179)\n(actual time=0.118..0.118 rows=0 loops=1)\n\nJoin Filter: (account6_.declaredasadultevent_id = declaredas13_.id)\n\n-> Nested Loop Left Join (cost=44.85..3112.56 rows=1 width=1171)\n(actual time=0.115..0.115 rows=0 loops=1)\n\nJoin Filter: (account6_.profile_id = userprofil14_.id)\n\n-> Nested Loop Left Join (cost=44.85..3111.45 rows=1 width=554)\n(actual time=0.112..0.112 rows=0 loops=1)\n\n-> Nested Loop Left Join (cost=44.85..3102.79 rows=1 width=397)\n(actual time=0.110..0.110 rows=0 loops=1)\n\nJoin Filter: (contactinf7_.currentemailchangedevent_id =\nemailchang10_.id)\n\n-> Nested Loop (cost=44.85..3101.67 rows=1 width=308) (actual\ntime=0.108..0.108 rows=0 loops=1)\n\n-> Nested Loop (cost=44.85..3093.77 rows=1 width=148) (actual\ntime=0.106..0.106 rows=0 loops=1)\n\n-> Hash Join (cost=44.85..3085.84 rows=1 width=132) (actual\ntime=0.104..0.104 rows=0 loops=1)\n\nHash Cond: (ace3_.eventinitiator_id = account6_.id)\n\n-> Seq Scan on adcreatedevent ace3_ (cost=0.00..2323.41 rows=149641\nwidth=16) (actual time=0.034..0.034 rows=1 loops=1)\n\n-> Hash (cost=44.84..44.84 rows=1 width=116) (actual time=0.045..0.045\nrows=0 loops=1)\n\n-> Nested Loop (cost=0.00..44.84 rows=1 width=116) (actual\ntime=0.043..0.043 rows=0 loops=1)\n\nJoin Filter: (account6_.contactinformation_id = contactinf7_.id)\n\n-> Nested Loop (cost=0.00..43.73 rows=1 width=55) (actual\ntime=0.040..0.040 rows=0 loops=1)\n\n-> Seq Scan on contactinformation contactinf7_ (cost=0.00..1.05 rows=5\nwidth=39) (actual time=0.004..0.012 rows=5 loops=1)\n\n-> Index Scan using cityid on city city8_ (cost=0.00..8.52 rows=1\nwidth=16) (actual time=0.002..0.002 rows=0 loops=5)\n\nIndex Cond: (contactinf7_.city_id = city8_.id)\n\n-> Seq Scan on account account6_ (cost=0.00..1.05 rows=5 width=61)\n(never executed)\n\n-> Index Scan using funalaevent_pkey on funalaevent ace3_1_\n(cost=0.00..7.92 rows=1 width=16) (never executed)\n\nIndex Cond: (ace3_.id = ace3_1_.id)\n\nFilter: (utceventdate >= '2007-09-29 00:00:00'::timestamp without time\nzone)\n\n-> Index Scan using roommateresidenceofferadcreatedevent on\nroommateresidenceoffer this_ (cost=0.00..7.89 rows=1 width=160) (never\nexecuted)\n\nIndex Cond: (this_.adcreatedevent_id = ace3_.id)\n\n-> Seq Scan on emailchangedevent emailchang10_ (cost=0.00..1.05 rows=5\nwidth=89) (never executed)\n\n-> Index Scan using gisfeatureid on gisfeature gisfeature9_\n(cost=0.00..8.65 rows=1 width=157) (never executed)\n\nIndex Cond: (city8_.associatedgisfeature_id = gisfeature9_.id)\n\n-> Seq Scan on userprofile userprofil14_ (cost=0.00..1.05 rows=5\nwidth=617) (never executed)\n\n-> Seq Scan on declaredasadultevent declaredas13_ (cost=0.00..1.05\nrows=5 width=8) (never executed)\n\n-> Index Scan using funalaevent_pkey on funalaevent declaredas13_1_\n(cost=0.00..8.28 rows=1 width=16) (never executed)\n\nIndex Cond: (declaredas13_.id = declaredas13_1_.id)\n\n-> Seq Scan on emailcheckedevent emailcheck12_ (cost=0.00..1.01 rows=1\nwidth=8) (never executed)\n\n-> Index Scan using funalaevent_pkey on funalaevent emailcheck12_1_\n(cost=0.00..8.28 rows=1 width=16) (never executed)\n\nIndex Cond: (emailcheck12_.id = emailcheck12_1_.id)\n\n-> Seq Scan on contactinformation contactinf11_ (cost=0.00..1.05\nrows=5 width=39) (never executed)\n\n-> Index Scan using funalaevent_pkey on funalaevent emailchang10_1_\n(cost=0.00..8.28 rows=1 width=16) (never executed)\n\nIndex Cond: (emailchang10_.id = emailchang10_1_.id)\n ->\nIndex Scan using residencedescription_pkey on residencedescription\nresidenced19_ 
(cost=0.00..8.28 rows=1 width=222) (never executed)\n\nIndex Cond: (this_.residencedescription_id = residenced19_.id)\n -> Seq\nScan on accountcreatedevent accountcre16_ (cost=0.00..1.05 rows=5\nwidth=24) (never executed)\n -> Seq Scan on\naccountcancelledevent accountcan18_ (cost=0.00..10.70 rows=70\nwidth=1048) (never executed)\n -> Index Scan using\nfunalaevent_pkey on funalaevent accountcan18_1_ (cost=0.00..8.06 rows=1\nwidth=16) (never executed)\n Index Cond:\n(accountcan18_.id = accountcan18_1_.id)\n -> Seq Scan on ipaddress\nipaddress17_ (cost=0.00..1.05 rows=5 width=33) (never executed)\n -> Index Scan using\nfunalaevent_pkey on funalaevent accountcre16_1_ (cost=0.00..8.28 rows=1\nwidth=16) (never executed)\n Index Cond:\n(accountcre16_.id = accountcre16_1_.id)\n -> Seq Scan on accountsettings\naccountset15_ (cost=0.00..1.05 rows=5 width=524) (never executed)\n -> Seq Scan on residencetype residencet22_\n(cost=0.00..1.09 rows=9 width=25) (never executed)\n -> Index Scan using cityid on city city1_\n(cost=0.00..8.52 rows=1 width=16) (never executed)\n Index Cond: (residenced19_.city_id =\ncity1_.id)\n -> Index Scan using gisfeatureid on gisfeature gf2_\n(cost=0.00..8.66 rows=1 width=157) (never executed)\n Index Cond: (city1_.associatedgisfeature_id =\ngf2_.id)\n Filter: ((\"location\" &&\n'0103000020E610000001000000050000009BC810BB85B8F83F01D42B98AA5D48409BC810BB85B8F83F4134414633804840D44ADA6E12F908404134414633804840D44ADA6E12F9084001D42B98AA5D48409BC810BB85B8F83F01D42B98AA5D4840'::geometry) AND (distance_sphere(\"location\", '0101000020E6100000915731A6AAAA0240218436EFEE6E4840'::geometry) <= 15000::double precision))\n Total runtime: 25.647 ms\n(80 lignes)\n\n\n", "msg_date": "Wed, 31 Oct 2007 22:22:19 +0100", "msg_from": "Sami Dalouche <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: Outer joins and Seq scans]" }, { "msg_contents": "Sami Dalouche <[email protected]> writes:\n> -- For some reason, my message doesn't seem to go through the mailing\n> list, so I am trying without any attachment\n\nPlease don't do that, at least not that way. These explain outputs have\nbeen line-wrapped to the point of utter unreadability.\n\nThe main problem looks to me that you're trying to do a 25-way join.\nYou'll want to increase join_collapse_limit and maybe fool with the\ngeqo parameters. I fear you won't get a plan in a sane amount of time\nif you try to do the full query as a single exhaustive search. You\ncan either raise join_collapse_limit all the way and trust geqo to\nfind a decent plan repeatably (not a real safe assumption unfortunately)\nor raise both join_collapse_limit and geqo_threshold to some\nintermediate level and hope that a slightly wider partial plan search\nwill find the plan you need.\n\nIt's also possible that you're just stuck and the outer join is\ninherently harder to execute. 
I didn't study the query closely enough\nto see if it's joining to any left join right-hand-sides, or anything\nelse that would forbid picking a nice join order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 31 Oct 2007 18:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: Outer joins and Seq scans] " }, { "msg_contents": "Sami Dalouche <[email protected]> writes:\n> Compare that to the following query, that is exactly the same except\n> that the City table is inner'joined instead of outer joined\n> ...\n> the explain analyze is available at :\n> http://www.photosdesami.com/temp/exp6.txt\n\nAFAICS it's just absolutely blind luck that that query is fast. The\nplanner chooses to do the contactinf7_/city8_ join first, and because\nthat happens to return no rows at all, all the rest of the query falls\nout in no time, even managing to avoid the scan of adcreatedevent.\nIf there were any rows out of that join it would be a great deal slower.\n\nThere is a pretty significant semantic difference between the two\nqueries, too, now that I look closer: when you make \n\"... join City city8_ on contactinf7_.city_id=city8_.id\"\na plain join instead of left join, that means the join to contactinf7_\ncan be reduced to a plain join as well, because no rows with nulls for\ncontactinf7_ could possibly contribute to the upper join's result.\nThat optimization doesn't apply in the original form of the query,\nwhich restricts the planner's freedom to rearrange things.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2007 09:29:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: Outer joins and Seq scans] " }, { "msg_contents": "Thanks for your answer.\n\nSo, basically, what you are saying is that there is nothing particularly\nwrong with the query, nor with its optimization ? So if I need\nperformance for this query, I should just revert to other techniques\n(giving more memory to postgres, caching outside postgres, etc..) ?\n\nRegards,\nSami Dalouche\n\nLe jeudi 01 novembre 2007 à 09:29 -0400, Tom Lane a écrit :\n> Sami Dalouche <[email protected]> writes:\n> > Compare that to the following query, that is exactly the same except\n> > that the City table is inner'joined instead of outer joined\n> > ...\n> > the explain analyze is available at :\n> > http://www.photosdesami.com/temp/exp6.txt\n> \n> AFAICS it's just absolutely blind luck that that query is fast. The\n> planner chooses to do the contactinf7_/city8_ join first, and because\n> that happens to return no rows at all, all the rest of the query falls\n> out in no time, even managing to avoid the scan of adcreatedevent.\n> If there were any rows out of that join it would be a great deal slower.\n> \n> There is a pretty significant semantic difference between the two\n> queries, too, now that I look closer: when you make \n> \"... 
join City city8_ on contactinf7_.city_id=city8_.id\"\n> a plain join instead of left join, that means the join to contactinf7_\n> can be reduced to a plain join as well, because no rows with nulls for\n> contactinf7_ could possibly contribute to the upper join's result.\n> That optimization doesn't apply in the original form of the query,\n> which restricts the planner's freedom to rearrange things.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Thu, 01 Nov 2007 15:21:17 +0100", "msg_from": "Sami Dalouche <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Fwd: Re: Outer joins and Seq scans]" }, { "msg_contents": "Sami Dalouche wrote:\n> -- For some reason, my message doesn't seem to go through the mailing\n> list, so I am trying without any attachment\n\nFWIW, you can post EXPLAIN ANALYZE results on the web here:\nhttp://www.explain-analyze.info/\n\nIt's a pretty cool utility by Michael Glaesemann that should makes our\nlives a bit better (at least along a certain axis).\n\n-- \nAlvaro Herrera http://www.PlanetPostgreSQL.org/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n", "msg_date": "Fri, 2 Nov 2007 15:45:48 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Fwd: Re: Outer joins and Seq scans]" } ]
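The tuning advice in the thread above translates into a short per-session experiment that can be tried before touching postgresql.conf. The numbers below are illustrative assumptions rather than values taken from the thread (join_collapse_limit defaults to 8 and geqo_threshold to 12 in 8.2); the one firm requirement is that the collapse limit must cover the roughly 25 relations in the query before the planner is free to reorder them.

    -- Option 1: collapse the whole join list and let GEQO search it
    -- (planning stays fast, but plan quality is not guaranteed to repeat):
    SET join_collapse_limit = 30;

    -- Option 2: raise both settings to an intermediate level so a wider,
    -- still exhaustive, search is used for sub-problems below that size:
    SET join_collapse_limit = 16;
    SET geqo_threshold = 18;

    -- Re-run the problem query under each setting and compare plans:
    EXPLAIN ANALYZE SELECT ... ;             -- the original 25-way join

Since the thread also shows that the left join form semantically restricts reordering, these settings may still not reach the speed of the inner join variant; they only widen the search the planner is allowed to perform.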
[ { "msg_contents": "Hi,\n\nI have a table \"login\" with approx 600,000 tuples, a person table with \napprox 100000 tuples.\n\nWhen running\n select max(\"when\") from login where userid='userid'\n\nit takes a second or two, but when adding \"group by userid\" the planner \ndecides on using another plan, and it gets *much* faster. See example below.\n\nNumber of tuples per user varies from zero to a couple of thousands. It \nseems to slower when there are no tuples as all, but it is always slow.\n\nThis is only for max() and min(). For count(), the plan is the same, it \nalways uses \"Aggregate\".\n\nAny ideas about this? Do we need to add \"group by userid\" to our code base \nto optimize, or is there another way? Updating postgresql to 8.2 is a long \nterm option, but I'd like a short term option as well...\n\nRegards,\nPalle\n\n\npp=# select version();\n version\n-------------------------------------------------------------------------------------------------\n PostgreSQL 8.1.8 on amd64-portbld-freebsd6.1, compiled by GCC cc (GCC) \n3.4.4 [FreeBSD] 20050518\n(1 row)\n\nTime: 0,530 ms\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='girgen' ;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=323.80..323.81 rows=1 width=0) (actual \ntime=3478.781..3478.785 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..323.80 rows=1 width=8) (actual \ntime=3478.768..3478.768 rows=0 loops=1)\n -> Index Scan Backward using login_when_idx on \"login\" \n(cost=0.00..131461.90 rows=406 width=8) (actual time=3478.759..3478.759 \nrows=0 loops=1)\n Filter: ((\"when\" IS NOT NULL) AND (userid = \n'sarah.gilliam1'::text))\n Total runtime: 3478.868 ms\n(6 rows)\n\nTime: 3480,442 ms\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='girgen' \ngroup by userid;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..648.44 rows=1 width=25) (actual \ntime=0.191..0.191 rows=0 loops=1)\n -> Index Scan using login_userid_idx on \"login\" (cost=0.00..646.40 \nrows=406 width=25) (actual time=0.183..0.183 rows=0 loops=1)\n Index Cond: (userid = 'sarah.gilliam1'::text)\n Total runtime: 0.243 ms\n(4 rows)\n\nTime: 0,938 ms\npp=# \\d login\n Table \"public.login\"\n Column | Type | Modifiers\n--------+--------------------------+--------------------\n userid | text |\n kursid | integer |\n when | timestamp with time zone |\n mode | text | default 'pm'::text\nIndexes:\n \"login_kurs_user_idx\" btree (kursid, userid)\n \"login_userid_idx\" btree (userid)\n \"login_when_idx\" btree (\"when\")\nForeign-key constraints:\n \"pp_fk1\" FOREIGN KEY (userid) REFERENCES person(userid) ON UPDATE \nCASCADE ON DELETE CASCADE\n \"pp_fk2\" FOREIGN KEY (kursid) REFERENCES course(id) ON UPDATE CASCADE \nON DELETE CASCADE\n\n\n", "msg_date": "Thu, 01 Nov 2007 14:07:55 +0100", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "select max(field) from table much faster with a group by clause?" }, { "msg_contents": "Palle Girgensohn <[email protected]> writes:\n> When running\n> select max(\"when\") from login where userid='userid'\n> it takes a second or two, but when adding \"group by userid\" the planner \n> decides on using another plan, and it gets *much* faster. 
See example below.\n\nIt's only faster for cases where there are few or no rows for the\nparticular userid ...\n\n> Number of tuples per user varies from zero to a couple of thousands.\n\nThe planner is using an intermediate estimate of 406 rows. You might be\nwell advised to increase the statistics target for login.userid --- with\nluck that would help it to choose the right plan type for both common\nand uncommon userids.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2007 09:43:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " }, { "msg_contents": "\n\n--On torsdag, november 01, 2007 09.43.39 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Palle Girgensohn <[email protected]> writes:\n>> When running\n>> select max(\"when\") from login where userid='userid'\n>> it takes a second or two, but when adding \"group by userid\" the planner\n>> decides on using another plan, and it gets *much* faster. See example\n>> below.\n>\n> It's only faster for cases where there are few or no rows for the\n> particular userid ...\n\nWell, no, not really. See below. OTH, it sometimes a bit slower. Seems to \ndepend on how far away from the estimated number of rows you get? Weird?\n\n>> Number of tuples per user varies from zero to a couple of thousands.\n>\n> The planner is using an intermediate estimate of 406 rows. You might be\n> well advised to increase the statistics target for login.userid --- with\n> luck that would help it to choose the right plan type for both common\n> and uncommon userids.\n\nI'll try that, thanks!\n\n--\n\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' group by userid;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 6,194 ms\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' ;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 992,391 ms\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' ;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 779,582 ms\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' ;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 818,667 ms\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' ;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 640,242 ms\npp=# SELECT max(\"when\") FROM login WHERE userid='kudo' group by userid;\n max\n-------------------------------\n 2007-01-04 15:31:46.863325+01\n(1 row)\n\nTime: 18,384 ms\npp=# SELECT count(*) FROM login WHERE userid='kudo' group by userid;\n count\n-------\n 1998\n(1 row)\n\nTime: 12,762 ms\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='kudo' \ngroup by userid;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..648.44 rows=1 width=25) (actual \ntime=24.700..24.703 rows=1 loops=1)\n -> Index Scan using login_userid_idx on \"login\" (cost=0.00..646.40 \nrows=406 width=25) (actual time=0.140..16.931 rows=1998 loops=1)\n Index Cond: (userid = 'kudo'::text)\n Total runtime: 24.779 ms\n(4 rows)\n\nTime: 25,633 ms\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='kudo' ;\n \nQUERY PLAN 
\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=323.93..323.94 rows=1 width=0) (actual \ntime=1400.994..1400.997 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..323.93 rows=1 width=8) (actual \ntime=1400.975..1400.979 rows=1 loops=1)\n -> Index Scan Backward using login_when_idx on \"login\" \n(cost=0.00..131515.87 rows=406 width=8) (actual time=1400.968..1400.968 \nrows=1 loops=1)\n Filter: ((\"when\" IS NOT NULL) AND (userid = 'kudo'::text))\n Total runtime: 1401.057 ms\n(6 rows)\n\nTime: 1401,881 ms\n\n\npp=# SELECT userid, count(\"when\") FROM login WHERE userid in ('girgen' , \n'kudo') group by userid;\n userid | count\n--------+-------\n kudo | 1998\n girgen | 1120\n(2 rows)\n\n\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='girgen' \ngroup by userid;\n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..648.44 rows=1 width=25) (actual \ntime=25.137..25.141 rows=1 loops=1)\n -> Index Scan using login_userid_idx on \"login\" (cost=0.00..646.40 \nrows=406 width=25) (actual time=0.121..20.712 rows=1120 loops=1)\n Index Cond: (userid = 'girgen'::text)\n Total runtime: 25.209 ms\n(4 rows)\n\nTime: 25,986 ms\n\npp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='girgen' ;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=323.93..323.94 rows=1 width=0) (actual time=6.695..6.698 \nrows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..323.93 rows=1 width=8) (actual \ntime=6.669..6.675 rows=1 loops=1)\n -> Index Scan Backward using login_when_idx on \"login\" \n(cost=0.00..131515.87 rows=406 width=8) (actual time=6.660..6.660 rows=1 \nloops=1)\n Filter: ((\"when\" IS NOT NULL) AND (userid = \n'girgen'::text))\n Total runtime: 6.785 ms\n(6 rows)\n\nTime: 7,776 ms\n\n", "msg_date": "Thu, 01 Nov 2007 15:20:14 +0100", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " }, { "msg_contents": "\n\n--On torsdag, november 01, 2007 09.43.39 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Palle Girgensohn <[email protected]> writes:\n>> When running\n>> select max(\"when\") from login where userid='userid'\n>> it takes a second or two, but when adding \"group by userid\" the planner\n>> decides on using another plan, and it gets *much* faster. See example\n>> below.\n>\n> It's only faster for cases where there are few or no rows for the\n> particular userid ...\n>\n>> Number of tuples per user varies from zero to a couple of thousands.\n>\n> The planner is using an intermediate estimate of 406 rows. You might be\n> well advised to increase the statistics target for login.userid --- with\n> luck that would help it to choose the right plan type for both common\n> and uncommon userids.\n\nUnfortunately, altering statistics doesn't help. I see no difference when \naltering the value from 10 (default) to 100, 1000 or 100000. 
:-(\n\nAre there any other things I can modify?\n\nOH, btw, maybe something in the postgresql.conf sucks?\n\nmax_connections = 100\nshared_buffers = 30000 # min 16 or max_connections*2, 8KB \neach\ntemp_buffers = 2500 # min 100, 8KB each\nmax_prepared_transactions = 100 # can be 0 or more\nwork_mem = 16384 # min 64, size in KB\nmaintenance_work_mem = 16384 # min 1024, size in KB\nmax_stack_depth = 32768 # min 100, size in KB\nmax_fsm_pages = 500000\nmax_fsm_relations = 20000\nmax_files_per_process = 2000\nfsync = off\ncheckpoint_segments = 50 # in logfile segments, min 1, 16MB \neach\neffective_cache_size = 10000 # typically 8KB each\nrandom_page_cost = 1.8\ngeqo = on\ngeqo_threshold = 10\nfrom_collapse_limit = 8\njoin_collapse_limit = 8 # 1 disables collapsing of explicit\n\n", "msg_date": "Thu, 01 Nov 2007 15:36:29 +0100", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " }, { "msg_contents": "Palle Girgensohn <[email protected]> writes:\n> Unfortunately, altering statistics doesn't help. I see no difference when \n> altering the value from 10 (default) to 100, 1000 or 100000. :-(\n\nUm, you did re-ANALYZE the table after changing the setting?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2007 11:06:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " }, { "msg_contents": "\n\n--On torsdag, november 01, 2007 11.06.57 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Palle Girgensohn <[email protected]> writes:\n>> Unfortunately, altering statistics doesn't help. I see no difference\n>> when altering the value from 10 (default) to 100, 1000 or 100000. :-(\n>\n> Um, you did re-ANALYZE the table after changing the setting?\n\nalter table login alter userid SET statistics 1000;\nvacuum analyze login;\n\n", "msg_date": "Thu, 01 Nov 2007 16:25:08 +0100", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " }, { "msg_contents": "On 11/1/07, Tom Lane <[email protected]> wrote:\n> Palle Girgensohn <[email protected]> writes:\n> > Unfortunately, altering statistics doesn't help. I see no difference when\n> > altering the value from 10 (default) to 100, 1000 or 100000. :-(\n>\n> Um, you did re-ANALYZE the table after changing the setting?\n\nAnd he changed it with\n\nALTER TABLE name ALTER [ COLUMN ] column SET STORAGE { PLAIN |\nEXTERNAL | EXTENDED | MAIN }\n\nright?\n", "msg_date": "Thu, 1 Nov 2007 10:28:53 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause?" }, { "msg_contents": "Palle Girgensohn <[email protected]> writes:\n> --On torsdag, november 01, 2007 11.06.57 -0400 Tom Lane <[email protected]> \n> wrote:\n>> Um, you did re-ANALYZE the table after changing the setting?\n\n> alter table login alter userid SET statistics 1000;\n> vacuum analyze login;\n\nHm, that's the approved procedure all right. But the plans didn't\nchange at all? Not even the estimated number of rows?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2007 11:34:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause? 
" }, { "msg_contents": "\n\"Palle Girgensohn\" <[email protected]> writes:\n\n> Are there any other things I can modify?\n\nYou might consider an index on <userid,when>. Keep in mind that every new\nindex imposes an incremental cost on every update and insert and increases the\ntime for vacuum.\n\n> max_prepared_transactions = 100 # can be 0 or more\n\nAre you actually using prepared transactions (are you synchronising multiple\ndatabases using a transaction manager)? If not then set this to 0 as it takes\nsome resources.\n\n> maintenance_work_mem = 16384 # min 1024, size in KB\n\nRaising this might decrease vacuum times if that's a problem.\n\n> fsync = off\n\nYou realize that this means if the system loses power or the kernel crashes\nyou could have data corruption? Do you take very frequent backups or can you\nreconstruct your data?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Thu, 01 Nov 2007 15:47:32 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause?" }, { "msg_contents": "On Thu, Nov 01, 2007 at 02:07:55PM +0100, Palle Girgensohn wrote:\n> I have a table \"login\" with approx 600,000 tuples, a person table with \n> approx 100000 tuples.\n> When running\n> select max(\"when\") from login where userid='userid'\n> it takes a second or two, but when adding \"group by userid\" the planner \n> decides on using another plan, and it gets *much* faster. See example below.\n> pp=# explain analyze SELECT max(\"when\") FROM login WHERE userid='girgen' ;\n\njust do:\ncreate index q on login (userid, \"when\"); and you should be fine.\nif it will not help, rewrite the query as:\nselect \"when\"\nfrom login\nwhere userid = 'girgen'\norder by userid desc, \"when\" desc limit 1;\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Fri, 2 Nov 2007 08:56:04 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select max(field) from table much faster with a group by clause?" }, { "msg_contents": "\n\n--On torsdag, november 01, 2007 11.34.42 -0400 Tom Lane <[email protected]> \nwrote:\n\n> Palle Girgensohn <[email protected]> writes:\n>> --On torsdag, november 01, 2007 11.06.57 -0400 Tom Lane\n>> <[email protected]> wrote:\n>>> Um, you did re-ANALYZE the table after changing the setting?\n>\n>> alter table login alter userid SET statistics 1000;\n>> vacuum analyze login;\n>\n> Hm, that's the approved procedure all right. But the plans didn't\n> change at all? Not even the estimated number of rows?\n\nEstimated number of rows did change from ~400 to ~1900, but the timing was \nthe same.\n\nSeems that the problem is that it is using an index on \"when\". Removing \nthat index (login_when_idx) changes the plan, and makes the query equally \nfast whether group by is there or not. I may need the index, though, in \nwhich one more index, on (userid, \"when\"), will fix the problem. I'd rather \nget rid of an index than creating another one.\n\nAnyway, I think I have two suggestions for a solution that will work for \nme. 
I still think it is strange that the group by clause so radically \nchanges the behaviour and the query time.\n\nCheers,\nPalle\n\npp=# \\d login\n Table \"public.login\"\n Column | Type | Modifiers\n--------+--------------------------+--------------------\n userid | text |\n kursid | integer |\n when | timestamp with time zone |\n mode | text | default 'pm'::text\nIndexes:\n \"login_kurs_user_idx\" btree (kursid, userid)\n \"login_userid_idx\" btree (userid)\n \"login_when_idx\" btree (\"when\")\nForeign-key constraints:\n \"pp_fk1\" FOREIGN KEY (userid) REFERENCES person(userid) ON UPDATE \nCASCADE ON DELETE CASCADE\n \"pp_fk2\" FOREIGN KEY (kursid) REFERENCES course(id) ON UPDATE CASCADE \nON DELETE CASCADE\n\n\n\n\n", "msg_date": "Fri, 02 Nov 2007 11:44:24 +0100", "msg_from": "Palle Girgensohn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select max(field) from table much faster with a group by clause? " } ]
[ { "msg_contents": "I am comparing the same query on two different PG 8.2 servers, one Linux \n(8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPU's.\n\nThe Windows posgrestsql.config is pretty well tuned but it looks like \nsomeone had wiped out the Linux config so the default one was re-installed. \nAll performance-related memory allocation values seem to be set to the \ndefaults, but mods have been made: max_connections = 100 and shared_buffers \n= 32MB.\n\nThe performance for this query is terrible on the Linux server, and good on \nthe Windows server - presumably because the original Linux PG config has \nbeen lost. This query requires: that \"set enable_seqscan to 'off';\"\n\nStill, the Linux server did not create the same, fast plan as the Windows \nserver. In order to get the same plan we had to:\n\nset enable_hashjoin to 'off';\nset enable_mergejoin to 'off';\n\nThe plans were now similar, using nested loops and bitmapped heap scans. Now \nthe Linux query outperformed the Windows query.\n\nQuestion: Can anyone tell me which config values would have made PG select \nhash join and merge joins when the nested loop/bitmap heap scan combination \nwas faster?\n\nCarlo \n\n", "msg_date": "Thu, 1 Nov 2007 16:57:34 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to avoid hashjoin and mergejoin" }, { "msg_contents": "On 11/1/07, Carlo Stonebanks <[email protected]> wrote:\n> I am comparing the same query on two different PG 8.2 servers, one Linux\n> (8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPU's.\n>\n> The Windows posgrestsql.config is pretty well tuned but it looks like\n> someone had wiped out the Linux config so the default one was re-installed.\n> All performance-related memory allocation values seem to be set to the\n> defaults, but mods have been made: max_connections = 100 and shared_buffers\n> = 32MB.\n>\n> The performance for this query is terrible on the Linux server, and good on\n> the Windows server - presumably because the original Linux PG config has\n> been lost. This query requires: that \"set enable_seqscan to 'off';\"\n\nHave you run analyze on the server yet?\n\nA few general points on performance tuning. With 8.2 you should set\nshared_buffers to a pretty big chunk of memory on linux, up to 25% or\nso. That means 32 Meg shared buffers is REAL low for a linux server.\nTry running anywhere from 512Meg up to 1Gig for starters and see if\nthat helps too. Also turn up work_mem to something like 16 to 32 meg\nthen restart the server after making these changes.\n\nThen give us the explain analyze output with all the enable_xxx set to ON.\n\nsummary: analyze, increase shared_buffers and work_mem, give us explain analyze.\n", "msg_date": "Thu, 1 Nov 2007 16:38:35 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid hashjoin and mergejoin" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> Still, the Linux server did not create the same, fast plan as the Windows \n> server. 
In order to get the same plan we had to:\n\n> set enable_hashjoin to 'off';\n> set enable_mergejoin to 'off';\n\nThis is just about never the appropriate way to solve a performance\nproblem, as it will inevitably create performance problems in other\nqueries.\n\nWhat I'm wondering is whether the tables have been ANALYZEd recently,\nand also whether there are any nondefault postgresql.conf settings in\nuse on the other server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Nov 2007 17:41:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid hashjoin and mergejoin " }, { "msg_contents": "<<This is just about never the appropriate way to solve a performance\nproblem, as it will inevitably create performance problems in other\nqueries.\n>>\n\nIn this particular example, this was done to \"force\" the query on the Linux\nbox to use the same plan as on the Windows box to prove that - once the\ncorrect plan was chosen - the Linux box could at least MATCH the Windows\nbox.\n\nThat being said, I should mention this: we take certain \"core\" queries that\nwe know are essential and embed them in a plpgsql SRF's that save the\nvarious settings, modify them as required for the query, then restore them\nafter the rows are returned.\n\nDoes this address the problem you mentioned?\n\n<< What I'm wondering is whether the tables have been ANALYZEd recently,>>\n\nThis is SUPPOSED to be done after a restore - but I will verify, thanks for\nthe reminder.\n\n<< and also whether there are any nondefault postgresql.conf settings in\nuse on the other server.>>\n\nDefinitely - this is what alerted me to the fact that there was something\nsuspicious. We try to optimize our memory settings (based on various tuning\ndocs, advice from here, and good old trial-and-error). Since the new config\nhad barely any changes, I knew something was wrong.\n\nCarlo \n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: November 1, 2007 5:42 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] How to avoid hashjoin and mergejoin \n\n\"Carlo Stonebanks\" <[email protected]> writes:\n> Still, the Linux server did not create the same, fast plan as the Windows \n> server. In order to get the same plan we had to:\n\n> set enable_hashjoin to 'off';\n> set enable_mergejoin to 'off';\n\nThis is just about never the appropriate way to solve a performance\nproblem, as it will inevitably create performance problems in other\nqueries.\n\nWhat I'm wondering is whether the tables have been ANALYZEd recently,\nand also whether there are any nondefault postgresql.conf settings in\nuse on the other server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 2 Nov 2007 00:12:29 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to avoid hashjoin and mergejoin " }, { "msg_contents": "Larry,\n\n \n\nConsidering these recommendations, let's try setting shared_buffers to 2GB\nand work_mem to 16MB. 
The thing is that work_mem is per connection, and if\nwe get too aggressive and we get a lot of simultaneous users, we can\npotentially eat up a lot of memory.\n\n \n\nSo 2GB + (100 * 16MB) = 3.6GB total RAM eaten up under peak load for these\ntwo values alone.\n\n \n\nIf we wanted to get more aggressive, we can raise work_mem.\n\n \n\nCarlo\n\n \n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: November 1, 2007 5:39 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] How to avoid hashjoin and mergejoin\n\n \n\nOn 11/1/07, Carlo Stonebanks <[email protected]> wrote:\n\n> I am comparing the same query on two different PG 8.2 servers, one Linux\n\n> (8GB RAM) and one Windows (32GB RAM). Both have similar drives and CPU's.\n\n> \n\n> The Windows posgrestsql.config is pretty well tuned but it looks like\n\n> someone had wiped out the Linux config so the default one was\nre-installed.\n\n> All performance-related memory allocation values seem to be set to the\n\n> defaults, but mods have been made: max_connections = 100 and\nshared_buffers\n\n> = 32MB.\n\n> \n\n> The performance for this query is terrible on the Linux server, and good\non\n\n> the Windows server - presumably because the original Linux PG config has\n\n> been lost. This query requires: that \"set enable_seqscan to 'off';\"\n\n \n\nHave you run analyze on the server yet?\n\n \n\nA few general points on performance tuning. With 8.2 you should set\n\nshared_buffers to a pretty big chunk of memory on linux, up to 25% or\n\nso. That means 32 Meg shared buffers is REAL low for a linux server.\n\nTry running anywhere from 512Meg up to 1Gig for starters and see if\n\nthat helps too. Also turn up work_mem to something like 16 to 32 meg\n\nthen restart the server after making these changes.\n\n \n\nThen give us the explain analyze output with all the enable_xxx set to ON.\n\n \n\nsummary: analyze, increase shared_buffers and work_mem, give us explain\nanalyze.", "msg_date": "Fri, 2 Nov 2007 10:46:11 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to avoid hashjoin and mergejoin" } ]
[ { "msg_contents": "Hello,\n\nI am new to setting up PostgreSQL machines for our operational \nenvironments and would appreciate if someone can take a look at this \nsetup; throw tomatoes if it looks too bad. We're expecting an \ninitial load of about 5 million text meta-data records to our \ndatabase; and are expecting upwards of 50 million records by 2008. \nWe are expecting 40 \"connect-query-disconnect' clients every 5 \nminutes or so, and are looking at 15 connections/sec on our front \nfacing components. We've designed a set of Dell systems which we are \nplanning to stick into our Slony/PgPool-II hybrid cluster; taking \nover our current random hodgepodge of machines we used when first \nexperimenting. Each of these systems will be identical. Speed is \nimportant but we are putting more weight on the storage aspects. \nBelow is our model system:\n\nDell PowerEdge Energy 2950\n(2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86Ghz, 1066Mhz FSB\n4GB 667Mhz Dual Ranked DIMMs, Energy Smart\n\nPERC 5/i, x8 Backplane, Integrated Controller Card\n\nHard Drive Configuration: Integrated SAS/SATA RAID1/Raid 5\n\nHard Drive 1 (For Operating System): 36GB 10K RPM SAS 3Gbps 2.5-in \nHot Plug HD\nHard Drive 2 (For logs): 36GB 10K RPM SAS 3Gbps 2.5-in Hot Plug HD\n\nHard Drives 3,4,5,6 (In a RAID 5 Configuration): (4) 146GB 10K SAS \n3Gbps Hard Drive, 2-5 inch, Hot Plug\n\nNetwork Adapter: Dual Embedded Broadcom NetXTreme II 5708 Gigabit \nEthernet NIC\n\nIt's overkill for our initial system but we are shooting for a system \nthat allows for growth. If someone can let us know if we're on the \nright path or are shooting ourselves in the foot with this setup I'd \nappreciate it.\n\nThanks,\n\n- Mark\n", "msg_date": "Thu, 1 Nov 2007 18:15:22 -0700", "msg_from": "Mark Floyd <[email protected]>", "msg_from_op": true, "msg_subject": "hardware for PostgreSQL" }, { "msg_contents": "On 11/1/07, Mark Floyd <[email protected]> wrote:\n> Hello,\n> Dell PowerEdge Energy 2950\n> (2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86Ghz, 1066Mhz FSB\n> 4GB 667Mhz Dual Ranked DIMMs, Energy Smart\n>\n> PERC 5/i, x8 Backplane, Integrated Controller Card\n>\n> Hard Drive Configuration: Integrated SAS/SATA RAID1/Raid 5\n>\n> Hard Drive 1 (For Operating System): 36GB 10K RPM SAS 3Gbps 2.5-in\n> Hot Plug HD\n> Hard Drive 2 (For logs): 36GB 10K RPM SAS 3Gbps 2.5-in Hot Plug HD\n>\n> Hard Drives 3,4,5,6 (In a RAID 5 Configuration): (4) 146GB 10K SAS\n> 3Gbps Hard Drive, 2-5 inch, Hot Plug\n\nIf you can fit 8 drives in it, for the love of god add two more and\nmirror your OS and xlog drives ( I assume that's what you mean by\ndrive 2 for logs). Running a server on non-redundant drives is not\nthe best way to do things.\n\nAnd if you can live on ~ 300 Gigs of storage instead of 450 Gigs, look\ninto RAID-10 for your data array. RAID 10 is noticeably faster than\nRAID-5 for any database that sees a fair bit of writing activity.\n\n> It's overkill for our initial system but we are shooting for a system\n> that allows for growth. If someone can let us know if we're on the\n> right path or are shooting ourselves in the foot with this setup I'd\n> appreciate it.\n\nOther than the 8 cores, it's not really overkill. And depending on\nyour usage patterns 8 cores may well not be overkill too.\n", "msg_date": "Thu, 1 Nov 2007 22:31:03 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hardware for PostgreSQL" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello,\n\nI have the following query:\n\nexplain analyze\nSELECT\n (cast((\n SELECT cast(row(1, o.id, NULL, NULL, NULL, NULL) as something)\n FROM ONLY object o WHERE o.id = l.e\n UNION ALL\n SELECT cast(row(2, l2.id, l2.s, l2.e, l2.intensity, NULL) as something)\n FROM ONLY link l2 WHERE l2.id = l.e\n UNION ALL\n SELECT cast(row(3, r.id, r.o, r.format, NULL, r.data) as something)\n FROM ONLY representation r WHERE r.id = l.e\n ) as something)).*,\n l.id, l.s, l.intensity\nFROM link l\nWHERE l.s = 8692\n;\n\n\nand the execution plan:\n\n QUERY PLAN\n- ---------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on link l (cost=6.33..27932.79 rows=178 width=32) (actual time=0.172..12.366 rows=149 loops=1)\n Recheck Cond: (s = 8692)\n -> Bitmap Index Scan on link_s_idx (cost=0.00..6.29 rows=178 width=0) (actual time=0.050..0.050 rows=149 loops=1)\n Index Cond: (s = 8692)\n SubPlan\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.002..0.002 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.013 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using 
link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.002..0.002 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.013 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.015 rows=1 loops=149)\n -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.004..0.004 rows=0 loops=149)\n Index Cond: (id = $0)\n -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.004..0.004 rows=0 loops=149)\n Index Cond: (id = $0)\n Total runtime: 12.594 ms\n(48 rows)\n\n(hope it wasn't mangled)...\n\nThe \"something\" type is:\n\ncreate type something as (\n t integer,\n id bigint,\n ref1 bigint,\n ref2 bigint,\n intensity double precision,\n data bytea\n);\n\nProblem: It looks as if every column of \"something\" is fetched seperately. I'd think a plan which only did one indexscan for the row on each table and then returns\nthe complete row at once better. (Especially if the dataset is in memory)\n\nWhat I want to achive is to load a set of somethings of yet unknown type (any of link, object, representation) at once, namely those objects which reside\nat the ends of the links. 
Is there any better way?\n\nMy version:\n version\n- -------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.3beta2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.3 20071014 (prerelease) (Debian 4.2.2-3)\n(1 row)\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFHKvAszhchXT4RR5ARAniBAJ9bk6noLG6tIb2NKmAS7bk6Fpig9QCeNEzF\nYND1waoDKi46BjjNEKwFMF0=\n=/h36\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 02 Nov 2007 10:38:53 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "Unfortunate expansion of composite types in union" }, { "msg_contents": "Hello\n\nI am not sure, but this query will be evaluated efectivelly, because\nall necessary data will be in cache.\n\nPostgreSQL doesn't support Common Table Expressions - you can write\nSRF function:\n\nCREATE OR REPLACE FUNCTION c(integer)\nRETURNS SETOF something AS $$\nDECLARE r RECORD;\n o something;\nBEGIN\n FOR r IN SELECT * FROM link WHERE link.s = $1 LOOP\n o := (1, r.id, NULL, NULL, NULL);\n RETURN NEXT o;\n o := (2, r.id, r.s, r.e, r.intensity, NULL);\n RETURN NEXT o;\n o := (3, r.id, r.o, r.format, NULL, r.data);\n RETURN NEXT o;\n RETURN;\n END LOOP;\nEND;\n$$ LANGUAGE plpgsql;\n\nSELECT * FROM c(8692);\n\nRegards\nPavel Stehule\n\nOn 02/11/2007, Jens-Wolfhard Schicke <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello,\n>\n> I have the following query:\n>\n> explain analyze\n> SELECT\n> (cast((\n> SELECT cast(row(1, o.id, NULL, NULL, NULL, NULL) as something)\n> FROM ONLY object o WHERE o.id = l.e\n> UNION ALL\n> SELECT cast(row(2, l2.id, l2.s, l2.e, l2.intensity, NULL) as something)\n> FROM ONLY link l2 WHERE l2.id = l.e\n> UNION ALL\n> SELECT cast(row(3, r.id, r.o, r.format, NULL, r.data) as something)\n> FROM ONLY representation r WHERE r.id = l.e\n> ) as something)).*,\n> l.id, l.s, l.intensity\n> FROM link l\n> WHERE l.s = 8692\n> ;\n>\n>\n> and the execution plan:\n>\n> QUERY PLAN\n> - ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on link l (cost=6.33..27932.79 rows=178 width=32) (actual time=0.172..12.366 rows=149 loops=1)\n> Recheck Cond: (s = 8692)\n> -> Bitmap Index Scan on link_s_idx (cost=0.00..6.29 rows=178 width=0) (actual time=0.050..0.050 rows=149 loops=1)\n> Index Cond: (s = 8692)\n> SubPlan\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r 
(cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.002..0.002 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.013 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.002..0.002 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.013 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.004..0.011 rows=1 loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.004..0.004 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.003..0.003 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Append (cost=0.00..25.52 rows=3 width=34) (actual time=0.006..0.015 rows=1 
loops=149)\n> -> Index Scan using object_id_idx on object o (cost=0.00..8.30 rows=1 width=8) (actual time=0.006..0.006 rows=1 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using link_id_idx on link l2 (cost=0.00..8.86 rows=1 width=32) (actual time=0.004..0.004 rows=0 loops=149)\n> Index Cond: (id = $0)\n> -> Index Scan using representation_id_idx on representation r (cost=0.00..8.32 rows=1 width=34) (actual time=0.004..0.004 rows=0 loops=149)\n> Index Cond: (id = $0)\n> Total runtime: 12.594 ms\n> (48 rows)\n>\n> (hope it wasn't mangled)...\n>\n> The \"something\" type is:\n>\n> create type something as (\n> t integer,\n> id bigint,\n> ref1 bigint,\n> ref2 bigint,\n> intensity double precision,\n> data bytea\n> );\n>\n> Problem: It looks as if every column of \"something\" is fetched seperately. I'd think a plan which only did one indexscan for the row on each table and then returns\n> the complete row at once better. (Especially if the dataset is in memory)\n>\n> What I want to achive is to load a set of somethings of yet unknown type (any of link, object, representation) at once, namely those objects which reside\n> at the ends of the links. Is there any better way?\n>\n> My version:\n> version\n> - -------------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.3beta2 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.2.3 20071014 (prerelease) (Debian 4.2.2-3)\n> (1 row)\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n>\n> iD8DBQFHKvAszhchXT4RR5ARAniBAJ9bk6noLG6tIb2NKmAS7bk6Fpig9QCeNEzF\n> YND1waoDKi46BjjNEKwFMF0=\n> =/h36\n> -----END PGP SIGNATURE-----\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Fri, 2 Nov 2007 11:32:08 +0100", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unfortunate expansion of composite types in union" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nPavel Stehule wrote:\n> PostgreSQL doesn't support Common Table Expressions - you can write\n> SRF function:\n>\n> CREATE OR REPLACE FUNCTION c(integer)\n> RETURNS SETOF something AS $$\n> DECLARE r RECORD;\n> o something;\n> BEGIN\n> FOR r IN SELECT * FROM link WHERE link.s = $1 LOOP\n> o := (1, r.id, NULL, NULL, NULL);\n> RETURN NEXT o;\n> o := (2, r.id, r.s, r.e, r.intensity, NULL);\n> RETURN NEXT o;\n> o := (3, r.id, r.o, r.format, NULL, r.data);\n> RETURN NEXT o;\n> RETURN;\n> END LOOP;\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> SELECT * FROM c(8692);\nThis is a completely different query from my one. 
(That is, the results are different.)\n\nMy problem is that I have a schema like\nfastgraph=# \\d object\n Table \"public.object\"\n Column | Type | Modifiers\n- --------+--------+----------------------------------------------\n id | bigint | not null default nextval('id_seq'::regclass)\nIndexes:\n \"object_id_idx\" UNIQUE, btree (id)\n\nfastgraph=# \\d link\n Table \"public.link\"\n Column | Type | Modifiers\n- -----------+------------------+----------------------------------------------\n id | bigint | not null default nextval('id_seq'::regclass)\n s | bigint | not null\n e | bigint | not null\n intensity | double precision | not null\nIndexes:\n \"link_id_idx\" UNIQUE, btree (id)\n \"link_e_idx\" btree (e)\n \"link_s_idx\" btree (s)\n \"link_se_idx\" btree (s, e)\nInherits: object\n\nfastgraph=# \\d representation\n Table \"public.representation\"\n Column | Type | Modifiers\n- --------+--------+----------------------------------------------\n id | bigint | not null default nextval('id_seq'::regclass)\n o | bigint | not null\n format | bigint | not null\n data | bytea | not null\nIndexes:\n \"representation_id_idx\" UNIQUE, btree (id)\n \"representation_o_idx\" btree (o)\n \"representation_text\" hash (data) WHERE format = 1\nInherits: object\n\nnow I want those \"objects\" (with inheritance) which are connected to some other. So I tried the\nquery in the original post, and found the execution plan to be suboptimal. Today I tried to do it with OUTER JOINs\nbut failed utterly. So what is the best way to get the results? The original query is exactly what I need, only the plan\nis bad. Any Ideas?\n\nRegards,\n Jens-Wolfhard Schicke\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFHLfp+zhchXT4RR5ARAorgAKDr2grqWnxbvFMYOPiLJuHpjco30ACgswQB\n9/qW9rz+ZngkBYdR0RLsils=\n=LdBJ\n-----END PGP SIGNATURE-----\n", "msg_date": "Sun, 04 Nov 2007 17:59:43 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unfortunate expansion of composite types in union" } ]
[ { "msg_contents": "In an 8 disk configuration where 2 are used for OS; 2 for xlog, and 4 for the database.. is this possible given Dell's possible configurations only allow 2 different RAID setups (SAS Raid1/Raid5)? I will also be contacting Dell, but does this require a more advanced RAID controller that supports 3 disk clusters (Raid 1 with 2 disks, Raid 1 with 2 disks, Raid 5 with 4 disks)? If I'm only allowed a 2:6 disk setup, where do you recommened placing the Xlogs; on the OS disks or on the databasae disks?\n\n- Mark\n\nOn 11/1/07, Mark Floyd <[email protected]> wrote:\nHello,\nDell PowerEdge Energy 2950\n(2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86Ghz, 1066Mhz FSB\n4GB 667Mhz Dual Ranked DIMMs, Energy Smart\n\nPERC 5/i, x8 Backplane, Integrated Controller Card\n\nHard Drive Configuration: Integrated SAS/SATA RAID1/Raid 5\n\nHard Drive 1 (For Operating System): 36GB 10K RPM SAS 3Gbps 2.5-in\nHot Plug HD\nHard Drive 2 (For logs): 36GB 10K RPM SAS 3Gbps 2.5-in Hot Plug HD\n\nHard Drives 3,4,5,6 (In a RAID 5 Configuration): (4) 146GB 10K SAS\n3Gbps Hard Drive, 2-5 inch, Hot Plug\n\nIf you can fit 8 drives in it, for the love of god add two more and\nmirror your OS and xlog drives ( I assume that's what you mean by\ndrive 2 for logs). Running a server on non-redundant drives is not\nthe best way to do things.\n\nAnd if you can live on ~ 300 Gigs of storage instead of 450 Gigs, look\ninto RAID-10 for your data array. RAID 10 is noticeably faster than\nRAID-5 for any database that sees a fair bit of writing activity.\n\nIt's overkill for our initial system but we are shooting for a system\nthat allows for growth. If someone can let us know if we're on the\nright path or are shooting ourselves in the foot with this setup I'd\nappreciate it.\n\nOther than the 8 cores, it's not really overkill. And depending on\nyour usage patterns 8 cores may well not be overkill too.\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n_________________________________________________________________\nPeek-a-boo FREE Tricks & Treats for You!\nhttp://www.reallivemoms.com?ocid=TXT_TAGHM&loc=us\n\n\n\n\n\nIn an 8 disk configuration where 2 are used for OS; 2 for xlog, and 4 for the database.. is this possible given Dell's possible configurations only allow 2 different RAID setups (SAS Raid1/Raid5)?  I will also be contacting Dell, but does this require a more advanced RAID controller that supports 3 disk clusters (Raid 1 with 2 disks, Raid 1 with 2 disks, Raid 5 with 4 disks)?  If I'm only allowed a 2:6 disk setup, where do you recommened placing the Xlogs; on the OS disks or on the databasae disks?- MarkOn 11/1/07, Mark Floyd <[email protected]> wrote:Hello,Dell PowerEdge Energy 2950(2)  Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86Ghz, 1066Mhz FSB4GB 667Mhz Dual Ranked DIMMs, Energy SmartPERC 5/i, x8 Backplane, Integrated Controller CardHard Drive Configuration: Integrated SAS/SATA RAID1/Raid 5Hard Drive 1 (For Operating System): 36GB 10K RPM SAS 3Gbps 2.5-inHot Plug HDHard Drive 2 (For logs): 36GB 10K RPM SAS 3Gbps 2.5-in Hot Plug HDHard Drives 3,4,5,6 (In a RAID 5 Configuration): (4) 146GB 10K SAS3Gbps Hard Drive, 2-5 inch, Hot PlugIf you can fit 8 drives in it, for the love of god add two more andmirror your OS and xlog drives ( I assume that's what you mean bydrive 2 for logs).    
Running a server on non-redundant drives is notthe best way to do things.And if you can live on ~ 300 Gigs of storage instead of 450 Gigs, lookinto RAID-10 for your data array.  RAID 10 is noticeably faster thanRAID-5 for any database that sees a fair bit of writing activity.It's overkill for our initial system but we are shooting for a systemthat allows for growth.  If someone can let us know if we're on theright path or are shooting ourselves in the foot with this setup I'dappreciate it.Other than the 8 cores, it's not really overkill.  And depending onyour usage patterns 8 cores may well not be overkill too.---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ?               http://www.postgresql.org/docs/faqPeek-a-boo FREE Tricks & Treats for You! Get 'em!", "msg_date": "Fri, 2 Nov 2007 15:03:12 -0500", "msg_from": "Mark F <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "From: [email protected]\nTo: [email protected]\nSubject: [PERFORM] \nDate: Fri, 2 Nov 2007 15:03:12 -0500\n\n\n\n\n\n\n\n\nIn an 8 disk configuration where 2 are used for OS; 2 for xlog, and 4 for the database.. is this possible given Dell's possible configurations only allow 2 different RAID setups (SAS Raid1/Raid5)? I will also be contacting Dell, but does this require a more advanced RAID controller that supports 3 disk clusters (Raid 1 with 2 disks, Raid 1 with 2 disks, Raid 5 with 4 disks)? If I'm only allowed a 2:6 disk setup, where do you recommened placing the Xlogs; on the OS disks or on the databasae disks?\n\n- Mark\n\nOn 11/1/07, Mark Floyd <[email protected]> wrote:\nHello,\nDell PowerEdge Energy 2950\n(2) Quad Core Intel Xeon L5320, 2x4MB Cache, 1.86Ghz, 1066Mhz FSB\n4GB 667Mhz Dual Ranked DIMMs, Energy Smart\n\nPERC 5/i, x8 Backplane, Integrated Controller Card\n\nHard Drive Configuration: Integrated SAS/SATA RAID1/Raid 5\n\nHard Drive 1 (For Operating System): 36GB 10K RPM SAS 3Gbps 2.5-in\nHot Plug HD\nHard Drive 2 (For logs): 36GB 10K RPM SAS 3Gbps 2.5-in Hot Plug HD\n\nHard Drives 3,4,5,6 (In a RAID 5 Configuration): (4) 146GB 10K SAS\n3Gbps Hard Drive, 2-5 inch, Hot Plug\n\nIf you can fit 8 drives in it, for the love of god add two more and\nmirror your OS and xlog drives ( I assume that's what you mean by\ndrive 2 for logs). Running a server on non-redundant drives is not\nthe best way to do things.\n\nAnd if you can live on ~ 300 Gigs of storage instead of 450 Gigs, look\ninto RAID-10 for your data array. RAID 10 is noticeably faster than\nRAID-5 for any database that sees a fair bit of writing activity.\n\nIt's overkill for our initial system but we are shooting for a system\nthat allows for growth. If someone can let us know if we're on the\nright path or are shooting ourselves in the foot with this setup I'd\nappreciate it.\n\nOther than the 8 cores, it's not really overkill. And depending on\nyour usage patterns 8 cores may well not be overkill too.\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\nPeek-a-boo FREE Tricks & Treats for You! 
Get 'em!", "msg_date": "Fri, 2 Nov 2007 16:23:53 -0500", "msg_from": "Mark F <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware for PostgreSQL (RAID configurations)" } ]
[ { "msg_contents": "Consider:\n\nCREATE VIEW_X AS\nSELECT <query A>\nUNION ALL\nSELECT <query B>\nUNION ALL\nSELECT <query C>;\n\nversus\n\nCREATE VIEW_A AS\nSELECT <query A>;\n\nCREATE VIEW_B AS\nSELECT <query B>;\n\nCREATE VIEW_C AS\nSELECT <query B>;\n\nwhere <query A>, <query B> and <query C> are each somewhat complex\nwith several joins, but utilizing different tables for each of A, B\nand C.\n\nPerformance on\n\nSELECT * from VIEW_X WHERE <conditions>;\n\nwas absolutely terrible. But performance on\n\nSELECT * from VIEW_A WHERE <conditions>\nUNION ALL\nSELECT * from VIEW_B WHERE <conditions>\nUNION ALL\nSELECT * from VIEW_C WHERE <conditions>;\n\nwas nice and speedy, perhaps 100 times faster than the first.\n\nIf it's possible to consider this abstractly, is there any particular\nreason why there is such a vast difference in performance? I would\nguess that is has something to do with how the WHERE conditions are\napplied to a view composed of a UNION of queries. Perhaps this is an\nopportunity for improvement in the code. In the first case, it's as if\nthe server is doing the union on all rows (over 10 million altogether\nin my case) without filtering, then applying the conditions to the\nresult. Maybe there is no better way.\n\nI can post query plans if anyone is interested. I haven't really\nlearned how to make sense out of them myself yet.\n\nFor my purposes, I'm content to use the union of separate views in my\napplication, so if this doesn't pique anyone's interest, feel free to\nignore it.\n\nJeff\n", "msg_date": "Sat, 3 Nov 2007 15:22:18 -0500", "msg_from": "\"Jeff Larsen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Union within View vs.Union of Views" }, { "msg_contents": "Jeff Larsen wrote:\n> If it's possible to consider this abstractly, is there any particular\n> reason why there is such a vast difference in performance? I would\n> guess that is has something to do with how the WHERE conditions are\n> applied to a view composed of a UNION of queries. Perhaps this is an\n> opportunity for improvement in the code. In the first case, it's as if\n> the server is doing the union on all rows (over 10 million altogether\n> in my case) without filtering, then applying the conditions to the\n> result. Maybe there is no better way.\n>\n> I can post query plans if anyone is interested. I haven't really\n> learned how to make sense out of them myself yet.\n>\n> For my purposes, I'm content to use the union of separate views in my\n> application, so if this doesn't pique anyone's interest, feel free to\n> ignore it.\n> \nI hit this as well in less impacting statements. I found myself curious \nthat the sub-plan would have to be executed in full before it applied \nthe filter. Perhaps PostgreSQL has difficulty pushing WHERE conditions \nthrough the rule system? It's an area I only barely understand, so I \nnever looked further...\n\nI'm interested, but do not have anything of value to provide either. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n", "msg_date": "Sat, 03 Nov 2007 17:39:21 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Union within View vs.Union of Views" }, { "msg_contents": "Jeff Larsen wrote:\n> Performance on\n> \n> SELECT * from VIEW_X WHERE <conditions>;\n> \n> was absolutely terrible. 
But performance on\n> \n> SELECT * from VIEW_A WHERE <conditions>\n> UNION ALL\n> SELECT * from VIEW_B WHERE <conditions>\n> UNION ALL\n> SELECT * from VIEW_C WHERE <conditions>;\n> \n> was nice and speedy, perhaps 100 times faster than the first.\n> \n> If it's possible to consider this abstractly, is there any particular\n> reason why there is such a vast difference in performance? I would\n> guess that is has something to do with how the WHERE conditions are\n> applied to a view composed of a UNION of queries. Perhaps this is an\n> opportunity for improvement in the code. In the first case, it's as if\n> the server is doing the union on all rows (over 10 million altogether\n> in my case) without filtering, then applying the conditions to the\n> result. Maybe there is no better way.\n\nThat's surprising. The planner knows how to push down WHERE conditions \nto parts of a UNION ALL, and should be able to generate the same plan in \nboth cases. Maybe it's just estimating the costs differently? Did you \ncopy-paste all the conditions in the single WHERE clause of the slow \nquery to all the three WHERE clauses on the separate views? Even if some \nof the clauses are not applicable, they might still affect the cost \nestimates and lead to a worse plan.\n\n> I can post query plans if anyone is interested. I haven't really\n> learned how to make sense out of them myself yet.\n\nYes, please. Please post the SQL and schema as well if possible.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 03 Nov 2007 23:03:39 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Union within View vs.Union of Views" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> Jeff Larsen wrote:\n>> If it's possible to consider this abstractly, is there any particular\n>> reason why there is such a vast difference in performance?\n\n> That's surprising. The planner knows how to push down WHERE conditions \n> to parts of a UNION ALL, and should be able to generate the same plan in \n> both cases.\n\nThere are a bunch of special cases where it can't do that, though.\nLook into src/backend/optimizer/path/allpaths.c, particularly\nsubquery_is_pushdown_safe:\n\n * Conditions checked here:\n *\n * 1. If the subquery has a LIMIT clause, we must not push down any quals,\n * since that could change the set of rows returned.\n *\n * 2. If the subquery contains EXCEPT or EXCEPT ALL set ops we cannot push\n * quals into it, because that would change the results.\n *\n * 3. For subqueries using UNION/UNION ALL/INTERSECT/INTERSECT ALL, we can\n * push quals into each component query, but the quals can only reference\n * subquery columns that suffer no type coercions in the set operation.\n * Otherwise there are possible semantic gotchas. So, we check the\n * component queries to see if any of them have different output types;\n * differentTypes[k] is set true if column k has different type in any\n * component.\n\nand qual_is_pushdown_safe:\n\n * Conditions checked here:\n *\n * 1. The qual must not contain any subselects (mainly because I'm not sure\n * it will work correctly: sublinks will already have been transformed into\n * subplans in the qual, but not in the subquery).\n *\n * 2. The qual must not refer to the whole-row output of the subquery\n * (since there is no easy way to name that within the subquery itself).\n *\n * 3. 
The qual must not refer to any subquery output columns that were\n * found to have inconsistent types across a set operation tree by\n * subquery_is_pushdown_safe().\n *\n * 4. If the subquery uses DISTINCT ON, we must not push down any quals that\n * refer to non-DISTINCT output columns, because that could change the set\n * of rows returned. This condition is vacuous for DISTINCT, because then\n * there are no non-DISTINCT output columns, but unfortunately it's fairly\n * expensive to tell the difference between DISTINCT and DISTINCT ON in the\n * parsetree representation. It's cheaper to just make sure all the Vars\n * in the qual refer to DISTINCT columns.\n *\n * 5. We must not push down any quals that refer to subselect outputs that\n * return sets, else we'd introduce functions-returning-sets into the\n * subquery's WHERE/HAVING quals.\n *\n * 6. We must not push down any quals that refer to subselect outputs that\n * contain volatile functions, for fear of introducing strange results due\n * to multiple evaluation of a volatile function.\n\nIdly looking at this, I'm suddenly wondering whether the prohibition on\npushing into an EXCEPT is necessary. If a qual eliminates rows from the\nEXCEPT's output, can't we just eliminate those same rows from the\ninputs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Nov 2007 21:38:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Union within View vs.Union of Views " }, { "msg_contents": "[[ Oops, still not used to gmail. Accidentally posted this straight to\nTom and not the list]]\n\n> There are a bunch of special cases where it can't do that, though.\n> Look into src/backend/optimizer/path/allpaths.c, particularly\n> subquery_is_pushdown_safe:\n\nMy case probably fits the 'special case' description. Not all the\ncolumns in each subquery matched up, so there were NULL::text\nplaceholders in some spots in the SELECT. In the case where\nperformance got bad, one of those columns was included in the\napplication's WHERE clause.\n\nThat's a good enough explanation for me. I'll spare you the gory\ndetails of my tables, unless a developer wants to have a look at it\noff-list.\n\nJeff\n", "msg_date": "Sat, 3 Nov 2007 20:57:21 -0500", "msg_from": "\"Jeff Larsen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Union within View vs.Union of Views" }, { "msg_contents": "On 11/3/07, Tom Lane wrote:\n> \"Jeff Larsen\" <[email protected]> writes:\n> > My case probably fits the 'special case' description. Not all the\n> > columns in each subquery matched up, so there were NULL::text\n> > placeholders in some spots in the SELECT. In the case where\n> > performance got bad, one of those columns was included in the\n> > application's WHERE clause.\n>\n> Please see if explicitly casting the nulls to the same datatype as the\n> other items they're unioned with makes it go fast. It sounds like you\n> are hitting the \"no type coercions\" restriction.\n\nSure enough, explicitly casting to exactly the same type for each\ncolumn did the trick. In fact the union within the view now has a\nslight edge over the union of views.\n\nThanks,\n\nJeff\n", "msg_date": "Sun, 4 Nov 2007 04:11:02 -0600", "msg_from": "\"Jeff Larsen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Union within View vs.Union of Views" } ]
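A minimal sketch of the fix that settles this thread, with hypothetical table and view names: every branch of the UNION ALL must expose each output column with exactly the same type, NULL placeholders included, or the planner will not push WHERE quals down into the branches.

CREATE TABLE table_a (id int8, name varchar(35), note varchar(100));
CREATE TABLE table_b (id int8, name varchar(35));

-- Slow form: the NULL placeholder is typed differently from table_a.note,
-- so the set operation coerces the column and quals on it cannot be pushed down.
CREATE VIEW view_slow AS
SELECT id, name, note        FROM table_a
UNION ALL
SELECT id, name, NULL::text  FROM table_b;

-- Fast form: identical types in every branch, so a condition such as
-- note = '...' can be pushed into each component query.
CREATE VIEW view_fast AS
SELECT id, name, note::varchar(100)          FROM table_a
UNION ALL
SELECT id, name, NULL::varchar(100) AS note  FROM table_b;

With the second form, EXPLAIN on a query like SELECT * FROM view_fast WHERE note = 'abc' should show the condition applied inside each branch scan rather than as a filter above the Append node.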
[ { "msg_contents": "Hello,\n\nThis question may sound dumb, but I would like to know if using\n\"MixedCase sensitive quoted\" names instead of lowercase names for\nobject names has any negative hit to the database performance.\n\nThanks!\n", "msg_date": "Sun, 4 Nov 2007 00:40:42 +0100", "msg_from": "\"Whatever Deep\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"MixedCase sensitive quoted\" names" }, { "msg_contents": "\"Whatever Deep\" <[email protected]> writes:\n> This question may sound dumb, but I would like to know if using\n> \"MixedCase sensitive quoted\" names instead of lowercase names for\n> object names has any negative hit to the database performance.\n\nI can't imagine you could measure any performance difference ...\nthe two cases involve slightly different code paths in scan.l,\nbut after that it doesn't matter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Nov 2007 21:45:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"MixedCase sensitive quoted\" names " } ]
[ { "msg_contents": "\nHello all,\n\nWhat are the ideal settings for values in this postgresql.conf file??? I\nhave tried so many parameter changes but I still can not get the 8.1.4\nversion to perform as well as the 7.x version...what do others have their\npostgrsql.conf file values set to???\n\nAre there any known performance degrades going from 7.x to 8.1 versions???\n\nThanks...Michelle\n-- \nView this message in context: http://www.nabble.com/Postgresql.conf-Settings-tf4747486.html#a13575017\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sun, 4 Nov 2007 08:28:30 -0800 (PST)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql.conf Settings" }, { "msg_contents": "On 11/4/07, smiley2211 <[email protected]> wrote:\n>\n> Hello all,\n>\n> What are the ideal settings for values in this postgresql.conf file??? I\n> have tried so many parameter changes but I still can not get the 8.1.4\n> version to perform as well as the 7.x version...what do others have their\n> postgrsql.conf file values set to???\n\nWithout knowing what query you're trying to run it's hard to say.\n\nCan you post the explain analyze output for the query in both 7.x and 8.1?\n\n> Are there any known performance degrades going from 7.x to 8.1 versions???\n\nThe only one I know of is if you try to do something like:\n\nselect * from table1 t1 left join table2 t2 on (t1.id=t2.id) where t2.id is null\n", "msg_date": "Sun, 4 Nov 2007 12:37:02 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql.conf Settings" }, { "msg_contents": "\nScott,\n\nThanks for responding...I've posted all that information before and tried\nall the suggestions but the query is still taking over 1 hour to complete\n:(...I just wanted to possible hear what others have say 'effective cache',\n'shared_buffers' etc set to...\n\nThanks...Marsha\n\n\nScott Marlowe-2 wrote:\n> \n> On 11/4/07, smiley2211 <[email protected]> wrote:\n>>\n>> Hello all,\n>>\n>> What are the ideal settings for values in this postgresql.conf file??? I\n>> have tried so many parameter changes but I still can not get the 8.1.4\n>> version to perform as well as the 7.x version...what do others have their\n>> postgrsql.conf file values set to???\n> \n> Without knowing what query you're trying to run it's hard to say.\n> \n> Can you post the explain analyze output for the query in both 7.x and 8.1?\n> \n>> Are there any known performance degrades going from 7.x to 8.1\n>> versions???\n> \n> The only one I know of is if you try to do something like:\n> \n> select * from table1 t1 left join table2 t2 on (t1.id=t2.id) where t2.id\n> is null\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Postgresql.conf-Settings-tf4747486.html#a13576252\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sun, 4 Nov 2007 10:30:28 -0800 (PST)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgresql.conf Settings" }, { "msg_contents": "smiley2211 wrote:\n> What are the ideal settings for values in this postgresql.conf file??? 
I\n> have tried so many parameter changes but I still can not get the 8.1.4\n> version to perform as well as the 7.x version...what do others have their\n> postgrsql.conf file values set to???\n> \n> Are there any known performance degrades going from 7.x to 8.1 versions???\n\nBefore spending any more time on tuning, upgrade at the very least to \nthe latest 8.1.X version. Or since you're upgrading anyway, why not \nupgrade to latest 8.2.X version while you're at it. There's been plenty \nof planner changes between those releases.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 04 Nov 2007 19:30:12 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql.conf Settings" } ]
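Since the original question ("what do others set") never got a direct answer here, a few illustrative starting points for a dedicated 8.1/8.2 box with a few GB of RAM are shown below. These are only ballpark figures and not a substitute for the EXPLAIN ANALYZE output Scott asked for, which is what would really pin the problem down:

shared_buffers = 50000           # ~400MB (value is in 8kB pages)
work_mem = 8192                  # 8MB per sort/hash operation, per backend
maintenance_work_mem = 131072    # 128MB for VACUUM and CREATE INDEX
effective_cache_size = 262144    # ~2GB, roughly what the OS keeps in its page cache
checkpoint_segments = 16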
[ { "msg_contents": "For those of you considering a move to the upcoming 8.3 release, now in \nbeta, I've written some documentation on the changes made in checkpoint \nand background writer configuration in the new version:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nSince the first half of that covers the current behavior in 8.1 and 8.2, \nthose sections may be helpful if you'd like to know more about checkpoint \nslowdowns and ways to resolve them even if you have no plans to evaluate \n8.3 yet. I'd certainly encourage anyone who can run the 8.3 beta to \nconsider adding some tests in this area while there's still time to \ncorrect any issues encountered before the official release.\n\nOn the topic of performance improvements in 8.3, I don't think this list \nhas been getting information about the concurrent sequential scans \nimprovements. Check out these documents for more about that:\n\nhttp://j-davis.com/postgresql/83v82_scans.html\nhttp://j-davis.com/postgresql/syncscan/syncscan.pdf\nhttp://j-davis.com/postgresql/syncscan/syncscan.odp\n\nI particularly liked that first one because it gives a nice sample of how \nto generate large amounts of data easily and then run benchmarks against \nit, which is a handy thing to know for many types of performance \ncomparisons.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 4 Nov 2007 19:33:46 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Migrating to 8.3 - checkpoints and background writer" }, { "msg_contents": "On Sun, Nov 04, 2007 at 07:33:46PM -0500, Greg Smith wrote:\n> On the topic of performance improvements in 8.3, I don't think this list \n> has been getting information about the concurrent sequential scans \n> improvements. Check out these documents for more about that:\n>\n> http://j-davis.com/postgresql/83v82_scans.html\n\nThat's a nice writeup. I'm a bit puzzled by this part, though: \"All tests\nare on linux with the anticipatory I/O scheduler. The default I/O scheduler\nfor Linux is CFQ (Completely Fair Queue), which does not work well for\nPostgreSQL in my tests.\"\n\nAll earlier benchmarks I've seen (even specifically for Postgres) have said\nthat cfq > deadline > anticipatory for database work. How large was the\ndifference for you?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 5 Nov 2007 01:39:48 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to 8.3 - checkpoints and background writer" }, { "msg_contents": "On Mon, 5 Nov 2007, Steinar H. Gunderson wrote:\n\n> I'm a bit puzzled by this part, though: \"All tests are on linux with the \n> anticipatory I/O scheduler. The default I/O scheduler for Linux is CFQ \n> (Completely Fair Queue), which does not work well for PostgreSQL in my \n> tests.\"\n\nThe syncronized scan articles were from Jeff Davis and I can't answer for \nhim. I will say I don't actually agree with that part of the document \nmyself and almost put a disclaimer to that effect in my message; here it \nis now that you bring it up. I suspect the strong preference for avoiding \nCFQ in his tests comes from the limitations of how simple (S)ATA drives \nare handled in Linux, and that tests with a more robust disk subsystem may \nvery well give different results. 
Certainly the adaptive scheduler \nappears stronger compared to CFQ as disk seek times go up, and that's the \narea where regular hard drives are weakest relative to what's normally in \na server-class system.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 4 Nov 2007 22:06:26 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Migrating to 8.3 - checkpoints and background writer" }, { "msg_contents": "\nOn Nov 4, 2007, at 6:33 PM, Greg Smith wrote:\n\n> For those of you considering a move to the upcoming 8.3 release, \n> now in beta, I've written some documentation on the changes made in \n> checkpoint and background writer configuration in the new version:\n>\n> http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n>\n> Since the first half of that covers the current behavior in 8.1 and \n> 8.2, those sections may be helpful if you'd like to know more about \n> checkpoint slowdowns and ways to resolve them even if you have no \n> plans to evaluate 8.3 yet. I'd certainly encourage anyone who can \n> run the 8.3 beta to consider adding some tests in this area while \n> there's still time to correct any issues encountered before the \n> official release.\n\nGreg, thanks a lot of this. I'd say this should definitely be linked \nto from the main site's techdocs section.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Mon, 5 Nov 2007 10:54:05 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to 8.3 - checkpoints and background writer" } ]
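For anyone wanting to run similar scan tests, one simple way (not necessarily the method the write-ups above use) is to synthesise a throwaway table comfortably larger than RAM and time scans against it:

-- Adjust the row count so the resulting table is well beyond physical RAM.
CREATE TABLE scan_test AS
    SELECT i AS id, md5(i::text) AS padding
    FROM generate_series(1, 50000000) AS s(i);

\timing
SELECT count(*) FROM scan_test;   -- on 8.3, run two of these concurrently to watch synchronized scans share I/O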
[ { "msg_contents": "Hi.\n\nWe are using a HP DL 380 G5 with 4 sas-disks at 10K rpm. The\ncontroller is a built in ciss-controller with 256 MB battery-backed\ncache. It is partitioned as raid 1+0.\n\nOur queries are mainly selects.\n\nI will get four 72 GB sas-disks at 15K rpm. Reading the archives\nsuggest raid 1+0 for optimal read/write performance, but with a solid\nraid-controller raid 5 will also perform very well when reading.\n\nIs the ciss-controller found in HP-servers a \"better\" raid-controller\ncompared to the areca-raid-controller mentioned on this list?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 5 Nov 2007 14:19:18 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": true, "msg_subject": "hp ciss on freebsd" }, { "msg_contents": "\nOn Nov 5, 2007, at 8:19 AM, Claus Guttesen wrote:\n\n>\n> Is the ciss-controller found in HP-servers a \"better\" raid-controller\n> compared to the areca-raid-controller mentioned on this list?\n>\n\nI've had great success with the P600 controller (upgraded to 512MB \nbbwc) plugged into an MSA70 with a pile of SAS disks. I'm using R6 \n(ADG) and getting some crazy good numbers with it.\n\nMy newest box has a built-in p400, that did ok, but not as good as \nthe p600. HP also has the P800 available as well.\n\nYour best bet is to load up some data, and do some testing. Check \nout the pgiosim project on pgfoundry, it sort of simulates a pg index \nscan, which is probably what you'll want to focus on more than seq \nread speed.\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.dellsmartexitin.com/\nhttp://www.stuarthamm.net/\n\n\n\n", "msg_date": "Mon, 5 Nov 2007 10:47:53 -0500", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hp ciss on freebsd" }, { "msg_contents": "On Mon, 5 Nov 2007, Claus Guttesen wrote:\n\n> Is the ciss-controller found in HP-servers a \"better\" raid-controller\n> compared to the areca-raid-controller mentioned on this list?\n\nIf you search the archives for \"cciss\" you'll see a few complaints about \nthis controller not working all that well under Linux. The smart thing to \ndo regardless of what other people say is to test yourself and see if \nyou're meeting expectations. I've got a sample of how a single disk \nperforms with an Areca controller you can use as a baseline at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 5 Nov 2007 11:32:28 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hp ciss on freebsd" }, { "msg_contents": "\nOn Nov 5, 2007, at 8:19 AM, Claus Guttesen wrote:\n\n> I will get four 72 GB sas-disks at 15K rpm. Reading the archives\n> suggest raid 1+0 for optimal read/write performance, but with a solid\n> raid-controller raid 5 will also perform very well when reading.\n\nIf you only have 4 drives, I'd recommend not to go with RAID5. You \nwant to max out spindles. The 256k RAM may not be enough of a cache \ntoo.\n\n", "msg_date": "Thu, 8 Nov 2007 11:37:50 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hp ciss on freebsd" } ]
[ { "msg_contents": "Hi,\n\nI am running postgres 8.2 on RH linux.\nMy daemon downloads files and then inserts the data into preliminary\ntables, and finally calls a stored procedure which reads data from a\nview and inserts into the final table.\n\nI have a bit of a peculiar problem. (I understand this may not be the\nright venue).\n\nMy deamon calls a stored procedure SP_LoadFiles().\nThe daemon stops syncing at about 6:30 pm and restarts syncing at about 7:30 am.\n\nEvery day, I have to manually re-start the daemon for the function\nsp_LoadFiles() to actually load the files.\nI can see that the procedure is being called, but it does not load the data.\nIf I run the procedure manually via psql : select * from\nsp_loadfiles(); it works and the data is loaded.\nmy stored proc sp_loadfiles is accessing a View which is accessing a\ncouple of tables. There is no dynamic sql being generated, just\ninserts from the view.\n\nIs this a connection issue?\nDo I have to end the daemons db connection. Is this set in the postgresql.conf?\n\nThank you.\nRadhika\n\n-- \nIt is all a matter of perspective. You choose your view by choosing\nwhere to stand. --Larry Wall\n", "msg_date": "Mon, 5 Nov 2007 10:32:46 -0500", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Database connections and stored procs (functions)" }, { "msg_contents": "On 11/5/07, Radhika S <[email protected]> wrote:\n> Hi,\n>\n> I am running postgres 8.2 on RH linux.\n> My daemon downloads files and then inserts the data into preliminary\n> tables, and finally calls a stored procedure which reads data from a\n> view and inserts into the final table.\n>\n> I have a bit of a peculiar problem. (I understand this may not be the\n> right venue).\n>\n> My deamon calls a stored procedure SP_LoadFiles().\n> The daemon stops syncing at about 6:30 pm and restarts syncing at about 7:30 am.\n>\n> Every day, I have to manually re-start the daemon for the function\n> sp_LoadFiles() to actually load the files.\n> I can see that the procedure is being called, but it does not load the data.\n> If I run the procedure manually via psql : select * from\n> sp_loadfiles(); it works and the data is loaded.\n> my stored proc sp_loadfiles is accessing a View which is accessing a\n> couple of tables. There is no dynamic sql being generated, just\n> inserts from the view.\n>\n> Is this a connection issue?\n> Do I have to end the daemons db connection. Is this set in the postgresql.conf?\n\nthe answer to your question probably lies within the log. make sure\nyour daemon is logging the connection attempt and any errors. Check\nthe database log for any problems. My gut is telling me the problem\nmight be on your end (can't be sure with this info).\n\nmerlin\n", "msg_date": "Mon, 5 Nov 2007 21:47:02 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database connections and stored procs (functions)" } ]
[ { "msg_contents": "PostgreSQL:8.2.4\n\n \n\nI am collecting statistics info now on my database. I have used the\nfollowing two queries:\n\n \n\nselect * from pg_stat_all_indexes;\n\nselect * from pg_statio_all_indexes;\n\n \n\nHow can I use the information from these two queries to better optimize\nmy indexes? Or maybe even get rid of some unnecessary indexes.\n\n \n\nExample output:\n\n \n\n relid | indexrelid | schemaname | relname |\nindexrelname | idx_blks_read | idx_blks_hit \n\n---------+------------+---------------+-----------------------+---------\n--------------------------+---------------+--------------\n\n 16801 | 57855 | a | screen |\nscreen_index1 | 1088 | 213618\n\n 16801 | 57857 | a | screen |\nscreen_index3 | 905 | 201219\n\n 16803 | 16805 | pg_toast | pg_toast_16801 |\npg_toast_16801_index | 3879 | 1387471\n\n 16978 | 16980 | pg_toast | pg_toast_16976 |\npg_toast_16976_index | 0 | 0\n\n 942806 | 942822 | b | question_result_entry |\nquestion_result_entry_index1 | 18 | 0\n\n 942806 | 942824 | b | question_result_entry |\nquestion_result_entry_index2 | 18 | 0\n\n 942806 | 942828 | b | question_result_entry |\nquestion_result_entry_index3 | 18 | 0\n\n \n\n relid | indexrelid | schemaname | relname |\nindexrelname | idx_scan | idx_tup_read | idx_tup_fetch \n\n---------+------------+---------------+-----------------------+---------\n--------------------------+-----------+--------------+---------------\n\n 16801 | 57855 | a | screen\n| screen_index1 | 48693 | 1961745 |\n1899027\n\n 16801 | 57857 | a | screen\n| screen_index3 | 13192 | 132214 |\n87665\n\n 16803 | 16805 | pg_toast | pg_toast_16801 |\npg_toast_16801_index | 674183 | 887962 |\n887962\n\n 16978 | 16980 | pg_toast | pg_toast_16976 |\npg_toast_16976_index | 0 | 0 |\n0\n\n 942806 | 942822 | b | question_result_entry |\nquestion_result_entry_index1 | 0 | 0 |\n0 \n\n 942806 | 942824 | b | question_result_entry |\nquestion_result_entry_index2 | 0 | 0 |\n0\n\n 942806 | 942828 | b | question_result_entry |\nquestion_result_entry_index3 | 0 | 0 |\n0\n\n \n\n \n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL:8.2.4\n \nI am collecting statistics info now on my database.  I\nhave used the following two queries:\n \nselect * from pg_stat_all_indexes;\nselect * from pg_statio_all_indexes;\n \nHow can I use the information from these two queries to\nbetter optimize my indexes?  
Or maybe even get rid of some unnecessary\nindexes.\n \nExample output:\n \n  relid  | indexrelid | \nschemaname   |       \nrelname       \n|          \nindexrelname           \n| idx_blks_read | idx_blks_hit \n---------+------------+---------------+-----------------------+-----------------------------------+---------------+--------------\n   16801 |      57855 | a  \n              |\nscreen               \n|\nscreen_index1                    \n|          1088\n|       213618\n   16801 |      57857 | a  \n              |\nscreen               \n|\nscreen_index3                    \n|           905\n|       201219\n   16803 |      16805 |\npg_toast      |\npg_toast_16801        |\npg_toast_16801_index             \n|          3879\n|      1387471\n   16978 |      16980 |\npg_toast      |\npg_toast_16976        |\npg_toast_16976_index             \n|             0\n|            0\n  942806 |     942822 | b                |\nquestion_result_entry |\nquestion_result_entry_index1     \n|            18\n|            0\n  942806 |     942824 | b                |\nquestion_result_entry |\nquestion_result_entry_index2     \n|            18\n|            0\n  942806 |     942828 | b                |\nquestion_result_entry |\nquestion_result_entry_index3     \n|            18\n|            0\n \n  relid  | indexrelid | \nschemaname   |       \nrelname       \n|          \nindexrelname           \n| idx_scan  | idx_tup_read | idx_tup_fetch \n---------+------------+---------------+-----------------------+-----------------------------------+-----------+--------------+---------------\n   16801 |      57855 | a  \n                 |\nscreen               \n      |\nscreen_index1                    \n      |     48693\n|      1961745 |      \n1899027\n   16801 |      57857 | a  \n                 |\nscreen               \n      |\nscreen_index3                    \n      |     13192\n|       132214\n|         87665\n   16803 |      16805 |\npg_toast         |\npg_toast_16801        |\npg_toast_16801_index              |   \n674183 |       887962\n|        887962\n   16978 |      16980 |\npg_toast         |\npg_toast_16976        |\npg_toast_16976_index             \n|         0 |           \n0 |             0\n 942806 |     942822 | b                    |\nquestion_result_entry | question_result_entry_index1    |        \n0 |            0\n|             0       \n 942806 |     942824 | b                    |\nquestion_result_entry | question_result_entry_index2    |        \n0 |            0\n|             0\n 942806 |     942828 | b                 \n  | question_result_entry |\nquestion_result_entry_index3    |        \n0 |            0\n|             0\n \n \n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Mon, 5 Nov 2007 10:42:46 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "index stat" }, { "msg_contents": ">>> On Mon, Nov 5, 2007 at 10:42 AM, in message\n<[email protected]>, \"Campbell,\nLance\" <[email protected]> wrote: \n\n> How can I [. . .] 
get rid of some unnecessary indexes\n \nHere's what I periodically run to look for unused indexes:\n \nselect relname, indexrelname\n from pg_stat_user_indexes\n where indexrelname not like '%_pkey'\n and idx_scan = 0\n order by relname, indexrelname\n;\n \nWe omit the primary keys from the list (based on our naming\nconvention) because they are needed to ensure integrity.\n \n-Kevin\n \n\n\n", "msg_date": "Wed, 07 Nov 2007 10:32:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index stat" } ]
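A variation on the query above that can help prioritise the clean-up: pairing the usage counters with each index's on-disk size shows which unused indexes are costing the most (the functions used here are available in 8.1 and 8.2):

SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, pg_relation_size(indexrelid) DESC;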
[ { "msg_contents": "PostgreSQL: 8.2.4\n\n \n\nDoes anyone have any companies they would recommend using for\nperformance tuning training of PostgreSQL for Linux? Or general DBA\ntraining?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2.4\n \nDoes anyone have any companies they would recommend using\nfor performance tuning training of PostgreSQL for Linux?  Or general DBA\ntraining?\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Mon, 5 Nov 2007 10:44:04 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Training Recommendations" }, { "msg_contents": "EnterpriseDB (www.enterprisedb.com), ofcourse\n\nCampbell, Lance wrote:\n>\n> PostgreSQL: 8.2.4\n>\n> \n>\n> Does anyone have any companies they would recommend using for \n> performance tuning training of PostgreSQL for Linux? Or general DBA \n> training?\n>\n> \n>\n> Thanks,\n>\n> \n>\n> Lance Campbell\n>\n> Project Manager/Software Architect\n>\n> Web Services at Public Affairs\n>\n> University of Illinois\n>\n> 217.333.0382\n>\n> http://webservices.uiuc.edu\n>\n> \n>\n", "msg_date": "Wed, 28 Nov 2007 21:20:46 +0500", "msg_from": "\"Usama Munir Dar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On Wednesday 28 November 2007 11:20, Usama Munir Dar wrote:\n> EnterpriseDB (www.enterprisedb.com), ofcourse\n>\n\nlame :-P\n\n> Campbell, Lance wrote:\n> > PostgreSQL: 8.2.4\n> >\n> >\n> >\n> > Does anyone have any companies they would recommend using for\n> > performance tuning training of PostgreSQL for Linux? Or general DBA\n> > training?\n> >\n\nNever take advice from a guy who top posts... A friend of mine just went \nthrough an OTG course and had good things to say, and I've heard other speak \nwell of it too, so I'd probably recommend them, but there are several \noptions, check out the training section on the website:\nhttp://www.postgresql.org/about/eventarchive\n\nNote also some of the more popular pg support companies also offer personal \ntraining, even if it isn't advertised. HTH.\n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Fri, 30 Nov 2007 04:15:09 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On Nov 30, 2007 4:15 AM, Robert Treat <[email protected]> wrote:\n> Never take advice from a guy who top posts... A friend of mine just went\n> through an OTG course and had good things to say, and I've heard other speak\n> well of it too, so I'd probably recommend them, but there are several\n> options, check out the training section on the website:\n> http://www.postgresql.org/about/eventarchive\n>\n> Note also some of the more popular pg support companies also offer personal\n> training, even if it isn't advertised. 
HTH.\n\nI've been dying to know if anyone has ever done PostgreSQL training at\n'the big nerd ranch'.\n\nmerlin\n", "msg_date": "Sun, 2 Dec 2007 08:30:52 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On 02.12.2007, at 06:30, Merlin Moncure wrote:\n\n> I've been dying to know if anyone has ever done PostgreSQL training at\n> 'the big nerd ranch'.\n\nThere are a couple of reviews floating around the web:\n\nhttp://www.linux.com/articles/48870\nhttp://www.linuxjournal.com/article/7847\n\nI was in the course too (out of interest) but as I'm with Big Nerd \nRanch, I don't want to say anything here about the course.\n\ncug\n\n-- \nhttp://www.event-s.net\n\n", "msg_date": "Sun, 2 Dec 2007 10:32:19 -0700", "msg_from": "Guido Neitzer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "\n\nRobert Treat wrote:\n> On Wednesday 28 November 2007 11:20, Usama Munir Dar wrote:\n> \n>> EnterpriseDB (www.enterprisedb.com), ofcourse\n>>\n>> \n>\n> lame :-P\n> \n\nHave you or anyone you know tried the training offerings? or you think \nits lame because i top posted , which of course would be a very poor \ncriteria , not to mention completely unrelated, so i definitely think \nits not the reason. i would love to hear whats wrong with it so we can \nwork on its improvement\n\n\n> \n>> Campbell, Lance wrote:\n>> \n>>> PostgreSQL: 8.2.4\n>>>\n>>>\n>>>\n>>> Does anyone have any companies they would recommend using for\n>>> performance tuning training of PostgreSQL for Linux? Or general DBA\n>>> training?\n>>>\n>>> \n>\n> Never take advice from a guy who top posts... A friend of mine just went \n> through an OTG course and had good things to say, and I've heard other speak \n> well of it too, so I'd probably recommend them, but there are several \n> options, check out the training section on the website:\n> http://www.postgresql.org/about/eventarchive\n>\n> Note also some of the more popular pg support companies also offer personal \n> training, even if it isn't advertised. HTH.\n>\n> \n", "msg_date": "Mon, 03 Dec 2007 01:26:58 +0500", "msg_from": "Usama Munir Dar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "At hi5 we had the pleasure of having Enterprise DB provide a two-day\ntraining seminar for our DBA and operations staff. Everyone was very\nsatisfied with the quality and the price.\n\nOn Mon, Dec 03, 2007 at 01:26:58AM +0500, Usama Munir Dar wrote:\n>\n>\n> Robert Treat wrote:\n>> On Wednesday 28 November 2007 11:20, Usama Munir Dar wrote:\n>> \n>>> EnterpriseDB (www.enterprisedb.com), ofcourse\n>>>\n>>> \n>>\n>> lame :-P\n>> \n>\n> Have you or anyone you know tried the training offerings? or you think its \n> lame because i top posted , which of course would be a very poor criteria , \n> not to mention completely unrelated, so i definitely think its not the \n> reason. i would love to hear whats wrong with it so we can work on its \n> improvement\n>\n>\n>> \n>>> Campbell, Lance wrote:\n>>> \n>>>> PostgreSQL: 8.2.4\n>>>>\n>>>>\n>>>>\n>>>> Does anyone have any companies they would recommend using for\n>>>> performance tuning training of PostgreSQL for Linux? Or general DBA\n>>>> training?\n>>>>\n>>>> \n>>\n>> Never take advice from a guy who top posts... 
A friend of mine just went \n>> through an OTG course and had good things to say, and I've heard other \n>> speak well of it too, so I'd probably recommend them, but there are \n>> several options, check out the training section on the website:\n>> http://www.postgresql.org/about/eventarchive\n>>\n>> Note also some of the more popular pg support companies also offer \n>> personal training, even if it isn't advertised. HTH.\n>>\n>> \n\n-- \nPaul Lindner ||||| | | | | | | | | |\[email protected]", "msg_date": "Mon, 3 Dec 2007 08:12:22 -0800", "msg_from": "Paul Lindner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On Sunday 02 December 2007 15:26, Usama Munir Dar wrote:\n> Robert Treat wrote:\n> > On Wednesday 28 November 2007 11:20, Usama Munir Dar wrote:\n> >> EnterpriseDB (www.enterprisedb.com), ofcourse\n> >\n> > lame :-P\n>\n> Have you or anyone you know tried the training offerings? or you think\n> its lame because i top posted , which of course would be a very poor\n> criteria , not to mention completely unrelated, so i definitely think\n> its not the reason. i would love to hear whats wrong with it so we can\n> work on its improvement\n>\n\nWhat I thought was lame was that you, being someone who works for EntepriseDB, \nsuggested EnterpriseDB as a solution, with no mention of the other training \noptions available. Now one guy doing this isn't such a big deal (though it \nis still poor practice), but if every training company we're to do this I \nthink you can see how it doesn't do much for helping the public discourse. Of \ncourse I probably would have let the whole thing slide, but you top posted, \nso... \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Mon, 03 Dec 2007 13:52:49 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On Fri, 30 Nov 2007 04:15:09 -0500\nRobert Treat <[email protected]> wrote:\n\n> Never take advice from a guy who top posts... A friend of mine just\n> went through an OTG course and had good things to say, and I've heard\n> other speak well of it too, so I'd probably recommend them, but there\n\nAs a CMD representative, I will state that I have heard good things\nabout OTG's training as well. We have sent several of our smaller\ncustomers (those who don't need in shop training like CMD provides) to\nthem and they have all reported back very happy.\n\n\n> are several options, check out the training section on the website:\n> http://www.postgresql.org/about/eventarchive\n> \n> Note also some of the more popular pg support companies also offer\n> personal training, even if it isn't advertised. HTH.\n\nRight, I believe some even offer (besides CMD) custom training.\n2ndQuandrant (sp?) for example recently announced a PITR training.\n\nSincerely,\n\nJoshua D. Drake\n\n>", "msg_date": "Mon, 3 Dec 2007 15:50:27 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" }, { "msg_contents": "On Mon, 2007-12-03 at 15:50 -0800, Joshua D. Drake wrote:\n> > \n> > Note also some of the more popular pg support companies also offer\n> > personal training, even if it isn't advertised. HTH.\n> \n> Right, I believe some even offer (besides CMD) custom training.\n> 2ndQuandrant (sp?) 
for example recently announced a PITR training.\n\nJust posting to get the spelling right; thanks Josh for mentioning.\n \n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Tue, 04 Dec 2007 08:15:08 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Training Recommendations" } ]
[ { "msg_contents": "I have a question.\n\nConsider this scenario.\n\nTable customer (\ncustomer_id int8,\ncustomer_l_name varchar(35),\ncustomer_f_name varchar(30),\ncustomer_addr_1 varchar(100),\\\ncustomer_addr_2 varchar(100),\ncustomer_city varchar(50),\ncustomer_state char(2),\ncustomer_zip varchar(9)\n);\n\nOn this table, a customer can search by customer_id, customer_l_name,\nand customer_f_name.\n\nIs it better to create 3 indexes, or one index on the three columns?\n\nI did some initial testing with index customer_test_idx(customer_id,\ncustomer_l_name, customer_f_name) and postgres would use the index for\nselect * from customer where customer_f_name = 'zxy' - so the single\nindex will cover the three.\n\nMy question is, is this better? Does it end up using less memory\nand/or disk or more? I am trying to find ways to keep more of my\ncustomers databases in memory, and I am thinking that loading one\nindex is probably a little better than loading three.\n\nThanks for any advice,\n\nChris\n\nPG 8.1\nRH 4.0\n", "msg_date": "Mon, 5 Nov 2007 13:43:07 -0500", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Which index methodology is better?-" }, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> Is it better to create 3 indexes, or one index on the three columns?\n\nThis is covered in considerable detail in the fine manual:\n\nhttp://www.postgresql.org/docs/8.2/static/indexes.html\n\nSee particularly sections 11.3 and 11.4\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Nov 2007 14:56:29 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which index methodology is better?- " }, { "msg_contents": "Chris Hoover wrote:\n> On this table, a customer can search by customer_id, customer_l_name,\n> and customer_f_name.\n> \n> Is it better to create 3 indexes, or one index on the three columns?\n> \n> I did some initial testing with index customer_test_idx(customer_id,\n> customer_l_name, customer_f_name) and postgres would use the index for\n> select * from customer where customer_f_name = 'zxy' - so the single\n> index will cover the three.\n\nPostgres can use the index in that case, but it's going to have to scan \nthe whole index, which is a lot slower than looking up just the needed \nrows. If you do an EXPLAIN ANALYZE on that query, and compare it against \n\"select * from customer where customer_id = 123\", you'll see that it's a \nlot more expensive.\n\nI'd recommend having separate indexes. Having just one index probably \ndoes take less space, but the fact that you don't have to always scan \nall of it probably outweighs that.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 05 Nov 2007 20:07:59 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which index methodology is better?-" }, { "msg_contents": "If I do:\n\n begin;\n update some_table set foo = newvalue where a_bunch_of_rows_are_changed;\n analyze some_table;\n rollback;\n\ndoes it roll back the statistics? (I think the answer is yes, but I need to be sure.)\n\nThanks,\nCraig\n", "msg_date": "Mon, 05 Nov 2007 21:12:37 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Is ANALYZE transactional?" 
}, { "msg_contents": "Craig James <[email protected]> writes:\n> If I do:\n> begin;\n> update some_table set foo = newvalue where a_bunch_of_rows_are_changed;\n> analyze some_table;\n> rollback;\n\n> does it roll back the statistics? (I think the answer is yes, but I need to be sure.)\n\nYes --- ANALYZE doesn't do anything magic, just a plain UPDATE of those\nrows. (You could have easily tested this for yourself...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Nov 2007 00:30:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is ANALYZE transactional? " } ]
[ { "msg_contents": "Hello.\n\nWe are planning to move from MS SQL Server to PostgreSQL for our \nproduction system. Bot read and write performance are equally \nimportant. Writing is the bottleneck of our current MS SQL Server \nsystem.\n\nAll of our existing servers are from Dell, but I want to look at some \nother options as well. We are currently looking at rack boxes with 8 \ninternal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 in \nraid 10 for the base.\n\nHere are our current alternatives:\n\n1) Dell 2900 (5U)\n8 * 146 GB SAS 15Krpm 3,5\"\n8GB ram\nPerc 5/i. battery backup. 256MB ram.\n2 * 4 Xeon 2,66GHz\n\n2) Dell 2950 (2U)\n8 * 146 GB SAS 10Krpm 2,5\" (not really selectable, but I think the \nwebshop is wrong..)\n8GB ram\nPerc 5/i. battery backup. 256MB ram.\n2 * 4 Xeon 2,66GHz\n\n3) HP ProLiant DL380 G5 (2U)\n8 * 146 GB SAS 10Krpm 2,5\"\n8GB ram\nP400 raid controller. battery backup. 512MB ram.\n2 * 2 Xeon 3GHz\n\nAll of those alternatives cost ca the same. How much (in numbers) \nbetter are 15K 3,5\" than 10K 2,5\"? What about the raid controllers? \nAny other alternatives in that price-range?\n\nRegards,\n - Tore.\n", "msg_date": "Tue, 6 Nov 2007 11:12:23 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": true, "msg_subject": "dell versus hp" }, { "msg_contents": "> All of our existing servers are from Dell, but I want to look at some\n> other options as well. We are currently looking at rack boxes with 8\n> internal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 in\n> raid 10 for the base.\n>\n> Here are our current alternatives:\n>\n> 1) Dell 2900 (5U)\n> 8 * 146 GB SAS 15Krpm 3,5\"\n> 8GB ram\n> Perc 5/i. battery backup. 256MB ram.\n> 2 * 4 Xeon 2,66GHz\n>\n> 2) Dell 2950 (2U)\n> 8 * 146 GB SAS 10Krpm 2,5\" (not really selectable, but I think the\n> webshop is wrong..)\n> 8GB ram\n> Perc 5/i. battery backup. 256MB ram.\n> 2 * 4 Xeon 2,66GHz\n>\n> 3) HP ProLiant DL380 G5 (2U)\n> 8 * 146 GB SAS 10Krpm 2,5\"\n> 8GB ram\n> P400 raid controller. battery backup. 512MB ram.\n> 2 * 2 Xeon 3GHz\n>\n> All of those alternatives cost ca the same. How much (in numbers)\n> better are 15K 3,5\" than 10K 2,5\"? What about the raid controllers?\n> Any other alternatives in that price-range?\n\nWhen writing is important you want to use 15K rpm disks. I personally\nuse the DL380 and is very satisfied with the hardware and the buildin\nciss-controller (with 256 MB cache using 10K rpm disks).\n\nHow much space do you need? 72 GB is the largest 15K 2.5\" sas-disk from HP.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Tue, 6 Nov 2007 12:36:33 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "Hi List,\n\nLe mardi 06 novembre 2007, Tore Halset a écrit :\n> 1) Dell 2900 (5U)\n> 8 * 146 GB SAS 15Krpm 3,5\"\n> 8GB ram\n> Perc 5/i. battery backup. 256MB ram.\n> 2 * 4 Xeon 2,66GHz\n\nIn fact you can add 2 hot-plug disks on this setup, connected to the \nfrontpane. We've bought this very same model with 10 15 rpm disks some weeks \nago, and it reached production last week.\n\nSo we have 2 OS raid1 disk (with /var/backups and /var/log --- pg_log), 2 \nraid1 disk for WAL and 6 disks in a RAID10, the 3 raids managed by the \nincluded Perc raid controller. 
So far so good!\n\nSome knowing-better-than-me people on #postgresql had the remark that \ndepending on the write transaction volumes (40 to 60 percent of my tps, but \nno so much for this hardware), I could somewhat benefit in setting the WAL on \nthe OS raid1, and having 8 raid10 disks for data... which I'll consider for \nanother project.\n\nHope this helps,\n-- \ndim", "msg_date": "Tue, 6 Nov 2007 12:53:44 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "Tore,\n\n* Tore Halset ([email protected]) wrote:\n> All of our existing servers are from Dell, but I want to look at some other \n> options as well. We are currently looking at rack boxes with 8 internal SAS \n> discs. Two mirrored for OS, Two mirrored for WAL and 4 in raid 10 for the \n> base.\n\nI'm a big HP fan, personally. Rather than talking about the hardware\nfor a minute though, I'd suggest you check out what's happening for 8.3.\nHere's a pretty good writeup by Greg Smith on it:\n\nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n\nHopefully it'll help w/ whatever hardware you end up going with.\n\n\tEnjoy,\n\n\t\tStephen", "msg_date": "Tue, 6 Nov 2007 08:48:31 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 6, 2007, at 12:53 , Dimitri Fontaine wrote:\n\n> Le mardi 06 novembre 2007, Tore Halset a �crit :\n>> 1) Dell 2900 (5U)\n>> 8 * 146 GB SAS 15Krpm 3,5\"\n>> 8GB ram\n>> Perc 5/i. battery backup. 256MB ram.\n>> 2 * 4 Xeon 2,66GHz\n>\n> In fact you can add 2 hot-plug disks on this setup, connected to the\n> frontpane. We've bought this very same model with 10 15 rpm disks \n> some weeks\n> ago, and it reached production last week.\n>\n> So we have 2 OS raid1 disk (with /var/backups and /var/log --- \n> pg_log), 2\n> raid1 disk for WAL and 6 disks in a RAID10, the 3 raids managed by the\n> included Perc raid controller. So far so good!\n\nInteresting. Do you have any benchmarking numbers? Did you test with \nsoftware raid 10 as well?\n\nRegards,\n - Tore.\n\n", "msg_date": "Tue, 6 Nov 2007 15:14:19 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 6, 2007, at 12:36 , Claus Guttesen wrote:\n\n>> All of our existing servers are from Dell, but I want to look at some\n>> other options as well. We are currently looking at rack boxes with 8\n>> internal SAS discs. Two mirrored for OS, Two mirrored for WAL and 4 \n>> in\n>> raid 10 for the base.\n>>\n>> Here are our current alternatives:\n>>\n>> 1) Dell 2900 (5U)\n>> 8 * 146 GB SAS 15Krpm 3,5\"\n>> 8GB ram\n>> Perc 5/i. battery backup. 256MB ram.\n>> 2 * 4 Xeon 2,66GHz\n>>\n>> 2) Dell 2950 (2U)\n>> 8 * 146 GB SAS 10Krpm 2,5\" (not really selectable, but I think the\n>> webshop is wrong..)\n>> 8GB ram\n>> Perc 5/i. battery backup. 256MB ram.\n>> 2 * 4 Xeon 2,66GHz\n>>\n>> 3) HP ProLiant DL380 G5 (2U)\n>> 8 * 146 GB SAS 10Krpm 2,5\"\n>> 8GB ram\n>> P400 raid controller. battery backup. 512MB ram.\n>> 2 * 2 Xeon 3GHz\n>>\n>> All of those alternatives cost ca the same. How much (in numbers)\n>> better are 15K 3,5\" than 10K 2,5\"? What about the raid controllers?\n>> Any other alternatives in that price-range?\n>\n> When writing is important you want to use 15K rpm disks. 
I personally\n> use the DL380 and is very satisfied with the hardware and the buildin\n> ciss-controller (with 256 MB cache using 10K rpm disks).\n>\n> How much space do you need? 72 GB is the largest 15K 2.5\" sas-disk \n> from HP.\n\nOkay, thanks. We need 100GB for the database, so 4 72GB in raid 10 \nwill be fine.\n\nRegards,\n - Tore.\n", "msg_date": "Tue, 6 Nov 2007 15:15:39 +0100", "msg_from": "Tore Halset <[email protected]>", "msg_from_op": true, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "Le mardi 06 novembre 2007, Tore Halset a écrit :\n> Interesting. Do you have any benchmarking numbers? Did you test with\n> software raid 10 as well?\n\nJust some basic pg_restore figures, which only make sense (for me anyway) when \ncompared to restoring same data on other machines, and to show the effect of \nhaving a dedicated array for the WALs (fsync off not having that an influence \non the pg_restore timing)...\n\nThe previous production server had a RAM default and made us switch without \ntaking the time for all the tests we could have run on the new \"beast\".\n\nRegards,\n-- \ndim", "msg_date": "Tue, 6 Nov 2007 16:18:51 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Tue, 6 Nov 2007, Dimitri Fontaine wrote:\n\n> Some knowing-better-than-me people on #postgresql had the remark that\n> depending on the write transaction volumes (40 to 60 percent of my tps, but\n> no so much for this hardware), I could somewhat benefit in setting the WAL on\n> the OS raid1, and having 8 raid10 disks for data\n\nThat really depends on the write volume to the OS drive. If there's lots \nof writes there for things like logs and temporary files, the disruption \nto the WAL writes could be a problem. Part of the benefit of having a \nseparate WAL disk is that the drive never has to seek somewhere to write \nanything else.\n\nNow, if instead you considered putting the WAL onto the database disks and \nadding more disks to the array, that might work well. You'd also be \nlosing something because the WAL writes may have to wait behind seeks \nelsewhere. But once you have enough disks in an array to spread all the \nload over that itself may improve write throughput enough to still be a \nnet improvement.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 6 Nov 2007 13:10:10 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 6, 2007, at 5:12 AM, Tore Halset wrote:\n\n> Here are our current alternatives:\n\nTwo things I recommend. If the drives are made by western digital, \nrun away.\n\nIf the PERC5/i is an Adaptec card, run away.\n\nMax out your cache RAM on the RAID card. 256 is the minimum when you \nhave such big data sets that need the big disks you're looking at.\n\n\n", "msg_date": "Thu, 8 Nov 2007 11:41:18 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 6, 2007, at 1:10 PM, Greg Smith wrote:\n\n> elsewhere. But once you have enough disks in an array to spread all \n> the load over that itself may improve write throughput enough to \n> still be a net improvement.\n\nThis has been my expeience with 14+ disks in an array (both RAID10 and \nRAID5). 
The difference is barely noticeable.\n\n", "msg_date": "Thu, 8 Nov 2007 11:43:04 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 8, 2007 10:43 AM, Vivek Khera <[email protected]> wrote:\n>\n> On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:\n>\n> > elsewhere. But once you have enough disks in an array to spread all\n> > the load over that itself may improve write throughput enough to\n> > still be a net improvement.\n>\n> This has been my expeience with 14+ disks in an array (both RAID10 and\n> RAID5). The difference is barely noticeable.\n\nMine too. I would suggest though, that by the time you get to 14\ndisks, you switch from RAID-5 to RAID-6 so you have double redundancy.\n Performance of a degraded array is better in RAID6 than RAID5, and\nyou can run your rebuilds much slower since you're still redundant.\n\n> If the PERC5/i is an Adaptec card, run away.\n\nI've heard the newest adaptecs, even the perc implementations aren't bad.\n\nOf course, that doesn't mean I'm gonna use one, but who knows? They\nmight have made a decent card after all.\n", "msg_date": "Thu, 8 Nov 2007 12:22:48 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "Le Thursday 08 November 2007 19:22:48 Scott Marlowe, vous avez écrit :\n> On Nov 8, 2007 10:43 AM, Vivek Khera <[email protected]> wrote:\n> > On Nov 6, 2007, at 1:10 PM, Greg Smith wrote:\n> > > elsewhere. But once you have enough disks in an array to spread all\n> > > the load over that itself may improve write throughput enough to\n> > > still be a net improvement.\n> >\n> > This has been my expeience with 14+ disks in an array (both RAID10 and\n> > RAID5). The difference is barely noticeable.\n>\n> Mine too.\n\nMay we conclude from this that mixing WAL and data onto the same array is a \ngood idea starting at 14 spindles?\n\nThe Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS \n(raid1) and 8 for mixing WAL and data... not enough to benefit from the move, \nor still to test?\n\n> I would suggest though, that by the time you get to 14 \n> disks, you switch from RAID-5 to RAID-6 so you have double redundancy.\n> Performance of a degraded array is better in RAID6 than RAID5, and\n> you can run your rebuilds much slower since you're still redundant.\n\nIs raid6 better than raid10 in term of overall performances, or a better cut \nwhen you need capacity more than throughput?\n\nThanks for sharing the knowlegde, regards,\n-- \ndim\n", "msg_date": "Thu, 8 Nov 2007 21:14:36 +0100", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": ">>> On Thu, Nov 8, 2007 at 2:14 PM, in message\n<[email protected]>, Dimitri Fontaine\n<[email protected]> wrote: \n \n> The Dell 2900 5U machine has 10 spindles max, that would make 2 for the OS \n> (raid1) and 8 for mixing WAL and data... not enough to benefit from the \n> move, \n> or still to test?\n \n>From our testing and various posts on the performance list, you can\nexpect a good battery backed caching RAID controller will probably\neliminate most of the performance difference between separate WAL\ndrives and leaving them on the same RAID array with the rest of the\ndatabase. 
See, for example:\n \nhttp://archives.postgresql.org/pgsql-performance/2007-02/msg00026.php\n \nBen found a difference of \"a few percent\"; I remember seeing a post\nfrom someone who did a lot of testing and found a difference of 1%.\nAs stated in the above referenced posting, it will depend on your\nworkload (and your hardware) so it is best if you can do some\nrealistic tests.\n \n-Kevin\n \n\n\n", "msg_date": "Thu, 08 Nov 2007 14:31:04 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Thursday 08 November 2007, Dimitri Fontaine <[email protected]> \n> Is raid6 better than raid10 in term of overall performances, or a better\n> cut when you need capacity more than throughput?\n\nYou can't touch RAID 10 for performance or reliability. The only reason to \nuse RAID 5 or RAID 6 is to get more capacity out of the same drives.\n\n-- \nAlan\n", "msg_date": "Thu, 8 Nov 2007 12:56:41 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 8, 2007, at 1:22 PM, Scott Marlowe wrote:\n\n> I've heard the newest adaptecs, even the perc implementations aren't \n> bad.\n\n\nI have a pair of Adaptec 2230SLP cards. Worst. Just replaced them on \nTuesday with fibre channel cards connected to external RAID \nenclosures. Much nicer.\n\n", "msg_date": "Thu, 8 Nov 2007 17:01:41 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 8, 2007 2:56 PM, Alan Hodgson <[email protected]> wrote:\n> On Thursday 08 November 2007, Dimitri Fontaine <[email protected]>\n> > Is raid6 better than raid10 in term of overall performances, or a better\n> > cut when you need capacity more than throughput?\n>\n> You can't touch RAID 10 for performance or reliability. The only reason to\n> use RAID 5 or RAID 6 is to get more capacity out of the same drives.\n\nActually, RAID6 is about the same on reliability, since it has double\nparity and theoretically ANY TWO disks could fail, and RAID6 will\nstill have all your data. If the right two disks fail in a RAID-10\nyou lose everything. Admittedly, that's a pretty remote possibility,\nbut so it three drives failing at once in a RAID-6.\n\nFor performance RAID-10 is still pretty much the best choice.\n", "msg_date": "Thu, 8 Nov 2007 17:24:51 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "* Scott Marlowe:\n\n> If the right two disks fail in a RAID-10 you lose everything.\n> Admittedly, that's a pretty remote possibility,\n\nIt's not, unless you carefully layout the RAID-1 subunits so that\ntheir drives aren't physically adjacent. 8-/ I don't think many\ncontrollers support that.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 09 Nov 2007 12:30:33 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "Apart from the disks, you might also investigate using Opterons instead\nof Xeons. there appears to be some significant dent in performance\nbetween Opteron and Xeon. 
Xeons appear to spend more time in passing\naround ownership of memory cache lines in case of a spinlock.\nIt's not yet clear whether or not here has been worked around the issue.\nYou should at least investigate it a bit.\n\nWe're using a HP DL385 ourselves which performs quite well.\n\n-R-\n\nTore Halset wrote:\n> Hello.\n\n> 1) Dell 2900 (5U)\n> 8 * 146 GB SAS 15Krpm 3,5\"\n> 8GB ram\n> Perc 5/i. battery backup. 256MB ram.\n> 2 * 4 Xeon 2,66GHz\n> \n> 2) Dell 2950 (2U)\n> 8 * 146 GB SAS 10Krpm 2,5\" (not really selectable, but I think the\n> webshop is wrong..)\n> 8GB ram\n> Perc 5/i. battery backup. 256MB ram.\n> 2 * 4 Xeon 2,66GHz\n> \n> 3) HP ProLiant DL380 G5 (2U)\n> 8 * 146 GB SAS 10Krpm 2,5\"\n> 8GB ram\n> P400 raid controller. battery backup. 512MB ram.\n> 2 * 2 Xeon 3GHz\n> \n\n", "msg_date": "Fri, 09 Nov 2007 17:01:08 +0100", "msg_from": "Jurgen Haan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "> Apart from the disks, you might also investigate using Opterons instead\n> of Xeons. there appears to be some significant dent in performance\n> between Opteron and Xeon. Xeons appear to spend more time in passing\n> around ownership of memory cache lines in case of a spinlock.\n> It's not yet clear whether or not here has been worked around the issue.\n> You should at least investigate it a bit.\n>\n> We're using a HP DL385 ourselves which performs quite well.\n\nNot atm. Until new benchmarks are published comparing AMD's new\nquad-core with Intel's ditto, Intel has the edge.\n\nhttp://tweakers.net/reviews/657/6\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 9 Nov 2007 17:40:50 +0100", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 9, 2007 10:40 AM, Claus Guttesen <[email protected]> wrote:\n> > Apart from the disks, you might also investigate using Opterons instead\n> > of Xeons. there appears to be some significant dent in performance\n> > between Opteron and Xeon. Xeons appear to spend more time in passing\n> > around ownership of memory cache lines in case of a spinlock.\n> > It's not yet clear whether or not here has been worked around the issue.\n> > You should at least investigate it a bit.\n> >\n> > We're using a HP DL385 ourselves which performs quite well.\n>\n> Not atm. Until new benchmarks are published comparing AMD's new\n> quad-core with Intel's ditto, Intel has the edge.\n>\n> http://tweakers.net/reviews/657/6\n\nFor 8 cores, it appears AMD has the lead, read this (stolen from\nanother thread):\n\nhttp://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf\n", "msg_date": "Fri, 9 Nov 2007 10:55:45 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Fri, 9 Nov 2007, Scott Marlowe wrote:\n\n>> Not atm. Until new benchmarks are published comparing AMD's new\n>> quad-core with Intel's ditto, Intel has the edge.\n>> http://tweakers.net/reviews/657/6\n>\n> For 8 cores, it appears AMD has the lead, read this (stolen from\n> another thread):\n> http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf\n\nThis issue isn't simple, and it may be the case that both conclusions are \ncorrect in their domain but testing slightly different things. 
The \nsysbench test used by the FreeBSD benchmark is a much simpler than what \nthe tweakers.net benchmark simulates.\n\nCurrent generation AMD and Intel processors are pretty close in \nperformance, but guessing which will work better involves a complicated \nmix of both CPU and memory issues. AMD's NUMA architecture does some \nthings better, and Intel's memory access takes a second hit in designs \nthat use FB-DIMMs. But Intel has enough of an advantage on actual CPU \nperformance and CPU caching that current designs are usually faster \nregardless.\n\nFor an interesting look at the low-level details here, the current \nmainstream parts are compared at http://techreport.com/articles.x/11443/13 \nand a similar comparison for the just released quad-core Opterons is at \nhttp://techreport.com/articles.x/13176/12\n\nNowadays Intel vs. AMD is tight enough that I don't even worry about that \npart in the context of a database application (there was still a moderate \ngap when the Tweakers results were produced a year ago). On a real \nserver, I'd suggest being more worried about how good the disk controller \nis, what the expansion options are there, and relative $/core. In the \nx86/x64 realm, I don't feel CPU architecture is a huge issue right now \nwhen you're running a database.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 9 Nov 2007 13:03:20 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 8, 2007, at 3:56 PM, Alan Hodgson wrote:\n\n> You can't touch RAID 10 for performance or reliability. The only \n> reason to\n> use RAID 5 or RAID 6 is to get more capacity out of the same drives.\n\nMaybe you can't, but I can. I guess I have better toys than you :-)\n\n", "msg_date": "Fri, 9 Nov 2007 14:08:41 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On November 9, 2007, Vivek Khera <[email protected]> wrote:\n> On Nov 8, 2007, at 3:56 PM, Alan Hodgson wrote:\n> > You can't touch RAID 10 for performance or reliability. The only\n> > reason to\n> > use RAID 5 or RAID 6 is to get more capacity out of the same\n> > drives.\n>\n> Maybe you can't, but I can. I guess I have better toys than you :-)\n>\n\nOK, I'll bite. Name one RAID controller that gives better write \nperformance in RAID 6 than it does in RAID 10, and post the benchmarks.\n\nI'll grant a theoretical reliability edge to RAID 6 (although actual \nimplementations are a lot more iffy), but not performance.\n\n-- \nThe ethanol craze means that we're going to burn up the Midwest's last \nsix inches of topsoil in our gas-tanks.\n\n", "msg_date": "Tue, 13 Nov 2007 07:49:49 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Tue, 13 Nov 2007, Alan Hodgson wrote:\n\n> OK, I'll bite. Name one RAID controller that gives better write\n> performance in RAID 6 than it does in RAID 10, and post the benchmarks.\n>\n> I'll grant a theoretical reliability edge to RAID 6 (although actual\n> implementations are a lot more iffy), but not performance.\n\nOk, Areca ARC1261ML. 
Note that results were similar for an 8 drive RAID6 vs 8 \ndrive RAID10, but I don't have those bonnie results any longer.\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n14xRAID6 63G 73967 99 455162 58 164543 23 77637 99 438570 31 912.2 1\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 12815 63 +++++ +++ 13041 61 12846 67 +++++ +++ 12871 59\n\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\n14xRAID10 63G 63968 92 246143 68 140634 30 77722 99 510904 36 607.8 0\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 6655 16 +++++ +++ 5755 12 7259 17 +++++ +++ 5550 12\n\n\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Tue, 13 Nov 2007 12:32:18 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 8, 2007 1:22 PM, Scott Marlowe <[email protected]> wrote:\n> Mine too. I would suggest though, that by the time you get to 14\n> disks, you switch from RAID-5 to RAID-6 so you have double redundancy.\n> Performance of a degraded array is better in RAID6 than RAID5, and\n> you can run your rebuilds much slower since you're still redundant.\n>\n\ncouple of remarks here:\n* personally im not a believer in raid 6, it seems to hurt random\nwrite performance which is already a problem with raid 5...I prefer\nthe hot spare route, or raid 10.\n* the perc 5 sas controller is rebranded lsi megaraid controller with\nsome custom firmware tweaks. for example, the perc 5/e is a rebranded\n8408 megaraid iirc.\n* perc 5 controllers are decent if unspectacular. good raid 5\nperformance, average raid 10.\n* to the OP, the 15k solution (dell 2900) will likely perform the\nbest, if you don't mind the rack space.\n* again the op, you can possibly consider combining the o/s and the\nwal volumes (2xraid 1 + 6xraid 10) combining the o/s and wal volumes\ncan sometimes also be a win, but doesn't sound likely in your case.\n\nmerlin\nmerlin\n", "msg_date": "Tue, 13 Nov 2007 22:20:41 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Tuesday 13 November 2007, Jeff Frost <[email protected]> wrote:\n> Ok, Areca ARC1261ML. 
Note that results were similar for an 8 drive RAID6\n> vs 8 drive RAID10, but I don't have those bonnie results any longer.\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n> /sec %CP 14xRAID6 63G 73967 99 455162 58 164543 23 77637 99\n> 438570 31 912.2 1 ------Sequential Create------ --------Random\n> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> %CP 16 12815 63 +++++ +++ 13041 61 12846 67 +++++ +++ 12871 59\n>\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP 14xRAID10 63G 63968 92 246143 68 140634 30 77722 99\n> 510904 36 607.8 0 ------Sequential Create------ --------Random\n> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> %CP 16 6655 16 +++++ +++ 5755 12 7259 17 +++++ +++ 5550 12\n\nOK, impressive RAID-6 performance (not so impressive RAID-10 performance, \nbut that could be a filesystem issue). Note to self; try an Areca \ncontroller in next storage server.\n\nthanks.\n\n-- \nThe global consumer economy can best be described as the most efficient way \nto convert natural resources into waste.\n", "msg_date": "Wed, 14 Nov 2007 14:24:23 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Wed, 14 Nov 2007, Alan Hodgson wrote:\n\n> On Tuesday 13 November 2007, Jeff Frost <[email protected]> wrote:\n>> Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6\n>> vs 8 drive RAID10, but I don't have those bonnie results any longer.\n>>\n>> Version 1.03 ------Sequential Output------ --Sequential Input-\n>> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP 14xRAID6 63G 73967 99 455162 58 164543 23 77637 99\n>> 438570 31 912.2 1 ------Sequential Create------ --------Random\n>> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n>> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n>> %CP 16 12815 63 +++++ +++ 13041 61 12846 67 +++++ +++ 12871 59\n>>\n>>\n>> Version 1.03 ------Sequential Output------ --Sequential Input-\n>> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>> /sec %CP 14xRAID10 63G 63968 92 246143 68 140634 30 77722 99\n>> 510904 36 607.8 0 ------Sequential Create------ --------Random\n>> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n>> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n>> %CP 16 6655 16 +++++ +++ 5755 12 7259 17 +++++ +++ 5550 12\n>\n> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,\n> but that could be a filesystem issue). Note to self; try an Areca\n> controller in next storage server.\n\nI believe these were both on ext3. 
I thought I had some XFS results available \nfor comparison, but I couldn't find them.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Wed, 14 Nov 2007 14:36:54 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Wednesday 14 November 2007, Jeff Frost <[email protected]> \nwrote:\n> > OK, impressive RAID-6 performance (not so impressive RAID-10\n> > performance, but that could be a filesystem issue). Note to self; try\n> > an Areca controller in next storage server.\n>\n> I believe these were both on ext3. I thought I had some XFS results\n> available for comparison, but I couldn't find them.\n\nYeah I've seen ext3 write performance issues on RAID-10. XFS is much better.\n\n-- \nQ: Why did God create economists?\nA: In order to make weather forecasters look good.\n\n", "msg_date": "Wed, 14 Nov 2007 14:39:50 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Nov 14, 2007 5:24 PM, Alan Hodgson <[email protected]> wrote:\n> On Tuesday 13 November 2007, Jeff Frost <[email protected]> wrote:\n> > Ok, Areca ARC1261ML. Note that results were similar for an 8 drive RAID6\n> > vs 8 drive RAID10, but I don't have those bonnie results any longer.\n> >\n> > Version 1.03 ------Sequential Output------ --Sequential Input-\n> > --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> > /sec %CP 14xRAID6 63G 73967 99 455162 58 164543 23 77637 99\n> > 438570 31 912.2 1 ------Sequential Create------ --------Random\n> > Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n> > -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> > %CP 16 12815 63 +++++ +++ 13041 61 12846 67 +++++ +++ 12871 59\n> >\n> >\n> > Version 1.03 ------Sequential Output------ --Sequential Input-\n> > --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n> > Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> > /sec %CP 14xRAID10 63G 63968 92 246143 68 140634 30 77722 99\n> > 510904 36 607.8 0 ------Sequential Create------ --------Random\n> > Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n> > -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n> > %CP 16 6655 16 +++++ +++ 5755 12 7259 17 +++++ +++ 5550 12\n>\n> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,\n> but that could be a filesystem issue). Note to self; try an Areca\n> controller in next storage server.\n\n607 seeks/sec on a 8 drive raid 10 is terrible...this is not as\ndependant on filesystem as sequential performance...\n\nmerlin\n", "msg_date": "Wed, 14 Nov 2007 20:56:32 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "On Wed, 14 Nov 2007, Merlin Moncure wrote:\n\n> On Nov 14, 2007 5:24 PM, Alan Hodgson <[email protected]> wrote:\n>> On Tuesday 13 November 2007, Jeff Frost <[email protected]> wrote:\n>>> Ok, Areca ARC1261ML. 
Note that results were similar for an 8 drive RAID6\n>>> vs 8 drive RAID10, but I don't have those bonnie results any longer.\n>>>\n>>> Version 1.03 ------Sequential Output------ --Sequential Input-\n>>> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>>> /sec %CP 14xRAID6 63G 73967 99 455162 58 164543 23 77637 99\n>>> 438570 31 912.2 1 ------Sequential Create------ --------Random\n>>> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n>>> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n>>> %CP 16 12815 63 +++++ +++ 13041 61 12846 67 +++++ +++ 12871 59\n>>>\n>>>\n>>> Version 1.03 ------Sequential Output------ --Sequential Input-\n>>> --Random- -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\n>>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n>>> /sec %CP 14xRAID10 63G 63968 92 246143 68 140634 30 77722 99\n>>> 510904 36 607.8 0 ------Sequential Create------ --------Random\n>>> Create-------- -Create-- --Read--- -Delete-- -Create-- --Read---\n>>> -Delete-- files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n>>> %CP 16 6655 16 +++++ +++ 5755 12 7259 17 +++++ +++ 5550 12\n>>\n>> OK, impressive RAID-6 performance (not so impressive RAID-10 performance,\n>> but that could be a filesystem issue). Note to self; try an Areca\n>> controller in next storage server.\n>\n> 607 seeks/sec on a 8 drive raid 10 is terrible...this is not as\n> dependant on filesystem as sequential performance...\n\nThen this must be horrible since it's a 14 drive raid 10. :-/ If we had more \ntime for the testing, I would have tried a bunch of RAID1 volumes and \nused software RAID0 to add the +0 bit and see how that performed.\n\nMerlin, what sort of seeks/sec from bonnie++ do you normally see from your \nRAID10 volumes?\n\nOn an 8xRAID10 volume with the smaller Areca controller we were seeing around \n450 seeks/sec.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Wed, 14 Nov 2007 18:19:17 -0800 (PST)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 14, 2007, at 9:19 PM, Jeff Frost wrote:\n\n>\n> On an 8xRAID10 volume with the smaller Areca controller we were \n> seeing around 450 seeks/sec.\n>\n\nOn our 6 disk raid10 on a 3ware 9550sx I'm able to get about 120 seek \n+ reads/sec per process, with an aggregate up to about 500 or so. \nThe disks are rather pooey 7.5k sata2 disks. I'd been having perf \nissues and I'd been wondering why my IO stats were low.. turns out it \nwas going as fast as the disks or controller could go. I even went \nso far as to write a small tool to sort-of simulate a PG index scan \nto remove all that from the question. It proved my theory - seq \nperformance was murdering us.\n\nThis information led me to spend a pile of money on an MSA70 (HP) and \na pile of 15k SAS disks.\nWhile significantly more expensive, the perf gains were astounding. \nI have 8 disks in a raid6 (iirc, I had comprable numbers for R10, but \nthe space/cost/performance wasn't worth it). I'm able to get about \n350-400tps, per process, with an aggregate somewhere in the 1000s. (I \ndrove it up to 2000 before stopping during testing)\n\nWether the problem is the controller or the disks, I don't know. I \njust know what my numbers tell me. 
(And the day we went live on the \nMSA a large number of our perf issues went away. Although, now that \nthe IO was plenty sufficient the CPU became the bottleneck! Its \nalways something!) The sata array performs remarkably well for a \nsequential read though. Given our workload, we need the random perf \nmuch more than seq, but I can see the opposite being true in a \nwarehouse workload.\n\nbtw, the tool I wrote is here http://pgfoundry.org/projects/pgiosim/\n\n--\nJeff Trout <[email protected]>\nhttp://www.dellsmartexitin.com/\nhttp://www.stuarthamm.net/\n\n\n\n", "msg_date": "Thu, 15 Nov 2007 09:41:59 -0500", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" }, { "msg_contents": "\nOn Nov 14, 2007, at 5:36 PM, Jeff Frost wrote:\n\n>\n> I believe these were both on ext3. I thought I had some XFS results \n> available for comparison, but I couldn't find them.\n\nYou'd see similar with the UFS2 file system on FreeBSD.\n\n", "msg_date": "Thu, 15 Nov 2007 15:28:44 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: dell versus hp" } ]
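A rough back-of-the-envelope number for the RAID-10 versus RAID-6 redundancy argument above, assuming drive failures are independent and uniformly random (which, as noted, physically adjacent mirror members can violate): with 14 drives arranged as n = 7 mirrored pairs, once one drive has died the array is lost only if the next failure happens to hit that drive's mirror partner, whereas a double-parity RAID-6 set survives any two failures and is only exposed to a third failure before the rebuild finishes.

\[
P(\text{RAID-10 lost on 2nd failure}) = \frac{1}{2n-1}\bigg|_{n=7} = \frac{1}{13} \approx 7.7\%
\qquad
P(\text{RAID-6 lost on 2nd failure}) = 0
\]

So the redundancy gap is real but small under these assumptions, which is why the choice between the two tends to come down to capacity and write performance instead.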
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI have a two relevant tables:\n\nfastgraph=# \\d object\n Table \"public.object\"\n Column | Type | Modifiers\n- --------+--------+----------------------------------------------\n id | bigint | not null default nextval('id_seq'::regclass)\nIndexes:\n \"object_id_idx\" UNIQUE, btree (id)\n\nactually, this table is partitioned into object, link, representation and format, the latter three of which carry some extra fields,\nwhich are not selected this time. \"id\" is indexed in every one.\n\nfastgraph=# \\d link\n Table \"public.link\"\n Column | Type | Modifiers\n- -----------+------------------+----------------------------------------------\n id | bigint | not null default nextval('id_seq'::regclass)\n s | bigint | not null\n e | bigint | not null\n intensity | double precision | not null\nIndexes:\n \"link_id_idx\" UNIQUE, btree (id)\n \"link_e_idx\" btree (e)\n \"link_s_idx\" btree (s)\n \"link_se_idx\" btree (s, e)\nInherits: object\n\nAnd the query\n\nfastgraph=# explain\nselect distinct o.*\n from object o, link l1, link l2\n where (o.id = l1.s and l2.s = l1.e and l2.e = 8693)\n or (o.id = l1.e and l2.s = l1.s and l2.e = 8693)\n or (o.id = l1.s and l2.e = l1.e and l2.s = 8693)\n or (o.id = l1.e and l2.e = l1.s and l2.s = 8693);\n QUERY PLAN\n- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=168076109270.14..168119432747.82 rows=200 width=8)\n -> Sort (cost=168076109270.14..168097771008.98 rows=8664695536 width=8)\n Sort Key: o.id\n -> Nested Loop (cost=12.74..166290504976.97 rows=8664695536 width=8)\n Join Filter: (((o.id = l1.s) AND (l2.s = l1.e) AND (l2.e = 8693)) OR ((o.id = l1.e) AND (l2.s = l1.s) AND (l2.e = 8693)) OR ((o.id = l1.s) AND (l2.e = l1.e) AND (l2.s = 8693)) OR ((o.id = l1.e) AND (l2.e = l1.s) AND (l2.s = 8693)))\n -> Nested Loop (cost=0.00..270060645.64 rows=9889858512 width=24)\n -> Seq Scan on link l2 (cost=0.00..120087.40 rows=1908 width=16)\n Filter: ((e = 8693) OR (e = 8693) OR (s = 8693) OR (s = 8693))\n -> Append (cost=0.00..89644.64 rows=5183364 width=8)\n -> Seq Scan on object o (cost=0.00..4584.51 rows=317751 width=8)\n -> Seq Scan on link o (cost=0.00..76189.70 rows=4389770 width=8)\n -> Seq Scan on format o (cost=0.00..1.02 rows=2 width=8)\n -> Seq Scan on representation o (cost=0.00..8869.41 rows=475841 width=8)\n -> Bitmap Heap Scan on link l1 (cost=12.74..16.75 rows=1 width=16)\n Recheck Cond: (((o.id = l1.s) AND (l2.s = l1.e)) OR ((l2.s = l1.s) AND (o.id = l1.e)) OR ((o.id = l1.s) AND (l2.e = l1.e)) OR ((l2.e = l1.s) AND (o.id = l1.e)))\n -> BitmapOr (cost=12.74..12.74 rows=1 width=0)\n -> Bitmap Index Scan on link_se_idx (cost=0.00..3.18 rows=1 width=0)\n Index Cond: ((o.id = l1.s) AND (l2.s = l1.e))\n -> Bitmap Index Scan on link_se_idx (cost=0.00..3.18 rows=1 width=0)\n Index Cond: ((l2.s = l1.s) AND (o.id = l1.e))\n -> Bitmap Index Scan on link_se_idx (cost=0.00..3.18 rows=1 width=0)\n Index Cond: ((o.id = l1.s) AND (l2.e = l1.e))\n -> Bitmap Index Scan on link_se_idx (cost=0.00..3.18 rows=1 width=0)\n Index Cond: ((l2.e = l1.s) AND (o.id = l1.e))\n(24 rows)\n\nThese costs are unacceptable for my application. (obviously)\n\nMy question is just, why does the planner think it a good idea to join the unrelated\ntables first using a nested loop over a sequence scan? 
That seems like the worst\nselectivity possible. It seems like a bug to me, but maybe I am just overlooking something...\n\nCan the planner distribute the DISTINCT down the join tree somehow?\n\nJust for reference the following query seems equivalent:\n\nfastgraph=# explain select distinct o.* from object o where o.id in (\n select l1.s from link l1 where l1.e in (\n select l2.s from link l2 where l2.e = 8693\n union\n select l2.e from link l2 where l2.s = 8693\n )\n union\n select l1.e from link l1 where l1.s in (\n select l2.s from link l2 where l2.e = 8693\n union\n select l2.e from link l2 where l2.s = 8693\n )\n)\n;\n\n QUERY PLAN\n- -------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=358267.90..2698389.18 rows=200 width=8)\n -> Nested Loop (cost=358267.90..2685430.77 rows=5183364 width=8)\n Join Filter: (o.id = l1.s)\n -> Unique (cost=358267.90..364484.89 rows=1243398 width=8)\n -> Sort (cost=358267.90..361376.39 rows=1243398 width=8)\n Sort Key: l1.s\n -> Append (cost=2612.99..215396.61 rows=1243398 width=8)\n -> Hash Join (cost=2612.99..105950.31 rows=1068599 width=8)\n Hash Cond: (l1.e = l2.s)\n -> Seq Scan on link l1 (cost=0.00..76189.70 rows=4389770 width=16)\n -> Hash (cost=2601.06..2601.06 rows=954 width=8)\n -> Unique (cost=2586.75..2591.52 rows=954 width=8)\n -> Sort (cost=2586.75..2589.14 rows=954 width=8)\n Sort Key: l2.s\n -> Append (cost=0.00..2539.54 rows=954 width=8)\n -> Index Scan using link_e_idx on link l2 (cost=0.00..1838.03 rows=773 width=8)\n Index Cond: (e = 8693)\n -> Bitmap Heap Scan on link l2 (cost=6.36..691.97 rows=181 width=8)\n Recheck Cond: (s = 8693)\n -> Bitmap Index Scan on link_s_idx (cost=0.00..6.31 rows=181 width=0)\n Index Cond: (s = 8693)\n -> Hash Join (cost=2612.99..97012.31 rows=174799 width=8)\n Hash Cond: (l1.s = l2.s)\n -> Seq Scan on link l1 (cost=0.00..76189.70 rows=4389770 width=16)\n -> Hash (cost=2601.06..2601.06 rows=954 width=8)\n -> Unique (cost=2586.75..2591.52 rows=954 width=8)\n -> Sort (cost=2586.75..2589.14 rows=954 width=8)\n Sort Key: l2.s\n -> Append (cost=0.00..2539.54 rows=954 width=8)\n -> Index Scan using link_e_idx on link l2 (cost=0.00..1838.03 rows=773 width=8)\n Index Cond: (e = 8693)\n -> Bitmap Heap Scan on link l2 (cost=6.36..691.97 rows=181 width=8)\n Recheck Cond: (s = 8693)\n -> Bitmap Index Scan on link_s_idx (cost=0.00..6.31 rows=181 width=0)\n Index Cond: (s = 8693)\n -> Append (cost=0.00..1.81 rows=4 width=8)\n -> Index Scan using object_id_idx on object o (cost=0.00..0.31 rows=1 width=8)\n Index Cond: (o.id = l1.s)\n -> Index Scan using link_id_idx on link o (cost=0.00..0.89 rows=1 width=8)\n Index Cond: (o.id = l1.s)\n -> Index Scan using format_id_idx on format o (cost=0.00..0.27 rows=1 width=8)\n Index Cond: (o.id = l1.s)\n -> Index Scan using representation_id_idx on representation o (cost=0.00..0.34 rows=1 width=8)\n Index Cond: (o.id = l1.s)\n(44 rows)\n\nCosts are better (and more practical).\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFHMNMLzhchXT4RR5ARAnhDAKCgSK/2SIb8mwDnjZgGxRtYdWJ+pwCgoEMW\nzp8Mz52WeSZuNLpFGz8NPJI=\n=FtZd\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 06 Nov 2007 21:48:12 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "Subpar Execution Plan" } ]
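The hand-written UNION form above is, in effect, a two-hop walk over an undirected edge list. One more way to write the same lookup, sketched here against the same object/link schema and intended to return the same rows as the four-way OR version (both the equivalence and the plan are worth verifying with EXPLAIN ANALYZE on the real data), is to factor the direction handling out first:

-- Sketch: fold both directions of each link into one symmetric edge list
-- (near, far), then take the nodes two undirected hops away from 8693.
-- "near"/"far" are made-up aliases, not part of the original schema.
SELECT DISTINCT o.*
  FROM object o
 WHERE o.id IN (
         SELECT u1.far
           FROM (SELECT s AS near, e AS far FROM link
                 UNION ALL
                 SELECT e AS near, s AS far FROM link) u1
          WHERE u1.near IN (
                  SELECT u2.far
                    FROM (SELECT s AS near, e AS far FROM link
                          UNION ALL
                          SELECT e AS near, s AS far FROM link) u2
                   WHERE u2.near = 8693));

Whether the planner pushes the near = 8693 and IN conditions down into both UNION ALL branches, and so can still use link_s_idx and link_e_idx, should show up clearly in the resulting plan; if it does not, the explicit UNION rewrite already shown is the safer formulation.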
[ { "msg_contents": "Hi all,\n\nWhile studying a query taking forever after an ANALYZE on a never\nanalyzed database (a bad estimate causes a nested loop on a lot of\ntuples), I found the following problem:\n- without any stats (I removed the line from pg_statistic):\nccm_prod_20071106=# explain analyze select * from cms_items where\nancestors LIKE '1062/%';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\n Seq Scan on cms_items (cost=0.00..689.26 rows=114 width=587) (actual\ntime=0.008..21.692 rows=11326 loops=1)\n Filter: ((ancestors)::text ~~ '1062/%'::text)\n Total runtime: 31.097 ms\n-> the estimate is bad (it's expected) but it's sufficient to prevent\nthe nested loop so it's my current workaround\n\n- after analyzing the cms_items table (statistics is set to 10 but\nit's exactly the same for 100):\nccm_prod_20071106=# explain analyze select * from cms_items where\nancestors LIKE '1062/%';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on cms_items (cost=0.00..689.26 rows=*1* width=103) (actual\ntime=0.010..22.024 rows=11326 loops=1)\n Filter: ((ancestors)::text ~~ '1062/%'::text)\n Total runtime: 31.341 ms\n-> this estimate leads PostgreSQL to choose a nested loop which is\nexecuted more than 11k times and causes the query to take forever.\n\n- if I remove the / from the LIKE clause (which I can't as ancestors\nis more or less a path):\nccm_prod_20071106=# explain analyze select * from cms_items where\nancestors LIKE '1062%';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Seq Scan on cms_items (cost=0.00..689.26 rows=*9097* width=103)\n(actual time=0.043..25.251 rows=11326 loops=1)\n Filter: ((ancestors)::text ~~ '1062%'::text)\n Total runtime: 34.778 ms\n\nWhich is a really good estimate.\n\nIs it something expected?\n\nThe histogram does contain values beginning with '1062/' (5 out of 10)\nand the cms_items table has ~ 22k rows.\n\nVersion is PostgreSQL 8.1.8 on i686-redhat-linux-gnu, compiled by GCC\ngcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-3). I checked the release\nnotes between 8.1.8 and 8.1.10 and I didn't find anything relevant to\nfix this problem.\n\nThanks for any help.\n\nRegards,\n\n--\nGuillaume\n", "msg_date": "Wed, 7 Nov 2007 13:53:16 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "On 11/7/07, Guillaume Smet <[email protected]> wrote:\n> While studying a query taking forever after an ANALYZE on a never\n> analyzed database (a bad estimate causes a nested loop on a lot of\n> tuples), I found the following problem:\n[snip]\n> Total runtime: 31.097 ms\n[snip]\n> Total runtime: 31.341 ms\n[snip]\n> Total runtime: 34.778 ms\n>\n> Which is a really good estimate.\n\nThat's a difference of less than *three milliseconds* -- a difference\nprobably way within the expected overhead of running \"explain\nanalyze\". Furthermore, all three queries use the same basic plan: a\nsequential scan with a filter. At any rate you're microbenchmarking in\na way that is not useful to real-world queries. In what way are these\ntimings a problem?\n\nHave you tried using an index which supports prefix searches? 
The\ntext_pattern_ops operator class lets yo do this with a plain B-tree\nindex:\n\n create index cms_items_ancestors_index on cms_items (ancestors\ntext_pattern_ops);\n analyze cms_items;\n\nNow all \"like 'prefix%'\" queries should use the index.\n\nAlexander.\n", "msg_date": "Wed, 7 Nov 2007 14:25:40 +0100", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "Alexander,\n\nOn 11/7/07, Alexander Staubo <[email protected]> wrote:\n> That's a difference of less than *three milliseconds* -- a difference\n> probably way within the expected overhead of running \"explain\n> analyze\". Furthermore, all three queries use the same basic plan: a\n> sequential scan with a filter. At any rate you're microbenchmarking in\n> a way that is not useful to real-world queries. In what way are these\n> timings a problem?\n\nIf you read my previous email carefully, you'll see they aren't a\nproblem: the problem is the estimation, not the timing. This is a self\ncontained test case of a far more complex query which uses a bad plan\ncontaining a nested loop due to the bad estimate.\n\n> Now all \"like 'prefix%'\" queries should use the index.\n\nNot when you retrieve 50% of this table of 22k rows but that's not my\nproblem anyway. A seqscan is perfectly fine in this case.\n\nThanks anyway.\n\n--\nGuillaume\n", "msg_date": "Wed, 7 Nov 2007 14:38:04 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> [ bad estimate for LIKE ]\n\nHmmm ... what locale are you working in? I notice that the range\nestimator for this pattern would be \"ancestors >= '1062/' AND\nancestors < '10620'\", which will do the right thing in C locale\nbut maybe not so much elsewhere.\n\n> Version is PostgreSQL 8.1.8 on i686-redhat-linux-gnu,\n\nYou'd probably get better results with 8.2, which has a noticeably\nsmarter LIKE-estimator, at least for histogram sizes of 100 or more.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Nov 2007 11:34:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "On 11/7/07, Tom Lane <[email protected]> wrote:\n> Hmmm ... what locale are you working in? I notice that the range\n> estimator for this pattern would be \"ancestors >= '1062/' AND\n> ancestors < '10620'\", which will do the right thing in C locale\n> but maybe not so much elsewhere.\n\nSorry for not having mentioned it before. Locale is UTF-8.\n\n> > Version is PostgreSQL 8.1.8 on i686-redhat-linux-gnu,\n>\n> You'd probably get better results with 8.2, which has a noticeably\n> smarter LIKE-estimator, at least for histogram sizes of 100 or more.\n\nIt's not really possible to upgrade this application to 8.2 for now.\nIt's a very old app based on the thing formerly called as Red Hat WAF\nand now known as APLAWS and validating WAF and this application with\n8.2 will take quite some time. 
Moreover the db is big and we can't\nafford the downtime of a migration.\n\nI suppose my best bet is to remove the pg_statistic line and to set\nthe statistics to 0 for this column so that the stats are never\ngenerated again for this column?\n\nThanks,\n\n--\nGuillaume\n", "msg_date": "Wed, 7 Nov 2007 17:52:16 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> On 11/7/07, Tom Lane <[email protected]> wrote:\n>> Hmmm ... what locale are you working in? I notice that the range\n>> estimator for this pattern would be \"ancestors >= '1062/' AND\n>> ancestors < '10620'\", which will do the right thing in C locale\n>> but maybe not so much elsewhere.\n\n> Sorry for not having mentioned it before. Locale is UTF-8.\n\nI wanted the locale (lc_collate), not the encoding.\n\n> I suppose my best bet is to remove the pg_statistic line and to set\n> the statistics to 0 for this column so that the stats are never\n> generated again for this column?\n\nThat would optimize this particular query and probably pessimize\na lot of others. I have another LIKE-estimation bug to go look at\ntoday too; let me see if this one is fixable or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Nov 2007 11:58:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "I wrote:\n> \"Guillaume Smet\" <[email protected]> writes:\n>> [ bad estimate for LIKE ]\n\n> Hmmm ... what locale are you working in? I notice that the range\n> estimator for this pattern would be \"ancestors >= '1062/' AND\n> ancestors < '10620'\", which will do the right thing in C locale\n> but maybe not so much elsewhere.\n\nI've applied a patch that might help you:\nhttp://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Nov 2007 18:14:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "On 11/7/07, Tom Lane <[email protected]> wrote:\n> I wanted the locale (lc_collate), not the encoding.\n\nfr_FR.UTF-8\n\n> That would optimize this particular query and probably pessimize\n> a lot of others.\n\nSure but there aren't a lot of queries based on the ancestors field\nand if they are a bit slower, it's not a problem. However having a\nquery taking forever is not acceptable as the content management app\nis unaccessible.\nSo it can be an acceptable solution in this case, even if not perfect.\n\n--\nGuillaume\n", "msg_date": "Thu, 8 Nov 2007 00:32:55 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "On 11/8/07, Tom Lane <[email protected]> wrote:\n> I've applied a patch that might help you:\n> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php\n\nThanks. 
I'll build a RPM package tomorrow with this patch and let you\nknow if it fixes the problem.\n\n--\nGuillaume\n", "msg_date": "Thu, 8 Nov 2007 00:36:16 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "Tom,\n\nOn Nov 8, 2007 12:14 AM, Tom Lane <[email protected]> wrote:\n> I've applied a patch that might help you:\n> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php\n\nAFAICS, it doesn't seem to fix the problem. I just compiled\nREL8_1_STABLE branch and I still has the following behaviour:\nlbo=# ANALYZE cms_items;\nANALYZE\nlbo=# explain analyze select * from cms_items where ancestors LIKE '1062/%';\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Seq Scan on cms_items (cost=0.00..688.26 rows=1 width=103) (actual\ntime=0.009..22.258 rows=11326 loops=1)\n Filter: ((ancestors)::text ~~ '1062/%'::text)\n Total runtime: 29.835 ms\n(3 rows)\n\nlbo=# show lc_collate;\n lc_collate\n-------------\n fr_FR.UTF-8\n(1 row)\n\nDo you see any reason why your patch doesn't change anything in this case?\n\nThanks.\n\n--\nGuillaume\n", "msg_date": "Thu, 8 Nov 2007 11:08:33 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> On Nov 8, 2007 12:14 AM, Tom Lane <[email protected]> wrote:\n>> I've applied a patch that might help you:\n>> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php\n\n> AFAICS, it doesn't seem to fix the problem.\n\nHmm, can we see the pg_stats row for the ancestors column?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 10:01:09 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "On Nov 8, 2007 4:01 PM, Tom Lane <[email protected]> wrote:\n> Hmm, can we see the pg_stats row for the ancestors column?\n\nSure:\n public | cms_items | ancestors | 0 | 32 |\n-1 | | |\n{10011/10010/10009/10018/2554055/,10011/10010/84022/23372040/,10011/2233043/2233042/2233041/,10011/3985097/5020039/,10011/872018/13335056/13333051/,1062/22304709/22304714/,1062/2489/2492/27861901/,1062/2527/2530/29658392/,1062/2698/2705/6014040/,1062/52354/52355/255038/255037/,9846852/}\n| -0.151713\n\nI can provide the data if needed, there's nothing confidential in them.\n\n--\nGuillaume\n", "msg_date": "Thu, 8 Nov 2007 16:29:34 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> On Nov 8, 2007 12:14 AM, Tom Lane <[email protected]> wrote:\n>> I've applied a patch that might help you:\n>> http://archives.postgresql.org/pgsql-committers/2007-11/msg00104.php\n\n> AFAICS, it doesn't seem to fix the problem. I just compiled\n> REL8_1_STABLE branch and I still has the following behaviour:\n\nOK, I tried it in fr_FR locale and what I find is that\n\nregression=# select '123/' < '1230'::text;\n ?column? \n----------\n t\n(1 row)\n\nso make_greater_string() will still think that its first try at\ngenerating an upper-bound string is good enough. However\n\nregression=# select '123/1' < '1230'::text;\n ?column? 
\n----------\n f\n(1 row)\n\nso the data starting with '123/' is still outside the generated range,\nleading to a wrong estimate. I didn't see this behavior yesterday but\nI was experimenting with en_US which I guess has different rules.\n\nWhat I am tempted to do about this is have make_greater_string tack \"zz\"\nonto the supplied prefix, so that it would have to find a string that\ncompares greater than \"123/zz\" before reporting success. This is\ngetting pretty klugy though, so cc'ing to pgsql-hackers to see if anyone\nhas a better idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 12:22:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> What I am tempted to do about this is have make_greater_string tack \"zz\"\n> onto the supplied prefix, so that it would have to find a string that\n> compares greater than \"123/zz\" before reporting success. This is\n> getting pretty klugy though, so cc'ing to pgsql-hackers to see if anyone\n> has a better idea.\n\nHm, instead of \"zz\" is there a convenient way to find out what actual\ncharacter sorts last amongst all the single characters in the locale's\nencoding?\n\nDoesn't really strike at the core reason that this is so klugy though. Surely\nthe \"right\" thing is to push the concept of open versus closed end-points\nthrough deeper into the estimation logic?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 08 Nov 2007 23:10:49 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Doesn't really strike at the core reason that this is so klugy though. Surely\n> the \"right\" thing is to push the concept of open versus closed end-points\n> through deeper into the estimation logic?\n\nNo, the right thing is to take the folk who defined \"dictionary sort\norder\" out behind the barn and shoot 'em ;-). This has got nothing to\ndo with open/closed endpoints and everything to do with the bizarre\nsorting rules used by some locales. In particular the reason I want to\nappend a letter is that some locales discriminate against non-letter\ncharacters in the first pass of sorting.\n\nI did do some experimentation and found that among the ASCII characters\n(ie, codes 32-126), nearly all the non-C locales on my Fedora machine\nsort Z last and z next-to-last or vice versa. Most of the remainder\nsort digits last and z or Z as the last non-digit character. Since Z is\nnot that close to the end of the sort order in C locale, however, z\nseems the best bet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:40:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "I wrote:\n> I did do some experimentation and found that among the ASCII characters\n> (ie, codes 32-126), nearly all the non-C locales on my Fedora machine\n> sort Z last and z next-to-last or vice versa. Most of the remainder\n> sort digits last and z or Z as the last non-digit character. 
Since Z is\n> not that close to the end of the sort order in C locale, however, z\n> seems the best bet.\n\nWith still further experimentation, it seems that doesn't work very\nwell, because the locales that sort digits last also seem not to\ndiscriminate against digits in their first pass. What did seem to work\nwas:\n\n* Determine which of the strings \"Z\", \"z\", \"y\", \"9\" is seen as largest\nby strcoll().\n\n* Append this string to the given input.\n\n* Search (using the CVS-HEAD make_greater_string logic) for a string\ngreater than that.\n\nThis rule works for all the locales I have installed ... but I don't\nhave any Far Eastern locales installed. Also, my test cases are only\ncovering ASCII characters, and I believe many locales have some non-ASCII\nletters that sort after 'Z'. I'm not sure how hard we need to try to\ncover those corner cases, though. It is ultimately only an estimate...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 21:08:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "On Nov 9, 2007 3:08 AM, Tom Lane <[email protected]> wrote:\n> This rule works for all the locales I have installed ... but I don't\n> have any Far Eastern locales installed. Also, my test cases are only\n> covering ASCII characters, and I believe many locales have some non-ASCII\n> letters that sort after 'Z'. I'm not sure how hard we need to try to\n> cover those corner cases, though. It is ultimately only an estimate...\n\nMy opinion is that it's acceptable to fix the problem for most cases\nin most locales because, as you said, it's only an estimate. We didn't\nhave any report of this problem for years so it seems that it's not a\ncommon case or at least it's not common that the bad estimate leads to\nnoticeably bad plans.\n\nAs far as I understand what you plan to do, it doesn't seem to be\nsomething that prevents us to fix the problem afterwards if someone\ncomes with an example which doesn't fit in the schema you're proposing\nand has a real performance problem with it.\n\n--\nGuillaume\n", "msg_date": "Fri, 9 Nov 2007 08:52:49 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> This rule works for all the locales I have installed ... but I don't\n> have any Far Eastern locales installed. Also, my test cases are only\n> covering ASCII characters, and I believe many locales have some non-ASCII\n> letters that sort after 'Z'. I'm not sure how hard we need to try to\n> cover those corner cases, though. It is ultimately only an estimate...\n\nIf I understand correctly what we're talking about it's generating estimates\nfor LIKE 'foo%' using the algorithm which makes sense for C locale which means\ngenerating the next range of values which start with 'foo%'.\n\nIt seems to me the problematic situations is when the most-frequent-values\ncome into play. Being off slightly in the histogram isn't going to generate\nvery inaccurate estimates but including or not a most-frequent-value could\nthrow off the estimate severely.\n\nCould we not use the bogus range to calculate the histogram estimate but apply\nthe LIKE pattern directly to the most-frequent-values instead of applying the\nbogus range? 
Or would that be too much code re-organization for now?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!\n", "msg_date": "Fri, 09 Nov 2007 09:21:56 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> Could we not use the bogus range to calculate the histogram estimate\n> but apply the LIKE pattern directly to the most-frequent-values\n> instead of applying the bogus range? Or would that be too much code\n> re-organization for now?\n\nWe have already done that for quite some time. It won't help\nGuillaume's case anyhow: he's got no MCVs, presumably because the field\nis unique.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Nov 2007 11:33:45 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a / " }, { "msg_contents": "On Nov 9, 2007 5:33 PM, Tom Lane <[email protected]> wrote:\n> he's got no MCVs, presumably because the field\n> is unique.\n\nIt is. The ancestors field contains the current folder itself so the\nid of the folder (which is the primary key) is in it.\n\n--\nGuillaume\n", "msg_date": "Fri, 9 Nov 2007 17:39:51 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a /" }, { "msg_contents": "Tom,\n\nJust to confirm you that your last commit fixed the problem:\n\nlbo=# explain analyze select * from cms_items where ancestors LIKE '1062/%';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Seq Scan on cms_items (cost=0.00..688.26 rows=*9097* width=103)\n(actual time=0.011..22.605 rows=11326 loops=1)\n Filter: ((ancestors)::text ~~ '1062/%'::text)\n Total runtime: 30.022 ms\n(3 rows)\n\nThanks for your time.\n\n--\nGuillaume\n", "msg_date": "Fri, 9 Nov 2007 21:57:27 +0100", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Estimation problem with a LIKE clause containing a /" } ]
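For anyone reproducing this kind of mis-estimate, the checks used above can be run directly against the affected table. This is only a sketch using the pattern and table from this report; the exact upper bound the planner derives can differ between versions, and the expected true/false results below assume an fr_FR.UTF-8-style collation that behaves like Tom's '123/' test.

-- Compare the estimate for the LIKE itself with the range predicate the
-- planner builds from it (the bound quoted earlier for this pattern):
EXPLAIN SELECT * FROM cms_items WHERE ancestors LIKE '1062/%';
EXPLAIN SELECT * FROM cms_items
 WHERE ancestors >= '1062/' AND ancestors < '10620';

-- Check whether the collation keeps prefix-matching rows inside that range.
-- Expected under fr_FR.UTF-8: the first comes back true (so the bound looks
-- good enough to make_greater_string), but the second comes back false, so
-- rows matching the prefix fall outside the range and the estimate collapses
-- to 1.
SELECT '1062/' < '10620'::text AS bound_exceeds_prefix,
       '1062/1' < '10620'::text AS prefix_rows_inside_range;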
[ { "msg_contents": "Hello\n\nThis is a question about something we have seen sometimes in the last\nmonths. It happens with tables with a large amount of updates/selects\ncompared with the amount of inserts/deletes. The sizes of these tables\nare small and the amount of rows too.\n\nThe 'problem' is that performance decrease during the day and the only\nthing that helps is to run CLUSTER on the table with problems. VACUUM\nANALYZE does not help.\n\nSome information that can help to find out why this happens:\n\n- PostgreSQL version: 8.1.9\n\n------------------------------------------------------------------------------\nscanorama=# SELECT pg_size_pretty(pg_relation_size('hosts'));\n\n pg_size_pretty\n----------------\n 12 MB\n------------------------------------------------------------------------------\nscanorama=# SELECT count(*) FROM hosts ;\n\n count\n-------\n 16402\n------------------------------------------------------------------------------\nscanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n\n Seq Scan on hosts (cost=0.00..2771.56 rows=66756 width=314) (actual\ntime=0.008..2013.415 rows=16402 loops=1)\n Total runtime: 2048.486 ms\n------------------------------------------------------------------------------\nscanorama=# VACUUM ANALYZE ;\nVACUUM\n------------------------------------------------------------------------------\nscanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n\n Seq Scan on hosts (cost=0.00..2718.57 rows=61357 width=314) (actual\ntime=0.008..1676.283 rows=16402 loops=1)\n Total runtime: 1700.826 ms\n------------------------------------------------------------------------------\nscanorama=# CLUSTER hosts_pkey ON hosts ;\nCLUSTER\n------------------------------------------------------------------------------\nscanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n\n Seq Scan on hosts (cost=0.00..680.02 rows=16402 width=314) (actual\ntime=0.008..31.205 rows=16402 loops=1)\n Total runtime: 53.635 ms\n------------------------------------------------------------------------------\nscanorama=# SELECT * from pg_stat_all_tables WHERE relname LIKE 'hosts';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n--------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 105805 | public | hosts | 1996430 | 32360280252 | 2736391 |\n 3301856 | 948 | 1403325 | 737\n\nThe information from pg_stat_all_tables is from the last 20 days.\n------------------------------------------------------------------------------\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 2536 of 2536 pages, containing 16410 live rows\nand 57042 dead rows; 16410 rows in sample, 16410 estimated total rows\nINFO: free space map contains 191299 pages in 786 relations\nDETAIL: A total of 174560 page slots are in use (including overhead).\n174560 page slots are required to track all free space.\nCurrent limits are: 2000000 page slots, 4000 relations, using 12131 KB.\n------------------------------------------------------------------------------\n\nThe tables with this 'problem' are not big, so CLUSTER finnish very fast\nand it does not have an impact in the access because of locking. 
But we\nwonder why this happens.\n\nDo you need more information?\n\nThanks in advance.\nregards\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Thu, 08 Nov 2007 11:36:36 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> This is a question about something we have seen sometimes in the last\n> months. It happens with tables with a large amount of updates/selects\n> compared with the amount of inserts/deletes. The sizes of these tables\n> are small and the amount of rows too.\n> \n> The 'problem' is that performance decrease during the day and the only\n> thing that helps is to run CLUSTER on the table with problems. VACUUM\n> ANALYZE does not help.\n> \n> Some information that can help to find out why this happens:\n> \n> - PostgreSQL version: 8.1.9\n> \n> ------------------------------------------------------------------------------\n> scanorama=# SELECT pg_size_pretty(pg_relation_size('hosts'));\n> \n> pg_size_pretty\n> ----------------\n> 12 MB\n> ------------------------------------------------------------------------------\n> scanorama=# SELECT count(*) FROM hosts ;\n> \n> count\n> -------\n> 16402\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..2771.56 rows=66756 width=314) (actual\n> time=0.008..2013.415 rows=16402 loops=1)\n> Total runtime: 2048.486 ms\n> ------------------------------------------------------------------------------\n> scanorama=# VACUUM ANALYZE ;\n> VACUUM\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..2718.57 rows=61357 width=314) (actual\n> time=0.008..1676.283 rows=16402 loops=1)\n> Total runtime: 1700.826 ms\n> ------------------------------------------------------------------------------\n> scanorama=# CLUSTER hosts_pkey ON hosts ;\n> CLUSTER\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..680.02 rows=16402 width=314) (actual\n> time=0.008..31.205 rows=16402 loops=1)\n> Total runtime: 53.635 ms\n> ------------------------------------------------------------------------------\n> scanorama=# SELECT * from pg_stat_all_tables WHERE relname LIKE 'hosts';\n> relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\n> idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n> --------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------\n> 105805 | public | hosts | 1996430 | 32360280252 | 2736391 |\n> 3301856 | 948 | 1403325 | 737\n> \n> The information from pg_stat_all_tables is from the last 20 days.\n> ------------------------------------------------------------------------------\n> INFO: analyzing \"public.hosts\"\n> INFO: \"hosts\": scanned 2536 of 2536 pages, containing 16410 live rows\n> and 57042 dead rows; 16410 rows in sample, 16410 estimated total rows\n> INFO: free space map contains 191299 pages in 786 relations\n> DETAIL: A total of 174560 page slots are in use (including overhead).\n> 174560 page slots are required to track all free space.\n> Current limits are: 
2000000 page slots, 4000 relations, using 12131 KB.\n> ------------------------------------------------------------------------------\n> \n> The tables with this 'problem' are not big, so CLUSTER finnish very fast\n> and it does not have an impact in the access because of locking. But we\n> wonder why this happens.\n\n2 seconds for seq scanning 12 MB worth of data sounds like a lot. Have \nyou increased shared_buffers from the default? Which operating system \nare you using? Shared memory access is known to be slower on Windows.\n\nOn a small table like that you could run VACUUM every few minutes \nwithout much impact on performance. That should keep the table size in \ncheck.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 08 Nov 2007 11:12:07 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Performance problems with heavily modified tables (UPDATE or DELETE) are \nusually caused by not vacuuming. There are two main modes the VACUUM can \nrun in (plain or full) and the latter works in a much more aggressive \nway (exclusive locking, etc). Try to run VACUUM FULL VERBOSE on the \ntable and see if it helps.\n\nA way to fix this is usually a proper setting of pg_autovacuum daemon - \nit may work on the tables that are not modified heavily, but it does not \nwork for the heavily modified ones. Do you have the autovacuum daemon \nenabled? What are the settings of it? Try to set it a little bit more \naggressive (this can be done on a table level).\n\nThe stats from pg_stat_all_tables are nice, but I guess the stats that \nmatter are located in pg_class catalog, the most interesting being \nreltuples and relpages columns - run\n\n  SELECT relname, relpages, reltuples FROM pg_class WHERE relname LIKE 'hosts';\n\nand observe the number of pages before and after the vacuum full (or \ncluster). I guess the number of pages increases quite fast and the \nautovacuum daemon is not able to reclaim that - and this is probably the \ncause why scanning 12 MB of data takes 2 sec, which is way too much - \nthe table is actually much bigger as it contains a lot of dead data).\n\nTomas\n\n\n> Hello\n> \n> This is a question about something we have seen sometimes in the last\n> months. It happens with tables with a large amount of updates/selects\n> compared with the amount of inserts/deletes. The sizes of these tables\n> are small and the amount of rows too.\n> \n> The 'problem' is that performance decrease during the day and the only\n> thing that helps is to run CLUSTER on the table with problems. 
VACUUM\n> ANALYZE does not help.\n> \n> Some information that can help to find out why this happens:\n> \n> - PostgreSQL version: 8.1.9\n> \n> ------------------------------------------------------------------------------\n> scanorama=# SELECT pg_size_pretty(pg_relation_size('hosts'));\n> \n> pg_size_pretty\n> ----------------\n> 12 MB\n> ------------------------------------------------------------------------------\n> scanorama=# SELECT count(*) FROM hosts ;\n> \n> count\n> -------\n> 16402\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..2771.56 rows=66756 width=314) (actual\n> time=0.008..2013.415 rows=16402 loops=1)\n> Total runtime: 2048.486 ms\n> ------------------------------------------------------------------------------\n> scanorama=# VACUUM ANALYZE ;\n> VACUUM\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..2718.57 rows=61357 width=314) (actual\n> time=0.008..1676.283 rows=16402 loops=1)\n> Total runtime: 1700.826 ms\n> ------------------------------------------------------------------------------\n> scanorama=# CLUSTER hosts_pkey ON hosts ;\n> CLUSTER\n> ------------------------------------------------------------------------------\n> scanorama=# EXPLAIN ANALYZE SELECT * FROM hosts;\n> \n> Seq Scan on hosts (cost=0.00..680.02 rows=16402 width=314) (actual\n> time=0.008..31.205 rows=16402 loops=1)\n> Total runtime: 53.635 ms\n> ------------------------------------------------------------------------------\n> scanorama=# SELECT * from pg_stat_all_tables WHERE relname LIKE 'hosts';\n> relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\n> idx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n> --------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------\n> 105805 | public | hosts | 1996430 | 32360280252 | 2736391 |\n> 3301856 | 948 | 1403325 | 737\n> \n> The information from pg_stat_all_tables is from the last 20 days.\n> ------------------------------------------------------------------------------\n> INFO: analyzing \"public.hosts\"\n> INFO: \"hosts\": scanned 2536 of 2536 pages, containing 16410 live rows\n> and 57042 dead rows; 16410 rows in sample, 16410 estimated total rows\n> INFO: free space map contains 191299 pages in 786 relations\n> DETAIL: A total of 174560 page slots are in use (including overhead).\n> 174560 page slots are required to track all free space.\n> Current limits are: 2000000 page slots, 4000 relations, using 12131 KB.\n> ------------------------------------------------------------------------------\n> \n> The tables with this 'problem' are not big, so CLUSTER finnish very fast\n> and it does not have an impact in the access because of locking. But we\n> wonder why this happens.\n> \n> Do you need more information?\n> \n> Thanks in advance.\n> regards\n\n", "msg_date": "Thu, 08 Nov 2007 12:20:22 +0100", "msg_from": "=?windows-1252?Q?Tom=E1=9A_Vondra?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Rafael Martinez wrote:\n\n>> The tables with this 'problem' are not big, so CLUSTER finnish very fast\n>> and it does not have an impact in the access because of locking. 
But we\n>> wonder why this happens.\n> \n> 2 seconds for seq scanning 12 MB worth of data sounds like a lot. Have\n> you increased shared_buffers from the default? Which operating system\n> are you using? Shared memory access is known to be slower on Windows.\n> \n\nThis is a server with 8GB of ram, we are using 25% as shared_buffers.\nLinux RHELAS4 with a 2.6.9-55.0.9.ELsmp kernel / x86_64.\n\n> On a small table like that you could run VACUUM every few minutes\n> without much impact on performance. That should keep the table size in\n> check.\n> \n\nOk, we run VACUUM ANALYZE only one time a day, every night. But we would\nespect the performance to get ok again after running vacuum, and it\ndoesn't. Only CLUSTER helps.\n\nI can not see we need to change the max_fsm_pages parameter and pg_class\nand analyze give us this information today (not long ago a CLUSTER was\nexecuted):\n------------------------------------------------------------------------------\nscanorama=# VACUUM VERBOSE ANALYZE hosts;\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 20230 row versions in 117 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"hosts\": found 0 removable, 20230 nonremovable row versions in\n651 pages\nDETAIL: 3790 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_376127\"\nINFO: index \"pg_toast_376127_index\" now contains 131 row versions in 2\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_376127\": found 0 removable, 131 nonremovable row\nversions in 33 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 651 of 651 pages, containing 16440 live rows and\n3790 dead rows; 16440 rows in sample, 16440 estimated total rows\nVACUUM\n\nscanorama=# SELECT relname, relpages, reltuples from pg_class WHERE\nrelname LIKE 'hosts';\n relname | relpages | reltuples\n---------+----------+-----------\n hosts | 651 | 20230\n------------------------------------------------------------------------------\n\n\nAnymore ideas?\nregards,\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Thu, 08 Nov 2007 15:49:36 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> Hello\n> \n> This is a question about something we have seen sometimes in the last\n> months. It happens with tables with a large amount of updates/selects\n> compared with the amount of inserts/deletes. The sizes of these tables\n> are small and the amount of rows too.\n> \n> The 'problem' is that performance decrease during the day and the only\n> thing that helps is to run CLUSTER on the table with problems. VACUUM\n> ANALYZE does not help.\n\nProbably because all the live tuples are clustered at the end of the\ntable, and the initial pages are polluted with dead tuples. 
Try\nvacuuming the table much more often, say every few minutes.\n\nYour table is 2536 pages long, but it could probably be in the vicinity\nof 700 ...\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 8 Nov 2007 11:56:46 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> Heikki Linnakangas wrote:\n>> On a small table like that you could run VACUUM every few minutes\n>> without much impact on performance. That should keep the table size in\n>> check.\n> \n> Ok, we run VACUUM ANALYZE only one time a day, every night. But we would\n> espect the performance to get ok again after running vacuum, and it\n> doesn't. Only CLUSTER helps.\n\nIf the table is already bloated, a VACUUM won't usually shrink it. It \nonly makes the space available for reuse, but a sequential scan still \nneeds to go through a lot of pages.\n\nCLUSTER on the other hand repacks the tuples and gets rid of all the \nunused space on pages. You need to run CLUSTER or VACUUM FULL once to \nshrink the relation, but after that frequent-enough VACUUMs should keep \nthe table size down.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 08 Nov 2007 15:09:55 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Alvaro Herrera wrote:\n> Rafael Martinez wrote:\n\n>> The 'problem' is that performance decrease during the day and the only\n>> thing that helps is to run CLUSTER on the table with problems. VACUUM\n>> ANALYZE does not help.\n> \n> Probably because all the live tuples are clustered at the end of the\n> table, and the initial pages are polluted with dead tuples. Try\n> vacuuming the table much more often, say every few minutes.\n> \n> Your table is 2536 pages long, but it could probably be in the vicinity\n> of 700 ...\n> \n\nWe run VACUUM ANALYZE every 10 minuttes during 2-3 days to see if it\nhelped, but when it didn't we when back to the old configuration (1 time\neveryday)\n\nYes, after a CLUSTER we are using 517 pages. But the table does not grow\nmuch, it is always around 12-20MB, it looks like vacuum works without\nproblems.\n\nregards,\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Thu, 08 Nov 2007 16:10:10 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Heikki Linnakangas wrote:\n>\n> If the table is already bloated, a VACUUM won't usually shrink it. It\n> only makes the space available for reuse, but a sequential scan still\n> needs to go through a lot of pages.\n> \n> CLUSTER on the other hand repacks the tuples and gets rid of all the\n> unused space on pages. You need to run CLUSTER or VACUUM FULL once to\n> shrink the relation, but after that frequent-enough VACUUMs should keep\n> the table size down.\n> \n\nOk, thanks for the advice. 
We will try this and will come back with more\ninformation.\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Thu, 08 Nov 2007 16:15:45 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "In response to Rafael Martinez <[email protected]>:\n\n> Heikki Linnakangas wrote:\n> > Rafael Martinez wrote:\n> \n> >> The tables with this 'problem' are not big, so CLUSTER finnish very fast\n> >> and it does not have an impact in the access because of locking. But we\n> >> wonder why this happens.\n> > \n> > 2 seconds for seq scanning 12 MB worth of data sounds like a lot. Have\n> > you increased shared_buffers from the default? Which operating system\n> > are you using? Shared memory access is known to be slower on Windows.\n> > \n> \n> This is a server with 8GB of ram, we are using 25% as shared_buffers.\n> Linux RHELAS4 with a 2.6.9-55.0.9.ELsmp kernel / x86_64.\n> \n> > On a small table like that you could run VACUUM every few minutes\n> > without much impact on performance. That should keep the table size in\n> > check.\n> > \n> \n> Ok, we run VACUUM ANALYZE only one time a day, every night. But we would\n> espect the performance to get ok again after running vacuum, and it\n> doesn't. Only CLUSTER helps.\n\nIf you have a large value for max_fsm_pages, but only vacuum once a day,\nyou could end up with considerable bloat on a small table, but not enough\nto exceed max_fsm_pages (thus you wouldn't see any warning/errors)\n\nI recommend either:\na) autovaccum, with aggressive settings for that table\nb) a more aggressive schedule for that particular table, maybe a cron\n that vacuums that table every 5 minutes.\n\nYou could also do a combination, i.e. enable autovacuum with conservative\nsettings and set a cron to vacuum the table every 10 minutes.\n\nVacuuming once a day is usually only enough if you have very minimal\nupdates.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 8 Nov 2007 10:17:11 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez <[email protected]> writes:\n> Heikki Linnakangas wrote:\n>> On a small table like that you could run VACUUM every few minutes\n>> without much impact on performance. That should keep the table size in\n>> check.\n\n> Ok, we run VACUUM ANALYZE only one time a day, every night.\n\nThere's your problem.\n\nReading between the lines I gather that you think an update is \"free\"\nin the sense of not creating a need for vacuum. It's not --- it's\nexactly equivalent to an insert + a delete, and it leaves behind a\ndead row that needs to be vacuumed. If you do a lot of updates, you\nneed to vacuum.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 10:33:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance " }, { "msg_contents": "[email protected] (Rafael Martinez) writes:\n> Heikki Linnakangas wrote:\n>> On a small table like that you could run VACUUM every few minutes\n>> without much impact on performance. 
That should keep the table size in\n>> check.\n>> \n>\n> Ok, we run VACUUM ANALYZE only one time a day, every night. But we would\n> espect the performance to get ok again after running vacuum, and it\n> doesn't. Only CLUSTER helps.\n\nYou have characterized the shape of the problem Right There.\n\nIf you only VACUUM that table once a day, then it has a whole day to\nget cluttered with dead tuples, which increases its size to encompass\n651 pages, and NOTHING ever allows it to shrink back to a small size.\nPlain VACUUM (or VACUUM ANALYZE) does not attempt to shrink table\nsizes. Only VACUUM FULL and CLUSTER do that.\n\nHere are some options to \"parameterize\" your choices:\n\n- If you vacuum the table often enough that only 10% of the table\n consists of dead tuples, then you can expect the table to perpetually\n have 10% of dead space.\n\n- If you vacuum the table seldom enough that 90% of the table may be\n expected to consist of dead tuples, then you can expect this table to\n consistently have 90% of its space be \"dead.\"\n\nIt sounds like this particular table needs to be vacuumed quite a bit\nmore frequently than once a day.\n\nOn our systems, we have certain tables where tuples get killed off so\nfrequently that we find it worthwhile to vacuum those tables once\nevery two to three minutes. If we didn't, we'd see application\nperformance bog down until it forced us to CLUSTER or VACUUM FULL the\ntable.\n-- \n\"cbbrowne\",\"@\",\"acm.org\"\nhttp://linuxfinances.info/info/linux.html\n\"How much more helpful could I be than to provide you with the\nappropriate e-mail address? I could engrave it on a clue-by-four and\ndeliver it to you in Chicago, I suppose.\" -- Seen on Slashdot...\n", "msg_date": "Thu, 08 Nov 2007 10:47:44 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Tom Lane wrote:\n> Rafael Martinez <[email protected]> writes:\n>> Heikki Linnakangas wrote:\n>>> On a small table like that you could run VACUUM every few minutes\n>>> without much impact on performance. That should keep the table size in\n>>> check.\n> \n>> Ok, we run VACUUM ANALYZE only one time a day, every night.\n> \n> There's your problem.\n> \n> Reading between the lines I gather that you think an update is \"free\"\n> in the sense of not creating a need for vacuum. It's not --- it's\n> exactly equivalent to an insert + a delete, and it leaves behind a\n> dead row that needs to be vacuumed. If you do a lot of updates, you\n> need to vacuum.\n> \n\nHello again\n\nWe have more information about this 'problem'.\n\nTom, we have many other tables which are much bigger and have larger\namount of updates/deletes and are working very well with our actual\nvacuum configuration. We are aware of how important is to run vacuum\njobs and we think we have a good understanding of how/why vacuum works.\n\nWe think the problem we are seeing sometimes with these small tables is\nanother thing.\n\nWe increased the vacuum analyze jobs, as you all pointed, from one a day\nto four every hour (we did not run cluster at all since we started with\nthis new configuration). We started with this after a fresh 'cluster' of\nthe table. This has been in production since last week and the\nperformance of this table only gets worst and worst.\n\nAfter 4 days with the new maintenance jobs, it took more than 4 sec to\nrun a select on this table. After running a cluster we are down to\naround 50ms. 
again.\n\nI can not believe 4 vacuum jobs every hour is not enough for this table.\nIf we see the statistics, it has only ca.67000 updates/day, ca.43\ndeletes/day and ca.48 inserts/day. This is nothing compare with many of\nthe systems we are administrating.\n\nWhat we see in common between these tables (we have seen this a couple\nof times before) is:\n\n- Small table size.\n- Small amount of tuples in the table (almost constant).\n- Large amount of updates compared to inserts/deletes and compared to\nthe amount of tuples in the table.\n\nYou that know the interns of postgres :), can you think of anything that\ncan be causing this behavior? Any more suggestions? do you need more data?\n\nThanks in advance :)\n\nWe are sending all data we had before the last cluster command and after\nit.\n\n----------------------------------------------------------------------\n**** BEFORE CLUSTER ****\n----------------------------------------------------------------------\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 99933 row versions in 558 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"hosts\": found 0 removable, 99933 nonremovable row versions in\n3875 pages\nDETAIL: 83623 dead row versions cannot be removed yet.\nThere were 12079 unused item pointers.\n0 pages are entirely empty.\nCPU 0.02s/0.03u sec elapsed 0.06 sec.\nINFO: vacuuming \"pg_toast.pg_toast_376272\"\nINFO: index \"pg_toast_376272_index\" now contains 133 row versions in 2\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_376272\": found 0 removable, 133 nonremovable row\nversions in 65 pages\nDETAIL: 2 dead row versions cannot be removed yet.\nThere were 127 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 3875 of 3875 pages, containing 16310 live rows\nand 83623 dead rows; 16310 rows in sample, 16310 estimated total rows\n\n\nscanorama=# SELECT age(now(), pg_postmaster_start_time());\n age\n-------------------------\n 25 days 22:40:01.241036\n(1 row)\n\nscanorama=# SELECT pg_size_pretty(pg_relation_size('hosts'));\n pg_size_pretty\n----------------\n 30 MB\n(1 row)\n\nscanorama=# SELECT count(*) from hosts;\n count\n-------\n 16311\n(1 row)\n\nscanorama=# SELECT\nrelname,relpages,reltuples,reltoastrelid,reltoastidxid from pg_class\nwhere relname = 'hosts';\n relname | relpages | reltuples | reltoastrelid | reltoastidxid\n---------+----------+-----------+---------------+---------------\n hosts | 3875 | 100386 | 376276 | 0\n(1 row)\n\nscanorama=# SELECT * from pg_stat_all_tables where schemaname = 'public'\nand relname = 'hosts';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n--------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 105805 | public | hosts | 2412159 | 39109243131 | 3244406 |\n 9870886 | 1208 | 1685525 | 1088\n(1 row)\n\nscanorama=# EXPLAIN ANALYZE SELECT * from hosts;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------\n Seq Scan on hosts (cost=0.00..4878.86 rows=100386 width=314) (actual\ntime=0.025..4719.082 rows=16311 loops=1)\n Total runtime: 4742.754 ms\n(2 rows)\n\n\nscanorama=# CLUSTER hosts_pkey ON hosts 
;\nCLUSTER\n\n----------------------------------------------------------------------\n**** AFTER CLUSTER ****\n----------------------------------------------------------------------\n\nscanorama=# VACUUM VERBOSE ANALYZE hosts;\n\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 16321 row versions in 65 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"hosts\": found 0 removable, 16321 nonremovable row versions in\n514 pages\nDETAIL: 10 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_383759\"\nINFO: index \"pg_toast_383759_index\" now contains 131 row versions in 2\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_383759\": found 0 removable, 131 nonremovable row\nversions in 33 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 514 of 514 pages, containing 16311 live rows and\n10 dead rows; 16311 rows in sample, 16311 estimated total rows\nVACUUM\n\n\nscanorama=# SELECT pg_size_pretty(pg_relation_size('hosts'));\n pg_size_pretty\n----------------\n 4112 kB\n(1 row)\n\nscanorama=# SELECT count(*) from hosts;\n count\n-------\n 16311\n(1 row)\n\nscanorama=# SELECT\nrelname,relpages,reltuples,reltoastrelid,reltoastidxid from pg_class\nwhere relname = 'hosts';\n relname | relpages | reltuples | reltoastrelid | reltoastidxid\n---------+----------+-----------+---------------+---------------\n hosts | 514 | 16321 | 383763 | 0\n(1 row)\n\nscanorama=# SELECT * from pg_stat_all_tables where schemaname = 'public'\nand relname = 'hosts';\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n--------+------------+---------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 105805 | public | hosts | 2412669 | 39117480187 | 3244962 |\n 9887752 | 1208 | 1685857 | 1088\n(1 row)\n\nscanorama=# EXPLAIN ANALYZE SELECT * from hosts;\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Seq Scan on hosts (cost=0.00..678.53 rows=16353 width=314) (actual\ntime=0.006..32.143 rows=16311 loops=1)\n Total runtime: 57.408 ms\n(2 rows)\n----------------------------------------------------------------------\n\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Mon, 12 Nov 2007 16:38:03 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> \n> We have more information about this 'problem'.\n> \n\nSending this just in case it can help ....\n\nChecking all the log files from these vacuum jobs we have been running,\nwe found one that looks difference from the rest, specially on the\namount of removed pages.\n\nWe are sending also the output before and after the one we are 
talking\nabout:\n\n###############################################\n2007-11-11_0245.log\n###############################################\nCOMMAND: /local/opt/pgsql-8.1/bin/psql -h /tmp/pg_sockets/dbpg-meridien\n-p 5432 scanorama -c 'VACUUM VERBOSE ANALYZE hosts'\nCODE: 0\n\nOUTPUT:\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 110886 row versions in 554 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.02s/0.00u sec elapsed 0.87 sec.\nINFO: \"hosts\": found 0 removable, 110886 nonremovable row versions in\n3848 pages\nDETAIL: 94563 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.05s/0.03u sec elapsed 0.94 sec.\nINFO: vacuuming \"pg_toast.pg_toast_376272\"\nINFO: index \"pg_toast_376272_index\" now contains 260 row versions in 2\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_376272\": found 0 removable, 260 nonremovable row\nversions in 65 pages\nDETAIL: 129 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 3848 of 3848 pages, containing 16323 live rows\nand 94563 dead rows; 16323 rows in sample, 16323 estimated total rows\nVACUUM\n\n###############################################\n2007-11-11_0301.log\n###############################################\nCOMMAND: /local/opt/pgsql-8.1/bin/psql -h /tmp/pg_sockets/dbpg-meridien\n-p 5432 scanorama -c 'VACUUM VERBOSE ANALYZE hosts'\nCODE: 0\n\nOUTPUT:\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 16782 row versions in 556 pages\nDETAIL: 94551 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.09u sec elapsed 590.48 sec.\nINFO: \"hosts\": removed 94551 row versions in 3835 pages\nDETAIL: CPU 0.00s/0.03u sec elapsed 0.10 sec.\nINFO: \"hosts\": found 94551 removable, 16695 nonremovable row versions\nin 3865 pages\nDETAIL: 372 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.08s/0.16u sec elapsed 590.99 sec.\nINFO: vacuuming \"pg_toast.pg_toast_376272\"\nINFO: index \"pg_toast_376272_index\" now contains 131 row versions in 2\npages\nDETAIL: 129 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_376272\": removed 129 row versions in 33 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 32.05 sec.\nINFO: \"pg_toast_376272\": found 129 removable, 131 nonremovable row\nversions in 65 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 51.96 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 3875 of 3875 pages, containing 16323 live rows\nand 576 dead rows; 16323 rows in sample, 16323 estimated total rows\nVACUUM\n\n###############################################\n2007-11-11_0315.log\n###############################################\nCOMMAND: /local/opt/pgsql-8.1/bin/psql -h /tmp/pg_sockets/dbpg-meridien\n-p 5432 scanorama -c 'VACUUM VERBOSE ANALYZE hosts'\nCODE: 0\n\nOUTPUT:\nINFO: vacuuming \"public.hosts\"\nINFO: index \"hosts_pkey\" now contains 17363 row versions in 556 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 
0.01s/0.00u sec elapsed 1.39 sec.\nINFO: \"hosts\": found 0 removable, 17362 nonremovable row versions in\n3875 pages\nDETAIL: 1039 dead row versions cannot be removed yet.\nThere were 94074 unused item pointers.\n0 pages are entirely empty.\nCPU 0.02s/0.02u sec elapsed 1.43 sec.\nINFO: vacuuming \"pg_toast.pg_toast_376272\"\nINFO: index \"pg_toast_376272_index\" now contains 131 row versions in 2\npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_376272\": found 0 removable, 131 nonremovable row\nversions in 65 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 129 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.05 sec.\nINFO: analyzing \"public.hosts\"\nINFO: \"hosts\": scanned 3875 of 3875 pages, containing 16323 live rows\nand 1040 dead rows; 16323 rows in sample, 16323 estimated total rows\nVACUUM\n\n\n\nAfter this last job the amount of dead rows just continued growing until\n today.\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Mon, 12 Nov 2007 17:11:59 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> DETAIL: 83623 dead row versions cannot be removed yet.\n\nLooks like you have a long-running transaction in the background, so \nVACUUM can't remove all dead tuples. I didn't see that in the vacuum \nverbose outputs you sent earlier. Is there any backends in \"Idle in \ntransaction\" state, if you run ps?\n\nIn 8.1, CLUSTER will remove those tuples anyway, but it's actually not \ncorrect. If the long-running transaction decides to do a select on \nhosts-table later on, it will see an empty table because of that. That's \nbeen fixed in 8.3, but it also means that CLUSTER might no longer help \nyou on 8.3. VACUUM FULL is safe in that sense in 8.1 as well.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 12 Nov 2007 16:31:41 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "On Nov 12, 2007 10:11 AM, Rafael Martinez <[email protected]> wrote:\n\n> Sending this just in case it can help ....\n>\n> Checking all the log files from these vacuum jobs we have been running,\n> we found one that looks difference from the rest, specially on the\n> amount of removed pages.\n>\n> We are sending also the output before and after the one we are talking\n> about:\n>\n> ###############################################\n> 2007-11-11_0245.log\n> ###############################################\n> COMMAND: /local/opt/pgsql-8.1/bin/psql -h /tmp/pg_sockets/dbpg-meridien\n> -p 5432 scanorama -c 'VACUUM VERBOSE ANALYZE hosts'\n> CODE: 0\n>\n> OUTPUT:\n> INFO: vacuuming \"public.hosts\"\n> INFO: index \"hosts_pkey\" now contains 110886 row versions in 554 pages\n> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.02s/0.00u sec elapsed 0.87 sec.\n> INFO: \"hosts\": found 0 removable, 110886 nonremovable row versions in\n> 3848 pages\n> DETAIL: 94563 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n\nYou see that right there? 
You've got 94k dead rows that cannot be removed.\n\nThen, later on, they can:\n\n> CPU 0.04s/0.09u sec elapsed 590.48 sec.\n> INFO: \"hosts\": removed 94551 row versions in 3835 pages\n> DETAIL: CPU 0.00s/0.03u sec elapsed 0.10 sec.\n> INFO: \"hosts\": found 94551 removable, 16695 nonremovable row versions\n> in 3865 pages\n\nSo, between the first and second vacuum you had a long running\ntransaction that finally ended and let you clean up the dead rows.\n\n> After this last job the amount of dead rows just continued growing until\n> today.\n\nI think you've got a long running transaction that's preventing you\nfrom recovering dead rows.\n", "msg_date": "Mon, 12 Nov 2007 10:35:52 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "In response to Heikki Linnakangas <[email protected]>:\n\n> Rafael Martinez wrote:\n> > DETAIL: 83623 dead row versions cannot be removed yet.\n> \n> Looks like you have a long-running transaction in the background, so \n> VACUUM can't remove all dead tuples. I didn't see that in the vacuum \n> verbose outputs you sent earlier. Is there any backends in \"Idle in \n> transaction\" state, if you run ps?\n> \n> In 8.1, CLUSTER will remove those tuples anyway, but it's actually not \n> correct. If the long-running transaction decides to do a select on \n> hosts-table later on, it will see an empty table because of that. That's \n> been fixed in 8.3, but it also means that CLUSTER might no longer help \n> you on 8.3. VACUUM FULL is safe in that sense in 8.1 as well.\n\nConsidering how small the table is, you may want to just program the\nprocess holding the transaction open to do a vacuum full of that table\nwhen it's done with it's work.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. 
The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Mon, 12 Nov 2007 11:37:40 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Scott Marlowe wrote:\n> On Nov 12, 2007 10:11 AM, Rafael Martinez <[email protected]> wrote:\n> \n>> Sending this just in case it can help ....\n>>\n>> Checking all the log files from these vacuum jobs we have been running,\n>> we found one that looks difference from the rest, specially on the\n>> amount of removed pages.\n>>\n>> We are sending also the output before and after the one we are talking\n>> about:\n>>\n>> ###############################################\n>> 2007-11-11_0245.log\n>> ###############################################\n>> COMMAND: /local/opt/pgsql-8.1/bin/psql -h /tmp/pg_sockets/dbpg-meridien\n>> -p 5432 scanorama -c 'VACUUM VERBOSE ANALYZE hosts'\n>> CODE: 0\n>>\n>> OUTPUT:\n>> INFO: vacuuming \"public.hosts\"\n>> INFO: index \"hosts_pkey\" now contains 110886 row versions in 554 pages\n>> DETAIL: 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.02s/0.00u sec elapsed 0.87 sec.\n>> INFO: \"hosts\": found 0 removable, 110886 nonremovable row versions in\n>> 3848 pages\n>> DETAIL: 94563 dead row versions cannot be removed yet.\n>> There were 0 unused item pointers.\n> \n> You see that right there? You've got 94k dead rows that cannot be removed.\n> \n> Then, later on, they can:\n> \n>> CPU 0.04s/0.09u sec elapsed 590.48 sec.\n>> INFO: \"hosts\": removed 94551 row versions in 3835 pages\n>> DETAIL: CPU 0.00s/0.03u sec elapsed 0.10 sec.\n>> INFO: \"hosts\": found 94551 removable, 16695 nonremovable row versions\n>> in 3865 pages\n> \n> So, between the first and second vacuum you had a long running\n> transaction that finally ended and let you clean up the dead rows.\n\nNo, before 8.3, CLUSTER throws away non-removable dead tuples. So the \nlong running transaction might still be there.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 12 Nov 2007 17:01:19 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "On Nov 12, 2007 11:01 AM, Heikki Linnakangas <[email protected]> wrote:\n>\n> Scott Marlowe wrote:\n> > So, between the first and second vacuum you had a long running\n> > transaction that finally ended and let you clean up the dead rows.\n>\n> No, before 8.3, CLUSTER throws away non-removable dead tuples. So the\n> long running transaction might still be there.\n\nWow, good to know. Why would it have changed in 8.3? Was it\nconsidered broken behaviour?\n", "msg_date": "Mon, 12 Nov 2007 11:04:44 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Scott Marlowe wrote:\n> On Nov 12, 2007 11:01 AM, Heikki Linnakangas <[email protected]> wrote:\n>> Scott Marlowe wrote:\n>>> So, between the first and second vacuum you had a long running\n>>> transaction that finally ended and let you clean up the dead rows.\n>> No, before 8.3, CLUSTER throws away non-removable dead tuples. So the\n>> long running transaction might still be there.\n> \n> Wow, good to know. 
Why would it have changed in 8.3? Was it\n> considered broken behaviour?\n\nI certainly considered it broken, though it was a known issue all along.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 12 Nov 2007 18:03:16 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Rafael Martinez wrote:\n>> DETAIL: 83623 dead row versions cannot be removed yet.\n> \n> Looks like you have a long-running transaction in the background, so\n> VACUUM can't remove all dead tuples. I didn't see that in the vacuum\n> verbose outputs you sent earlier. Is there any backends in \"Idle in\n> transaction\" state, if you run ps?\n> \n\nI don't see any long transaction in progress (<IDLE> in transaction) and\nif we run the vacuum jobb manual just after checking this, it still\ncannot remove the dead rows.\n\nAny suggestions cause vacuum cannot remove these dead rows?\n\n> In 8.1, CLUSTER will remove those tuples anyway, but it's actually not\n> correct. \n\nWith other words, .... we have to be very carefull to not run CLUSTER on\na table been modified inside a transaction if we do not want to lose\ndata? ...\n\nDoes this mean that if we run a transaction which update/delete many\nrows, run cluster before the transaction is finnish, and then rollback\nthe transaction after cluster has been executed, all dead rows\nupdated/deleted by the transaction can not be rollbacked back because\nthey are not there anymore?\n\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Tue, 13 Nov 2007 09:49:43 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" }, { "msg_contents": "Rafael Martinez wrote:\n> Heikki Linnakangas wrote:\n> \n>> In 8.1, CLUSTER will remove those tuples anyway, but it's actually not\n>> correct. \n> \n> With other words, .... we have to be very carefull to not run CLUSTER on\n> a table been modified inside a transaction if we do not want to lose\n> data? ...\n> \n> Does this mean that if we run a transaction which update/delete many\n> rows, run cluster before the transaction is finnish, and then rollback\n> the transaction after cluster has been executed, all dead rows\n> updated/deleted by the transaction can not be rollbacked back because\n> they are not there anymore?\n> \n\nStupid question, I could have checked this myself. CLUSTER will wait to\nbe executed until the transaction is finish. I have just checked this.\n\n\n-- \n Rafael Martinez, <[email protected]>\n Center for Information Technology Services\n University of Oslo, Norway\n\n PGP Public Key: http://folk.uio.no/rafael/\n", "msg_date": "Tue, 13 Nov 2007 10:03:19 +0100", "msg_from": "Rafael Martinez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need to run CLUSTER to keep performance" } ]
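A practical way to chase the "dead row versions cannot be removed yet" symptom from this thread is to look for sessions holding a transaction open, and to watch the table's page count between vacuums. The queries below are only a sketch against the 8.1-era catalogs (seeing current_query requires command-string statistics collection to be enabled), using the hosts table name from the thread:

  -- sessions sitting in an open transaction without doing anything
  SELECT procpid, usename, datname, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query = '<IDLE> in transaction'
  ORDER BY query_start;

  -- page/tuple counts as of the last VACUUM or ANALYZE, to watch bloat over time
  SELECT relname, relpages, reltuples
  FROM pg_class
  WHERE relname = 'hosts';

If an old backend shows up in the first query, no amount of plain VACUUM can reclaim the rows it can still see; ending that transaction, and then vacuuming frequently, is what keeps the table close to its freshly CLUSTERed size.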
[ { "msg_contents": "I have a database where I dropped all indexes on a table last night\nand built a new set of indexes. The goal is to try and let the\ndatabase have fewer indexes and use them more. I removed a bunch of\nindexes that were surviving from our 7.3 days where functionality will\nnow be covered by 8.1's use of multiple indexes..\n\nAnyway, except for the primary key, all indexes were dropped and then\nthe new indexes where created. However, I am confused by what the\npg_stat_user_indexes and pg_statio_users_indexes are telling me.\nWhich one is correct.\n\npg_stat_user_indexes is reporting this:\n\"indexrelname\",\"idx_scan\",'idx_tup_read\",\"idx_tup_fetch\"\n\"clmhdr_pkey\";1583576;1577152;1577027\n\"hdr_clm_status_partial_idx\";5243;6999857;372251\n\"hdr_create_dt_idx\";1010;1420708;3656\n\"hdr_user_id_idx\";71;928074;918439\n\"hdr_pat_cntl_nbr_idx\";14;42;29\n\"hdr_clm_type_idx\";1;673982;0\n\"hdr_process_dt_idx\";1;22050;0\n\"erb_hdr_create_dt_idx\";0;0;0\n\"erb_hdr_process_dt_idx\";0;0;0\n\"erb_hdr_stmt_from_dt_idx\";0;0;0\n\"erb_hdr_stmt_thru_dt_idx\";0;0;0\n\"erb_hdr_transmit_dt_idx\";0;0;0\n\"hdr_accepted_dt_idx\";0;0;0\n\"hdr_asc_resp_rpt_cd_idx\";0;0;0\n\"hdr_bill_type_idx\";0;0;0\n\"hdr_fss_clm_status_idx\";0;0;0\n\"hdr_fss_process_dt_idx\";0;0;0\n\"hdr_submit_mode_idx\";0;0;0\n\"patient_name_idx\";0;0;0\n\"statement_date_idx\";0;0;0\n\n\npg_statio_user_indexes is reporting:\n\"indexrelname\",\"idx_blks_read\",\"idx_blks_hit\"\n\"hdr_clm_status_partial_idx\";182;59455\n\"clmhdr_pkey\";115382;6540557\n\"erb_hdr_process_dt_idx\";7943;32679\n\"erb_hdr_create_dt_idx\";8000;32042\n\"erb_hdr_transmit_dt_idx\";7953;31511\n\"erb_hdr_stmt_thru_dt_idx\";8667;30924\n\"hdr_create_dt_idx\";11988;42617\n\"erb_hdr_stmt_from_dt_idx\";8632;30173\n\"hdr_fss_clm_status_idx\";9920;32774\n\"hdr_bill_type_idx\";9949;32730\n\"hdr_asc_resp_rpt_cd_idx\";9916;32387\n\"hdr_clm_type_idx\";11777;33130\n\"hdr_fss_process_dt_idx\";11891;33423\n\"hdr_accepted_dt_idx\";11913;32876\n\"hdr_process_dt_idx\";11976;33049\n\"hdr_submit_mode_idx\";13815;32932\n\"hdr_user_id_idx\";17372;34188\n\"hdr_pat_cntl_nbr_idx\";15061;29137\n\"statement_date_idx\";18838;29834\n\"patient_name_idx\";21619;26182\n\n\n\nIf there has been no scans on an index (as according to\npg_stat_user_indexes), why is pg_statio_user_indexes showing non 0\nvalues in idx_blks_hit/read?\n\nPlease help me understand this apparent contradiction.\n\nThanks,\n\nChris\n\nPG 8.1.3\n", "msg_date": "Thu, 8 Nov 2007 09:49:41 -0500", "msg_from": "\"Chris Hoover\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help understanding stat numbers" }, { "msg_contents": "\"Chris Hoover\" <[email protected]> writes:\n> If there has been no scans on an index (as according to\n> pg_stat_user_indexes), why is pg_statio_user_indexes showing non 0\n> values in idx_blks_hit/read?\n\nI grow weary, but I think that \"scan\" is only incremented by commencing\na SELECT search using the index, whereas the block-level counts are also\nincremented when the index is modified by an insert or update. You may\nbe looking at indexes that are eating update cycles but not being used\nfor anything important...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 09 Nov 2007 02:08:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help understanding stat numbers " } ]
[ { "msg_contents": "Hello,\n\nI am having an issue on PostgreSQL 8.0.12. In the past we had \nperformance issues with the query planner for queries on some tables \nwhere we knew we had indexes and it was doing a sequential scan, and \nfor this reason we issue \"SET enable_seqscan = FALSE\" for some queries.\n\nRecently we have stumbled upon one of these kind of queries that is \ngiving terrible performance, because seqscan is disabled. I've reduced \nthe problem to a a command like this one:\n\nSELECT * from gsm_sector_metrics NATURAL JOIN gsm_amr_metrics INNER \nJOIN temp_busy_hr USING(start_time,bsc_id,sect_id);\n\nWhere temp_busy_hr is a temporary table.\n\nIf the previous is issued with seqscan TRUE, it runs within reasonable \ntime, else it runs for ever. The query plan for the previous query \nwith enable_seqscan = TRUE:\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.00..384555.98 rows=1 width=3092)\n -> Nested Loop (cost=0.00..384555.98 rows=1 width=3092)\n Join Filter: ((\"inner\".bsc_id = \"outer\".bsc_id) AND \n(\"inner\".site_id = \"outer\".site_id) AND (\"inner\".sect_id = \n\"outer\".sect_id))\n -> Nested Loop (cost=0.00..368645.64 rows=28 width=1192)\n Join Filter: ((\"outer\".sect_id = \"inner\".sect_id) AND \n(\"outer\".bsc_id = \"inner\".bsc_id))\n -> Seq Scan on temp_busy_hr (cost=0.00..24.00 \nrows=1400 width=24)\n -> Index Scan using gsm_amr_start_time_idx on \ngsm_amr_metrics (cost=0.00..226.66 rows=2094 width=1168)\n Index Cond: (\"outer\".start_time = \ngsm_amr_metrics.start_time)\n -> Index Scan using gsm_sector_start_time_idx on \ngsm_sector_metrics t1 (cost=0.00..528.77 rows=1973 width=1936)\n Index Cond: (t1.start_time = \"outer\".start_time)\n(10 rows)\n\nand the plan for enable_seqscan = FALSE:\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=100000097.16.. 100720844.011111 rows=1 width=3092)\n -> Nested Loop (cost=100000097.16..100720844.01 rows=1 width=3092)\n Join Filter: ((\"inner\".bsc_id = \"outer\".bsc_id) AND \n(\"inner\".site_id = \"outer\".site_id) AND (\"inner\".sect_id = \n\"outer\".sect_id))\n -> Merge Join (cost=100000097.16..100704933.67 rows=28 \nwidth=1192)\n Merge Cond: (\"outer\".start_time = \"inner\".start_time)\n Join Filter: ((\"inner\".sect_id = \"outer\".sect_id) AND \n(\"inner\".bsc_id = \"outer\".bsc_id))\n -> Index Scan using gsm_amr_start_time_idx on \ngsm_amr_metrics (cost=0.00..631211.45 rows=6005551 width=1168)\n -> Sort (cost=100000097.16..100000100.66 rows=1400 \nwidth=24)\n Sort Key: temp_busy_hr.start_time\n -> Seq Scan on temp_busy_hr \n(cost=100000000.00..100000024.00 rows=1400 width=24)\n -> Index Scan using gsm_sector_start_time_idx on \ngsm_sector_metrics t1 (cost=0.00..528.77 rows=1973 width=1936)\n Index Cond: (t1.start_time = \"outer\".start_time)\n(12 rows)\n\nAny ideas what could I try to fix this problem?\n\nThanks,\nPepe\n", "msg_date": "Thu, 8 Nov 2007 16:47:09 -0600", "msg_from": "Pepe Barbe <[email protected]>", "msg_from_op": true, "msg_subject": "Join performance" }, { "msg_contents": "On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:\n> I am having an issue on PostgreSQL 8.0.12. 
In the past we had performance \n> issues with the query planner for queries on some tables where we knew we \n> had indexes and it was doing a sequential scan, and for this reason we \n> issue \"SET enable_seqscan = FALSE\" for some queries.\n\nThis is a bad idea in general. Did you really measure that this made queries\nrun faster? Generally, using an index is not always a win, and the planner\ntries to figure out when it isn't. Setting it globally is seldom a good idea\nanyway; if it really _is_ a win for a given query, you could always set it\nlocally in that session.\n\n> Any ideas what could I try to fix this problem?\n\nRe-enable seqscan?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 8 Nov 2007 23:51:20 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:\n>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance \n>> issues with the query planner for queries on some tables where we knew we \n>> had indexes and it was doing a sequential scan, and for this reason we \n>> issue \"SET enable_seqscan = FALSE\" for some queries.\n\n> This is a bad idea in general.\n\nIndeed. A less brute-force way of getting the planner to favor\nindexscans is to reduce random_page_cost ... have you experimented\nwith that?\n\nAlso, consider updating to 8.2.x, which has an improved cost model\nfor indexscans and will more often make the correct choice without\nsuch shenanigans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:15:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance " }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:\n>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance \n>> issues with the query planner for queries on some tables where we knew we \n>> had indexes and it was doing a sequential scan, and for this reason we \n>> issue \"SET enable_seqscan = FALSE\" for some queries.\n\n> This is a bad idea in general.\n\nIndeed. A less brute-force way of getting the planner to favor\nindexscans is to reduce random_page_cost ... have you experimented\nwith that?\n\nAlso, consider updating to 8.2.x, which has an improved cost model\nfor indexscans and will more often make the correct choice without\nsuch shenanigans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:15:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance " }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:\n>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance \n>> issues with the query planner for queries on some tables where we knew we \n>> had indexes and it was doing a sequential scan, and for this reason we \n>> issue \"SET enable_seqscan = FALSE\" for some queries.\n\n> This is a bad idea in general.\n\nIndeed. A less brute-force way of getting the planner to favor\nindexscans is to reduce random_page_cost ... 
have you experimented\nwith that?\n\nAlso, consider updating to 8.2.x, which has an improved cost model\nfor indexscans and will more often make the correct choice without\nsuch shenanigans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:15:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance " }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Thu, Nov 08, 2007 at 04:47:09PM -0600, Pepe Barbe wrote:\n>> I am having an issue on PostgreSQL 8.0.12. In the past we had performance \n>> issues with the query planner for queries on some tables where we knew we \n>> had indexes and it was doing a sequential scan, and for this reason we \n>> issue \"SET enable_seqscan = FALSE\" for some queries.\n\n> This is a bad idea in general.\n\nIndeed. A less brute-force way of getting the planner to favor\nindexscans is to reduce random_page_cost ... have you experimented\nwith that?\n\nAlso, consider updating to 8.2.x, which has an improved cost model\nfor indexscans and will more often make the correct choice without\nsuch shenanigans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:15:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance " }, { "msg_contents": "Ooops, sorry about the multiple copies there --- not sure what happened.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Nov 2007 19:42:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance " }, { "msg_contents": "Pepe Barbe wrote:\n> Hello,\n> \n> I am having an issue on PostgreSQL 8.0.12. In the past we had \n> performance issues with the query planner for queries on some tables \n> where we knew we had indexes and it was doing a sequential scan, and for \n> this reason we issue \"SET enable_seqscan = FALSE\" for some queries.\n> \n> Recently we have stumbled upon one of these kind of queries that is \n> giving terrible performance, because seqscan is disabled. I've reduced \n> the problem to a a command like this one:\n> \n> SELECT * from gsm_sector_metrics NATURAL JOIN gsm_amr_metrics INNER JOIN \n> temp_busy_hr USING(start_time,bsc_id,sect_id);\n> \n> Where temp_busy_hr is a temporary table.\n\nHave you tried analyzing the temp_busy_hr table?\nPossibly adding an index to the temp table can help if you are doing lots of queries.\n\n> \n> If the previous is issued with seqscan TRUE, it runs within reasonable \n> time, else it runs for ever. The query plan for the previous query with \n> enable_seqscan = TRUE:\n\nIt would be worth know how far the estimates are out. 
Also, have you tried altering the statistics target\nfor relevant columns to increase the accuracy?\n\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Limit (cost=0.00..384555.98 rows=1 width=3092)\n> -> Nested Loop (cost=0.00..384555.98 rows=1 width=3092)\n> Join Filter: ((\"inner\".bsc_id = \"outer\".bsc_id) AND (\"inner\".site_id = \"outer\".site_id) AND (\"inner\".sect_id = \"outer\".sect_id))\n> -> Nested Loop (cost=0.00..368645.64 rows=28 width=1192)\n> Join Filter: ((\"outer\".sect_id = \"inner\".sect_id) AND (\"outer\".bsc_id = \"inner\".bsc_id))\n> -> Seq Scan on temp_busy_hr (cost=0.00..24.00 rows=1400 width=24)\n> -> Index Scan using gsm_amr_start_time_idx on gsm_amr_metrics (cost=0.00..226.66 rows=2094 width=1168)\n> Index Cond: (\"outer\".start_time = gsm_amr_metrics.start_time)\n> -> Index Scan using gsm_sector_start_time_idx on gsm_sector_metrics t1 (cost=0.00..528.77 rows=1973 width=1936)\n> Index Cond: (t1.start_time = \"outer\".start_time)\n> (10 rows)\n> \n> and the plan for enable_seqscan = FALSE:\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------- \n> \n> Limit (cost=100000097.16.. 100720844.011111 rows=1 width=3092)\n> -> Nested Loop (cost=100000097.16..100720844.01 rows=1 width=3092)\n> Join Filter: ((\"inner\".bsc_id = \"outer\".bsc_id) AND (\"inner\".site_id = \"outer\".site_id) AND (\"inner\".sect_id = \"outer\".sect_id))\n> -> Merge Join (cost=100000097.16..100704933.67 rows=28 width=1192)\n> Merge Cond: (\"outer\".start_time = \"inner\".start_time)\n> Join Filter: ((\"inner\".sect_id = \"outer\".sect_id) AND (\"inner\".bsc_id = \"outer\".bsc_id))\n> -> Index Scan using gsm_amr_start_time_idx on gsm_amr_metrics (cost=0.00..631211.45 rows=6005551 width=1168)\n> -> Sort (cost=100000097.16..100000100.66 rows=1400 width=24)\n> Sort Key: temp_busy_hr.start_time\n> -> Seq Scan on temp_busy_hr (cost=100000000.00..100000024.00 rows=1400 width=24)\n> -> Index Scan using gsm_sector_start_time_idx on gsm_sector_metrics t1 (cost=0.00..528.77 rows=1973 width=1936)\n> Index Cond: (t1.start_time = \"outer\".start_time)\n> (12 rows)\n> \n> Any ideas what could I try to fix this problem?\n> \n> Thanks,\n> Pepe\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n", "msg_date": "Sat, 10 Nov 2007 16:26:17 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Join performance" } ]
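A minimal SQL sketch of the suggestions in the thread above (analyze the temporary table, and prefer a per-session random_page_cost adjustment plus a larger statistics target over a global enable_seqscan = off). The table and column names are taken from the original query; the numeric values are assumptions for illustration, not recommendations from the posters:

-- Give the planner row estimates for the temporary table:
ANALYZE temp_busy_hr;

-- Per-session planner settings instead of a global enable_seqscan = off:
BEGIN;
SET LOCAL enable_seqscan = on;       -- undo the global override for this query
SET LOCAL random_page_cost = 2.0;    -- assumed value; the default is 4.0
SELECT *
FROM gsm_sector_metrics
NATURAL JOIN gsm_amr_metrics
INNER JOIN temp_busy_hr USING (start_time, bsc_id, sect_id);
COMMIT;

-- A larger statistics target on a join column, then a fresh ANALYZE,
-- can tighten the row estimates the planner works from:
ALTER TABLE gsm_amr_metrics ALTER COLUMN start_time SET STATISTICS 100;
ANALYZE gsm_amr_metrics;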
[ { "msg_contents": "Hi,\n\nI just read this document and thought I should share it with this list:\n\nhttp://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf\n\nAmong other things (FreeBSD advocacy, mostly :) ), it contains a direct\ncomparison between MySQL and PostgreSQL on various platforms, with\nPostgreSQL winning!\n", "msg_date": "Fri, 09 Nov 2007 13:06:27 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Nov 9, 2007 7:06 AM, Ivan Voras <[email protected]> wrote:\n> I just read this document and thought I should share it with this list:\n>\n> http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf\n\nNice presentation. Thanks for posting it on here.\n\n> Among other things (FreeBSD advocacy, mostly :) ), it contains a direct\n> comparison between MySQL and PostgreSQL on various platforms, with\n> PostgreSQL winning!\n\n:)\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 9 Nov 2007 09:49:16 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "\n>\n> Among other things (FreeBSD advocacy, mostly :) ), it contains a direct\n> comparison between MySQL and PostgreSQL on various platforms, with\n> PostgreSQL winning!\n> \nHello,\n\nIf the queries are complex, this is understable. I had a performance\nreview of a Hibernate project (Java Object Relation Mapping) using\nMySQL. ORM produces easily \"complex\" queries with joins and subqueries.\nMySQL uses nested loops for subqueries which lead to performance issues\nwith growing database size.\n\nThey state in their documentation that for version 5.2 there are\nimprovements planned regarding this kind of query.\n\nBest Regards\n\nSebastian\n", "msg_date": "Fri, 09 Nov 2007 16:41:19 +0100", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Nov 9, 2007, at 6:06 AM, Ivan Voras wrote:\n\n> Hi,\n>\n> I just read this document and thought I should share it with this \n> list:\n>\n> http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf\n>\n> Among other things (FreeBSD advocacy, mostly :) ), it contains a \n> direct\n> comparison between MySQL and PostgreSQL on various platforms, with\n> PostgreSQL winning!\n\nWhich is typical for those who aren't in on the FUD :)\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Fri, 9 Nov 2007 09:50:47 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Nov 9, 2007 9:41 AM, Sebastian Hennebrueder <[email protected]> wrote:\n> If the queries are complex, this is understable. I had a performance\n> review of a Hibernate project (Java Object Relation Mapping) using\n> MySQL. 
ORM produces easily \"complex\" queries with joins and subqueries.\n> MySQL uses nested loops for subqueries which lead to performance issues\n> with growing database size.\n>\n> They state in their documentation that for version 5.2 there are\n> improvements planned regarding this kind of query.\n\nSo, MySQL 5.2 will be catching up to version 7.1 or 7.2 of PostgreSQL\nin that regard?\n", "msg_date": "Fri, 9 Nov 2007 09:57:33 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:\n\n> If the queries are complex, this is understable.\n\nThe queries used for this comparison are trivial. There's only one table \ninvolved and there are no joins. It's testing very low-level aspects of \nperformance.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 9 Nov 2007 11:11:18 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Fri, 9 Nov 2007 11:11:18 -0500 (EST)\nGreg Smith <[email protected]> wrote:\n\n> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:\n> \n> > If the queries are complex, this is understable.\n> \n> The queries used for this comparison are trivial. There's only one table \n> involved and there are no joins. It's testing very low-level aspects of \n> performance.\n\nActually, what it's really showing is parallelism, and I've always\nexpected PostgreSQL to come out on top in that arena.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Fri, 9 Nov 2007 14:04:54 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "Bill Moran wrote:\n> On Fri, 9 Nov 2007 11:11:18 -0500 (EST)\n> Greg Smith <[email protected]> wrote:\n>> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:\n>>> If the queries are complex, this is understable.\n>> The queries used for this comparison are trivial. There's only one table \n>> involved and there are no joins. It's testing very low-level aspects of \n>> performance.\n> \n> Actually, what it's really showing is parallelism, and I've always\n> expected PostgreSQL to come out on top in that arena.\n\nIsn't it showing Postgres winning even without parallelism.\n\nAt 1 threads, Postgres looks like 800TPS where MysQL comes\nin at about 600TPS on their Opteron charts.\n", "msg_date": "Fri, 09 Nov 2007 11:44:04 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "Seems to me there is more thread model implementation problem on\nFreeBSD, and databases just reflecting it... Most of the test I done\non Solaris show the same performance level on the same short READ-only\nqueries for MySQL and PostgreSQL.\n\nAnd to be honest till the end, thread model should be far faster\n(context switching between threads is way faster vs processes), but -\nas I say usually - even a very good idea may be just wasted by a poor\nimplementation... And in case of MySQL they have too much locking to\nmanage concurrency between threads which kills all thread model\nbenefits... 
Also, to compare apples to apples, they should run this\ntest from remote client rather locally on the same host - however in\nthis case the result for PostgreSQL will mostly depends on client\nimplementation: if client implements reading via CURSOR (quite often),\nreading will generate 4x times more intensive network traffic than\nnecessary and final PostgreSQL result will be worse...\n\nReading this article I'm just happy for them to see progress done on FreeBSD :-)\nAs well to demonstrate OS parallelism it's not so impressive to see\n4CPU server results rather 8CPU or 32threaded Niagara... Don't know\nwhy they did not present similar performance graphs for these\nplatform, strange no?...\n\nRgds,\n-Dimitri\n\n\nOn 11/9/07, Ron Mayer <[email protected]> wrote:\n> Bill Moran wrote:\n> > On Fri, 9 Nov 2007 11:11:18 -0500 (EST)\n> > Greg Smith <[email protected]> wrote:\n> >> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:\n> >>> If the queries are complex, this is understable.\n> >> The queries used for this comparison are trivial. There's only one table\n> >> involved and there are no joins. It's testing very low-level aspects of\n> >> performance.\n> >\n> > Actually, what it's really showing is parallelism, and I've always\n> > expected PostgreSQL to come out on top in that arena.\n>\n> Isn't it showing Postgres winning even without parallelism.\n>\n> At 1 threads, Postgres looks like 800TPS where MysQL comes\n> in at about 600TPS on their Opteron charts.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n", "msg_date": "Sun, 11 Nov 2007 20:27:02 +0100", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "Dimitri wrote:\n> Seems to me there is more thread model implementation problem on\n> FreeBSD, and databases just reflecting it... Most of the test I done\n> on Solaris show the same performance level on the same short READ-only\n> queries for MySQL and PostgreSQL.\n> \n> And to be honest till the end, thread model should be far faster\n> (context switching between threads is way faster vs processes), but -\n> as I say usually - even a very good idea may be just wasted by a poor\n> implementation... And in case of MySQL they have too much locking to\n> manage concurrency between threads which kills all thread model\n> benefits... Also, to compare apples to apples, they should run this\n> test from remote client rather locally on the same host - however in\n> this case the result for PostgreSQL will mostly depends on client\n> implementation: if client implements reading via CURSOR (quite often),\n> reading will generate 4x times more intensive network traffic than\n> necessary and final PostgreSQL result will be worse...\n> \n> Reading this article I'm just happy for them to see progress done on FreeBSD :-)\n> As well to demonstrate OS parallelism it's not so impressive to see\n> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know\n> why they did not present similar performance graphs for these\n> platform, strange no?...\n\nI don't find it strange. I would rather see benchmarks on what the \nmajority of people running on the platform are going to run.\n\nMost people don't run 8core machines and they especially don't run \n32thread Niagra boxes.\n\nJoshua D. 
Drake\n\n> \n> Rgds,\n> -Dimitri\n> \n\n\n", "msg_date": "Sun, 11 Nov 2007 12:17:22 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Sun, Nov 11, 2007 at 08:27:02PM +0100, Dimitri wrote:\n> As well to demonstrate OS parallelism it's not so impressive to see\n> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know\n> why they did not present similar performance graphs for these\n> platform, strange no?...\n\nI guess it's because their Niagara support is still very raw, and besides,\nit's not a very common platform.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Mon, 12 Nov 2007 00:43:52 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "Steinar H. Gunderson wrote:\n> On Sun, Nov 11, 2007 at 08:27:02PM +0100, Dimitri wrote:\n>> As well to demonstrate OS parallelism it's not so impressive to see\n>> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know\n>> why they did not present similar performance graphs for these\n>> platform, strange no?...\n> \n> I guess it's because their Niagara support is still very raw, and besides,\n> it's not a very common platform.\n> \n> /* Steinar */\n\nNot sure how much coding would need to be done for Niagra chips but I \nwould think that it is more likely a problem of getting the funds so \nthey can have one to work on.\n\n\n\n-- \n\nShane Ambler\[email protected]\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Mon, 12 Nov 2007 20:53:13 +1030", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "\nOn Nov 11, 2007, at 2:17 PM, Joshua D. Drake wrote:\n\n> Dimitri wrote:\n>> Seems to me there is more thread model implementation problem on\n>> FreeBSD, and databases just reflecting it... Most of the test I done\n>> on Solaris show the same performance level on the same short READ- \n>> only\n>> queries for MySQL and PostgreSQL.\n>> And to be honest till the end, thread model should be far faster\n>> (context switching between threads is way faster vs processes), but -\n>> as I say usually - even a very good idea may be just wasted by a poor\n>> implementation... And in case of MySQL they have too much locking to\n>> manage concurrency between threads which kills all thread model\n>> benefits... Also, to compare apples to apples, they should run this\n>> test from remote client rather locally on the same host - however in\n>> this case the result for PostgreSQL will mostly depends on client\n>> implementation: if client implements reading via CURSOR (quite \n>> often),\n>> reading will generate 4x times more intensive network traffic than\n>> necessary and final PostgreSQL result will be worse...\n>> Reading this article I'm just happy for them to see progress done \n>> on FreeBSD :-)\n>> As well to demonstrate OS parallelism it's not so impressive to see\n>> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know\n>> why they did not present similar performance graphs for these\n>> platform, strange no?...\n>\n> I don't find it strange. I would rather see benchmarks on what the \n> majority of people running on the platform are going to run.\n>\n> Most people don't run 8core machines and they especially don't run \n> 32thread Niagra boxes.\n\nWait! 
So, what do you check you're email with? :)\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Mon, 12 Nov 2007 09:22:04 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "Dimitri wrote:\n\n> Reading this article I'm just happy for them to see progress done on FreeBSD :-)\n> As well to demonstrate OS parallelism it's not so impressive to see\n> 4CPU server results rather 8CPU or 32threaded Niagara... Don't know\n> why they did not present similar performance graphs for these\n> platform, strange no?...\n\nWell, most of the results in the document\n(http://people.freebsd.org/~kris/scaling/7.0%20Preview.pdf) are for\n8-CPU machines, which is about the most you can get with off the shelf\nhardware (2x4-core CPU, the document has both Xeon and Opteron results).\nNiagara support is unfinished, so there's nothing to report there. On\nthe other hand, the document does compare between several versions of\nLinux, FreeBSD, NetBSD and DragonflyBSD, with both MySQL and PostgreSQL,\nso you can draw your conclusions (if any) from there.", "msg_date": "Wed, 14 Nov 2007 20:47:16 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "\nOn Fri, 2007-11-09 at 16:41 +0100, Sebastian Hennebrueder wrote:\n\n> If the queries are complex, this is understable. I had a performance\n> review of a Hibernate project (Java Object Relation Mapping) using\n> MySQL. ORM produces easily \"complex\" queries with joins and subqueries.\n> MySQL uses nested loops for subqueries which lead to performance issues\n> with growing database size.\n\nEven for Postgresql, nested loops are still evil and hampers\nperformance.\n\n\n\n", "msg_date": "Fri, 16 Nov 2007 17:43:05 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "> -----Original Message-----\n> From: Ow Mun Heng\n> Subject: Re: [PERFORM] PostgreSQL vs MySQL, and FreeBSD\n> \n> Even for Postgresql, nested loops are still evil and hampers \n> performance.\n\n\nI don't know about that. 
There are times when it is the right plan:\n \n\nexplain analyze select * from table1 t1 inner join table2 t2 on t1.f_id =\nt2.id where t1.id = 'xyzzy';\n\n QUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------\n Nested Loop (cost=0.00..17.65 rows=1 width=344) (actual time=0.080..0.096\nrows=1 loops=1)\n -> Index Scan using table1_pkey on table1 t (cost=0.00..9.18 rows=1\nwidth=238) (actual time=0.044..0.048 rows=1 loops=1)\n Index Cond: ((id)::text = 'xyzzy'::text)\n -> Index Scan using table2_pkey on table2 i (cost=0.00..8.46 rows=1\nwidth=106) (actual time=0.019..0.023 rows=1 loops=1)\n Index Cond: (t.f_id = i.id)\n Total runtime: 0.224 ms\n\n\nset enable_nestloop=off;\nSET\n\n\nexplain analyze select * from table1 t1 inner join table2 t2 on t1.f_id =\nt2.id where t1.id = 'xyzzy';\n\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------------------\n Hash Join (cost=9.18..72250.79 rows=1 width=344) (actual\ntime=13493.572..15583.049 rows=1 loops=1)\n Hash Cond: (i.id = t.f_id)\n -> Seq Scan on table2 i (cost=0.00..61297.40 rows=2188840 width=106)\n(actual time=0.015..8278.347 rows=2188840 loops=1)\n -> Hash (cost=9.18..9.18 rows=1 width=238) (actual time=0.056..0.056\nrows=1 loops=1)\n -> Index Scan using table1_pkey on table1 t (cost=0.00..9.18\nrows=1 width=238) (actual time=0.040..0.045 rows=1 loops=1)\n Index Cond: ((id)::text = 'xyzzy'::text)\n Total runtime: 15583.212 ms\n\n(I changed the table names, but everything else is real.)\n\n\n", "msg_date": "Fri, 16 Nov 2007 09:56:28 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Nov 16, 2007 10:56 AM, Dave Dutcher <[email protected]> wrote:\n> I don't know about that. There are times when it is the right plan:\n\nAgreed. IMHO, there's nothing wrong with nested-loop join as long as\nit's being used properly.\n\n\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 16 Nov 2007 11:06:11 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Fri, 16 Nov 2007 11:06:11 -0500\n\"Jonah H. Harris\" <[email protected]> wrote:\n\n> On Nov 16, 2007 10:56 AM, Dave Dutcher <[email protected]> wrote:\n> > I don't know about that. There are times when it is the right\n> > plan:\n> \n> Agreed. IMHO, there's nothing wrong with nested-loop join as long\n> as it's being used properly.\n\nCan you explain further please? (I'm not disagreeing with you, just\nwant to know when nested loops are not used properly - does the\nplanner make mistakes that you have to watch out for?)\n\nThx,\n\nJosh\n", "msg_date": "Fri, 16 Nov 2007 14:36:50 -0600", "msg_from": "Josh Trutwin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "On Nov 16, 2007 3:36 PM, Josh Trutwin <[email protected]> wrote:\n> > Agreed. IMHO, there's nothing wrong with nested-loop join as long\n> > as it's being used properly.\n>\n> Can you explain further please? 
(I'm not disagreeing with you, just\n> want to know when nested loops are not used properly - does the\n> planner make mistakes that you have to watch out for?)\n\nAs long as statistics are updated properly, it's generally not an\nissue. You just don't want the system using a nested-loop join\nincorrectly (like when table sizes are equal, the outer table is\nlarger than the inner table, or the inner table itself is overly\nlarge).\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Fri, 16 Nov 2007 15:53:51 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" }, { "msg_contents": "\nOn Fri, 2007-11-16 at 11:06 -0500, Jonah H. Harris wrote:\n> On Nov 16, 2007 10:56 AM, Dave Dutcher <[email protected]> wrote:\n> > I don't know about that. There are times when it is the right plan:\n> \n> Agreed. IMHO, there's nothing wrong with nested-loop join as long as\n> it's being used properly.\n\nI do agree also, but in some other cases, the usage of nested loops (esp\nwhen the number of rows estimated to be returned vs the actual number of\nrows being returned differs by up to 100x (or more) then it becomes a\nmajor issue. \n\nThe example pointed out by Dave D shows the est rows = 1 and actual\nrows=1, then good performance of course.\n", "msg_date": "Mon, 19 Nov 2007 09:27:47 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs MySQL, and FreeBSD" } ]
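A short sketch of the diagnostic pattern discussed at the end of the thread above: check whether a nested loop is being chosen on row estimates that are far from reality, and treat enable_nestloop = off as a per-session experiment rather than a fix. The table and column names here are hypothetical placeholders, not from the thread:

EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN order_items i ON i.order_id = o.id
WHERE o.customer_id = 42;
-- Compare each node's "rows=" estimate with its "actual ... rows=" count;
-- a nested loop picked on an estimate that is off by 100x or more is the
-- problem case described above.

SET enable_nestloop = off;   -- session-only experiment, for comparison
EXPLAIN ANALYZE
SELECT *
FROM orders o
JOIN order_items i ON i.order_id = o.id
WHERE o.customer_id = 42;
RESET enable_nestloop;

-- If bad estimates are the culprit, fresh statistics usually help more
-- than forcing a plan:
ANALYZE orders;
ANALYZE order_items;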
[ { "msg_contents": "Does the amount of memory allocate to work_mem get subtracted from\nshared_buffers?\n\n \n\nExample:\n\n \n\nIf work_mem is 1M and there are 10 connections and shared_buffers is\n100M then would the total be 90 M left for shared_buffers?\n\n \n\nOr does the amount of memory allocated for work_mem have nothing to do\nwith shared_buffers?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nDoes the amount of memory allocate to work_mem get\nsubtracted from shared_buffers?\n \nExample:\n \nIf work_mem is 1M and there are 10 connections and\nshared_buffers is 100M then would the total be 90 M left for shared_buffers?\n \nOr does the amount of memory allocated for work_mem have\nnothing to do with shared_buffers?\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu", "msg_date": "Fri, 9 Nov 2007 11:49:03 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "work_mem and shared_buffers" }, { "msg_contents": "Campbell, Lance wrote:\n> Does the amount of memory allocate to work_mem get subtracted from\n> shared_buffers?\n> \n> Example:\n> \n> If work_mem is 1M and there are 10 connections and shared_buffers is\n> 100M then would the total be 90 M left for shared_buffers?\n> \n> Or does the amount of memory allocated for work_mem have nothing to do\n> with shared_buffers?\n\nNo, they're completely separate.\n\nNote that a connection can use more than work_mem of memory. For \nexample, if you run a query with multiple Sort or hash-nodes, each such \nnode allocates up to work_mem of memory.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 09 Nov 2007 17:56:30 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "How do you know when you should up the value of work_mem? Just play\nwith the number. Is there a query I could do that would tell me if\nPostgreSql is performing SQL that could use more memory for sorting?\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n\n-----Original Message-----\nFrom: Heikki Linnakangas [mailto:[email protected]] On Behalf Of Heikki\nLinnakangas\nSent: Friday, November 09, 2007 11:57 AM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] work_mem and shared_buffers\n\nCampbell, Lance wrote:\n> Does the amount of memory allocate to work_mem get subtracted from\n> shared_buffers?\n> \n> Example:\n> \n> If work_mem is 1M and there are 10 connections and shared_buffers is\n> 100M then would the total be 90 M left for shared_buffers?\n> \n> Or does the amount of memory allocated for work_mem have nothing to do\n> with shared_buffers?\n\nNo, they're completely separate.\n\nNote that a connection can use more than work_mem of memory. 
For \nexample, if you run a query with multiple Sort or hash-nodes, each such \nnode allocates up to work_mem of memory.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 9 Nov 2007 12:08:57 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "Wow. That is a nice logging feature in 8.3!\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Bill Moran [mailto:[email protected]] \nSent: Friday, November 09, 2007 2:08 PM\nTo: Campbell, Lance\nCc: [email protected]\nSubject: Re: [PERFORM] work_mem and shared_buffers\n\nOn Fri, 9 Nov 2007 12:08:57 -0600\n\"Campbell, Lance\" <[email protected]> wrote:\n\n> How do you know when you should up the value of work_mem? Just play\n> with the number. Is there a query I could do that would tell me if\n> PostgreSql is performing SQL that could use more memory for sorting?\n\n8.2 and older, it can be difficult to know, and I don't have a specific\nrecommendation.\n\n8.3 includes a parameter to log the usage of temporary files by\nPostgres.\nWhen a sort can't fit in the available memory, it uses a temp file, thus\nyou could use this new feature to track when sorts don't fit in\nwork_mem.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Fri, 9 Nov 2007 13:05:51 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Nov 9, 2007 12:08 PM, Campbell, Lance <[email protected]> wrote:\n> How do you know when you should up the value of work_mem? Just play\n> with the number. Is there a query I could do that would tell me if\n> PostgreSql is performing SQL that could use more memory for sorting?\n\nTrial and error. Note that you can set work_mem for a given session.\nWhile it may seem that making work_mem bigger will always help, that's\nnot necessarily the case.\n\nUsing this query:\n\nselect count(*) from (select * from myreporttable where lasttime >\nnow() - interval '1 week' order by random() ) as l\n\nI did the following: (I ran the query by itself once to fill the\nbuffers / cache of the machine with the data)\n\nwork_mem Time:\n1000kB 29215.563 ms\n4000kB 20612.489 ms\n8000kB 18408.087 ms\n16000kB 16893.964 ms\n32000kB 17681.221 ms\n64000kB 22439.988 ms\n125MB 23398.891 ms\n250MB 25461.797 ms\n\nNote that my best time was at around 16 Meg work_mem. This data set\nis MUCH bigger than 16 Meg, it's around 300-400 Meg. But work_mem\noptimized out at 16 Meg. 
Btw, I tried it going as high as 768 Meg,\nand it was still slower than 16M.\n\nThis machine has 2 Gigs ram and is optimized for IO not CPU performance.\n", "msg_date": "Fri, 9 Nov 2007 13:12:42 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "It is amazing, how after working with databases very actively for over 8\nyears, I am still learning things.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\n \n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Friday, November 09, 2007 1:13 PM\nTo: Campbell, Lance\nCc: Heikki Linnakangas; [email protected]\nSubject: Re: [PERFORM] work_mem and shared_buffers\n\nOn Nov 9, 2007 12:08 PM, Campbell, Lance <[email protected]> wrote:\n> How do you know when you should up the value of work_mem? Just play\n> with the number. Is there a query I could do that would tell me if\n> PostgreSql is performing SQL that could use more memory for sorting?\n\nTrial and error. Note that you can set work_mem for a given session.\nWhile it may seem that making work_mem bigger will always help, that's\nnot necessarily the case.\n\nUsing this query:\n\nselect count(*) from (select * from myreporttable where lasttime >\nnow() - interval '1 week' order by random() ) as l\n\nI did the following: (I ran the query by itself once to fill the\nbuffers / cache of the machine with the data)\n\nwork_mem Time:\n1000kB 29215.563 ms\n4000kB 20612.489 ms\n8000kB 18408.087 ms\n16000kB 16893.964 ms\n32000kB 17681.221 ms\n64000kB 22439.988 ms\n125MB 23398.891 ms\n250MB 25461.797 ms\n\nNote that my best time was at around 16 Meg work_mem. This data set\nis MUCH bigger than 16 Meg, it's around 300-400 Meg. But work_mem\noptimized out at 16 Meg. Btw, I tried it going as high as 768 Meg,\nand it was still slower than 16M.\n\nThis machine has 2 Gigs ram and is optimized for IO not CPU performance.\n", "msg_date": "Fri, 9 Nov 2007 13:19:39 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Nov 9, 2007 1:19 PM, Campbell, Lance <[email protected]> wrote:\n> It is amazing, how after working with databases very actively for over 8\n> years, I am still learning things.\n\nThe fun thing about postgresql is that just when you've got it figured\nout, somebody will come along and improve it in such a way as to make\nyour previously gathered knowledge obsolete. In a good way.\n\nI imagine in a few years, hardly anyone using postgresql will remember\nthe ancient art of having either apostrophes in a row inside your\nplpgsql functions...\n", "msg_date": "Fri, 9 Nov 2007 13:24:56 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Fri, 9 Nov 2007 12:08:57 -0600\n\"Campbell, Lance\" <[email protected]> wrote:\n\n> How do you know when you should up the value of work_mem? Just play\n> with the number. 
Is there a query I could do that would tell me if\n> PostgreSql is performing SQL that could use more memory for sorting?\n\n8.2 and older, it can be difficult to know, and I don't have a specific\nrecommendation.\n\n8.3 includes a parameter to log the usage of temporary files by Postgres.\nWhen a sort can't fit in the available memory, it uses a temp file, thus\nyou could use this new feature to track when sorts don't fit in\nwork_mem.\n\n-- \nBill Moran\nPotential Technologies\nhttp://www.potentialtech.com\n", "msg_date": "Fri, 9 Nov 2007 15:08:09 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Nov 9, 2007, at 1:24 PM, Scott Marlowe wrote:\n\n> On Nov 9, 2007 1:19 PM, Campbell, Lance <[email protected]> wrote:\n>> It is amazing, how after working with databases very actively for \n>> over 8\n>> years, I am still learning things.\n>\n> The fun thing about postgresql is that just when you've got it figured\n> out, somebody will come along and improve it in such a way as to make\n> your previously gathered knowledge obsolete. In a good way.\n>\n> I imagine in a few years, hardly anyone using postgresql will remember\n> the ancient art of having either apostrophes in a row inside your\n> plpgsql functions...\n\nSpeaking of that devil, I started working with Postgres mere months \nafter that particular evil went away but we still have a good bit of \nplpgsql with it in production. I've been meaning to convert it and \nclean it up for a while now. Would you, or anybody, happen to know \nof any scripts out there that I could grab to make a quick job, no \nbrains required of it?\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Fri, 9 Nov 2007 14:38:43 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Nov 9, 2007 2:38 PM, Erik Jones <[email protected]> wrote:\n>\n> >\n> > I imagine in a few years, hardly anyone using postgresql will remember\n> > the ancient art of having either apostrophes in a row inside your\n> > plpgsql functions...\n>\n> Speaking of that devil, I started working with Postgres mere months\n> after that particular evil went away but we still have a good bit of\n> plpgsql with it in production. I've been meaning to convert it and\n> clean it up for a while now. Would you, or anybody, happen to know\n> of any scripts out there that I could grab to make a quick job, no\n> brains required of it?\n\nMan, I can't think of any. I'd assume you'd need to look for the\nlongest occurance of ' marks, and replace it with one field, say $1$\nor something, then the next smaller set, with $2$ or something and so\non. I imagine one could write a script to do it. Luckily, we only\nhad one or two levels of ' marks in any of our stored procs, so it was\nonly a few minutes each time I edited one to switch it over.\n", "msg_date": "Fri, 9 Nov 2007 14:47:01 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "Bill Moran a �crit :\n> On Fri, 9 Nov 2007 12:08:57 -0600\n> \"Campbell, Lance\" <[email protected]> wrote:\n>\n> \n>> How do you know when you should up the value of work_mem? 
Just play\n>> with the number. Is there a query I could do that would tell me if\n>> PostgreSql is performing SQL that could use more memory for sorting?\n>> \n>\n> 8.2 and older, it can be difficult to know, and I don't have a specific\n> recommendation.\n>\n> \nI haven't use it in that context before, but perhaps inotify can be used \nto catch postgresql usage of temp files. ( http://inotify.aiken.cz/ , \nhttp://inotify.aiken.cz/?section=incron&page=about&lang=en )\n\n> 8.3 includes a parameter to log the usage of temporary files by Postgres.\n> When a sort can't fit in the available memory, it uses a temp file, thus\n> you could use this new feature to track when sorts don't fit in\n> work_mem.\n>\n> \n\n", "msg_date": "Mon, 12 Nov 2007 11:17:57 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Fri, 2007-11-09 at 13:12 -0600, Scott Marlowe wrote:\n\n> Note that my best time was at around 16 Meg work_mem. This data set\n> is MUCH bigger than 16 Meg, it's around 300-400 Meg. But work_mem\n> optimized out at 16 Meg. Btw, I tried it going as high as 768 Meg,\n> and it was still slower than 16M.\n\nRemember that what you have shown is that for *this* dataset 16Mb is the\noptimum value. It is not a recommended value for all cases.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sun, 18 Nov 2007 14:29:39 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" }, { "msg_contents": "On Nov 18, 2007 8:29 AM, Simon Riggs <[email protected]> wrote:\n> On Fri, 2007-11-09 at 13:12 -0600, Scott Marlowe wrote:\n>\n> > Note that my best time was at around 16 Meg work_mem. This data set\n> > is MUCH bigger than 16 Meg, it's around 300-400 Meg. But work_mem\n> > optimized out at 16 Meg. Btw, I tried it going as high as 768 Meg,\n> > and it was still slower than 16M.\n>\n> Remember that what you have shown is that for *this* dataset 16Mb is the\n> optimum value. It is not a recommended value for all cases.\n\nActually, on this particular machine, it's held true for all the\ndatasets that are on it.\n\nBut I agree that it's only true for those particular datasets, and\nmore importantly, this machine.\n\nBut my real point was that if you haven't tested various settings on\nyour machine, you don't know if you're helping or hurting with various\nchanges to work_mem\n", "msg_date": "Sun, 18 Nov 2007 08:41:59 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: work_mem and shared_buffers" } ]
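A sketch of the per-session work_mem experiment described in the thread above, built around the report query quoted there; the size tried is only one sample point, and the log_temp_files line applies to 8.3 and later as noted:

-- Try one size at a time in a session and compare timings:
SET work_mem = 16384;   -- in kB; 8.2 and later also accept '16MB'
EXPLAIN ANALYZE
SELECT count(*) FROM (
    SELECT * FROM myreporttable
    WHERE lasttime > now() - interval '1 week'
    ORDER BY random()
) AS l;
RESET work_mem;

-- postgresql.conf, 8.3+: log sorts and hashes that spill to disk.
-- 0 logs every temporary file, a positive value is a size threshold in kB,
-- and -1 (the default) disables the logging:
--   log_temp_files = 0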
[ { "msg_contents": "We've had our PostgreSQL 8.1.4 installation configured to autovacuum\nsince January, but I suspect it might not be doing anything. Perhaps I\ncan determine what happens through the log files? Is there a summary of\nwhich \"when to log\" settings in postgresql.conf should be set to get at\nleast table-level messages about yes/no decisions? The only message I\nsee now is very terse, indicating that autovacuum does run:\n\n \n\n LOG: autovacuum: processing database \"dc_prod\"\n\n \n\nI suspect there's a problem because there appears to be 78% overhead in\nthe database size, whereas I would expect 10-15% based on what I've\nread. This is not good for some Seq Scan operations on large tables\n(the root problem I'm starting to tackle). Notes:\n\n \n\n [+] Last week I restored a production backup into my\n\n development sandbox with a \"psql -f\", then ran a\n\n \"vacuumdb -a z\" on it. After that, I noticed that the\n\n size of the production database is 78% larger than\n\n development, using \"select pg_database_size('dc_prod')\"\n\n in pgAdmin3. Prod is 5.9GB, but my Dev is 3.3GB.\n\n \n\n [+] The worst table has about 2.7x overhead, according to\n\n \"select relpages/reltuples from pg_class\" queries.\n\n \n\nHere are the relevant postgresql.conf settings in production. I can't\nspeak to their suitability, but I think they should reclaim some unused\nspace for reuse.\n\n \n\n #stats_start_collector = on\n\n #stats_block_level = off\n\n stats_row_level = on\n\n #stats_reset_on_server_start = off\n\n \n\n autovacuum = on\n\n autovacuum_naptime = 360\n\n autovacuum_vacuum_threshold = 1000\n\n autovacuum_analyze_threshold = 500\n\n autovacuum_vacuum_scale_factor = 0.04\n\n autovacuum_analyze_scale_factor = 0.02\n\n autovacuum_vacuum_cost_delay = 10\n\n autovacuum_vacuum_cost_limit = -1\n\n \n\nI was suspicious that the stat_row_level might not work because\nstat_block_level is off. But I see pg_stat_user_tables.n_tup_ins,\npg_stat_user_tables.n_tup_upd and pg_stat_user_tables.n_tup_del are all\nincreasing (slowly but surely).\n\n \n\nThanks,\n\nDavid Crane\n\nhttp://www.donorschoose.org <http://www.donorschoose.org> \n\nTeachers Ask. You Choose. Students Learn.\n\n\n\n\n\n\n\n\n\n\nWe’ve had our PostgreSQL 8.1.4 installation configured\nto autovacuum since January, but I suspect it might not be doing anything.  Perhaps\nI can determine what happens through the log files?  Is there a summary of\nwhich “when to log” settings in postgresql.conf should be set to\nget at least table-level messages about yes/no decisions?  The only message I\nsee now is very terse, indicating that autovacuum does run:\n \n    LOG:  autovacuum: processing database\n\"dc_prod\"\n \nI suspect there’s a problem because there appears to\nbe 78% overhead in the database size, whereas I would expect 10-15% based on\nwhat I’ve read.  This is not good for some Seq Scan operations on large\ntables (the root problem I’m starting to tackle).  Notes:\n \n  [+] Last week I restored a production backup into my\n      development sandbox with a “psql -f”, then\nran a\n      “vacuumdb -a z” on it. After that, I\nnoticed that the\n      size of the production database is 78% larger than\n      development, using “select\npg_database_size('dc_prod')”\n      in pgAdmin3.  Prod is 5.9GB, but my Dev is 3.3GB.\n \n  [+] The worst table has about 2.7x overhead, according to\n      \"select relpages/reltuples from pg_class\"\nqueries.\n \nHere are the relevant postgresql.conf settings in production. 
\nI can’t speak to their suitability, but I think they should reclaim some\nunused space for reuse.\n \n    #stats_start_collector = on\n    #stats_block_level = off\n    stats_row_level = on\n    #stats_reset_on_server_start = off\n \n    autovacuum = on\n    autovacuum_naptime = 360\n    autovacuum_vacuum_threshold = 1000\n    autovacuum_analyze_threshold = 500\n    autovacuum_vacuum_scale_factor = 0.04\n    autovacuum_analyze_scale_factor = 0.02\n    autovacuum_vacuum_cost_delay = 10\n    autovacuum_vacuum_cost_limit = -1\n \nI was suspicious that the stat_row_level might not work because\nstat_block_level is off.  But I see pg_stat_user_tables.n_tup_ins, pg_stat_user_tables.n_tup_upd\nand pg_stat_user_tables.n_tup_del are all increasing (slowly but surely).\n \nThanks,\nDavid Crane\nhttp://www.donorschoose.org\nTeachers Ask. You Choose.\nStudents Learn.", "msg_date": "Fri, 9 Nov 2007 16:44:11 -0500", "msg_from": "\"David Crane\" <[email protected]>", "msg_from_op": true, "msg_subject": "Can I Determine if AutoVacuum Does Anything?" }, { "msg_contents": "David Crane wrote:\n> We've had our PostgreSQL 8.1.4 installation configured to autovacuum\n> since January, but I suspect it might not be doing anything. Perhaps I\n> can determine what happens through the log files? Is there a summary of\n> which \"when to log\" settings in postgresql.conf should be set to get at\n> least table-level messages about yes/no decisions? The only message I\n> see now is very terse, indicating that autovacuum does run:\n\nYeah, you have to set log_min_messages to debug2 to get useful output\nfor autovacuum. This is fixed in 8.3, but for earlier version there is\nnothing short of patching the server.\n\n> autovacuum = on\n> \n> autovacuum_naptime = 360\n\nThis is a bit on the high side, but it may not be very important. Keep\nin mind that in 8.2 and earlier, it means \"how long between autovac\nchecks\", so if there are many databases, it could be long before one\nautovac run in a particular database and the next one. (In 8.3 it has\nbeen redefined to mean the interval between runs on every database).\n\n> autovacuum_vacuum_threshold = 1000\n> autovacuum_analyze_threshold = 500\n\nThese are the default values but for small tables they seem high as\nwell. IIRC your problem is actually with big tables, for which it\ndoesn't make much of a difference.\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 9 Nov 2007 18:49:21 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can I Determine if AutoVacuum Does Anything?" } ]
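A sketch of the checks discussed in the thread above for an 8.1 server: the rough pages-per-row ratio the poster used as a bloat indicator, and the per-table insert/update/delete counters that autovacuum's thresholds are applied to. Both are only rough indicators:

-- Crude bloat check (row widths differ per table, so compare a table against
-- its own freshly vacuumed state rather than across tables):
SELECT relname,
       relpages,
       reltuples,
       round(relpages / nullif(reltuples, 0)::numeric, 4) AS pages_per_row
FROM pg_class
WHERE relkind = 'r'
ORDER BY relpages DESC
LIMIT 20;

-- Row-level activity counters (these require stats_row_level = on before 8.3);
-- autovacuum's thresholds and scale factors are evaluated against them:
SELECT relname, n_tup_ins, n_tup_upd, n_tup_del
FROM pg_stat_user_tables
ORDER BY n_tup_upd + n_tup_del DESC
LIMIT 20;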
[ { "msg_contents": "I am doing lots of INSERTs on a table that starts out empty (I did a\nTRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\non. I moved logging up to debug2 level to see what was going on, and I get\nthings like this:\n\n \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\ndead rows; 3000 rows in sample, 411224 estimated total rows\n\nA little later, it says:\n\n\"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\ndead rows; 3000 rows in sample, 538311 estimated total rows\n\n(I suppose that means autovacuum is working.) Is this normal, or have I got\nsomething wrong? Why so many dead rows when just doing inserts? It is not\nthat I think the number is too high, considering the number of rows in the\ntable at the point where I copied this line. It is just that I do not\nunderstand why there are any.\n\nI could easily understand it if I were doing UPDATEs.\n\npostgresql-8.1.9-1.el5\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:15:01 up 18 days, 4:33, 4 users, load average: 6.18, 5.76, 5.26\n", "msg_date": "Sat, 10 Nov 2007 11:27:07 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Curious about dead rows." }, { "msg_contents": "Jean-David Beyer <[email protected]> writes:\n> I am doing lots of INSERTs on a table that starts out empty (I did a\n> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n> on. I moved logging up to debug2 level to see what was going on, and I get\n> things like this:\n\n> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\n> dead rows; 3000 rows in sample, 411224 estimated total rows\n\n> A little later, it says:\n\n> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n> dead rows; 3000 rows in sample, 538311 estimated total rows\n\nWell, *something* is doing deletes or updates in that table. Better\nlook a bit harder at your application ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 10 Nov 2007 12:08:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows. " }, { "msg_contents": "Tom Lane wrote:\n> Jean-David Beyer <[email protected]> writes:\n>> I am doing lots of INSERTs on a table that starts out empty (I did a\n>> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n>> on. I moved logging up to debug2 level to see what was going on, and I get\n>> things like this:\n> \n>> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\n>> dead rows; 3000 rows in sample, 411224 estimated total rows\n> \n>> A little later, it says:\n> \n>> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n>> dead rows; 3000 rows in sample, 538311 estimated total rows\n> \n> Well, *something* is doing deletes or updates in that table. Better\n> look a bit harder at your application ...\n> \nOK, you agree that if I am doing only INSERTs, that there should not be any\ndead rows. Therefore, I _must_ be doing deletes or updates.\n\nBut the program is pretty simple, and I see no UPDATEs or DELETEs. I\nsearched all the program source files (that contain none of them) and all\nthe libraries I have written, and they have none either. 
Right now the\nprograms are not to the state where UPDATEs or DELETEs are required (though\nthey will be later). I am still developing them and it is easier to just\nrestore from backup or start over from the beginning since most of the\nchanges are data laundering from an ever-increasing number of spreadsheets.\n\nAm I right that TRUNCATE deletes all the rows of a table. They may then be\nstill there, but would not autovacuum clean out the dead rows? Or maybe it\nhas not gotten to them yet? I could do an explicit one earlier.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 13:10:01 up 18 days, 6:28, 7 users, load average: 4.46, 4.34, 4.23\n", "msg_date": "Sat, 10 Nov 2007 13:38:23 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Sat, 10 Nov 2007 13:38:23 -0500\r\nJean-David Beyer <[email protected]> wrote:\r\n\r\n> Tom Lane wrote:\r\n> > Jean-David Beyer <[email protected]> writes:\r\n> >> I am doing lots of INSERTs on a table that starts out empty (I did\r\n> >> a TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs.\r\n> >> Autovacuum is on. I moved logging up to debug2 level to see what\r\n> >> was going on, and I get things like this:\r\n> > \r\n> >> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows\r\n> >> and 1033 dead rows; 3000 rows in sample, 411224 estimated total\r\n> >> rows\r\n> > \r\n> >> A little later, it says:\r\n> > \r\n> >> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows\r\n> >> and 493 dead rows; 3000 rows in sample, 538311 estimated total rows\r\n> > \r\n> > Well, *something* is doing deletes or updates in that table. Better\r\n> > look a bit harder at your application ...\r\n> > \r\n> OK, you agree that if I am doing only INSERTs, that there should not\r\n> be any dead rows. Therefore, I _must_ be doing deletes or updates.\r\n> \r\n> But the program is pretty simple, and I see no UPDATEs or DELETEs. I\r\n> searched all the program source files (that contain none of them) and\r\n> all the libraries I have written, and they have none either. Right\r\n> now the programs are not to the state where UPDATEs or DELETEs are\r\n> required (though they will be later). I am still developing them and\r\n> it is easier to just restore from backup or start over from the\r\n> beginning since most of the changes are data laundering from an\r\n> ever-increasing number of spreadsheets.\r\n> \r\n> Am I right that TRUNCATE deletes all the rows of a table. They may\r\n> then be still there, but would not autovacuum clean out the dead\r\n> rows? Or maybe it has not gotten to them yet? I could do an explicit\r\n> one earlier.\r\n\r\nTruncate will not create dead rows. However ROLLBACK will. Are you\r\ngetting any duplicate key errors or anything like that when you insert?\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake\r\n\r\n \r\n\r\n\r\n- -- \r\n\r\n === The PostgreSQL Company: Command Prompt, Inc. 
===\r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\r\n\t\t\tUNIQUE NOT NULL\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL Replication: http://www.commandprompt.com/products/\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHNf2pATb/zqfZUUQRApYEAKCWp107koBhpWQbMjwLybBB6SvDmQCgj8Q6\r\nkPAE4qe1fT6RNbFtqlIw52M=\r\n=/5us\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Sat, 10 Nov 2007 10:51:19 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Joshua D. Drake wrote:\n> On Sat, 10 Nov 2007 13:38:23 -0500 Jean-David Beyer\n> <[email protected]> wrote:\n> \n>>> Tom Lane wrote:\n>>>> Jean-David Beyer <[email protected]> writes:\n>>>>> I am doing lots of INSERTs on a table that starts out empty (I\n>>>>> did a TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs.\n>>>>> Autovacuum is on. I moved logging up to debug2 level to see what\n>>>>> was going on, and I get things like this: \"vl_as\": scanned 3000\n>>>>> of 5296 pages, containing 232944 live rows and 1033 dead rows;\n>>>>> 3000 rows in sample, 411224 estimated total rows A little later,\n>>>>> it says: \"vl_as\": scanned 3000 of 6916 pages, containing 233507\n>>>>> live rows and 493 dead rows; 3000 rows in sample, 538311\n>>>>> estimated total rows\n>>>> Well, *something* is doing deletes or updates in that table.\n>>>> Better look a bit harder at your application ...\n>>>> \n>>> OK, you agree that if I am doing only INSERTs, that there should not \n>>> be any dead rows. Therefore, I _must_ be doing deletes or updates.\n>>> \n>>> But the program is pretty simple, and I see no UPDATEs or DELETEs. I \n>>> searched all the program source files (that contain none of them) and\n>>> all the libraries I have written, and they have none either. Right \n>>> now the programs are not to the state where UPDATEs or DELETEs are \n>>> required (though they will be later). I am still developing them and \n>>> it is easier to just restore from backup or start over from the \n>>> beginning since most of the changes are data laundering from an \n>>> ever-increasing number of spreadsheets.\n>>> \n>>> Am I right that TRUNCATE deletes all the rows of a table. They may \n>>> then be still there, but would not autovacuum clean out the dead \n>>> rows? Or maybe it has not gotten to them yet? I could do an explicit \n>>> one earlier.\n> \n> Truncate will not create dead rows. However ROLLBACK will. Are you \n> getting any duplicate key errors or anything like that when you insert?\n> \nOn the mistaken assumption that TRUNCATE left dead rows, I did a\nVACUUM FULL ANALYZE before running the program full of INSERTs. This did not\nmake any difference.\n\nAs far as ROLLBACK are concerned, every one is immediately preceded by a\nmessage output to the standard error file, and no such messages are produced.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 14:50:01 up 18 days, 8:08, 5 users, load average: 5.23, 5.35, 5.34\n", "msg_date": "Sat, 10 Nov 2007 14:57:10 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "On Nov 10, 2007 1:57 PM, Jean-David Beyer <[email protected]> wrote:\n>\n> Joshua D. Drake wrote:\n> >\n> > Truncate will not create dead rows. However ROLLBACK will. Are you\n> > getting any duplicate key errors or anything like that when you insert?\n> >\n> On the mistaken assumption that TRUNCATE left dead rows, I did a\n> VACUUM FULL ANALYZE before running the program full of INSERTs. This did not\n> make any difference.\n>\n> As far as ROLLBACK are concerned, every one is immediately preceded by a\n> message output to the standard error file, and no such messages are produced.\n\nSo, there are NO failed inserts, and no updates? Cause that's what\nI'd expect to create the dead rows.\n", "msg_date": "Sat, 10 Nov 2007 20:11:03 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nScott Marlowe wrote:\n> On Nov 10, 2007 1:57 PM, Jean-David Beyer <[email protected]> wrote:\n>> Joshua D. Drake wrote:\n>>> Truncate will not create dead rows. However ROLLBACK will. Are you\n>>> getting any duplicate key errors or anything like that when you insert?\n>>>\n>> On the mistaken assumption that TRUNCATE left dead rows, I did a\n>> VACUUM FULL ANALYZE before running the program full of INSERTs. This did not\n>> make any difference.\n>>\n>> As far as ROLLBACK are concerned, every one is immediately preceded by a\n>> message output to the standard error file, and no such messages are produced.\n> \n> So, there are NO failed inserts, and no updates? Cause that's what\n> I'd expect to create the dead rows.\n> \nSo would I. Hence the original question.\n\n- --\n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 21:20:01 up 18 days, 14:38, 0 users, load average: 4.38, 4.40, 4.31\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFHNmeBPtu2XpovyZoRAqxzAJ9wLNf7Y9egSd/COtMjWaqKWfJXowCfdDj7\nHEulOz8v4DKtAqWCGTf/22Y=\n=79AU\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 10 Nov 2007 21:22:58 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Sat, Nov 10, 2007 at 09:22:58PM -0500, Jean-David Beyer wrote:\n> > \n> > So, there are NO failed inserts, and no updates? Cause that's what\n> > I'd expect to create the dead rows.\n> > \n> So would I. Hence the original question.\n\nForeign keys with cascading deletes or updates?\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Mon, 12 Nov 2007 11:32:23 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Please don't drop the list, as someone else may see something.\n\nOn Tue, Nov 13, 2007 at 10:06:13AM -0500, Jean-David Beyer wrote:\n> OK. I turned logging from \"none\" to \"mod\" and got a gawdawful lot of stuff.\n\nYes.\n\n> Then I ran it and got all the inserts. 
Using\n> grep -i delete file\n> grep -i update file\n> grep -i rollback file\n\nHow about ERROR?\n\n> 2007-11-13 08:11:20 EST DEBUG: \"vl_ranks\": scanned 540 of 540 pages,\n> containing 67945 live rows and 554 dead rows; 3000 rows in sample, 67945\n> estimated total rows\n\nIf there are dead rows, something is producing them. Either INSERT is\nfiring a trigger that is doing something there (you won't see an UPDATE in\nthat case), or else something else is causing INSERTs to fail.\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Tue, 13 Nov 2007 10:21:11 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nAndrew Sullivan wrote:\n> Please don't drop the list, as someone else may see something.\n> \n> On Tue, Nov 13, 2007 at 10:06:13AM -0500, Jean-David Beyer wrote:\n>> OK. I turned logging from \"none\" to \"mod\" and got a gawdawful lot of stuff.\n> \n> Yes.\n> \n>> Then I ran it and got all the inserts. Using\n>> grep -i delete file\n>> grep -i update file\n>> grep -i rollback file\n> \n> How about ERROR?\n\n$ grep -i error Tue.log\n$\n> \n>> 2007-11-13 08:11:20 EST DEBUG: \"vl_ranks\": scanned 540 of 540 pages,\n>> containing 67945 live rows and 554 dead rows; 3000 rows in sample, 67945\n>> estimated total rows\n> \n> If there are dead rows, something is producing them. Either INSERT is\n> firing a trigger that is doing something there (you won't see an UPDATE in\n> that case), or else something else is causing INSERTs to fail.\n\nI have no triggers in that database. I do have two sequences.\n\n List of relations\n Schema | Name | Type | Owner\n- --------+------------------------+----------+---------\n public | company_company_id_seq | sequence | jdbeyer\n public | source_source_id_seq | sequence | jdbeyer\n\nstock=> \\d company_company_id_seq\nSequence \"public.company_company_id_seq\"\n Column | Type\n- ---------------+---------\n sequence_name | name\n last_value | bigint\n increment_by | bigint\n max_value | bigint\n min_value | bigint\n cache_value | bigint\n log_cnt | bigint\n is_cycled | boolean\n is_called | boolean\n\nstock=> \\d source_source_id_seq\nSequence \"public.source_source_id_seq\"\n Column | Type\n- ---------------+---------\n sequence_name | name\n last_value | bigint\n increment_by | bigint\n max_value | bigint\n min_value | bigint\n cache_value | bigint\n log_cnt | bigint\n is_cycled | boolean\n is_called | boolean\n\nbut they are not used after the last VACUUM FULL ANALYZE\n\n\n- --\n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 14:40:01 up 21 days, 7:58, 2 users, load average: 4.33, 4.43, 4.39\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFHOgAiPtu2XpovyZoRApmZAKDH2JaSlxH+DT1rs8E110P9L4r5+ACZAYGY\nz2SQtUvRDHlpCwePE2cskX4=\n=xS8V\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 13 Nov 2007 14:50:59 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "On Tue, Nov 13, 2007 at 02:50:59PM -0500, Jean-David Beyer wrote:\n> > How about ERROR?\n> \n> $ grep -i error Tue.log\n> $\n\nWell, without actually logging into the machine and looking at the\napplication, I confess I am stumped. Oh, wait. You do have the log level\nhigh enough that you should see errors in the log, right? That's not\ncontrolled by the statement parameter. \n\n> I have no triggers in that database. I do have two sequences.\n\nSequences should not produce any dead rows on the table, unless they're used\nas keys and you're attempting inserts that conflict with used sequence\nvalues. That should cause errors that you'd get in the log, presuming that\nyou have the log level set correctly.\n\nA\n\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Tue, 13 Nov 2007 16:35:10 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "I'm not a private support organisation; please send your replies to the\nlist, not me.\n\nOn Tue, Nov 13, 2007 at 04:57:23PM -0500, Jean-David Beyer wrote:\n> What is it controlled by? The following are the non-default values in\n> postgresql.conf:\n> \n> redirect_stderr = on\n> log_directory = '/srv/dbms/dataB/pgsql/pg_log'\n> log_filename = 'postgresql-%a.log'\n> log_truncate_on_rotation = on\n> log_rotation_age = 1440\n> log_rotation_size = 0\n> log_min_messages = debug2\n\nThis will certainly include error messages, then. Or it ought to. You do\nsee errors in the log when you create one, right? (Try causing an error in\npsql to make sure.)\n\n> log_line_prefix = '%t '\n> log_statement = 'none' (this was 'mod', but it uses too much\n> disk to leave it turned on -- only\n> 4 GBytes in that partition)\n> \n> > \n> They are; they are the primary keys of two tables. But those are all done\n> before the last VACUUM FULL ANALYZE runs, so the dead rows should have been\n> eliminated. And the output of the sequence is the only way of generating a\n> primary key, so it should be impossible anyhow.\n\nI thought you were doing INSERTs? It's not true that the output of the\nsequence is the only way -- if you insert directly, it will happily insert\ninto that column. But it should cause an error to show in the log, which is\nwhat's puzzling me.\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Tue, 13 Nov 2007 17:10:28 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Andrew Sullivan wrote:\n> I'm not a private support organisation; please send your replies to the\n> list, not me.\n\nSorry. Most of the lists I send to have ReplyTo set, but a few do not.\nAnd then I forget.\n> \n> On Tue, Nov 13, 2007 at 04:57:23PM -0500, Jean-David Beyer wrote:\n>> What is it controlled by? The following are the non-default values in\n>> postgresql.conf:\n>>\n>> redirect_stderr = on\n>> log_directory = '/srv/dbms/dataB/pgsql/pg_log'\n>> log_filename = 'postgresql-%a.log'\n>> log_truncate_on_rotation = on\n>> log_rotation_age = 1440\n>> log_rotation_size = 0\n>> log_min_messages = debug2\n> \n> This will certainly include error messages, then. Or it ought to. You do\n> see errors in the log when you create one, right? (Try causing an error in\n> psql to make sure.)\nRight: I do see an error message when I try to insert a duplicate entry. 
It\nhappens to violate the (company_name, company_permno) uniqueness constraint.\n\n2007-11-13 17:58:30 EST ERROR: duplicate key violates unique constraint\n\"company_name_x\"\n\n(I tried to insert a duplicate entry in the company_name field of relation\n_company_. company_name_x is defined as:\n\"company_name_x\" UNIQUE, btree (company_name, company_permno), tablespace\n\"stockd\" )\n> \n>> log_line_prefix = '%t '\n>> log_statement = 'none' (this was 'mod', but it uses too much\n>> disk to leave it turned on -- only\n>> 4 GBytes in that partition)\n>>\n>> They are; they are the primary keys of two tables. But those are all done\n>> before the last VACUUM FULL ANALYZE runs, so the dead rows should have been\n>> eliminated. And the output of the sequence is the only way of generating a\n>> primary key, so it should be impossible anyhow.\n> \n> I thought you were doing INSERTs? \n\nYes.\n\n> It's not true that the output of the\n> sequence is the only way -- if you insert directly, it will happily insert\n> into that column. \n\nYes, but I get those keys from a sequence only. I never enter them manually\nor from a data file.\n\n> But it should cause an error to show in the log, which is\n> what's puzzling me.\n> \nMe too.\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 17:50:01 up 21 days, 11:08, 4 users, load average: 5.12, 4.77, 4.68\n", "msg_date": "Tue, 13 Nov 2007 18:09:33 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Jean-David Beyer wrote:\n> Andrew Sullivan wrote:\n> > I'm not a private support organisation; please send your replies to the\n> > list, not me.\n> \n> Sorry. Most of the lists I send to have ReplyTo set, but a few do not.\n> And then I forget.\n\nIf you use \"reply to all\", it works wonderfully in both cases.\n(Actually it works even when you're not using mailing lists at all).\n\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"If it wasn't for my companion, I believe I'd be having\nthe time of my life\" (John Dunbar)\n", "msg_date": "Tue, 13 Nov 2007 20:51:33 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On 11/13/07, Alvaro Herrera <[email protected]> wrote:\n> Jean-David Beyer wrote:\n> > Andrew Sullivan wrote:\n> > > I'm not a private support organisation; please send your replies to the\n> > > list, not me.\n> >\n> > Sorry. Most of the lists I send to have ReplyTo set, but a few do not.\n> > And then I forget.\n>\n> If you use \"reply to all\", it works wonderfully in both cases.\n\nThen it upsets the people who don't want to get private copies, only\nlist copies, on most of the Reply-To lists.\n\nThere's no winning :(\n", "msg_date": "Tue, 13 Nov 2007 15:59:19 -0800", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Trevor Talbot escribi�:\n> On 11/13/07, Alvaro Herrera <[email protected]> wrote:\n> > Jean-David Beyer wrote:\n> > > Andrew Sullivan wrote:\n> > > > I'm not a private support organisation; please send your replies to the\n> > > > list, not me.\n> > >\n> > > Sorry. 
Most of the lists I send to have ReplyTo set, but a few do not.\n> > > And then I forget.\n> >\n> > If you use \"reply to all\", it works wonderfully in both cases.\n> \n> Then it upsets the people who don't want to get private copies, only\n> list copies, on most of the Reply-To lists.\n> \n> There's no winning :(\n\nI am on a couple of mailing lists with Reply-To set, and what my MUA\ndoes is put only the list on the To:, so there is no extra private copy.\nI use \"reply-to-group\" all the time and it works perfectly well.\n\n-- \nAlvaro Herrera http://www.flickr.com/photos/alvherre/\n\"PHP is what I call the \"Dumb Monkey\" language. [A]ny dumb monkey can code\nsomething in PHP. Python takes actual thought to produce something useful.\"\n (J. Drake)\n", "msg_date": "Tue, 13 Nov 2007 21:04:20 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On 2007-11-13 Trevor Talbot wrote:\n> On 11/13/07, Alvaro Herrera <[email protected]> wrote:\n>> Jean-David Beyer wrote:\n>>> Sorry. Most of the lists I send to have ReplyTo set, but a few do\n>>> not. And then I forget.\n>>\n>> If you use \"reply to all\", it works wonderfully in both cases.\n> \n> Then it upsets the people who don't want to get private copies, only\n> list copies, on most of the Reply-To lists.\n> \n> There's no winning :(\n\nUnless you use a mailer that supports Reply, Group-Reply, *and*\nList-Reply. ;)\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Wed, 14 Nov 2007 01:11:03 +0100", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Nov 10, 2007 1:38 PM, Jean-David Beyer <[email protected]> wrote:\n> Tom Lane wrote:\n> > Jean-David Beyer <[email protected]> writes:\n> >> I am doing lots of INSERTs on a table that starts out empty (I did a\n> >> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n> >> on. I moved logging up to debug2 level to see what was going on, and I get\n> >> things like this:\n> >\n> >> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\n> >> dead rows; 3000 rows in sample, 411224 estimated total rows\n> >\n> >> A little later, it says:\n> >\n> >> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n> >> dead rows; 3000 rows in sample, 538311 estimated total rows\n> >\n> > Well, *something* is doing deletes or updates in that table. Better\n> > look a bit harder at your application ...\n> >\n> OK, you agree that if I am doing only INSERTs, that there should not be any\n> dead rows. Therefore, I _must_ be doing deletes or updates.\n>\n> But the program is pretty simple, and I see no UPDATEs or DELETEs. I\n> searched all the program source files (that contain none of them) and all\n> the libraries I have written, and they have none either. Right now the\n> programs are not to the state where UPDATEs or DELETEs are required (though\n> they will be later). I am still developing them and it is easier to just\n> restore from backup or start over from the beginning since most of the\n> changes are data laundering from an ever-increasing number of spreadsheets.\n>\n> Am I right that TRUNCATE deletes all the rows of a table. 
They may then be\n> still there, but would not autovacuum clean out the dead rows? Or maybe it\n> has not gotten to them yet? I could do an explicit one earlier.\n\nwhat does pg_stat_all_tables say (assuming row level stats are on)?\n\nmerlin\n", "msg_date": "Tue, 13 Nov 2007 19:28:24 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Merlin Moncure wrote:\n> On Nov 10, 2007 1:38 PM, Jean-David Beyer <[email protected]> wrote:\n>> Tom Lane wrote:\n>>> Jean-David Beyer <[email protected]> writes:\n>>>> I am doing lots of INSERTs on a table that starts out empty (I did a\n>>>> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n>>>> on. I moved logging up to debug2 level to see what was going on, and I get\n>>>> things like this:\n>>>> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\n>>>> dead rows; 3000 rows in sample, 411224 estimated total rows\n>>>> A little later, it says:\n>>>> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n>>>> dead rows; 3000 rows in sample, 538311 estimated total rows\n>>> Well, *something* is doing deletes or updates in that table. Better\n>>> look a bit harder at your application ...\n>>>\n>> OK, you agree that if I am doing only INSERTs, that there should not be any\n>> dead rows. Therefore, I _must_ be doing deletes or updates.\n>>\n>> But the program is pretty simple, and I see no UPDATEs or DELETEs. I\n>> searched all the program source files (that contain none of them) and all\n>> the libraries I have written, and they have none either. Right now the\n>> programs are not to the state where UPDATEs or DELETEs are required (though\n>> they will be later). I am still developing them and it is easier to just\n>> restore from backup or start over from the beginning since most of the\n>> changes are data laundering from an ever-increasing number of spreadsheets.\n>>\n>> Am I right that TRUNCATE deletes all the rows of a table. They may then be\n>> still there, but would not autovacuum clean out the dead rows? Or maybe it\n>> has not gotten to them yet? I could do an explicit one earlier.\n> \n> what does pg_stat_all_tables say (assuming row level stats are on)?\n\n# - Query/Index Statistics Collector -\n\n#stats_start_collector = on\nstats_start_collector = on\n\n#stats_command_string = off\n#stats_block_level = off\n\n#stats_row_level = off\nstats_row_level = on\n\n#stats_reset_on_server_start = off\n\n> \nIt says stuff like this:\n\n relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del\n----------+----------+--------------+----------+---------------+-----------+-\n ibd | 75 | 9503850 | 11 | 2350555 | 2416845 |\n 0 | 0\n vl_cf | 139 | 38722575 | 22 | 5392609 | 5692814 |\n 0 | 0\n vl_li | 139 | 39992838 | 22 | 5569855 | 5885516 |\n 0 | 0\n\nI removed the relid and schemaname and squeezed the other columns so it\nwould not be quite so wide. Is this what you might like to know?\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 21:10:01 up 21 days, 14:28, 3 users, load average: 6.20, 5.69, 5.11\n", "msg_date": "Tue, 13 Nov 2007 21:26:08 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "On Nov 13, 2007 9:26 PM, Jean-David Beyer <[email protected]> wrote:\n> Merlin Moncure wrote:\n> > what does pg_stat_all_tables say (assuming row level stats are on)?\n> It says stuff like this:\n>\n> relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins |\n> n_tup_upd | n_tup_del\n> ----------+----------+--------------+----------+---------------+-----------+-\n> ibd | 75 | 9503850 | 11 | 2350555 | 2416845 |\n> 0 | 0\n> vl_cf | 139 | 38722575 | 22 | 5392609 | 5692814 |\n> 0 | 0\n> vl_li | 139 | 39992838 | 22 | 5569855 | 5885516 |\n> 0 | 0\n>\n> I removed the relid and schemaname and squeezed the other columns so it\n> would not be quite so wide. Is this what you might like to know?\n\nit tells me that you aren't crazy, and that rollbacks are the likely\nthe cause, although you appear to be watching the logs pretty\ncarefully. you can check pg_stat_database to confirm if your\nrollbacks are in line with your expectations. or, you might by seeing\nsome corner case conditions...are any fields in the table foreign\nkeyed to another table (cascading update/delete)? do you have any\nfunctions with handled exceptions or savepoints? (I'm guessing no to\nthe latter).\n\nmerlin\n", "msg_date": "Tue, 13 Nov 2007 22:36:11 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Merlin Moncure wrote:\n> On Nov 13, 2007 9:26 PM, Jean-David Beyer <[email protected]> wrote:\n>> Merlin Moncure wrote:\n>>> what does pg_stat_all_tables say (assuming row level stats are on)?\n>> It says stuff like this:\n>>\n>> relname | seq_scan | seq_tup_read | idx_scan | idx_tup_fetch | n_tup_ins |\n>> n_tup_upd | n_tup_del\n>> ----------+----------+--------------+----------+---------------+-----------+-\n>> ibd | 75 | 9503850 | 11 | 2350555 | 2416845 |\n>> 0 | 0\n>> vl_cf | 139 | 38722575 | 22 | 5392609 | 5692814 |\n>> 0 | 0\n>> vl_li | 139 | 39992838 | 22 | 5569855 | 5885516 |\n>> 0 | 0\n>>\n>> I removed the relid and schemaname and squeezed the other columns so it\n>> would not be quite so wide. Is this what you might like to know?\n> \n> it tells me that you aren't crazy, and that rollbacks are the likely\n> the cause, although you appear to be watching the logs pretty\n> carefully. you can check pg_stat_database to confirm if your\n> rollbacks are in line with your expectations. or, you might by seeing\n> some corner case conditions...are any fields in the table foreign\n> keyed to another table (cascading update/delete)? do you have any\n> functions with handled exceptions or savepoints? 
(I'm guessing no to\n> the latter).\n> \nHow do I reset the counters in pg_stat_database and pg_stat_all_tables?\nI tried just restarting postgres, but it seems to be saved in the database,\nnot just in the RAM of the server.\n\nRight now I am getting:\n\nstock=> SELECT * FROM pg_stat_database;\n datid | datname | numbackends | xact_commit | xact_rollback | blks_read |\nblks_hit\n-------+-----------+-------------+-------------+---------------+-----------+----------\n\n 16402 | stock | 1 | 261428429 | 3079861 | 0 |\n 0\n(4 rows)\n\nI just watched these as the loading program runs, and I can account for all\nthe new rollbacks, that come after the dead rows are found.\n\nI suppose that blks_read and blks_hit are zero because there are 8 GBytes\nRAM on this machine and I give 2GBytes to shared_buffers = 253000 so that\nall sits in RAM.\n\nI know there have been rollbacks but I do a REINDEX, CLUSTER, and VACUUM\nANALYZE before starting the inserts in question. Do I need to do a VACUUM\nFULL ANALYZE instead?\n\nWhen there were errors in the input data, the program just rolls back the\ntransaction and gives up on that input file. (The program processes hundreds\nof input files and I get an additional input file each week. I then correct\nthe error in the input file and start over. I do not do updates because the\ninput file needs to be corrected anyhow. and the easiest way to check it is\nto load it into the database and let the loader programs check it.)\n\nKeeping things in perspective, the autovacuum gets these eventually, and I\ndo not think it is really hurting performance all that much. But I would\nlike to understand what is going on.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 06:25:01 up 21 days, 23:43, 0 users, load average: 4.02, 4.01, 4.00\n", "msg_date": "Wed, 14 Nov 2007 07:12:45 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Jean-David Beyer wrote:\n\n> How do I reset the counters in pg_stat_database and pg_stat_all_tables?\n> I tried just restarting postgres, but it seems to be saved in the database,\n> not just in the RAM of the server.\n\nThere is a function called pg_stat_reset() or some such.\n\n> I suppose that blks_read and blks_hit are zero because there are 8 GBytes\n> RAM on this machine and I give 2GBytes to shared_buffers = 253000 so that\n> all sits in RAM.\n\nPerhaps you have stats_block_level set to off?\n\n> I know there have been rollbacks but I do a REINDEX, CLUSTER, and VACUUM\n> ANALYZE before starting the inserts in question.\n\nYou do all three on the same tables? That seems pretty pointless. A\nsole CLUSTER has the same effect.\n\n> Do I need to do a VACUUM FULL ANALYZE instead?\n\nNo.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\n\"There was no reply\" (Kernel Traffic)\n", "msg_date": "Wed, 14 Nov 2007 09:58:08 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "Alvaro Herrera wrote:\n> Jean-David Beyer wrote:\n> \n>> How do I reset the counters in pg_stat_database and pg_stat_all_tables?\n>> I tried just restarting postgres, but it seems to be saved in the database,\n>> not just in the RAM of the server.\n> \n> There is a function called pg_stat_reset() or some such.\n\nI'll find it.\n> \n>> I suppose that blks_read and blks_hit are zero because there are 8 GBytes\n>> RAM on this machine and I give 2GBytes to shared_buffers = 253000 so that\n>> all sits in RAM.\n> \n> Perhaps you have stats_block_level set to off?\n\nTrue, I will turn them on.\n> \n>> I know there have been rollbacks but I do a REINDEX, CLUSTER, and VACUUM\n>> ANALYZE before starting the inserts in question.\n> \n> You do all three on the same tables? That seems pretty pointless. A\n> sole CLUSTER has the same effect.\n\nI was only doing this to be sure to clean everything up for this data\ngathering process and wanted to know I did not miss anything.\n> \n>> Do I need to do a VACUUM FULL ANALYZE instead?\n> \n> No.\n> \n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 09:50:01 up 22 days, 3:08, 4 users, load average: 4.29, 4.17, 4.11\n", "msg_date": "Wed, 14 Nov 2007 09:58:33 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Wed, Nov 14, 2007 at 07:12:45AM -0500, Jean-David Beyer wrote:\n> \n> I know there have been rollbacks but I do a REINDEX, CLUSTER, and VACUUM\n> ANALYZE before starting the inserts in question. Do I need to do a VACUUM\n> FULL ANALYZE instead?\n\nI had another idea. As Alvaro says, CLUSTER will do everything you need. \nBut are you sure there are _no other_ transactions open when you do that? \nThis could cause problems, and CLUSTER's behaviour with other open\ntransactions is not, um, friendly prior to the current beta.\n\nA\n\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Wed, 14 Nov 2007 10:22:49 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Nov 14, 2007 at 07:12:45AM -0500, Jean-David Beyer wrote:\n>> I know there have been rollbacks but I do a REINDEX, CLUSTER, and VACUUM\n>> ANALYZE before starting the inserts in question. Do I need to do a VACUUM\n>> FULL ANALYZE instead?\n> \n> I had another idea. As Alvaro says, CLUSTER will do everything you need. \n> But are you sure there are _no other_ transactions open when you do that? \n\nI am sure. I have a single-threaded program, so unless the postgres server\nprocesses begin and end transactions on their own initiative, the only\nthings that would initiate transactions would be my one of my applications\nthat I run only one at a time, or leaving psql running. But as I understand\nit, psql does not bother with transactions, and besides, I normally just do\nSELECTs with that. 
(I also do INSERTs and UPDATEs with it in shell scripts,\nbut I do not run those when I am running the application either.\n\n> This could cause problems, and CLUSTER's behaviour with other open\n> transactions is not, um, friendly prior to the current beta.\n> \nI suppose it might.\n\nRight now I put\n\n // Reset statistics counters.\n EXEC SQL BEGIN WORK;\n EXEC SQL\n\tSELECT pg_stat_reset();\n EXEC SQL COMMIT WORK;\n\ninto my application so that the statistics counters will not count previous\nUPDATEs and ROLLBACKs when the main program that I intend and believe to do\nonly INSERTs is running. It will make those statistics easier to read than\nhaving to subtract previous values to get the changes.\n\nWell, it will not work because I must be superuser (i.e., postgres) to\nexecute that, and if I am, I cannot read the input files. I will do it\nmanually with psql but that means I have to watch it run to do it at the\nright time.\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:20:01 up 22 days, 4:38, 4 users, load average: 6.16, 5.98, 5.62\n", "msg_date": "Wed, 14 Nov 2007 11:53:17 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Wed, Nov 14, 2007 at 11:53:17AM -0500, Jean-David Beyer wrote:\n> that I run only one at a time, or leaving psql running. But as I understand\n> it, psql does not bother with transactions, and besides, I normally just do\n\nNo, every statement in psql is a transaction. Even SELECT. Every statement\nunder PostgreSQL runs in a transaction. When you type \"SELECT (1)\", the\nserver implicitly adds the BEGIN; and END; around it.\n\n> into my application so that the statistics counters will not count previous\n> UPDATEs and ROLLBACKs when the main program that I intend and believe to do\n> only INSERTs is running. It will make those statistics easier to read than\n> having to subtract previous values to get the changes.\n\nYes.\n \n> Well, it will not work because I must be superuser (i.e., postgres) to\n> execute that, and if I am, I cannot read the input files. I will do it\n\nYou could grant superuser status to your user (or just connect as postgres\nuser) for the time being, while debugging this.\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Wed, 14 Nov 2007 11:58:23 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Wed, Nov 14, 2007 at 11:58:23AM -0500, Andrew Sullivan wrote:\n> No, every statement in psql is a transaction. Even SELECT. Every statement\n\nErr, to be clearer, \"Every statement in psql is _somehow_ part of a\ntransaction; if you don't start one explicitly, the statement runs on its\nown as a transaction.\"\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Wed, 14 Nov 2007 12:00:33 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Nov 14, 2007 at 07:12:45AM -0500, Jean-David Beyer wrote:\n>> I know there have been rollbacks but I do a REINDEX, CLUSTER, and\n>> VACUUM ANALYZE before starting the inserts in question. 
Do I need to do\n>> a VACUUM FULL ANALYZE instead?\n> \n> I had another idea. As Alvaro says, CLUSTER will do everything you need.\n> But are you sure there are _no other_ transactions open when you do\n> that? This could cause problems, and CLUSTER's behaviour with other open \n> transactions is not, um, friendly prior to the current beta.\n> \nThese were not done at exactly the same time, but as close as I can.\n\nREINDEX\nCLUSTER;\nCLUSTER\n (part of a shell script that runs the other stuff)\n\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860103.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860131.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860228.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860328.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860502.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860530.tsv' OK\nFile `/homeB/jdbeyer/stocks/DATA/valueLine/19860627.tsv' OK\n(this is showing the program being run on different data).\n\nstock=# SELECT * FROM pg_stat_database WHERE datname = 'stock';\n datid | datname | numbackends | xact_commit | xact_rollback | blks_read |\nblks_hit\n-------+---------+-------------+-------------+---------------+-----------+----------\n 16402 | stock | 2 | 152 | 0 | 18048 |\n15444563\n(1 row)\n\nstock=# SELECT * FROM pg_stat_all_tables WHERE schemaname = 'public' ORDER\nBY relname;\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n-------+------------+----------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 89000 | public | co_name | 0 | 0 | 0 |\n 0 | 0 | 0 | 0\n 89004 | public | company | 0 | 0 | 938764 |\n 938764 | 0 | 0 | 0\n 89029 | public | tick | 0 | 0 | 189737 |\n 279580 | 0 | 0 | 0\n 89034 | public | vl_as | 0 | 0 | 0 |\n 0 | 140840 | 0 | 0\n 89036 | public | vl_cf | 0 | 0 | 0 |\n 0 | 140840 | 0 | 0\n 89038 | public | vl_in | 0 | 0 | 0 |\n 0 | 185667 | 0 | 0\n 89040 | public | vl_li | 0 | 0 | 0 |\n 0 | 140840 | 0 | 0\n 89042 | public | vl_mi | 0 | 0 | 0 |\n 0 | 140840 | 0 | 0\n 89044 | public | vl_ranks | 0 | 0 | 0 |\n 0 | 189737 | 0 | 0\n(18 rows)\n\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_in\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_in\": scanned 2001 of 2001 pages,\ncontaining 183983 live rows and 52 dead rows; 3000 rows in sample, 183983\nestimated total rows\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_cf\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_cf\": scanned 1064 of 1064 pages,\ncontaining 134952 live rows and 89 dead rows; 3000 rows in sample, 134952\nestimated total rows\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_as\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_as\": scanned 1732 of 1732 pages,\ncontaining 134952 live rows and 120 dead rows; 3000 rows in sample, 134952\nestimated total rows\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_ranks\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_ranks\": scanned 1485 of 1485 pages,\ncontaining 188415 live rows and 162 dead rows; 3000 rows in sample, 188415\nestimated total rows\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_mi\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_mi\": scanned 1325 of 1325 pages,\ncontaining 134952 live rows and 191 dead rows; 3000 rows in sample, 134952\nestimated total rows\n2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_li\"\n2007-11-14 12:00:31 EST DEBUG: \"vl_li\": scanned 1326 of 1326 pages,\ncontaining 134952 live rows and 218 dead rows; 3000 rows in sample, 134952\nestimated total 
rows\n\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 11:55:01 up 22 days, 5:13, 3 users, load average: 5.13, 4.71, 4.74\n", "msg_date": "Wed, 14 Nov 2007 12:02:23 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Jean-David Beyer schrieb:\n> I am doing lots of INSERTs on a table that starts out empty (I did a\n> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n> on. I moved logging up to debug2 level to see what was going on, and I get\n> things like this:\n>\n> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and 1033\n> dead rows; 3000 rows in sample, 411224 estimated total rows\n>\n> A little later, it says:\n>\n> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n> dead rows; 3000 rows in sample, 538311 estimated total rows\n>\n> (I suppose that means autovacuum is working.) Is this normal, or have I got\n> something wrong? Why so many dead rows when just doing inserts? It is not\n> that I think the number is too high, considering the number of rows in the\n> table at the point where I copied this line. It is just that I do not\n> understand why there are any.\n>\n> \nDid you rollback some transactions? It will generate dead rows too - at \nleast I think so.\n", "msg_date": "Wed, 14 Nov 2007 20:21:24 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Mario Weilguni wrote:\n> Jean-David Beyer schrieb:\n>> I am doing lots of INSERTs on a table that starts out empty (I did a\n>> TRUNCATE on it). I am not, AFAIK, doing DELETEs or UPDATEs. Autovacuum is\n>> on. I moved logging up to debug2 level to see what was going on, and I\n>> get\n>> things like this:\n>>\n>> \"vl_as\": scanned 3000 of 5296 pages, containing 232944 live rows and\n>> 1033\n>> dead rows; 3000 rows in sample, 411224 estimated total rows\n>>\n>> A little later, it says:\n>>\n>> \"vl_as\": scanned 3000 of 6916 pages, containing 233507 live rows and 493\n>> dead rows; 3000 rows in sample, 538311 estimated total rows\n>>\n>> (I suppose that means autovacuum is working.) Is this normal, or have\n>> I got\n>> something wrong? Why so many dead rows when just doing inserts? It is not\n>> that I think the number is too high, considering the number of rows in\n>> the\n>> table at the point where I copied this line. It is just that I do not\n>> understand why there are any.\n>>\n>> \n> Did you rollback some transactions? 
It will generate dead rows too - at\n> least I think so.\n> \nNo, and the statistics confirm this.\n\nstock=> SELECT * FROM pg_stat_database WHERE datname = 'stock';\n datid | datname | numbackends | xact_commit | xact_rollback | blks_read |\nblks_hit\n-------+---------+-------------+-------------+---------------+-----------+-----------\n 16402 | stock | 1 | 1267 | 0 | 232234 |\n146426135\n(1 row)\n\nstock=> SELECT * FROM pg_stat_all_tables WHERE schemaname = 'public' ORDER\nBY relname;\n relid | schemaname | relname | seq_scan | seq_tup_read | idx_scan |\nidx_tup_fetch | n_tup_ins | n_tup_upd | n_tup_del\n-------+------------+----------+----------+--------------+----------+---------------+-----------+-----------+-----------\n 89000 | public | co_name | 7 | 215873 | 1 |\n 30839 | 0 | 0 | 0\n 89004 | public | company | 9 | 219519 | 5624483 |\n5648873 | 0 | 0 | 0\n 89008 | public | div | 7 | 0 | 1 |\n 0 | 0 | 0 | 0\n 89010 | public | djia | 4 | 2044 | 0 |\n 0 | 0 | 0 | 0\n 89012 | public | earn | 2 | 0 | 0 |\n 0 | 0 | 0 | 0\n 89014 | public | ibd | 5 | 0 | 1 |\n 0 | 0 | 0 | 0\n 89016 | public | merg | 2 | 0 | 0 |\n 0 | 0 | 0 | 0\n 89018 | public | price | 9 | 0 | 1 |\n 0 | 0 | 0 | 0\n 89022 | public | source | 3 | 27 | 0 |\n 0 | 0 | 0 | 0\n 89025 | public | sp_500 | 2 | 0 | 0 |\n 0 | 0 | 0 | 0\n 89027 | public | split | 3 | 0 | 1 |\n 0 | 0 | 0 | 0\n 89029 | public | tick | 13 | 400946 | 980983 |\n1510922 | 0 | 0 | 0\n 89034 | public | vl_as | 7 | 6524595 | 1 |\n 932085 | 932085 | 0 | 0\n 89036 | public | vl_cf | 7 | 6317808 | 1 |\n 902544 | 902544 | 0 | 0\n 89038 | public | vl_in | 7 | 6798351 | 1 |\n 971193 | 966989 | 0 | 0\n 89040 | public | vl_li | 7 | 6524595 | 1 |\n 932085 | 932085 | 0 | 0\n 89042 | public | vl_mi | 7 | 6368579 | 1 |\n 909797 | 909797 | 0 | 0\n 89044 | public | vl_ranks | 8 | 7624818 | 1 |\n 985548 | 980982 | 0 | 0\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 16:05:01 up 22 days, 9:23, 0 users, load average: 4.45, 4.11, 4.03\n", "msg_date": "Wed, 14 Nov 2007 16:07:36 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Jean-David Beyer wrote:\n> Mario Weilguni wrote:\n\n> > Did you rollback some transactions? It will generate dead rows too - at\n> > least I think so.\n> > \n> No, and the statistics confirm this.\n\nTo recap:\n\n- your app only does inserts\n- there has been no rollback lately\n- there are no updates\n- there are no deletes\n\nThe only other source of dead rows I can think is triggers ... do you\nhave any? (Not necessarily on this table -- perhaps triggers on other\ntables can cause updates on this one).\n\nOh, rolled back COPY can cause dead rows too.\n\n-- \nAlvaro Herrera http://www.PlanetPostgreSQL.org/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n", "msg_date": "Wed, 14 Nov 2007 18:35:04 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "Jean-David Beyer wrote:\n> [snip]\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_in\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_in\": scanned 2001 of 2001 pages,\n> containing 183983 live rows and 52 dead rows; 3000 rows in sample, 183983\n> estimated total rows\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_cf\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_cf\": scanned 1064 of 1064 pages,\n> containing 134952 live rows and 89 dead rows; 3000 rows in sample, 134952\n> estimated total rows\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_as\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_as\": scanned 1732 of 1732 pages,\n> containing 134952 live rows and 120 dead rows; 3000 rows in sample, 134952\n> estimated total rows\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_ranks\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_ranks\": scanned 1485 of 1485 pages,\n> containing 188415 live rows and 162 dead rows; 3000 rows in sample, 188415\n> estimated total rows\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_mi\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_mi\": scanned 1325 of 1325 pages,\n> containing 134952 live rows and 191 dead rows; 3000 rows in sample, 134952\n> estimated total rows\n> 2007-11-14 12:00:31 EST DEBUG: analyzing \"public.vl_li\"\n> 2007-11-14 12:00:31 EST DEBUG: \"vl_li\": scanned 1326 of 1326 pages,\n> containing 134952 live rows and 218 dead rows; 3000 rows in sample, 134952\n> estimated total rows\n> \nWhat does vacuum verbose have to say about this situation?\nIt is possible that analyze is not getting the number of dead rows right?\nDoes analyze, followed by vacuum verbose give the same dead row counts?\n\nSorry for lots of questions, I'm just throwing ideas into the mix.\n\nRussell.\n> \n\n", "msg_date": "Thu, 15 Nov 2007 08:46:05 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Alvaro Herrera wrote:\n> Jean-David Beyer wrote:\n>> Mario Weilguni wrote:\n> \n>>> Did you rollback some transactions? It will generate dead rows too - at\n>>> least I think so.\n>>>\n>> No, and the statistics confirm this.\n> \n> To recap:\n> \n> - your app only does inserts\n\nTrue.\n\n> - there has been no rollback lately\n\nTrue.\n\n> - there are no updates\n\nTrue\n\n> - there are no deletes\n\nTrue.\n> \n> The only other source of dead rows I can think is triggers ... do you\n> have any? \n\nNo triggers at all. I have sequences that were not in the IBM DB2 version of\nthis stuff. But they are all done earlier, before the CLUSTER of the entire\ndatabase. Furthermore, they are only for two tables, not the ones that\nattracted my notice in the first place.\n\n> (Not necessarily on this table -- perhaps triggers on other\n> tables can cause updates on this one).\n> \n> Oh, rolled back COPY can cause dead rows too.\n> \nThe only copies I ever do are those inside dbdump -- dbrestore, and they\ncome after all this stuff. And they do not roll back -- though I suppose\nthey could in principle.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 16:45:01 up 22 days, 10:03, 1 user, load average: 4.20, 4.22, 4.17\n", "msg_date": "Wed, 14 Nov 2007 16:51:44 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." 
}, { "msg_contents": "Russell Smith <[email protected]> writes:\n> It is possible that analyze is not getting the number of dead rows right?\n\nHah, I think you are on to something. ANALYZE is telling the truth\nabout how many \"dead\" rows it saw, but its notion of \"dead\" is \"not good\naccording to SnapshotNow\". Thus, rows inserted by a not-yet-committed\ntransaction would be counted as dead. So if these are background\nauto-analyzes being done in parallel with inserting transactions that\nrun for awhile, seeing a few not-yet-committed rows would be\nunsurprising.\n\nI wonder if that is worth fixing? I'm not especially concerned about\nthe cosmetic aspect of it, but if we mistakenly launch an autovacuum\non the strength of an inflated estimate of dead rows, that could be\ncostly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 14 Nov 2007 17:46:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows. " }, { "msg_contents": "Alvaro Herrera wrote:\n> To recap:\n> \n> - your app only does inserts\n> - there has been no rollback lately\n> - there are no updates\n> - there are no deletes\n> \n> The only other source of dead rows I can think is triggers ... do you\n> have any? (Not necessarily on this table -- perhaps triggers on other\n> tables can cause updates on this one).\n> \n> Oh, rolled back COPY can cause dead rows too.\n\n\nWhat about an unreliable network that causes lot of disconnects? Wouldn't the server process do a rollback?\n\nCraig\n\n\n", "msg_date": "Wed, 14 Nov 2007 19:02:49 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Craig James wrote:\n> Alvaro Herrera wrote:\n>> To recap:\n>>\n>> - your app only does inserts\n>> - there has been no rollback lately\n>> - there are no updates\n>> - there are no deletes\n>>\n>> The only other source of dead rows I can think is triggers ... do you\n>> have any? (Not necessarily on this table -- perhaps triggers on other\n>> tables can cause updates on this one).\n>>\n>> Oh, rolled back COPY can cause dead rows too.\n> \n> \n> What about an unreliable network that causes lot of disconnects? \n> Wouldn't the server process do a rollback?\n> \nPerhaps in theory, but in practice my client and the postgreSQL servers are\non the same machine and the 127.0.0.1 is pretty reliable:\n\nlo Link encap:Local Loopback\n inet addr:127.0.0.1 Mask:255.0.0.0\n inet6 addr: ::1/128 Scope:Host\n UP LOOPBACK RUNNING MTU:16436 Metric:1\n RX packets:30097919 errors:0 dropped:0 overruns:0 frame:0\n TX packets:30097919 errors:0 dropped:0 overruns:0 carrier:0\n collisions:0 txqueuelen:0\n RX bytes:931924602 (888.7 MiB) TX bytes:931924602 (888.7 MiB)\n\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 22:10:01 up 22 days, 15:28, 0 users, load average: 4.25, 4.21, 4.12\n", "msg_date": "Wed, 14 Nov 2007 22:16:45 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "Tom Lane wrote:\n> Russell Smith <[email protected]> writes:\n>> It is possible that analyze is not getting the number of dead rows\n>> right?\n> \n> Hah, I think you are on to something. ANALYZE is telling the truth about\n> how many \"dead\" rows it saw, but its notion of \"dead\" is \"not good \n> according to SnapshotNow\". 
Thus, rows inserted by a not-yet-committed \n> transaction would be counted as dead. So if these are background \n> auto-analyzes being done in parallel with inserting transactions that run\n> for awhile,\n\nThey are.\n\n> seeing a few not-yet-committed rows would be unsurprising.\n\nThat is a very interesting possibility. I can see that it is certainly a\npossible explanation, since my insert transactions take between 0.04 to 0.1\nminutes (sorry, decimal stopwatch) of real time, typically putting 1700 rows\ninto about a half dozen tables. And the ANALYZE is whatever autovacuum\nchooses to do. So if new not-yet-committed rows are considered dead, that\nwould be a sufficient explanation.\n\nSo I am, retroactively, unsurprised.\n\n> I wonder if that is worth fixing? I'm not especially concerned about the\n> cosmetic aspect of it, but if we mistakenly launch an autovacuum on the\n> strength of an inflated estimate of dead rows, that could be costly.\n> \nWell, since I was more interested in the explanation than in the fixing, in\nthat sense I do not care if it is fixed or not. While it may create a slight\nslowdown (if it is an error), the applications run \"fast enough.\"\n\nI would not even get the fix until Red Hat get around to putting it in (I\nrun postgresql-8.1.9-1.el5 that is in their RHEL5 distribution), that\nprobably will not be until RHEL6 and the soonest, and I will probably skip\nthat one and wait until RHEL7 comes out in about 3 years.\n\nBut somewhere perhaps a reminder of this should be placed where someone like\nme would find it, so we would not have to go through this again for someone\nelse.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 22:05:01 up 22 days, 15:23, 0 users, load average: 4.16, 4.22, 4.10\n", "msg_date": "Wed, 14 Nov 2007 22:26:02 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "\nOn Nov 14, 2007, at 4:46 PM, Tom Lane wrote:\n\n> Russell Smith <[email protected]> writes:\n>> It is possible that analyze is not getting the number of dead rows \n>> right?\n>\n> Hah, I think you are on to something. ANALYZE is telling the truth\n> about how many \"dead\" rows it saw, but its notion of \"dead\" is \"not \n> good\n> according to SnapshotNow\". Thus, rows inserted by a not-yet-committed\n> transaction would be counted as dead. So if these are background\n> auto-analyzes being done in parallel with inserting transactions that\n> run for awhile, seeing a few not-yet-committed rows would be\n> unsurprising.\n\nWouldn't this result in a variable number of dead rows being reported \non separate runs including zero while no pending inserts are \nhappening? This may be a good way to verify that this is what is \nhappening if he can quiet down his app long enough to run an ANALYZE \nin isolation. Perhaps, if the ANALYZE runs fast enough he can just \nlock the table for the run.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Thu, 15 Nov 2007 10:02:36 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows. 
" }, { "msg_contents": "On Wed, 2007-11-14 at 17:46 -0500, Tom Lane wrote:\n> Russell Smith <[email protected]> writes:\n> > It is possible that analyze is not getting the number of dead rows right?\n> \n> Hah, I think you are on to something. ANALYZE is telling the truth\n> about how many \"dead\" rows it saw, but its notion of \"dead\" is \"not good\n> according to SnapshotNow\". Thus, rows inserted by a not-yet-committed\n> transaction would be counted as dead. So if these are background\n> auto-analyzes being done in parallel with inserting transactions that\n> run for awhile, seeing a few not-yet-committed rows would be\n> unsurprising.\n> \n> I wonder if that is worth fixing? I'm not especially concerned about\n> the cosmetic aspect of it, but if we mistakenly launch an autovacuum\n> on the strength of an inflated estimate of dead rows, that could be\n> costly.\n\nSounds to me like that could result in autovacuum kicking off while\ndoing large data loads. This sounds suspiciously like problem someone\non -novice was having - tripping over a windows autovac bug while doing\na data load\n\nhttp://archives.postgresql.org/pgsql-novice/2007-11/msg00025.php\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Fri, 16 Nov 2007 10:56:48 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": "On Nov 16, 2007 10:56 AM, Brad Nicholson <[email protected]> wrote:\n> On Wed, 2007-11-14 at 17:46 -0500, Tom Lane wrote:\n> > Russell Smith <[email protected]> writes:\n> > > It is possible that analyze is not getting the number of dead rows right?\n> >\n> > Hah, I think you are on to something. ANALYZE is telling the truth\n> > about how many \"dead\" rows it saw, but its notion of \"dead\" is \"not good\n> > according to SnapshotNow\". Thus, rows inserted by a not-yet-committed\n> > transaction would be counted as dead. So if these are background\n> > auto-analyzes being done in parallel with inserting transactions that\n> > run for awhile, seeing a few not-yet-committed rows would be\n> > unsurprising.\n> >\n> > I wonder if that is worth fixing? I'm not especially concerned about\n> > the cosmetic aspect of it, but if we mistakenly launch an autovacuum\n> > on the strength of an inflated estimate of dead rows, that could be\n> > costly.\n>\n> Sounds to me like that could result in autovacuum kicking off while\n> doing large data loads. This sounds suspiciously like problem someone\n> on -novice was having - tripping over a windows autovac bug while doing\n> a data load\n>\n> http://archives.postgresql.org/pgsql-novice/2007-11/msg00025.php\n\nI am almost 100% I've seen this behavior in the field...\n\nmerlin\n", "msg_date": "Fri, 16 Nov 2007 17:01:19 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." }, { "msg_contents": ">>> On Fri, Nov 16, 2007 at 4:01 PM, in message\n<[email protected]>, \"Merlin Moncure\"\n<[email protected]> wrote: \n> On Nov 16, 2007 10:56 AM, Brad Nicholson <[email protected]> wrote:\n>> On Wed, 2007-11-14 at 17:46 -0500, Tom Lane wrote:\n>> > Russell Smith <[email protected]> writes:\n>> > > It is possible that analyze is not getting the number of dead rows right?\n>> >\n>> > Hah, I think you are on to something. ANALYZE is telling the truth\n>> > about how many \"dead\" rows it saw, but its notion of \"dead\" is \"not good\n>> > according to SnapshotNow\". 
Thus, rows inserted by a not-yet-committed\n>> > transaction would be counted as dead. So if these are background\n>> > auto-analyzes being done in parallel with inserting transactions that\n>> > run for awhile, seeing a few not-yet-committed rows would be\n>> > unsurprising.\n>> >\n>> > I wonder if that is worth fixing? I'm not especially concerned about\n>> > the cosmetic aspect of it, but if we mistakenly launch an autovacuum\n>> > on the strength of an inflated estimate of dead rows, that could be\n>> > costly.\n>>\n>> Sounds to me like that could result in autovacuum kicking off while\n>> doing large data loads. This sounds suspiciously like problem someone\n>> on -novice was having - tripping over a windows autovac bug while doing\n>> a data load\n>>\n>> http://archives.postgresql.org/pgsql-novice/2007-11/msg00025.php\n> \n> I am almost 100% I've seen this behavior in the field...\n \nI know I've seen bulk loads go significantly faster with autovacuum\nturned off. It always seemed like a bigger difference than what the\nANALYZE would cause. I bet this explains it.\n \n-Kevin\n \n\n", "msg_date": "Fri, 16 Nov 2007 16:12:49 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Curious about dead rows." } ]
[ { "msg_contents": "\nThis is probably a FAQ, but I can't find a good answer...\n\nSo - are there common techniques to compensate for the lack of\nclustered/covering indexes in PostgreSQL? To be more specific - here is my\ntable (simplified):\n\ntopic_id int\npost_id int\npost_text varchar(1024)\n\nThe most used query is: SELECT post_id, post_text FROM Posts WHERE\ntopic_id=XXX. Normally I would have created a clustered index on topic_id,\nand the whole query would take ~1 disk seek.\n\nWhat would be the common way to handle this in PostgreSQL, provided that I\ncan't afford 1 disk seek per record returned?\n\n-- \nView this message in context: http://www.nabble.com/Clustered-covering-indexes-%28or-lack-thereof-%3A-%29-tf4789321.html#a13700848\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sun, 11 Nov 2007 22:59:05 -0800 (PST)", "msg_from": "adrobj <[email protected]>", "msg_from_op": true, "msg_subject": "Clustered/covering indexes (or lack thereof :-)" }, { "msg_contents": "adrobj wrote:\n> This is probably a FAQ, but I can't find a good answer...\n> \n> So - are there common techniques to compensate for the lack of\n> clustered/covering indexes in PostgreSQL? To be more specific - here is my\n> table (simplified):\n> \n> topic_id int\n> post_id int\n> post_text varchar(1024)\n> \n> The most used query is: SELECT post_id, post_text FROM Posts WHERE\n> topic_id=XXX. Normally I would have created a clustered index on topic_id,\n> and the whole query would take ~1 disk seek.\n> \n> What would be the common way to handle this in PostgreSQL, provided that I\n> can't afford 1 disk seek per record returned?\n\nYou can cluster the table, see \nhttp://www.postgresql.org/docs/8.2/interactive/sql-cluster.html.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 16 Nov 2007 19:33:07 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered/covering indexes (or lack thereof :-)" }, { "msg_contents": "On Sun, 2007-11-11 at 22:59 -0800, adrobj wrote:\n> This is probably a FAQ, but I can't find a good answer...\n> \n> So - are there common techniques to compensate for the lack of\n> clustered/covering indexes in PostgreSQL? To be more specific - here is my\n> table (simplified):\n> \n> topic_id int\n> post_id int\n> post_text varchar(1024)\n> \n> The most used query is: SELECT post_id, post_text FROM Posts WHERE\n> topic_id=XXX. Normally I would have created a clustered index on topic_id,\n> and the whole query would take ~1 disk seek.\n> \n> What would be the common way to handle this in PostgreSQL, provided that I\n> can't afford 1 disk seek per record returned?\n> \n\nPeriodically CLUSTER the table on the topic_id index. 
The table will not\nbe perfectly clustered at all times, but it will be close enough that it\nwon't make much difference.\n\nThere's still the hit of performing a CLUSTER, however.\n\nAnother option, if you have a relatively small number of topic_ids, is\nto break it into separate tables, one for each topic_id.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 16 Nov 2007 11:34:36 -0800", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered/covering indexes (or lack thereof :-)" }, { "msg_contents": "In response to Jeff Davis <[email protected]>:\n\n> On Sun, 2007-11-11 at 22:59 -0800, adrobj wrote:\n> > This is probably a FAQ, but I can't find a good answer...\n> > \n> > So - are there common techniques to compensate for the lack of\n> > clustered/covering indexes in PostgreSQL? To be more specific - here is my\n> > table (simplified):\n> > \n> > topic_id int\n> > post_id int\n> > post_text varchar(1024)\n> > \n> > The most used query is: SELECT post_id, post_text FROM Posts WHERE\n> > topic_id=XXX. Normally I would have created a clustered index on topic_id,\n> > and the whole query would take ~1 disk seek.\n> > \n> > What would be the common way to handle this in PostgreSQL, provided that I\n> > can't afford 1 disk seek per record returned?\n> > \n> \n> Periodically CLUSTER the table on the topic_id index. The table will not\n> be perfectly clustered at all times, but it will be close enough that it\n> won't make much difference.\n> \n> There's still the hit of performing a CLUSTER, however.\n> \n> Another option, if you have a relatively small number of topic_ids, is\n> to break it into separate tables, one for each topic_id.\n\nOr materialize the data, if performance is the utmost requirement.\n\nCreate second table:\nmaterialized_topics (\n topic_id int,\n post_ids int[],\n post_texts text[]\n)\n\nNow add a trigger to your original table that updates materialized_topics\nany time the first table is altered. Thus you always have fast lookups.\n\nOf course, this may be non-optimal if that table sees a lot of updates.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 16 Nov 2007 14:51:35 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered/covering indexes (or lack thereof :-)" } ]
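For reference, the periodic-CLUSTER approach from this thread in runnable form, using the simplified posts table from the first message. The index name is made up, and the USING spelling is the newer one; older releases write CLUSTER indexname ON tablename instead:

CREATE INDEX posts_topic_idx ON posts (topic_id);

-- newer spelling
CLUSTER posts USING posts_topic_idx;
-- older spelling
-- CLUSTER posts_topic_idx ON posts;

ANALYZE posts;

-- later on, a bare CLUSTER re-clusters every table that was clustered before
CLUSTER;

Keep in mind that CLUSTER rewrites the table and holds an exclusive lock while it runs, which is the "hit" mentioned above, so it is usually scheduled off-peak.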
[ { "msg_contents": "In order to get like queries to use an index with database initialized \nwith a UTF-8 character set I added a unique index to a table with a \nvarchar_pattern_ops\n\nThis table already had a unique constraint on the column so I dropped \nthe unique constraint.\n\nI can't give exact measurements however this caused my application to \nslow down considerably.\n\nThe only thing I can figure is that the varchar_pattern_ops operator \nis significantly slower ???\n\nIs there some other piece of the puzzle to fill in ?\n\nDave\n\n\n", "msg_date": "Mon, 12 Nov 2007 09:51:30 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "difference between a unique constraint and a unique index ???" }, { "msg_contents": "Dave Cramer wrote:\n> In order to get like queries to use an index with database initialized with \n> a UTF-8 character set I added a unique index to a table with a \n> varchar_pattern_ops\n>\n> This table already had a unique constraint on the column so I dropped the \n> unique constraint.\n>\n> I can't give exact measurements however this caused my application to slow \n> down considerably.\n>\n> The only thing I can figure is that the varchar_pattern_ops operator is \n> significantly slower ???\n>\n> Is there some other piece of the puzzle to fill in ?\n\nWell, AFAIK the index with varchar_pattern_ops is used for LIKE queries,\nwhereas the other one is going to be used for = queries. So you need to\nkeep both indexes.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n", "msg_date": "Mon, 12 Nov 2007 11:56:30 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difference between a unique constraint and a unique\n\tindex ???" }, { "msg_contents": "\nOn 12-Nov-07, at 9:56 AM, Alvaro Herrera wrote:\n\n> Dave Cramer wrote:\n>> In order to get like queries to use an index with database \n>> initialized with\n>> a UTF-8 character set I added a unique index to a table with a\n>> varchar_pattern_ops\n>>\n>> This table already had a unique constraint on the column so I \n>> dropped the\n>> unique constraint.\n>>\n>> I can't give exact measurements however this caused my application \n>> to slow\n>> down considerably.\n>>\n>> The only thing I can figure is that the varchar_pattern_ops \n>> operator is\n>> significantly slower ???\n>>\n>> Is there some other piece of the puzzle to fill in ?\n>\n> Well, AFAIK the index with varchar_pattern_ops is used for LIKE \n> queries,\n> whereas the other one is going to be used for = queries. So you \n> need to\n> keep both indexes.\n>\nYou would be correct, thanks for the quick answer.\n\nDave\n", "msg_date": "Mon, 12 Nov 2007 10:13:24 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difference between a unique constraint and a unique index ???" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Well, AFAIK the index with varchar_pattern_ops is used for LIKE queries,\n> whereas the other one is going to be used for = queries. So you need to\n> keep both indexes.\n\nGiven the current definition of text equality, it'd be possible to drop\n~=~ and have the standard = operator holding the place of equality in\nboth the regular and pattern_ops opclasses. 
Then it'd be possible to\nsupport regular equality queries, as well as LIKE, with only the\npattern_ops index.\n\nThis would break any applications explicitly using ~=~, but how many\nof those are there?\n\n(For backwards compatibility it'd be nice if we could allow both = and\n~=~ in the opclass, but the unique index on pg_amop seems to preclude\nthat.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Nov 2007 11:37:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: difference between a unique constraint and a unique index ??? " }, { "msg_contents": "\nOn 12-Nov-07, at 11:37 AM, Tom Lane wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n>> Well, AFAIK the index with varchar_pattern_ops is used for LIKE \n>> queries,\n>> whereas the other one is going to be used for = queries. So you \n>> need to\n>> keep both indexes.\n>\n> Given the current definition of text equality, it'd be possible to \n> drop\n> ~=~ and have the standard = operator holding the place of equality in\n> both the regular and pattern_ops opclasses. Then it'd be possible to\n> support regular equality queries, as well as LIKE, with only the\n> pattern_ops index.\n>\nThat would be ideal. Having two indexes on the same column isn't \noptimal.\n\nDave\n", "msg_date": "Mon, 12 Nov 2007 13:32:34 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: difference between a unique constraint and a unique index ??? " } ]
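Restating the conclusion of this thread as a sketch (table and column names are made up): in a non-C locale the column ends up with two indexes, one for equality/uniqueness and one with the pattern opclass for left-anchored LIKE searches.

-- serves = lookups and enforces uniqueness
CREATE UNIQUE INDEX customers_email_key
    ON customers (email);

-- serves LIKE 'abc%' style searches in a non-C locale
CREATE INDEX customers_email_pattern_idx
    ON customers (email varchar_pattern_ops);

Only left-anchored patterns can use the second index; something like LIKE '%abc' still falls back to a sequential scan.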
[ { "msg_contents": "(posting on pgsql-perf as I'm questioning the pertinence of the \nsettings, might not be the best place for the overall pb: apologies)\n\nPostgresql 8.1.10\nLinux Ubuntu: 2.6.17-12-server\n4GB RAM, machine is only used for this I do have less than 30 tables, 4 \nof them having between 10-40 million rows, size on disk is approximately 50G\nNothing spectacular on the install, it's mainly sandbox.\n\nRelevant bits of the postgresql.conf\nmax_connections = 15\nshared_buffers = 49152\nwork_mem = 16384\nmaintenance_work_mem = 32768\nmax_fsm_pages = 40000\neffective_cache_size = 100000\n\n\nI'm doing a rather 'simplistic' query, though heavy on hashing and \naggregate:\n\nFor the records:\nselect count(*) from action where action_date between '2007-10-01' and \n'2007-10-31'\n9647980\n\n\nThe query is:\n\nselect tspent, count(*) from (\nselect sum(time_spent)/60 as tspent from action\nwhere action_date between '2007-10-01' and '2007-10-31'\ngroup by action_date, user_id\n) as a\ngroup by tstpent\norder by tspent asc;\n\nI do receive a memory alloc error for a 1.5GB request size. So I may \nhave oversized something significantly that is exploding (work_mem ?)\n(I was running an explain analyze and had a pgsql_tmp dir reaching 2.9GB \nuntil it died with result similar error as with the query alone)\n\nERROR: invalid memory alloc request size 1664639562\nSQL state: XX000\n\nSometimes I do get:\n\nERROR: unexpected end of data\nSQL state: XX000\n\n\ntable is along the line of (sorry cannot give you the full table):\n\nCREATE TABLE action (\n id SERIAL,\n action_date DATE NOT NULL,\n time_spent INT NOT NULL,\n user_id TEXT NOT NULL, -- user id is a 38 character string\n ...\n);\n\nCREATE INDEX action_action_date_idx\n ON action USING btree(action_date);\n\nHere is an explain analyze for just 1 day:\n\n\"HashAggregate (cost=709112.04..709114.54 rows=200 width=8) (actual \ntime=9900.994..9902.188 rows=631 loops=1)\"\n\" -> HashAggregate (cost=706890.66..708001.35 rows=74046 width=49) \n(actual time=9377.654..9687.964 rows=122644 loops=1)\"\n\" -> Bitmap Heap Scan on action (cost=6579.73..701337.25 \nrows=740455 width=49) (actual time=2409.697..6756.027 rows=893351 loops=1)\"\n\" Recheck Cond: ((action_date >= '2007-10-01'::date) AND \n(action_date <= '2007-10-02'::date))\"\n\" -> Bitmap Index Scan on action_action_date_idx \n(cost=0.00..6579.73 rows=740455 width=0) (actual time=2373.837..2373.837 \nrows=893351 loops=1)\"\n\" Index Cond: ((action_date >= '2007-10-01'::date) \nAND (action_date <= '2007-10-02'::date))\"\n\"Total runtime: 9933.165 ms\"\n\n\n\n-- stephane\n", "msg_date": "Mon, 12 Nov 2007 20:57:48 +0100", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: \"invalid memory alloc request size\" or \"unexpected end of\n\tdata\" on large table" }, { "msg_contents": "Stephane Bailliez <[email protected]> writes:\n> ERROR: invalid memory alloc request size 1664639562\n\nThis sounds like corrupt data --- specifically, 1664639562 showing\nup where a variable-width field's length word ought to be. It\nmay or may not be relevant that the ASCII equivalent of that bit\npattern is Jb8c ... 
do you work with data that contains such\nsubstrings?\n\n> Sometimes I do get:\n> ERROR: unexpected end of data\n\nIf it's not 100% repeatable I'd start to wonder about flaky hardware.\nHave you run memory and disk diagnostics on this machine recently?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Nov 2007 17:54:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ERROR: \"invalid memory alloc request size\" or \"unexpected end of\n\tdata\" on large table" }, { "msg_contents": "Tom Lane wrote:\n> This sounds like corrupt data --- specifically, 1664639562 showing\n> up where a variable-width field's length word ought to be. It\n> may or may not be relevant that the ASCII equivalent of that bit\n> pattern is Jb8c ... do you work with data that contains such\n> substrings?\n> \n\nNot specifically but I cannot rule it out entirely, it 'could' be in one \nof the column which may have a combination of uppercase/lowercase/number \notherwise all other text entries would be lowercase.\n\n> If it's not 100% repeatable I'd start to wonder about flaky hardware.\n> Have you run memory and disk diagnostics on this machine recently? \nI did extensive tests a month or two ago (long crunching queries running \nnon stop for 24h) which were ok but honestly cannot say I'm not very \ntrusty in this particular hardware. Would need to put it offline and \nmemtest it for good obviously.\n\nI moved some data (and a bit more, same table has 35M rows) to another \nmachine (also 8.1.10 on ubuntu with 2.6.17-10 smp, 2 Xeon 2GHZ instead \nof P4 3GHz) and it passes with flying colors (though it's much much \nslower with the same settings so I need to check a few things in there, \nI had tmp dir topping 3GB so not sure if I could have more in memory)\n\nThanks for the insight, Tom.\n\n-- stephane\n", "msg_date": "Tue, 13 Nov 2007 16:06:50 +0100", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ERROR: \"invalid memory alloc request size\" or \"unexpected\n\tend of data\" on large table" } ]
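If the corruption turns out to be in the table rather than in flaky hardware, one low-tech way to localize it (after taking a file-level backup of the data directory) is to bisect by ranges until the error reproduces, then salvage around the bad spot. This is only a sketch against the action table above, not a procedure from the thread; the dates are arbitrary and user_id merely stands in for whichever variable-width column turns out to be damaged:

-- narrow the failing window by halving the date range until it starts/stops failing
SELECT count(*) FROM action
WHERE  action_date BETWEEN '2007-10-01' AND '2007-10-15';

-- force the suspect column to be read so the failing field shows itself
SELECT count(length(user_id)) FROM action
WHERE  action_date = '2007-10-03';

-- copy the readable rows into a fresh table, skipping the damaged range
CREATE TABLE action_salvage AS
SELECT * FROM action
WHERE  action_date <> '2007-10-03';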
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nhow hard would it be to give users the ability to\nconfigure different random_page and seq_page costs per tablespace?\n\nObviously this does not match with random_page_cost being a GUC variable\nbut nonetheless I think it'd make a nice feature.\n\nI just came across this problem because I own a solid state\ndisk but not all data is stored there.\n\nRegards,\n Jens-W. Schicke\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\n\niD8DBQFHOXtqzhchXT4RR5ARArkkAKCxljEjRF0vXDJyyLBVTVhnxJ/idwCeOYiD\nndXrD8ZFvh5XUmzz5nTZbiI=\n=871T\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 13 Nov 2007 11:24:42 +0100", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "random_page_cost etc. per tablespace?" } ]
[ { "msg_contents": "hi,\n\nwe are moving one database from postgresql-7.4 to postgresql-8.2.4.\n\nwe have some cronjobs set up that vacuum the database (some tables more \noften, some tables less often). now, in pg82, there is the possibility \nof using the autovacuum.\n\nmy question is: is it recommended to use it? or in other words, should i \nonly use autovacuum? or it's better to use manual-vacuuming? which one \nis the \"way of the future\" :) ? or should i use both auto-vacuum and \nmanual-vacuum?\n\nin other words, i'd like to find out, if we should simply stay with the \nvacuuming-cronjobs, or should we move to using auto-vacuum? and if we \nshould move, should we try to set it up the way that no manual-vacuuming \nis used anymore?\n\nthanks,\ngabor\n", "msg_date": "Fri, 16 Nov 2007 10:40:43 +0100", "msg_from": "=?ISO-8859-1?Q?G=E1bor_Farkas?= <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum: recommended?" }, { "msg_contents": "On Fri, Nov 16, 2007 at 10:40:43AM +0100, Gábor Farkas wrote:\n> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n\nany particular reason why not 8.2.5?\n> \n> my question is: is it recommended to use it? or in other words, should i \n> only use autovacuum? or it's better to use manual-vacuuming? which one \n> is the \"way of the future\" :) ? or should i use both auto-vacuum and \n> manual-vacuum?\n\nautovacuum is definitely prefered (for most of the cases).\n\nyou might want to set vacuum delays though.\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Fri, 16 Nov 2007 10:45:41 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "[G�bor Farkas - Fri at 10:40:43AM +0100]\n> my question is: is it recommended to use it? or in other words, should i \n> only use autovacuum? or it's better to use manual-vacuuming? which one \n> is the \"way of the future\" :) ? or should i use both auto-vacuum and \n> manual-vacuum?\n\nNightly vacuums are great if the activity on the database is very low\nnight time. A combination is also good, the autovacuum will benefit\nfrom the nightly vacuum. My gut feeling says it's a good idea to leave\nautovacuum on, regardless of whether the nightly vacuums have been\nturned on or not.\n\nThat being said, we have some huge tables in our database and pretty\nmuch traffic, and got quite some performance problems when the\nautovacuum kicked in and started vacuuming those huge tables, so we're\ncurrently running without. Autovacuum can be tuned to not touch those\ntables, but we've chosen to leave it off.\n\n", "msg_date": "Fri, 16 Nov 2007 12:13:55 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "On Fri, 2007-11-16 at 12:13 +0100, Tobias Brox wrote:\n> [snip] should i use both auto-vacuum and \n> > manual-vacuum?\n\nI would say for 8.2 that's the best strategy (which might change with\n8.3 and it's multiple vacuum workers thingy).\n\n> That being said, we have some huge tables in our database and pretty\n> much traffic, and got quite some performance problems when the\n> autovacuum kicked in and started vacuuming those huge tables, so we're\n> currently running without. 
Autovacuum can be tuned to not touch those\n> tables, but we've chosen to leave it off.\n\nWe are doing that here, i.e. set up autovacuum not to touch big tables,\nand cover those with nightly vacuums if there is still some activity on\nthem, and one weekly complete vacuum of the whole DB (\"vacuum\" without\nother params, preferably as the postgres user to cover system tables\ntoo).\n\nIn fact we also have a few very frequently updated small tables, those\nare also covered by very frequent crontab vacuums because in 8.2\nautovacuum can spend quite some time vacuuming some medium sized tables\nand in that interval the small but frequently updated ones get bloated.\nThis should be better with 8.3 and multiple autovacuum workers.\n\nFor the \"disable for autovacuum\" part search for pg_autovacuum in the\ndocs.\n\nI would say the autovacuum + disable autovacuum on big tables + nightly\nvacuum + weekly vacuumdb + frequent crontab vacuum of very updated small\ntables works well in 8.2. One thing which could be needed is to also\nschedule continuous vacuum of big tables which are frequently updated,\nwith big delay settings to throttle the resources used by the vacuum. We\ndon't need that here because we don't update frequently our big\ntables...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 16 Nov 2007 12:56:34 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "> That being said, we have some huge tables in our database and pretty\n> much traffic, and got quite some performance problems when the\n> autovacuum kicked in and started vacuuming those huge tables, so we're\n> currently running without. Autovacuum can be tuned to not touch those\n> tables, but we've chosen to leave it off.\n\nWe had some performance problems with the autovacuum on large and\nfrequently modified tables too - but after a little bit of playing with\nthe parameters the overall performance is much better than it was before\nthe autovacuuming.\n\nThe table was quite huge (say 20k of products along with detailed\ndescriptions etc.) and was completely updated and about 12x each day, i.e.\nit qrew to about 12x the original size (and 11/12 of the rows were dead).\nThis caused a serious slowdown of the application each day, as the\ndatabase had to scan 12x more data.\n\nWe set up autovacuuming with the default parameters, but it interfered\nwith the usual traffic - we had to play a little with the parameters\n(increase the delays, decrease the duration or something like that) and\nnow it runs much better than before. No nightly vacuuming, no serious\nperformance degradation during the day, etc.\n\nSo yes - autovacuuming is recommended, but in some cases the default\nparameters have to be tuned a little bit.\n\ntomas\n\n", "msg_date": "Fri, 16 Nov 2007 14:38:23 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "[[email protected]]\n> The table was quite huge (say 20k of products along with detailed\n> descriptions etc.) and was completely updated and about 12x each day, i.e.\n> it qrew to about 12x the original size (and 11/12 of the rows were dead).\n> This caused a serious slowdown of the application each day, as the\n> database had to scan 12x more data.\n\nThe tables we had problems with are transaction-type tables with\nmillions of rows and mostly inserts to the table ... 
and, eventually\nsome few attributes being updated only on the most recent entries. I\ntried tuning a lot, but gave it up eventually. Vacuuming those tables\ntook a long time (even if only a very small fraction of the table was\ntouched) and the performance of the inserts to the table was reduced to\na level that could not be accepted.\n\nBy now we've just upgraded the hardware, so it could be worth playing\nwith it again, but our project manager is both paranoid and conservative\nand proud of it, so I would have to prove that autovacuum is good for us\nbefore I'm allowed to turn it on again ;-)\n\n", "msg_date": "Sun, 18 Nov 2007 20:00:53 +0100", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "hubert depesz lubaczewski wrote:\n> On Fri, Nov 16, 2007 at 10:40:43AM +0100, Gábor Farkas wrote:\n>> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n> \n> any particular reason why not 8.2.5?\n\nthe distribution i use only has 8.2.4 currently.\n\ngabor\n", "msg_date": "Sun, 18 Nov 2007 20:26:12 +0100", "msg_from": "gabor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "On Nov 18, 2007, at 1:26 PM, gabor wrote:\n> hubert depesz lubaczewski wrote:\n>> On Fri, Nov 16, 2007 at 10:40:43AM +0100, G�bor Farkas wrote:\n>>> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n>> any particular reason why not 8.2.5?\n>\n> the distribution i use only has 8.2.4 currently.\n\nThen I think you need to consider abandoning your distribution's \npackages or find a better distribution. IIRC, 8.2.5 is over 2-3 \nmonths old now; there's no reason a distribution shouldn't have it at \nthis point. (Unless of course you haven't kept your distribution up- \nto-date... ;)\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sun, 18 Nov 2007 22:08:40 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "On Nov 16, 2007, at 7:38 AM, [email protected] wrote:\n> The table was quite huge (say 20k of products along with detailed\n> descriptions etc.) and was completely updated and about 12x each \n> day, i.e.\n> it qrew to about 12x the original size (and 11/12 of the rows were \n> dead).\n> This caused a serious slowdown of the application each day, as the\n> database had to scan 12x more data.\n\nFWIW, 20k rows isn't all that big, so I'm assuming that the \ndescriptions make the table very wide. Unless those descriptions are \nwhat's being updated frequently, I suggest you put those in a \nseparate table (vertical partitioning). That will make the main table \nmuch easier to vacuum, as well as reducing the impact of the high \nchurn rate.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sun, 18 Nov 2007 22:11:15 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "On Nov 16, 2007, at 5:56 AM, Csaba Nagy wrote:\n> We are doing that here, i.e. 
set up autovacuum not to touch big \n> tables,\n> and cover those with nightly vacuums if there is still some \n> activity on\n> them, and one weekly complete vacuum of the whole DB (\"vacuum\" without\n> other params, preferably as the postgres user to cover system tables\n> too).\n\nIIRC, since 8.2 autovacuum will take note of manual vacuums so as not \nto needlessly vacuum something that's been recently vacuumed \nmanually. In other words, you shouldn't need to disable autovac for \nlarge tables if you vacuum them every night and their churn rate is \nlow enough to not trigger autovacuum during the day.\n\n> In fact we also have a few very frequently updated small tables, those\n> are also covered by very frequent crontab vacuums because in 8.2\n> autovacuum can spend quite some time vacuuming some medium sized \n> tables\n> and in that interval the small but frequently updated ones get \n> bloated.\n> This should be better with 8.3 and multiple autovacuum workers.\n\n+1. For tables that should always remain relatively small (ie: a web \nsession table), I usually recommend setting up a manual vacuum that \nruns every 1-5 minutes.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sun, 18 Nov 2007 22:14:09 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "> FWIW, 20k rows isn't all that big, so I'm assuming that the\n> descriptions make the table very wide. Unless those descriptions are\n> what's being updated frequently, I suggest you put those in a\n> separate table (vertical partitioning). That will make the main table\n> much easier to vacuum, as well as reducing the impact of the high\n> churn rate.\n\nYes, you're right - the table is quite wide, as it's a catalogue of a\npharmacy along with all the detailed descriptions and additional info etc.\nSo I guess it's 50 MB of data or something like that. That may not seem\nbad, but as I already said the table grew to about 12x the size during the\nday (so about 500MB of data, 450MB being dead rows). This is the 'central'\ntable of the system, and there are other quite heavily used databases as\nwell. Add some really stupid queries on this table (for example LIKE\nsearches on the table) and you easily end up with 100MB of permanent I/O\nduring the day.\n\nThe vertical partitioning would be overengineering in this case - we\nconsidered even that, but proper optimization of the update process\n(updating only those rows that really changed), along with a little bit of\nautovacuum tuning solved all the performance issues.\n\nTomas\n\n", "msg_date": "Mon, 19 Nov 2007 10:53:08 +0100 (CET)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "Decibel! wrote:\n> On Nov 18, 2007, at 1:26 PM, gabor wrote:\n>> hubert depesz lubaczewski wrote:\n>>> On Fri, Nov 16, 2007 at 10:40:43AM +0100, G�bor Farkas wrote:\n>>>> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n>>> any particular reason why not 8.2.5?\n>>\n>> the distribution i use only has 8.2.4 currently.\n> \n> Then I think you need to consider abandoning your distribution's\n> packages or find a better distribution. IIRC, 8.2.5 is over 2-3 months\n> old now; there's no reason a distribution shouldn't have it at this\n> point. (Unless of course you haven't kept your distribution\n> up-to-date... 
;)\n\nSome people run distributions such as Red Hat Enterprise Linux 5 (their\nlatest); I do. postgresql that comes with that.\n\nNow once they pick a version of a program, they seldom change it. They do\nput security and bug fixes in it by back-porting the changes into the source\ncode and rebuilding it. I guess for postgresql the changes were too much for\nbackporting, so they upgraded from postgresql-8.1.4-1.1 that came with it\noriginally and are now up to postgresql-8.1.9-1.el5. I am pretty sure they\nwill never upgrade RHEL5 to the 8.2 series because they do not do it to get\nnew features.\n\nNow you may think there are better distributions than Red Hat Enterprise\nLinux 5, but enough people seem to think it good enough to pay for it and\nkeep Red Hat in business. I doubt they are all foolish.\n\nLuckily I do not seem to be troubled by the problems experienced by the O.P.\n\nI do know that if I try to use .rpms from other sources, I can get in a lot\nof trouble with incompatible libraries. And I cannot upgrade the libraries\nwithout damaging other programs.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:20:01 up 27 days, 1:38, 1 user, load average: 5.15, 5.20, 5.01\n", "msg_date": "Mon, 19 Nov 2007 08:39:08 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "In response to Jean-David Beyer <[email protected]>:\n\n> Decibel! wrote:\n> > On Nov 18, 2007, at 1:26 PM, gabor wrote:\n> >> hubert depesz lubaczewski wrote:\n> >>> On Fri, Nov 16, 2007 at 10:40:43AM +0100, Gábor Farkas wrote:\n> >>>> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n> >>> any particular reason why not 8.2.5?\n> >>\n> >> the distribution i use only has 8.2.4 currently.\n> > \n> > Then I think you need to consider abandoning your distribution's\n> > packages or find a better distribution. IIRC, 8.2.5 is over 2-3 months\n> > old now; there's no reason a distribution shouldn't have it at this\n> > point. (Unless of course you haven't kept your distribution\n> > up-to-date... ;)\n> \n> Some people run distributions such as Red Hat Enterprise Linux 5 (their\n> latest); I do. postgresql that comes with that.\n> \n> Now once they pick a version of a program, they seldom change it. They do\n> put security and bug fixes in it by back-porting the changes into the source\n> code and rebuilding it. I guess for postgresql the changes were too much for\n> backporting, so they upgraded from postgresql-8.1.4-1.1 that came with it\n> originally and are now up to postgresql-8.1.9-1.el5. I am pretty sure they\n> will never upgrade RHEL5 to the 8.2 series because they do not do it to get\n> new features.\n> \n> Now you may think there are better distributions than Red Hat Enterprise\n> Linux 5, but enough people seem to think it good enough to pay for it and\n> keep Red Hat in business. I doubt they are all foolish.\n> \n> Luckily I do not seem to be troubled by the problems experienced by the O.P.\n> \n> I do know that if I try to use .rpms from other sources, I can get in a lot\n> of trouble with incompatible libraries. And I cannot upgrade the libraries\n> without damaging other programs.\n\nI think you've missed the point.\n\nThe discussion is not that the distro is bad because it hasn't moved from\n8.1 -> 8.2. 
The comment is that it's bad because it hasn't updated a\nmajor branch with the latest bug fixes. i.e. it hasn't moved from 8.1.4\nto 8.1.5.\n\nIf this is indeed the case, I agree that such a distro isn't worth using.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 19 Nov 2007 08:51:42 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "Decibel! <[email protected]> writes:\n> FWIW, 20k rows isn't all that big, so I'm assuming that the \n> descriptions make the table very wide. Unless those descriptions are \n> what's being updated frequently, I suggest you put those in a \n> separate table (vertical partitioning). That will make the main table \n> much easier to vacuum, as well as reducing the impact of the high \n> churn rate.\n\nUh, you do realize that the TOAST mechanism does that pretty much\nautomatically?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Nov 2007 10:23:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended? " }, { "msg_contents": "Bill Moran wrote:\n> In response to Jean-David Beyer <[email protected]>:\n> \n>> Decibel! wrote:\n>>> On Nov 18, 2007, at 1:26 PM, gabor wrote:\n>>>> hubert depesz lubaczewski wrote:\n>>>>> On Fri, Nov 16, 2007 at 10:40:43AM +0100, G�bor Farkas wrote:\n>>>>>> we are moving one database from postgresql-7.4 to postgresql-8.2.4.\n>>>>> any particular reason why not 8.2.5?\n>>>> the distribution i use only has 8.2.4 currently.\n>>> Then I think you need to consider abandoning your distribution's\n>>> packages or find a better distribution. IIRC, 8.2.5 is over 2-3 months\n>>> old now; there's no reason a distribution shouldn't have it at this\n>>> point. (Unless of course you haven't kept your distribution\n>>> up-to-date... ;)\n>> Some people run distributions such as Red Hat Enterprise Linux 5 (their\n>> latest); I do. postgresql that comes with that.\n>>\n>> Now once they pick a version of a program, they seldom change it. They do\n>> put security and bug fixes in it by back-porting the changes into the source\n>> code and rebuilding it. I guess for postgresql the changes were too much for\n>> backporting, so they upgraded from postgresql-8.1.4-1.1 that came with it\n>> originally and are now up to postgresql-8.1.9-1.el5. I am pretty sure they\n>> will never upgrade RHEL5 to the 8.2 series because they do not do it to get\n>> new features.\n>>\n>> Now you may think there are better distributions than Red Hat Enterprise\n>> Linux 5, but enough people seem to think it good enough to pay for it and\n>> keep Red Hat in business. I doubt they are all foolish.\n>>\n[snip]\n> \n> I think you've missed the point.\n\nI think you are right.\n> \n> The discussion is not that the distro is bad because it hasn't moved from\n> 8.1 -> 8.2. The comment is that it's bad because it hasn't updated a\n> major branch with the latest bug fixes. i.e. it hasn't moved from 8.1.4\n> to 8.1.5.\n> \n> If this is indeed the case, I agree that such a distro isn't worth using.\n> \n... and I can keep RHEL5 because they went from 8.1.4 to 8.1.9. ;-)\n\n-- \n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 10:40:01 up 27 days, 3:58, 2 users, load average: 4.43, 4.85, 5.17\n", "msg_date": "Mon, 19 Nov 2007 10:46:42 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\r\nHash: SHA1\r\n\r\nOn Mon, 19 Nov 2007 08:51:42 -0500\r\nBill Moran <[email protected]> wrote:\r\n\r\n> > Luckily I do not seem to be troubled by the problems experienced by\r\n> > the O.P.\r\n> > \r\n> > I do know that if I try to use .rpms from other sources, I can get\r\n> > in a lot of trouble with incompatible libraries. And I cannot\r\n> > upgrade the libraries without damaging other programs.\r\n> \r\n> I think you've missed the point.\r\n> \r\n> The discussion is not that the distro is bad because it hasn't moved\r\n> from 8.1 -> 8.2. The comment is that it's bad because it hasn't\r\n> updated a major branch with the latest bug fixes. i.e. it hasn't\r\n> moved from 8.1.4 to 8.1.5.\r\n> \r\n> If this is indeed the case, I agree that such a distro isn't worth\r\n> using.\r\n\r\nI would note, and Tom would actually be a better person to expound on\r\nthis that Red Hat has a tendency (at least they used to) to leave the\r\nminor number unchanged. E.g;\r\n\r\n8.1.4 is shipped with RHEL5 \r\nThey release a service update\r\nYou now have 8.1.4-1.9\r\n\r\nOr some such drivel. They do this because application vendors wet\r\nthemselves in fear if they see a version change midcyle no matter how\r\nmuch you tell them it is just security and data fixes...\r\n\r\n/me who has dealt with 3 \"enterprise\" vendors on this exact issues in\r\nthe last week.\r\n\r\nSincerely,\r\n\r\nJoshua D. Drake \r\n\r\n\r\n- -- \r\n\r\n === The PostgreSQL Company: Command Prompt, Inc. ===\r\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\r\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\r\n\t\t\tUNIQUE NOT NULL\r\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\r\nPostgreSQL Replication: http://www.commandprompt.com/products/\r\n\r\n-----BEGIN PGP SIGNATURE-----\r\nVersion: GnuPG v1.4.6 (GNU/Linux)\r\n\r\niD8DBQFHQcWGATb/zqfZUUQRAtYmAJ9QKuH/mou87XCwiBoDPiw+03ST7QCfRMlb\r\nn7+IVftfOrPBd2+CKA6B1N4=\r\n=MMKO\r\n-----END PGP SIGNATURE-----\r\n", "msg_date": "Mon, 19 Nov 2007 09:18:59 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" }, { "msg_contents": "On Mon, 19 Nov 2007, Jean-David Beyer wrote:\n\n> I am pretty sure they will never upgrade RHEL5 to the 8.2 series because \n> they do not do it to get new features.\n\nThat's correct.\n\n> I do know that if I try to use .rpms from other sources, I can get in a \n> lot of trouble with incompatible libraries. And I cannot upgrade the \n> libraries without damaging other programs.\n\nYou're also right that this is tricky. I've written a guide that goes \nover the main issues involved at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgrpm.htm if you ever \nwanted to explore this as an option.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 19 Nov 2007 12:22:57 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended?" 
}, { "msg_contents": "On Nov 19, 2007, at 9:23 AM, Tom Lane wrote:\n> Decibel! <[email protected]> writes:\n>> FWIW, 20k rows isn't all that big, so I'm assuming that the\n>> descriptions make the table very wide. Unless those descriptions are\n>> what's being updated frequently, I suggest you put those in a\n>> separate table (vertical partitioning). That will make the main table\n>> much easier to vacuum, as well as reducing the impact of the high\n>> churn rate.\n>\n> Uh, you do realize that the TOAST mechanism does that pretty much\n> automatically?\n\n\nOnly if the row exceeds 2k, which for a lot of applications is huge. \nThis is exactly why I wish toast limits were configurable on a per- \ntable basis (I know there were changes here for 8.3, but IIRC it was \nonly for toast chunk size).\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 5 Dec 2007 17:55:12 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: recommended? " } ]
[ { "msg_contents": "\nHi,\n\nI am having performance problems running a number of queries\ninvolving views based on non-strict functions. I have reproduced the\nproblem with the simple test-case below which shows how the query plan\nis different depending on whether the view uses strict or non-strict\nfunctions (even though those columns do not appear in the WHERE\nclause).\n\nCREATE OR REPLACE FUNCTION times_ten_strict(int) RETURNS int\n AS 'SELECT $1*10' LANGUAGE SQL IMMUTABLE STRICT;\n\nCREATE OR REPLACE FUNCTION times_ten_nonstrict(int) RETURNS int\n AS 'SELECT COALESCE($1*10, 0)' LANGUAGE SQL IMMUTABLE;\n\nCREATE OR REPLACE FUNCTION setup()\nRETURNS void AS\n$$\nDECLARE\n val int;\nBEGIN\n DROP TABLE IF EXISTS t1 CASCADE;\n DROP TABLE IF EXISTS t2 CASCADE;\n\n CREATE TABLE t1\n (\n a1 int PRIMARY KEY,\n b1 int\n );\n\n val := 0;\n WHILE val < 10000 LOOP\n INSERT INTO t1 VALUES(val, val);\n val := val+1;\n END LOOP;\n\n CREATE TABLE t2\n (\n a2 int PRIMARY KEY\n );\n\n val := 0;\n WHILE val < 10000 LOOP\n INSERT INTO t2 VALUES(val);\n val := val+1;\n END LOOP;\n\n CREATE VIEW v2_strict AS SELECT a2, times_ten_strict(a2) AS b2 FROM t2;\n CREATE VIEW v2_nonstrict AS SELECT a2, times_ten_nonstrict(a2) AS b2 FROM t2;\nEND;\n$$ LANGUAGE plpgsql;\n\nSELECT setup();\nANALYZE t1;\nANALYZE t2;\n\nEXPLAIN ANALYZE SELECT * FROM t1 LEFT OUTER JOIN v2_strict v2 ON v2.a2=t1.b1 WHERE t1.a1=3;\nEXPLAIN ANALYZE SELECT * FROM t1 LEFT OUTER JOIN v2_nonstrict v2 ON v2.a2=t1.b1 WHERE t1.a1=3;\n\n(I know that I don't really need a left outer join in this example,\nbut my real data does, and it suffers from the same performance\nproblem, but worse because there is more data and the joins are more\ncomplex.)\n\nThe first query, from the view using a strict function has the\nexpected plan:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..16.55 rows=1 width=12) (actual time=0.044..0.055 rows=1 loops=1)\n -> Index Scan using t1_pkey on t1 (cost=0.00..8.27 rows=1 width=8) (actual time=0.015..0.017 rows=1 loops=1)\n Index Cond: (a1 = 3)\n -> Index Scan using t2_pkey on t2 (cost=0.00..8.27 rows=1 width=4) (actual time=0.012..0.016 rows=1 loops=1)\n Index Cond: (t2.a2 = t1.b1)\n Total runtime: 0.182 ms\n\n\nHowever, the second query, which is almost identical except that one\nof the columns being returned uses a non-strict function, has the\nfollowing plan:\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.00..413.27 rows=1 width=16) (actual time=0.057..47.511 rows=1 loops=1)\n Join Filter: (v2.a2 = t1.b1)\n -> Index Scan using t1_pkey on t1 (cost=0.00..8.27 rows=1 width=8) (actual time=0.012..0.019 rows=1 loops=1)\n Index Cond: (a1 = 3)\n -> Seq Scan on t2 (cost=0.00..180.00 rows=10000 width=4) (actual time=0.016..26.217 rows=10000 loops=1)\n Total runtime: 47.644 ms\n\n\nRather than using the primary key on t2, it does a full table\nscan. 
With multiple joins, this starts doing nested full table scans\nand becomes very inefficient, especially when the tables are much\nbigger than this.\n\nBoth functions have a volatility of IMMUTABLE, and I don't understand\nwhy the strictness of the function should affect the query plan.\n\nAny ideas what is going on?\n\nThanks,\n\nDean.\n\n_________________________________________________________________\n100ï؟½s of Music vouchers to be won with MSN Music\nhttps://www.musicmashup.co.uk\n", "msg_date": "Sun, 18 Nov 2007 10:34:15 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?windows-1256?Q?Performance_problem_(outer_join_+_view_+_non-strict_func?=\n\t=?windows-1256?Q?tions)=FE?=" }, { "msg_contents": "Dean Rasheed <[email protected]> writes:\n> I am having performance problems running a number of queries\n> involving views based on non-strict functions. I have reproduced the\n> problem with the simple test-case below which shows how the query plan\n> is different depending on whether the view uses strict or non-strict\n> functions (even though those columns do not appear in the WHERE\n> clause).\n\nSubqueries that produce non-nullable output columns can't be pulled up\nunderneath the nullable side of an outer join, because their output\nvalues wouldn't go to NULL properly when expanding an unmatched row\nfrom the other side of the join (see has_nullable_targetlist in\nprepjointree.c). In this context that means that we can't recognize\nthe option of using a inner indexscan for the table within the subquery.\n\nI have some vague ideas about how to eliminate that restriction,\nbut don't hold your breath. At the earliest it might happen in 8.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Nov 2007 13:28:39 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re:\n =?windows-1256?Q?Performance_problem_(outer_join_+_view_+_non-strict_func?=\n\t=?windows-1256?Q?tions)=FE?=" }, { "msg_contents": "\nAh yes, I see the problem. I see that it is also going to be a problem where I have used CASE..WHEN in the select list of views :-(\n\nNaively, couldn't the subquery be pulled up if any non-nullable columns from the right table t2 were automatically wrapped in a simple function which returned NULL when the table row isn't matched (eg. when t2.ctid is NULL)? I'm a complete newbie to Postgres, so I have no idea if this is really possible or how hard it would be to implement in practice.\n\nDean.\n\n\n>> I am having performance problems running a number of queries\n>> involving views based on non-strict functions. I have reproduced the\n>> problem with the simple test-case below which shows how the query plan\n>> is different depending on whether the view uses strict or non-strict\n>> functions (even though those columns do not appear in the WHERE\n>> clause).\n>\n> Subqueries that produce non-nullable output columns can't be pulled up\n> underneath the nullable side of an outer join, because their output\n> values wouldn't go to NULL properly when expanding an unmatched row\n> from the other side of the join (see has_nullable_targetlist in\n> prepjointree.c). In this context that means that we can't recognize\n> the option of using a inner indexscan for the table within the subquery.\n>\n> I have some vague ideas about how to eliminate that restriction,\n> but don't hold your breath. 
At the earliest it might happen in 8.4.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n_________________________________________________________________\nFeel like a local wherever you go.\nhttp://www.backofmyhand.com\n", "msg_date": "Sun, 18 Nov 2007 22:13:58 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "=?windows-1256?Q?RE:__Performance_problem_(outer_join_+_view_+_n?=\n\t=?windows-1256?Q?on-strict_functions)=FE?=" } ]
[ { "msg_contents": "Anyone know what is up with this? I have two queries here which return\nthe same results, one uses a left outer join to get some data from a\ntable which may not match a constraint, and one that uses a union to get\nthe data from each constraint and put them together. The second one\nisn't nearly as elegant but is 100 times faster. Any ideas?\n\nHost is linux 2.6.21.7.\nPostgres version is 8.2.4.\n\nschu\n\nexplain analyze select m.messageid from message_tab m, recipient_tab r\nleft outer join alias_tab a on a.alias = r.recipient where\nm.messagesendmailid = r.messagesendmailid and ( r.recipient = '<email>'\nor a.email = '<email>' ) order by m.messageid;\n\nSort (cost=251959.33..253060.77 rows=440575 width=4) (actual\ntime=27388.707..27389.431 rows=1183 loops=1)\n Sort Key: m.messageid\n -> Hash Join (cost=165940.27..204634.07 rows=440575 width=4)(actual\ntime=24156.311..27387.128 rows=1183 loops=1)\n Hash Cond: ((r.messagesendmailid)::text =\n(m.messagesendmailid)::text)\n -> Hash Left Join (cost=1.04..21379.06 rows=440575 width=18)\n(actual time=25.755..2985.690 rows=1680 loops=1)\n Hash Cond: ((r.recipient)::text = (a.alias)::text)\n Filter: (((r.recipient)::text = '<email>'::text) OR\n((a.email)::text = '<email>'::text))\n -> Seq Scan on recipient_tab r (cost=0.00..18022.93\nrows=879493 width=43) (actual time=12.217..2175.630 rows=875352 loops=1)\n -> Hash (cost=1.02..1.02 rows=2 width=136) (actual\ntime=1.723..1.723 rows=2 loops=1)\n -> Seq Scan on alias_tab a (cost=0.00..1.02\nrows=2 width=136) (actual time=1.708..1.713 rows=2 loops=1)\n -> Hash (cost=154386.99..154386.99 rows=612899 width=22)\n(actual time=23979.297..23979.297 rows=630294 loops=1)\n -> Seq Scan on message_tab m (cost=0.00..154386.99\nrows=612899 width=22) (actual time=60.388..23027.945 rows=630294 loops=1)\n Total runtime: 27391.457 ms\n(13 rows)\n\nexplain analyze select messageid from ( select m.messageid from\nmessage_tab m, recipient_tab r where m.messagesendmailid =\nr.messagesendmailid and r.recipient = '<email>' union select m.messageid\nfrom message_tab m, recipient_tab r, alias_tab a where\nm.messagesendmailid = r.messagesendmailid and r.recipient = a.alias and\na.email = '<email>') as query;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=37090.78..37108.56 rows=3556 width=4) (actual\ntime=24.784..27.078 rows=1183 loops=1)\n -> Sort (cost=37090.78..37099.67 rows=3556 width=4) (actual\ntime=24.781..25.516 rows=1183 loops=1)\n Sort Key: messageid\n -> Append (cost=57.32..36881.05 rows=3556 width=4) (actual\ntime=0.516..23.300 rows=1183 loops=1)\n -> Nested Loop (cost=57.32..17618.98 rows=1656 width=4)\n(actual time=0.514..15.268 rows=802 loops=1)\n -> Bitmap Heap Scan on recipient_tab r\n(cost=57.32..4307.33 rows=1656 width=18) (actual time=0.492..1.563\nrows=1299 loops=1)\n Recheck Cond: ((recipient)::text =\n'[email protected]'::text)\n -> Bitmap Index Scan on\nrecipient_recipient_idx (cost=0.00..56.90 rows=1656 width=0) (actual\ntime=0.471..0.471 rows=1299 loops=1)\n Index Cond: ((recipient)::text =\n'<email>'::text)\n -> Index Scan using message_messagesendmailid_idx\non message_tab m (cost=0.00..8.03 rows=1 width=22) (actual\ntime=0.008..0.008 rows=1 loops=1299)\n Index Cond: ((m.messagesendmailid)::text =\n(r.messagesendmailid)::text)\n -> Nested Loop (cost=67.21..19226.51 rows=1900 width=4)\n(actual time=0.337..6.666 rows=381 
loops=1)\n -> Nested Loop (cost=67.21..4769.43 rows=1900\nwidth=18) (actual time=0.323..1.702 rows=381 loops=1)\n -> Seq Scan on alias_tab a (cost=0.00..1.02\nrows=1 width=68) (actual time=0.018..0.020 rows=1 loops=1)\n Filter: ((email)::text = '<email>'::text)\n -> Bitmap Heap Scan on recipient_tab r\n(cost=67.21..4744.66 rows=1900 width=43) (actual time=0.296..1.186\nrows=381 loops=1)\n Recheck Cond: ((r.recipient)::text =\n(a.alias)::text)\n -> Bitmap Index Scan on\nrecipient_recipient_idx (cost=0.00..66.73 rows=1900 width=0) (actual\ntime=0.206..0.206 rows=381 loops=1)\n Index Cond: ((r.recipient)::text\n= (a.alias)::text)\n -> Index Scan using message_messagesendmailid_idx\non message_tab m (cost=0.00..7.60 rows=1 width=22) (actual\ntime=0.009..0.010 rows=1 loops=381)\n Index Cond: ((m.messagesendmailid)::text =\n(r.messagesendmailid)::text)\n Total runtime: 27.827 ms\n(22 rows)\n\n", "msg_date": "Tue, 20 Nov 2007 19:14:31 -0900", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres ignoring index when using left outer join." } ]
[ { "msg_contents": "Hello all,\nI'm doing tests on various Database and in particular I'm running a \ncomparison between Oracle 10g and Postgres 8.1 on a dedicated server \nwith 2 processors Dual-Core AMD Opteron 2218 2.6 GHz, 4GB of memory \nand Debian GNU / Linux version 2.6.18-5. Performance is very similar up \nto 30 users, but from 40 onwards with Postgres fall quickly. That is \nnot what happens with Oracle that comes to 600 users. Can you help me \nwith the tuning ?\nThanks a lot\nMy postgresql.conf configuration is:\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command \nline\n# switch or PGDATA environment variable, represented here as \nConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another \ndirectory\nhba_file = '/etc/postgresql/8.1/main/pg_hba.conf' # host-based \nauthentication file\nident_file = '/etc/postgresql/8.1/main/pg_ident.conf' # IDENT \nconfiguration file\n\n# If external_pid_file is not explicitly set, no extra pid file is \nwritten.\nexternal_pid_file = '/var/run/postgresql/8.1-main.pid' # \nwrite an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost' # what IP address(es) to \nlisten on;\n # comma-\nseparated list of addresses;\n # defaults to \n'localhost', '*' = all\nlisten_addresses = '*'\nport = 5432\nmax_connections = 220\n# note: increasing max_connections costs ~400 bytes of shared memory \nper\n# connection slot, plus lock space (see max_locks_per_transaction). 
\nYou\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\nunix_socket_directory = '/var/run/postgresql'\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#bonjour_name = '' # defaults to the computer \nname\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\nssl = true\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\n#krb_server_hostname = '' # empty string matches any \nkeytab entry\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n # 0 selects the \nsystem default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects \nthe system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects \nthe system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 49152 # min 16 or max_connections*2, \n8KB each, 384MB\ntemp_buffers = 1000 # min 100, 8KB each\nmax_prepared_transactions = 350 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of \nshared memory\n# per transaction slot, plus lock space (see \nmax_locks_per_transaction).\nwork_mem = 1024 # min 64, size in \nKB\nmaintenance_work_mem = 524288 # min 1024, size in KB, -512\nMB-\nmax_stack_depth = 6144 # min 100, size in KB\n\n# - Free Space Map -\nmax_fsm_pages = 58000 # min max_fsm_relations*16, 6 \nbytes each\nmax_fsm_relations = 3000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0 # 0-1000 milliseconds\n#vacuum_cost_page_hit = 1 # 0-10000 credits\n#vacuum_cost_page_miss = 10 # 0-10000 credits\n#vacuum_cost_page_dirty = 20 # 0-10000 credits\n#vacuum_cost_limit = 200 # 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 5000 # 10-10000 milliseconds \nbetween rounds\nbgwriter_lru_percent = 0 # 0-100% of LRU buffers \nscanned/round\nbgwriter_lru_maxpages = 0 # 0-1000 buffers max \nwritten/round\nbgwriter_all_percent = 0 # 0-100% of all buffers \nscanned/round\nbgwriter_all_maxpages = 0 # 0-1000 buffers max \nwritten/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\nfsync = off # turns forced synchronization \non or off\n#wal_sync_method = fsync # the default is the first \noption\n # supported by the operating \nsystem:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page \nwrites\n#wal_buffers = 8 # min 4, 8KB each\n#commit_delay = 5 # range 0-100000, in \nmicroseconds\n#commit_siblings = 5 # range 1-1000\n# - Checkpoints -\n\ncheckpoint_segments = 100 # in logfile segments, min 1, \n16MB each\ncheckpoint_timeout = 1800 # range 30-3600, in seconds\n#checkpoint_warning = 30 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a \nlogfile\n # segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - 
Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 196608 # typically 8KB each\n#random_page_cost = 4 # units are one sequential \npage fetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on \neffort\n#geqo_generations = 0 # selects default based on \neffort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of \nexplicit\n # JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr' # Valid values are \ncombinations of\n # stderr, syslog and eventlog,\n # depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off # Enable capturing of stderr \ninto log\n # files\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log' # Directory where log files \nare written\n # Can be absolute or relative \nto PGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name \npattern.\n # Can include strftime() \nescapes\n#log_truncate_on_rotation = off # If on, any existing log file of the \nsame\n # name as the new log file \nwill be\n # truncated rather than \nappended to. But\n # such truncation only occurs \non\n # time-driven rotation, not on \nrestarts\n # or size-driven rotation. \nDefault is\n # off, meaning append to \nexisting files\n # in all cases.\n#log_rotation_age = 1440 # Automatic rotation of \nlogfiles will\n # happen after so many \nminutes. 0 to\n # disable.\n#log_rotation_size = 10240 # Automatic rotation of \nlogfiles will\n # happen after so many \nkilobytes of log\n # output. 
0 to disable.\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of \ndecreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\n#log_min_messages = notice # Values, in order of \ndecreasing detail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose \nmessages\n\n#log_min_error_statement = panic # Values in order of \nincreasing severity:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # panic(off)\n#log_min_duration_statement = -1 # -1 is disabled, 0 logs all \nstatements\n # and their durations, in \nmilliseconds.\n\n#silent_mode = off # DO NOT USE without syslog or\n # redirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\nlog_line_prefix = '%t ' # Special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = PID\n # %t = timestamp (no \nmilliseconds)\n # %m = timestamp with \nmilliseconds\n # %i = command tag\n # %c = session id\n # %l = session line number\n # %s = session start \ntimestamp\n # %x = transaction id\n # %q = stop here in non-\nsession\n # processes\n # %% = '%'\n # e.g. '<%u%%%d> '\n#log_statement = 'none' # none, mod, ddl, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = off\nstats_command_string = off\nstats_block_level = off\nstats_row_level = off\nstats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = off # enable autovacuum \nsubprocess?\n#autovacuum_naptime = 60 # time between autovacuum \nruns, in secs\n#autovacuum_vacuum_threshold = 1000 # min # of tuple updates \nbefore\n # vacuum\n#autovacuum_analyze_threshold = 500 # min # of tuple updates \nbefore\n # analyze\n\n#autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n # vacuum\n#autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n # analyze\n#autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay \nfor\n # autovac, -1 means use\n # vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit \nfor\n # autovac, -1 means use\n # vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#default_tablespace = '' # a tablespace name, '' uses\n # the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n\n#default_transaction_read_only = off\n#statement_timeout = 0 # 0 is disabled, 
in \nmilliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ\n # environment setting\n#australian_timezones = off\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to \ndatabase\n # encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'it_IT.UTF-8' # locale for system \nerror message\n # strings\nlc_monetary = 'it_IT.UTF-8' # locale for monetary \nformatting\nlc_numeric = 'it_IT.UTF-8' # locale for number \nformatting\nlc_time = 'it_IT.UTF-8' # locale for time \nformatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and \nthere are\n# max_locks_per_transaction * (max_connections + \nmax_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#backslash_quote = safe_encoding # on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable \nclass names\n\n\n\n\n________________________________________________\nTiscali Voce 8 Mega (Telefono+Adsl). Attiva entro il 22/11/07: chiami in tutta Italia e navighi SENZA LIMITI A SOLI 4,95€ AL MESE FINO AL 31/03/2008! \nDal 1° aprile 2008 paghi 28,95 € al mese.\nhttp://abbonati.tiscali.it/telefono-adsl/prodotti/tc/voce8mega/\n\n", "msg_date": "Thu, 22 Nov 2007 15:09:52 +0100 (CET)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "tuning for TPC-C benchmark" }, { "msg_contents": "\"[email protected]\" <[email protected]> wrote:\n>\n> Hello all,\n> I'm doing tests on various Database and in particular I'm running a \n> comparison between Oracle 10g and Postgres 8.1 on a dedicated server \n> with 2 processors Dual-Core AMD Opteron 2218 2.6 GHz, 4GB of memory \n> and Debian GNU / Linux version 2.6.18-5. Performance is very similar up \n> to 30 users, but from 40 onwards with Postgres fall quickly. That is \n> not what happens with Oracle that comes to 600 users. Can you help me \n> with the tuning ?\n\nIf you're doing perf comparisons, you should start out with the latest\nPostgreSQL: 8.2.5\n\nAlso, beware that you may violate license agreements if you publish\nbenchmarks of Oracle ... and posting partial results to a mailing list\ncould potentially be considered \"publishing benchmarks\" to Oracle's\nlawyers.\n\nI've added a few more comments inline, but overall it looks like you've\ndone a good job tuning. 
In order to tweak it any further, we're probably\ngoing to need more details, such as iostat output during the run, details\nof the test you're running, etc.\n\n> Thanks a lot\n> My postgresql.conf configuration is:\n> #---------------------------------------------------------------------------\n> # FILE LOCATIONS\n> #---------------------------------------------------------------------------\n> \n> # The default values of these variables are driven from the -D command \n> line\n> # switch or PGDATA environment variable, represented here as \n> ConfigDir.\n> \n> #data_directory = 'ConfigDir' # use data in another \n> directory\n> hba_file = '/etc/postgresql/8.1/main/pg_hba.conf' # host-based \n> authentication file\n> ident_file = '/etc/postgresql/8.1/main/pg_ident.conf' # IDENT \n> configuration file\n> \n> # If external_pid_file is not explicitly set, no extra pid file is \n> written.\n> external_pid_file = '/var/run/postgresql/8.1-main.pid' # \n> write an extra pid file\n> \n> \n> #---------------------------------------------------------------------------\n> # CONNECTIONS AND AUTHENTICATION\n> #---------------------------------------------------------------------------\n> \n> # - Connection Settings -\n> \n> #listen_addresses = 'localhost' # what IP address(es) to \n> listen on;\n> # comma-\n> separated list of addresses;\n> # defaults to \n> 'localhost', '*' = all\n> listen_addresses = '*'\n> port = 5432\n> max_connections = 220\n> # note: increasing max_connections costs ~400 bytes of shared memory \n> per\n> # connection slot, plus lock space (see max_locks_per_transaction). \n> You\n> # might also need to raise shared_buffers to support more connections.\n> #superuser_reserved_connections = 2\n> unix_socket_directory = '/var/run/postgresql'\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777 # octal\n> #bonjour_name = '' # defaults to the computer \n> name\n> \n> # - Security & Authentication -\n> \n> #authentication_timeout = 60 # 1-600, in seconds\n> ssl = true\n> #password_encryption = on\n> #db_user_namespace = off\n> \n> # Kerberos\n> #krb_server_keyfile = ''\n> #krb_srvname = 'postgres'\n> #krb_server_hostname = '' # empty string matches any \n> keytab entry\n> #krb_caseins_users = off\n> \n> # - TCP Keepalives -\n> # see 'man 7 tcp' for details\n> \n> #tcp_keepalives_idle = 0 # TCP_KEEPIDLE, in seconds;\n> # 0 selects the \n> system default\n> #tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n> # 0 selects \n> the system default\n> #tcp_keepalives_count = 0 # TCP_KEEPCNT;\n> # 0 selects \n> the system default\n> \n> \n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> \n> # - Memory -\n> \n> shared_buffers = 49152 # min 16 or max_connections*2, \n> 8KB each, 384MB\n\nWith 4G of ram, you might want to try this closer to 1G and see if it\nhelps. You may want to install the pg_buffercache module to monitor\nshared_buffer usage. 
I doubt you want to use it during actual timing\nof the test, but it should help you get a feel for what the best\nsetting is for shared_buffers.\n\n> temp_buffers = 1000 # min 100, 8KB each\n> max_prepared_transactions = 350 # can be 0 or more\n> # note: increasing max_prepared_transactions costs ~600 bytes of \n> shared memory\n> # per transaction slot, plus lock space (see \n> max_locks_per_transaction).\n> work_mem = 1024 # min 64, size in \n> KB\n> maintenance_work_mem = 524288 # min 1024, size in KB, -512\n> MB-\n> max_stack_depth = 6144 # min 100, size in KB\n> \n> # - Free Space Map -\n> max_fsm_pages = 58000 # min max_fsm_relations*16, 6 \n> bytes each\n> max_fsm_relations = 3000 # min 100, ~70 bytes each\n> \n> # - Kernel Resource Usage -\n> \n> #max_files_per_process = 1000 # min 25\n> #preload_libraries = ''\n> \n> # - Cost-Based Vacuum Delay -\n> \n> #vacuum_cost_delay = 0 # 0-1000 milliseconds\n> #vacuum_cost_page_hit = 1 # 0-10000 credits\n> #vacuum_cost_page_miss = 10 # 0-10000 credits\n> #vacuum_cost_page_dirty = 20 # 0-10000 credits\n> #vacuum_cost_limit = 200 # 0-10000 credits\n> \n> # - Background writer -\n> \n> #bgwriter_delay = 5000 # 10-10000 milliseconds \n> between rounds\n> bgwriter_lru_percent = 0 # 0-100% of LRU buffers \n> scanned/round\n> bgwriter_lru_maxpages = 0 # 0-1000 buffers max \n> written/round\n> bgwriter_all_percent = 0 # 0-100% of all buffers \n> scanned/round\n> bgwriter_all_maxpages = 0 # 0-1000 buffers max \n> written/round\n\nIt looks like you're trying to disable the background writer. This will\ncause checkpoints to be more expensive. Can you verify that the perf\nproblems that you're seeing aren't the result of checkpoints?\n\n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> \n> # - Settings -\n> \n> fsync = off # turns forced synchronization \n> on or off\n> #wal_sync_method = fsync # the default is the first \n> option\n> # supported by the operating \n> system:\n> # open_datasync\n> # fdatasync\n> # fsync\n> # fsync_writethrough\n> # open_sync\n> #full_page_writes = on # recover from partial page \n> writes\n\nTurn this off.\n\n> #wal_buffers = 8 # min 4, 8KB each\n\nWhile it's difficult to know whether it will help, I'd bump this up to\n16 or 32 and see if it helps.\n\n> #commit_delay = 5 # range 0-100000, in \n> microseconds\n> #commit_siblings = 5 # range 1-1000\n> # - Checkpoints -\n> \n> checkpoint_segments = 100 # in logfile segments, min 1, \n> 16MB each\n> checkpoint_timeout = 1800 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # in seconds, 0 is off\n\nAre you seeing checkpoint warnings in the log?\n\n> # - Archiving -\n> \n> #archive_command = '' # command to use to archive a \n> logfile\n> # segment\n> \n> \n> #---------------------------------------------------------------------------\n> # QUERY TUNING\n> #---------------------------------------------------------------------------\n> \n> # - Planner Method Configuration -\n> \n> #enable_bitmapscan = on\n> #enable_hashagg = on\n> #enable_hashjoin = on\n> #enable_indexscan = on\n> #enable_mergejoin = on\n> #enable_nestloop = on\n> #enable_seqscan = on\n> #enable_sort = on\n> #enable_tidscan = on\n> \n> # - Planner Cost Constants -\n> \n> effective_cache_size = 196608 # typically 8KB each\n\nWhat else is running on this system? 
4G - 400M shared buffers - 100M\nfor other OS activities = 3G.\n\n> #random_page_cost = 4 # units are one sequential \n> page fetch\n> # cost\n> #cpu_tuple_cost = 0.01 # (same)\n> #cpu_index_tuple_cost = 0.001 # (same)\n> #cpu_operator_cost = 0.0025 # (same)\n> \n> # - Genetic Query Optimizer -\n> \n> #geqo = on\n> #geqo_threshold = 12\n> #geqo_effort = 5 # range 1-10\n> #geqo_pool_size = 0 # selects default based on \n> effort\n> #geqo_generations = 0 # selects default based on \n> effort\n> #geqo_selection_bias = 2.0 # range 1.5-2.0\n> \n> # - Other Planner Options -\n> \n> #default_statistics_target = 10 # range 1-1000\n> #constraint_exclusion = off\n> #from_collapse_limit = 8\n> #join_collapse_limit = 8 # 1 disables collapsing of \n> explicit\n> # JOINs\n> \n> \n> #---------------------------------------------------------------------------\n> # ERROR REPORTING AND LOGGING\n> #---------------------------------------------------------------------------\n> \n> # - Where to Log -\n> \n> #log_destination = 'stderr' # Valid values are \n> combinations of\n> # stderr, syslog and eventlog,\n> # depending on platform.\n> \n> # This is used when logging to stderr:\n> #redirect_stderr = off # Enable capturing of stderr \n> into log\n> # files\n> \n> # These are only used if redirect_stderr is on:\n> #log_directory = 'pg_log' # Directory where log files \n> are written\n> # Can be absolute or relative \n> to PGDATA\n> #log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name \n> pattern.\n> # Can include strftime() \n> escapes\n> #log_truncate_on_rotation = off # If on, any existing log file of the \n> same\n> # name as the new log file \n> will be\n> # truncated rather than \n> appended to. But\n> # such truncation only occurs \n> on\n> # time-driven rotation, not on \n> restarts\n> # or size-driven rotation. \n> Default is\n> # off, meaning append to \n> existing files\n> # in all cases.\n> #log_rotation_age = 1440 # Automatic rotation of \n> logfiles will\n> # happen after so many \n> minutes. 0 to\n> # disable.\n> #log_rotation_size = 10240 # Automatic rotation of \n> logfiles will\n> # happen after so many \n> kilobytes of log\n> # output. 
0 to disable.\n> # These are relevant when logging to syslog:\n> #syslog_facility = 'LOCAL0'\n> #syslog_ident = 'postgres'\n> \n> \n> # - When to Log -\n> \n> #client_min_messages = notice # Values, in order of \n> decreasing detail:\n> # debug5\n> # debug4\n> # debug3\n> # debug2\n> # debug1\n> # log\n> # notice\n> # warning\n> # error\n> \n> #log_min_messages = notice # Values, in order of \n> decreasing detail:\n> # debug5\n> # debug4\n> # debug3\n> # debug2\n> # debug1\n> # info\n> # notice\n> # warning\n> # error\n> # log\n> # fatal\n> # panic\n> \n> #log_error_verbosity = default # terse, default, or verbose \n> messages\n> \n> #log_min_error_statement = panic # Values in order of \n> increasing severity:\n> # debug5\n> # debug4\n> # debug3\n> # debug2\n> # debug1\n> # info\n> # notice\n> # warning\n> # error\n> # panic(off)\n> #log_min_duration_statement = -1 # -1 is disabled, 0 logs all \n> statements\n> # and their durations, in \n> milliseconds.\n> \n> #silent_mode = off # DO NOT USE without syslog or\n> # redirect_stderr\n> \n> # - What to Log -\n> \n> #debug_print_parse = off\n> #debug_print_rewritten = off\n> #debug_print_plan = off\n> #debug_pretty_print = off\n> #log_connections = off\n> #log_disconnections = off\n> #log_duration = off\n> log_line_prefix = '%t ' # Special values:\n> # %u = user name\n> # %d = database name\n> # %r = remote host and port\n> # %h = remote host\n> # %p = PID\n> # %t = timestamp (no \n> milliseconds)\n> # %m = timestamp with \n> milliseconds\n> # %i = command tag\n> # %c = session id\n> # %l = session line number\n> # %s = session start \n> timestamp\n> # %x = transaction id\n> # %q = stop here in non-\n> session\n> # processes\n> # %% = '%'\n> # e.g. '<%u%%%d> '\n> #log_statement = 'none' # none, mod, ddl, all\n> #log_hostname = off\n> \n> \n> #---------------------------------------------------------------------------\n> # RUNTIME STATISTICS\n> #---------------------------------------------------------------------------\n> \n> # - Statistics Monitoring -\n> \n> #log_parser_stats = off\n> #log_planner_stats = off\n> #log_executor_stats = off\n> #log_statement_stats = off\n> \n> # - Query/Index Statistics Collector -\n> \n> stats_start_collector = off\n> stats_command_string = off\n> stats_block_level = off\n> stats_row_level = off\n> stats_reset_on_server_start = off\n> \n> \n> #---------------------------------------------------------------------------\n> # AUTOVACUUM PARAMETERS\n> #---------------------------------------------------------------------------\n> \n> autovacuum = off # enable autovacuum \n> subprocess?\n> #autovacuum_naptime = 60 # time between autovacuum \n> runs, in secs\n> #autovacuum_vacuum_threshold = 1000 # min # of tuple updates \n> before\n> # vacuum\n> #autovacuum_analyze_threshold = 500 # min # of tuple updates \n> before\n> # analyze\n> \n> #autovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n> # vacuum\n> #autovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n> # analyze\n> #autovacuum_vacuum_cost_delay = -1 # default vacuum cost delay \n> for\n> # autovac, -1 means use\n> # vacuum_cost_delay\n> #autovacuum_vacuum_cost_limit = -1 # default vacuum cost limit \n> for\n> # autovac, -1 means use\n> # vacuum_cost_limit\n> \n> \n> #---------------------------------------------------------------------------\n> # CLIENT CONNECTION DEFAULTS\n> #---------------------------------------------------------------------------\n> \n> # - Statement Behavior -\n> \n> #search_path = '$user,public' # schema 
names\n> #default_tablespace = '' # a tablespace name, '' uses\n> # the default\n> #check_function_bodies = on\n> #default_transaction_isolation = 'read committed'\n> \n> #default_transaction_read_only = off\n> #statement_timeout = 0 # 0 is disabled, in \n> milliseconds\n> \n> # - Locale and Formatting -\n> \n> #datestyle = 'iso, mdy'\n> #timezone = unknown # actually, defaults to TZ\n> # environment setting\n> #australian_timezones = off\n> #extra_float_digits = 0 # min -15, max 2\n> #client_encoding = sql_ascii # actually, defaults to \n> database\n> # encoding\n> \n> # These settings are initialized by initdb -- they might be changed\n> lc_messages = 'it_IT.UTF-8' # locale for system \n> error message\n> # strings\n> lc_monetary = 'it_IT.UTF-8' # locale for monetary \n> formatting\n> lc_numeric = 'it_IT.UTF-8' # locale for number \n> formatting\n> lc_time = 'it_IT.UTF-8' # locale for time \n> formatting\n> \n> # - Other Defaults -\n> \n> #explain_pretty_print = on\n> #dynamic_library_path = '$libdir'\n> \n> \n> #---------------------------------------------------------------------------\n> # LOCK MANAGEMENT\n> #---------------------------------------------------------------------------\n> \n> #deadlock_timeout = 1000 # in milliseconds\n> #max_locks_per_transaction = 64 # min 10\n> # note: each lock table slot uses ~220 bytes of shared memory, and \n> there are\n> # max_locks_per_transaction * (max_connections + \n> max_prepared_transactions)\n> # lock table slots.\n> \n> \n> #---------------------------------------------------------------------------\n> # VERSION/PLATFORM COMPATIBILITY\n> #---------------------------------------------------------------------------\n> \n> # - Previous Postgres Versions -\n> \n> #add_missing_from = off\n> #backslash_quote = safe_encoding # on, off, or safe_encoding\n> #default_with_oids = off\n> #escape_string_warning = off\n> #regex_flavor = advanced # advanced, extended, or basic\n> #sql_inheritance = on\n> \n> # - Other Platforms & Clients -\n> \n> #transform_null_equals = off\n> \n> \n> #---------------------------------------------------------------------------\n> # CUSTOMIZED OPTIONS\n> #---------------------------------------------------------------------------\n> \n> #custom_variable_classes = '' # list of custom variable \n> class names\n\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 22 Nov 2007 09:33:04 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning for TPC-C benchmark" }, { "msg_contents": "On Nov 22, 2007 9:09 AM, [email protected] <[email protected]> wrote:\n> I'm doing tests on various Database and in particular I'm running a\n> comparison between Oracle 10g and Postgres 8.1 on a dedicated server\n\nAs Bill said, do not publish any part of the Oracle result anywhere.\n\n> with 2 processors Dual-Core AMD Opteron 2218 2.6 GHz, 4GB of memory\n> and Debian GNU / Linux version 2.6.18-5. Performance is very similar up\n> to 30 users, but from 40 onwards with Postgres fall quickly. That is\n> not what happens with Oracle that comes to 600 users. 
Can you help me\n> with the tuning ?\n\nI'm not sure which TPC-C kit you're using, but you should probably use DBT-2.\n\nhttp://sourceforge.net/project/showfiles.php?group_id=52479&package_id=54389&release_id=485705\nhttp://oss.oracle.com/projects/olt/\n\nAs for parameters, I'd start with:\n\n- Make sure wal and data are split and their RAIDs (if any) are\nconfigured properly.\n\nshared_buffers = 98304 (this may need to stay at your current one\ndepending on the cost of checkpoints)\nmax_prepared_transactions = 5 (this doesn't have anything to do with\nwhat it sounds like)\nmax_fsm_relations = 1000\nbgwriter_delay = 500\nwal_sync_method = open_sync (or try open_datasync)\nwal_buffers = 256\ncheckpoint_segments = 256 (if you have the space)\ncheckpoint_timeout = 1800\ncheckpoint_warning = 1740\neffective_cache_size = 346030\ndefault_statistics_target = 100\n\nI'm not sure whether DBT-2 supports it out-of-the-box, but you should\nalso look at changing default_transaction_isolation to serializable.\nKeep in mind that DBT-2 has several bugs in it. Though, I'm not sure\nwhether Oracle fixed them on their version either.\n\nIt also looks like you have fsync turned off, which means commits are\nnot guaranteed (unlike your Oracle configuration). If you want\napples-to-apples, you need to turn fsync on.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 22 Nov 2007 09:59:49 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning for TPC-C benchmark" }, { "msg_contents": "[email protected] wrote:\n> Hello all,\n> I'm doing tests on various Database and in particular I'm running a \n> comparison between Oracle 10g and Postgres 8.1 on a dedicated server \n> with 2 processors Dual-Core AMD Opteron 2218 2.6 GHz, 4GB of memory \n> and Debian GNU / Linux version 2.6.18-5. Performance is very similar up \n> to 30 users, but from 40 onwards with Postgres fall quickly. That is \n> not what happens with Oracle that comes to 600 users. Can you help me \n> with the tuning ?\n\nThe fact that you didn't give any details on your I/O configuration \ntells me that you don't have much experience with TPC-C. TPC-C is \nbasically limited by random I/O. That means that a good RAID controller \nand a lot of disks is a must. Looking at some of the results at \nwww.tpc.org, systems with 4 cores have multiple RAID controllers and \nabout a hundred hard drives.\n\nYou can of course run smaller tests, but those 4 cores are going spend \nall their time waiting for I/O. See for example these old DBT-2 results \nI ran to test the Load Distributed Checkpoints feature in 8.3.\n\nNow that we got that out of the way, what kind of a test configuration \nare you using? How many warehouses? Are you using the think-times, per \nthe spec, or are you running something like BenchmarkSQL which just \npushes as many queries it can to the server?\n\nI'm not sure what you mean by # of users, but you shouldn't use more \nthan 10-30 connections on a test like that. More won't help, because \nthey'll all have to queue for the same resources, whether it's I/O or CPU.\n\nHow long tests are you running? 
After some time, you'll need to run \nvacuums, which make a big difference.\n\n8.3 will perform better, thanks to HOT which reduces the need to vacuum, \nvarvarlen which reduces storage size, leading to better use of the cache \nand less I/O, and Load Distributed Checkpoints, which reduce the \ncheckpoint spikes which otherwise throw you over the response time \nrequirements.\n\nAnd last but not least, why are you running the benchmark? It's going to \nbe practically irrelevant for any real application. You should benchmark \nwith your application, and your data, to get a comparison that matters \nfor you.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 22 Nov 2007 19:22:41 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning for TPC-C benchmark" } ]
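Assembled from the suggestions in this thread, a starting postgresql.conf sketch for this 4GB machine might look like the lines below (8.1-style integer units, matching the file quoted above). These are the values proposed by the respondents, not measured optima, and the comments summarize their caveats: shared_buffers may have to come back down if checkpoints become too expensive, and fsync has to stay on for a durable, apples-to-apples comparison.

    shared_buffers = 98304            # ~768MB; revert toward 49152 if checkpoint cost hurts
    max_prepared_transactions = 5     # two-phase commit slots, not prepared statements
    max_fsm_relations = 1000
    bgwriter_delay = 500              # and restore the bgwriter_*_percent/maxpages defaults
    wal_sync_method = open_sync       # also worth trying open_datasync
    wal_buffers = 256
    checkpoint_segments = 256         # needs several GB of free disk for WAL
    checkpoint_timeout = 1800
    checkpoint_warning = 1740
    effective_cache_size = 346030     # ~2.7GB, assuming little else runs on the box
    default_statistics_target = 100
    fsync = on                        # commits are not crash-safe with this off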
[ { "msg_contents": ">>> \"[email protected]\" <[email protected]> 11/22/07 8:09 AM >>>\n> Performance is very similar up\n> to 30 users, but from 40 onwards with Postgres fall quickly.\n \nI suggest testing with some form of connection pooling.\n\nMany database products will queue requests in those situations;\nwith PostgreSQL it is up to you to arrange that.\n \n-Kevin\n \n\n", "msg_date": "Thu, 22 Nov 2007 09:45:07 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: tuning for TPC-C benchmark" }, { "msg_contents": "On Nov 22, 2007 10:45 AM, Kevin Grittner <[email protected]> wrote:\n> I suggest testing with some form of connection pooling.\n\nYeah, that's one of the reasons I suggested DBT-2. It pools\nconnections and is the most mature TPC-C-like test for Postgres.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Thu, 22 Nov 2007 12:38:24 -0500", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: tuning for TPC-C benchmark" } ]
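To apply the pooling suggestion outside of DBT-2's built-in client handling, an external pooler such as pgbouncer (pgpool-II is an alternative) can do the queueing. The snippet below is only an illustrative sketch: the database name, port, user list path and pool size are invented for the example rather than taken from this thread. The point is simply to keep the number of live PostgreSQL backends near what the CPUs and disks can actually service, and let the extra simulated users wait in the pooler instead of piling onto the server.

    ; pgbouncer.ini (illustrative values only)
    [databases]
    tpcc = host=127.0.0.1 port=5432 dbname=tpcc

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    pool_mode = transaction
    ; a couple of backends per core plus a few for the disks
    default_pool_size = 20
    ; the simulated users queue here rather than in postgres
    max_client_conn = 600

The benchmark clients then connect to port 6432 instead of 5432; nothing in the application SQL changes.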
[ { "msg_contents": "\nHi,\n\nI am having a performance problem trying to query a view which is a\nUNION ALL of 2 tables. I have narrowed the problem down to my use of\nDOMAINS in the underlying table. So in the test-case below, when the\ncolumn \"a\" is of domain type foo_text, the query runs slowly using\nthe following plan:\n\n Subquery Scan foo_v (cost=0.00..798.00 rows=100 width=64) (actual time=0.049..24.763 rows=2 loops=1)\n Filter: (a = (('foo34'::text)::foo_text)::text)\n -> Append (cost=0.00..548.00 rows=20000 width=20) (actual time=0.007..20.338 rows=20000 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..274.00 rows=10000 width=20) (actual time=0.006..7.341 rows=10000 loops=1)\n -> Seq Scan on foo (cost=0.00..174.00 rows=10000 width=20) (actual time=0.004..2.366 rows=10000 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..274.00 rows=10000 width=10) (actual time=0.009..6.536 rows=10000 loops=1)\n -> Seq Scan on foo (cost=0.00..174.00 rows=10000 width=10) (actual time=0.007..2.746 rows=10000 loops=1)\n Total runtime: 24.811 ms\n\nHowever, when the column type is text, the query runs fast as I\nwould expect, using the PK index:\n\n Result (cost=0.00..16.55 rows=2 width=64) (actual time=0.015..0.025 rows=2 loops=1)\n -> Append (cost=0.00..16.55 rows=2 width=64) (actual time=0.014..0.023 rows=2 loops=1)\n -> Index Scan using foo_pkey on foo (cost=0.00..8.27 rows=1 width=20) (actual time=0.014..0.014 rows=1 loops=1)\n Index Cond: (a = (('foo34'::text)::foo_text)::text)\n -> Index Scan using foo_pkey on foo (cost=0.00..8.27 rows=1 width=10) (actual time=0.007..0.008 rows=1 loops=1)\n Index Cond: (a = (('foo34'::text)::foo_text)::text)\n Total runtime: 0.065 ms\n\n(PostgreSQL 8.2.5)\n\nAny ideas?\n\nThanks, Dean\n\n\n\nCREATE OR REPLACE FUNCTION setup()\nRETURNS void AS\n$$\nDECLARE\n val int;\nBEGIN\n DROP TABLE IF EXISTS foo CASCADE;\n DROP DOMAIN IF EXISTS foo_text;\n\n CREATE DOMAIN foo_text text;-- CONSTRAINT tt_check CHECK (VALUE LIKE 'foo%');\n\n CREATE TABLE foo\n (\n a foo_text PRIMARY KEY,\n b text\n );\n\n val := 0;\n WHILE val < 10000 LOOP\n INSERT INTO foo VALUES('foo'||val, 'bar'||val);\n val := val+1;\n END LOOP;\n\n CREATE VIEW foo_v AS\n (SELECT a,b from foo) UNION ALL (SELECT a,NULL::text AS b FROM foo);\nEND;\n$$ LANGUAGE plpgsql;\n\nSELECT setup();\nANALYZE foo;\n\nEXPLAIN ANALYZE SELECT * FROM foo_v WHERE a='foo34'::foo_text;\n\n_________________________________________________________________\nFeel like a local wherever you go.\nhttp://www.backofmyhand.com", "msg_date": "Fri, 23 Nov 2007 13:29:40 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem with UNION ALL view and domains" }, { "msg_contents": "On Nov 23, 2007 7:29 AM, Dean Rasheed <[email protected]> wrote:\n> I am having a performance problem trying to query a view which is a\n> UNION ALL of 2 tables. I have narrowed the problem down to my use of\n> DOMAINS in the underlying table. So in the test-case below, when the\n> column \"a\" is of domain type foo_text, the query runs slowly using\n> the following plan:\n\nI don't know much about DOMAINS, but I did learn somethings about\nviews, unions and where conditions when I posted a similar performance\nquestion. 
The best answer was, of course, from Tom Lane here:\n\nhttp://archives.postgresql.org/pgsql-performance/2007-11/msg00041.php\n\nIn my case, the data types in each segment of the union were not\noriginally identical, preventing the planner from efficiently pushing\nthe qualifications down to the individual segments prior to the union.\n\nIn your case the use of a DOMAIN type may be one of those 'special\ncases' forcing the planner to perform the union first, then apply the\nconditions.\n\nJeff\n", "msg_date": "Fri, 23 Nov 2007 10:29:07 -0600", "msg_from": "\"Jeff Larsen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with UNION ALL view and domains" }, { "msg_contents": "\"Jeff Larsen\" <[email protected]> writes:\n> On Nov 23, 2007 7:29 AM, Dean Rasheed <[email protected]> wrote:\n>> I am having a performance problem trying to query a view which is a\n>> UNION ALL of 2 tables. I have narrowed the problem down to my use of\n>> DOMAINS in the underlying table.\n\n> In my case, the data types in each segment of the union were not\n> originally identical, preventing the planner from efficiently pushing\n> the qualifications down to the individual segments prior to the union.\n\n> In your case the use of a DOMAIN type may be one of those 'special\n> cases' forcing the planner to perform the union first, then apply the\n> conditions.\n\nIt looks like the problem is that the UNION is taken as producing plain\ntext output, as you can see with \\d:\n\nregression=# \\d foo\n Table \"public.foo\"\n Column | Type | Modifiers \n--------+----------+-----------\n a | foo_text | not null\n b | text | \nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (a)\n\nregression=# \\d foo_v\n View \"public.foo_v\"\n Column | Type | Modifiers \n--------+------+-----------\n a | text | \n b | text | \nView definition:\n SELECT foo.a, foo.b\n FROM foo\nUNION ALL \n SELECT foo.a, NULL::text AS b\n FROM foo;\n\nTracing through the code, I see that this happens because\nselect_common_type() smashes all domains to base types before doing\nanything else. So even though all the inputs are in fact the same\ndomain type, you end up with the base type as the UNION result type.\n\nPossibly that could be improved sometime, but we certainly wouldn't try\nto change it in an existing release branch...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Nov 2007 12:41:58 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with UNION ALL view and domains " }, { "msg_contents": "\n> It looks like the problem is that the UNION is taken as producing plain\n> text output, as you can see with \\d:\n>\n> regression=# \\d foo\n> Table \"public.foo\"\n> Column | Type | Modifiers\n> --------+----------+-----------\n> a | foo_text | not null\n> b | text |\n> Indexes:\n> \"foo_pkey\" PRIMARY KEY, btree (a)\n>\n> regression=# \\d foo_v\n> View \"public.foo_v\"\n> Column | Type | Modifiers\n> --------+------+-----------\n> a | text |\n> b | text |\n> View definition:\n> SELECT foo.a, foo.b\n> FROM foo\n> UNION ALL\n> SELECT foo.a, NULL::text AS b\n> FROM foo;\n>\n> Tracing through the code, I see that this happens because\n> select_common_type() smashes all domains to base types before doing\n> anything else. 
So even though all the inputs are in fact the same\n> domain type, you end up with the base type as the UNION result type.\n>\n> Possibly that could be improved sometime, but we certainly wouldn't try\n> to change it in an existing release branch...\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\n\nThanks for your replies. It looks like I can cure the performance problem by casting to the\nbase type in the view definition:\n\nCREATE VIEW foo_v AS SELECT a::text,b from foo UNION ALL SELECT a::text,NULL:\\\n:text AS b FROM foo;\n\nInterestingly though, if I cast back to the domain type after taking the union, then the\nview has the correct type, but the performance problem comes back in a different way:\n\nCREATE VIEW foo_v AS SELECT foo_u.a::foo_text, foo_u.b FROM\n(SELECT a::text,b from foo UNION ALL SELECT a::text,NULL::text AS b FROM foo) as foo_u;\n\nlookup=> \\d foo_v\n View \"public.foo_v\"\n Column | Type | Modifiers\n--------+----------+-----------\n a | foo_text |\n b | text |\nView definition:\n SELECT foo_u.a::foo_text AS a, foo_u.b\n FROM ( SELECT foo.a::text AS a, foo.b\n FROM foo\nUNION ALL\n SELECT foo.a::text AS a, NULL::text AS b\n FROM foo) foo_u;\n\n Result (cost=0.00..399.00 rows=100 width=64) (actual time=0.023..6.777 rows=2 loops=1)\n -> Append (cost=0.00..399.00 rows=100 width=64) (actual time=0.022..6.775 rows=2 loops=1)\n -> Seq Scan on foo (cost=0.00..199.00 rows=50 width=20) (actual time=0.022..3.409 rows=1 loops=1)\n Filter: ((((a)::text)::foo_text)::text = (('foo34'::text)::foo_text)::text)\n -> Seq Scan on foo (cost=0.00..199.00 rows=50 width=10) (actual time=0.016..3.364 rows=1 loops=1)\n Filter: ((((a)::text)::foo_text)::text = (('foo34'::text)::foo_text)::text)\n Total runtime: 6.849 ms\n\nSo the planner has been able to push the condition down into the bottom tables, but it\ncan't use the PK index. Is this because of all the casts?\n\nDean.\n\n_________________________________________________________________\n100’s of Music vouchers to be won with MSN Music\nhttps://www.musicmashup.co.uk", "msg_date": "Sat, 24 Nov 2007 12:06:01 +0000", "msg_from": "Dean Rasheed <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with UNION ALL view and domains" } ]
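Summarizing the workaround that held up in this thread: keep the view's output as the domain's base type and compare against that base type, so no cast ends up wrapped around the indexed column inside either UNION ALL arm. A minimal sketch against the same test schema (reusing the names from the example above):

    CREATE OR REPLACE VIEW foo_v AS
        SELECT a::text AS a, b FROM foo
        UNION ALL
        SELECT a::text AS a, NULL::text AS b FROM foo;

    -- the domain-to-base-type cast on "a" is a simple relabel, so the planner
    -- can still push the qual into each arm and use foo_pkey in both of them
    EXPLAIN ANALYZE SELECT * FROM foo_v WHERE a = 'foo34';

The variant that re-casts the union output back to foo_text loses the index again because, as the posted plan shows, the filter becomes an expression over ((a::text)::foo_text)::text rather than a comparison on the bare column.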
[ { "msg_contents": "Hello,\n\n\nI'm having serious peformance problems with PostGreSQL and Windows Server 2003\nEnterprise Edition. The PostgreSQL Server don't starts if I set the shared\nbuffers high than\n1GB. All my programs can use only 3 GB of RAM and I have 8GB of RAM.\nWhen I try to execute a query in a table about 4 milion registers, my\napplication crashes with an error memory message.\n\nThe configuration:\n\nPostGreSQL 8.2.5\nO.S: Windows Server 2003 Enterprise Edition\n Service Pack 2\n\nComputer:\ndual quad core Intel(R) Xeon(R) CPU E5345 @ 2.33GHz\n8GB of RAM\nPhysical Address Extension\n3 HDs in RAID-5\n\n\nMy boot.ini:\n\n[boot loader]\ntimeout=30\ndefault=multi(0)disk(0)rdisk(0)partition(2)\\WINDOWS\n[operating systems]\nmulti(0)disk(0)rdisk(0)partition(2)\\WINDOWS=\"Windows Server 2003, Enterprise\"\n/fastdetect /PAE /NoExecute=OptOut /3GB\n\n\nPostGreSQL.conf:\n\n\nshared_buffers = 1024MB\t\t\t# min 128kB or max_connections*16kB\n\t\t\t\t\t# (change requires restart)\ntemp_buffers = 32MB\t\t\t# min 800kB\n#max_prepared_transactions = 5\t\t# can be 0 or more\n\t\t\t\t\t# (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem =512MB\t\t\t\t# min 64kB\nmaintenance_work_mem = 256MB\t\t# min 1MB\nmax_stack_depth = 2MB\t\t\t# min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 409600\t\t# min max_fsm_relations*16, 6 bytes each\n\t\t\t\t\t# (change requires restart)\n#max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\t\t\t\t\t# (change requires restart)\n\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n\n# - Checkpoints -\n\ncheckpoint_segments = 128\t\t# in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 15min\t\t# range 30s-1h\ncheckpoint_warning = 30s\t\t# 0 is off\n\n\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n\neffective_cache_size = 5120MB\n\n\n\n\n\nThanks, Cl�udia.\n\n\n\n\n", "msg_date": "Fri, 23 Nov 2007 21:20:47 -0200 (BRST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "[email protected] wrote:\n> I'm having serious peformance problems with PostGreSQL and Windows Server 2003\n> Enterprise Edition. The PostgreSQL Server don't starts if I set the shared\n> buffers high than\n> 1GB. All my programs can use only 3 GB of RAM and I have 8GB of RAM.\n> When I try to execute a query in a table about 4 milion registers, my\n> application crashes with an error memory message.\n\nWhat error message do you get if setting shared_buffers higher than 1GB? \nExactly what error message do you get when the application crashes?\n\n> work_mem =512MB\t\t\t\t# min 64kB\n\nThat's way too high for most applications. In a complex query, each sort \nor hash node can will use up work_mem amount of memory. That means that \nif you have a very complex query with several such nodes, it will run \nout of memory. 
Try something like 16MB.\n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 24 Nov 2007 08:53:24 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "Hi,\n\nWhen I set shared buffers higher than 1GB, PostGreSQL doens't start.\n\nWhen my application crashes I receive a message \"Out of memory\" or\n\"invalid sql statement\". But the sql statement is ok - if I execute it\nin a table with less registers, it works and it is very simple. When I monitor\nthe processes\nI can see that PostGreSQL allocs only 700 MB of memory, and my application 2GB.\nTotal: 3GB.\nMy program was made in Delphi 2006, and I use ADO via ODBC to connect to\nPostGreSQL.\n\nThe structure of my table:\n\nCREATE TABLE \"public\".\"fato_financeiro\" (\n \"CODCLI\" VARCHAR(6),\n \"PREST\" VARCHAR(4) NOT NULL,\n \"NUMTRANSVENDA\" VARCHAR(10) NOT NULL,\n \"RECNUM\" VARCHAR(8) NOT NULL,\n \"CODFORNEC\" VARCHAR(8),\n \"TIPO\" VARCHAR(2),\n \"NUMDOC\" VARCHAR(10),\n \"PREST_1\" VARCHAR(4),\n \"VALOR\" DOUBLE PRECISION,\n \"DTEMISSAO\" TIMESTAMP WITH TIME ZONE,\n \"DTVENC\" TIMESTAMP WITH TIME ZONE,\n \"DTPAG\" TIMESTAMP WITH TIME ZONE,\n \"VPAGO\" DOUBLE PRECISION,\n \"PAGO_PAG\" VARCHAR(9),\n \"ATRASADO\" VARCHAR(3),\n CONSTRAINT \"fato_financeiro_idx\" PRIMARY KEY(\"PREST\", \"NUMTRANSVENDA\", \"RECNUM\")\n) WITHOUT OIDS;\n\n\nSQL statement:\n\nselect\nfato_financeiro.\"TIPO\",\nfato_financeiro.\"NUMDOC\",\nfato_financeiro.\"PREST\",\nfato_financeiro.\"NUMDOC\",\nfato_financeiro.\"DTVENC\",\nfato_financeiro.\"DTPAG\",\nfato_financeiro.\"PAGO_PAG\",\nfato_financeiro.\"ATRASADO\",\nfato_financeiro.\"CODCLI\",\nfato_financeiro.\"CODFORNEC\",\nfato_financeiro.\"DTEMISSAO\"\nfrom fato_financeiro\n\n\n\nThanks,\nCl�udia.\n\n\n\n\n> [email protected] wrote:\n>> I'm having serious peformance problems with PostGreSQL and Windows Server 2003\n>> Enterprise Edition. The PostgreSQL Server don't starts if I set the shared\n>> buffers high than\n>> 1GB. All my programs can use only 3 GB of RAM and I have 8GB of RAM.\n>> When I try to execute a query in a table about 4 milion registers, my\n>> application crashes with an error memory message.\n>\n> What error message do you get if setting shared_buffers higher than 1GB?\n> Exactly what error message do you get when the application crashes?\n>\n>> work_mem =512MB\t\t\t\t# min 64kB\n>\n> That's way too high for most applications. In a complex query, each sort\n> or hash node can will use up work_mem amount of memory. That means that\n> if you have a very complex query with several such nodes, it will run\n> out of memory. Try something like 16MB.\n>\n>\n> --\n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n", "msg_date": "Sat, 24 Nov 2007 10:00:25 -0200 (BRST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "Hello,\n\n I forgot to say that I changed work_mem to 16 MB but I didn't have sucess.\nI received the same\nerror message.\n\n\nThanks,\nCl�udia Amorim.\n\n\n\n> Hi,\n>\n> When I set shared buffers higher than 1GB, PostGreSQL doens't start.\n>\n> When my application crashes I receive a message \"Out of memory\" or\n> \"invalid sql statement\". 
But the sql statement is ok - if I execute it\n> in a table with less registers, it works and it is very simple. When I monitor\n> the processes\n> I can see that PostGreSQL allocs only 700 MB of memory, and my application 2GB.\n> Total: 3GB.\n> My program was made in Delphi 2006, and I use ADO via ODBC to connect to\n> PostGreSQL.\n>\n> The structure of my table:\n>\n> CREATE TABLE \"public\".\"fato_financeiro\" (\n> \"CODCLI\" VARCHAR(6),\n> \"PREST\" VARCHAR(4) NOT NULL,\n> \"NUMTRANSVENDA\" VARCHAR(10) NOT NULL,\n> \"RECNUM\" VARCHAR(8) NOT NULL,\n> \"CODFORNEC\" VARCHAR(8),\n> \"TIPO\" VARCHAR(2),\n> \"NUMDOC\" VARCHAR(10),\n> \"PREST_1\" VARCHAR(4),\n> \"VALOR\" DOUBLE PRECISION,\n> \"DTEMISSAO\" TIMESTAMP WITH TIME ZONE,\n> \"DTVENC\" TIMESTAMP WITH TIME ZONE,\n> \"DTPAG\" TIMESTAMP WITH TIME ZONE,\n> \"VPAGO\" DOUBLE PRECISION,\n> \"PAGO_PAG\" VARCHAR(9),\n> \"ATRASADO\" VARCHAR(3),\n> CONSTRAINT \"fato_financeiro_idx\" PRIMARY KEY(\"PREST\", \"NUMTRANSVENDA\",\n> \"RECNUM\")\n> ) WITHOUT OIDS;\n>\n>\n> SQL statement:\n>\n> select\n> fato_financeiro.\"TIPO\",\n> fato_financeiro.\"NUMDOC\",\n> fato_financeiro.\"PREST\",\n> fato_financeiro.\"NUMDOC\",\n> fato_financeiro.\"DTVENC\",\n> fato_financeiro.\"DTPAG\",\n> fato_financeiro.\"PAGO_PAG\",\n> fato_financeiro.\"ATRASADO\",\n> fato_financeiro.\"CODCLI\",\n> fato_financeiro.\"CODFORNEC\",\n> fato_financeiro.\"DTEMISSAO\"\n> from fato_financeiro\n>\n>\n>\n> Thanks,\n> Cl�udia.\n>\n>\n>\n>\n>> [email protected] wrote:\n>>> I'm having serious peformance problems with PostGreSQL and Windows Server\n>>> 2003\n>>> Enterprise Edition. The PostgreSQL Server don't starts if I set the shared\n>>> buffers high than\n>>> 1GB. All my programs can use only 3 GB of RAM and I have 8GB of RAM.\n>>> When I try to execute a query in a table about 4 milion registers, my\n>>> application crashes with an error memory message.\n>>\n>> What error message do you get if setting shared_buffers higher than 1GB?\n>> Exactly what error message do you get when the application crashes?\n>>\n>>> work_mem =512MB\t\t\t\t# min 64kB\n>>\n>> That's way too high for most applications. In a complex query, each sort\n>> or hash node can will use up work_mem amount of memory. That means that\n>> if you have a very complex query with several such nodes, it will run\n>> out of memory. 
Try something like 16MB.\n>>\n>>\n>> --\n>> Heikki Linnakangas\n>> EnterpriseDB http://www.enterprisedb.com\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n\n\n", "msg_date": "Sun, 25 Nov 2007 02:57:01 -0200 (BRST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "On Nov 24, 2007 10:57 PM, <[email protected]> wrote:\n> Hello,\n>\n> I forgot to say that I changed work_mem to 16 MB but I didn't have sucess.\n> I received the same\n> error message.\n\nThe error message you're getting is from your client because it's\ngetting too big of a result set back at once.\n\nTry using a cursor\n", "msg_date": "Sun, 25 Nov 2007 00:25:56 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "\n\nI'm already using a cursor via ODBC.\n\n\n\n\n> On Nov 24, 2007 10:57 PM, <[email protected]> wrote:\n>> Hello,\n>>\n>> I forgot to say that I changed work_mem to 16 MB but I didn't have\n>> sucess.\n>> I received the same\n>> error message.\n>\n> The error message you're getting is from your client because it's\n> getting too big of a result set back at once.\n>\n> Try using a cursor\n>\n\n\n", "msg_date": "Sun, 25 Nov 2007 10:42:01 -0200 (BRST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "Are you then trying to process the whole data set at once? I'm pretty\ncertain the issue is your app, not pgsql, running out of memory.\n", "msg_date": "Sun, 25 Nov 2007 08:33:41 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "Hi,\n\nI'm using a cursor.\n\nHere is the a piece of log file (psqlodbc):\n\n[0.000]conn=02DE3A70, PGAPI_DriverConnect(\nin)='DSN=BI;UID=biuser;PWD=xxxxxxxxx;', fDriverCompletion=0\n[0.000]DSN info:\nDSN='BI',server='localhost',port='5432',dbase='BI',user='biuser',passwd='xxxxx'\n[0.000] \nonlyread='0',protocol='7.4',showoid='0',fakeoidindex='0',showsystable='0'\n[0.000] conn_settings='',conn_encoding='(null)'\n[0.000] translation_dll='',translation_option=''\n[0.000]Driver Version='08.02.0400,200704270001' linking static Multithread library\n[0.000]Global Options: fetch=100, socket=4096, unknown_sizes=0,\nmax_varchar_size=255, max_longvarchar_size=8190\n[0.000] disable_optimizer=0, ksqo=1, unique_index=1,\nuse_declarefetch=1\n[0.000] text_as_longvarchar=1, unknowns_as_longvarchar=0,\nbools_as_char=1 NAMEDATALEN=64\n[0.000] extra_systable_prefixes='dd_;', conn_settings=''\nconn_encoding=''\n[0.046] [ PostgreSQL version string = '8.2.5' ]\n[0.046] [ PostgreSQL version number = '8.2' ]\n[0.046]conn=02DE3A70, query='select oid, typbasetype from pg_type where typname\n= 'lo''\n[0.046]NOTICE from backend during send_query: 'SLOG'\n[0.046]NOTICE from backend during send_query: 'C00000'\n[0.046]NOTICE from backend during send_query: 'Mstatement: select oid,\ntypbasetype from pg_type where typname = 'lo''\n[0.046]NOTICE from backend during send_query: 'Fpostgres.c'\n[0.046]NOTICE from backend during send_query: 'L811'\n[0.046]NOTICE from backend during send_query: 'Rexec_simple_query'\n[0.046] [ 
fetched 1 rows ]\n[0.046] [ Large Object oid = 17288 ]\n[0.046] [ Client encoding = 'LATIN9' (code = 16) ]\n[0.046]conn=02DE3A70,\nPGAPI_DriverConnect(out)='DSN=BI;DATABASE=BI;SERVER=localhost;PORT=5432;UID=biuser;PWD=xxxxxxxxx;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=0;CommLog=1;Optimizer=0;Ksqo=1;UseDeclareFetch=1;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1'\n[0.062]STATEMENT ERROR: func=set_statement_option, desc='', errnum=30,\nerrmsg='The option may be for MS SQL Server(Set)'\n[0.062] \n------------------------------------------------------------\n[0.062] hdbc=02DE3A70, stmt=02DE85C8, result=00000000\n[0.062] prepare=0, internal=0\n[0.062] bindings=00000000, bindings_allocated=0\n[0.062] parameters=02DE8F48, parameters_allocated=1\n[0.062] statement_type=-2, statement='(NULL)'\n[0.062] stmt_with_params='(NULL)'\n[0.062] data_at_exec=-1, current_exec_param=-1, put_data=0\n[0.062] currTuple=-1, current_col=-1, lobj_fd=-1\n[0.062] maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0,\nscroll_concurrency=1\n[0.062] cursor_name=''\n[0.062] ----------------QResult Info\n-------------------------------\n[0.062]CONN ERROR: func=set_statement_option, desc='', errnum=0, errmsg='(NULL)'\n[0.062]\n\n\n\nThanks,\nCl�udia.\n\n\n\n> Are you then trying to process the whole data set at once? I'm pretty\n> certain the issue is your app, not pgsql, running out of memory.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n", "msg_date": "Sun, 25 Nov 2007 14:19:03 -0200 (BRST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" }, { "msg_contents": "There is no mention of Out of Memory in that piece of log.\n\[email protected] wrote:\n> Hi,\n> \n> I'm using a cursor.\n> \n> Here is the a piece of log file (psqlodbc):\n> \n> [0.000]conn=02DE3A70, PGAPI_DriverConnect(\n> in)='DSN=BI;UID=biuser;PWD=xxxxxxxxx;', fDriverCompletion=0\n> [0.000]DSN info:\n> DSN='BI',server='localhost',port='5432',dbase='BI',user='biuser',passwd='xxxxx'\n> [0.000] \n> onlyread='0',protocol='7.4',showoid='0',fakeoidindex='0',showsystable='0'\n> [0.000] conn_settings='',conn_encoding='(null)'\n> [0.000] translation_dll='',translation_option=''\n> [0.000]Driver Version='08.02.0400,200704270001' linking static Multithread library\n> [0.000]Global Options: fetch=100, socket=4096, unknown_sizes=0,\n> max_varchar_size=255, max_longvarchar_size=8190\n> [0.000] disable_optimizer=0, ksqo=1, unique_index=1,\n> use_declarefetch=1\n> [0.000] text_as_longvarchar=1, unknowns_as_longvarchar=0,\n> bools_as_char=1 NAMEDATALEN=64\n> [0.000] extra_systable_prefixes='dd_;', conn_settings=''\n> conn_encoding=''\n> [0.046] [ PostgreSQL version string = '8.2.5' ]\n> [0.046] [ PostgreSQL version number = '8.2' ]\n> [0.046]conn=02DE3A70, query='select oid, typbasetype from pg_type where typname\n> = 'lo''\n> [0.046]NOTICE from backend during send_query: 'SLOG'\n> [0.046]NOTICE from backend during send_query: 'C00000'\n> [0.046]NOTICE from backend during send_query: 'Mstatement: select oid,\n> typbasetype from pg_type where typname = 
'lo''\n> [0.046]NOTICE from backend during send_query: 'Fpostgres.c'\n> [0.046]NOTICE from backend during send_query: 'L811'\n> [0.046]NOTICE from backend during send_query: 'Rexec_simple_query'\n> [0.046] [ fetched 1 rows ]\n> [0.046] [ Large Object oid = 17288 ]\n> [0.046] [ Client encoding = 'LATIN9' (code = 16) ]\n> [0.046]conn=02DE3A70,\n> PGAPI_DriverConnect(out)='DSN=BI;DATABASE=BI;SERVER=localhost;PORT=5432;UID=biuser;PWD=xxxxxxxxx;SSLmode=disable;ReadOnly=0;Protocol=7.4-1;FakeOidIndex=0;ShowOidColumn=0;RowVersioning=0;ShowSystemTables=0;ConnSettings=;Fetch=100;Socket=4096;UnknownSizes=0;MaxVarcharSize=255;MaxLongVarcharSize=8190;Debug=0;CommLog=1;Optimizer=0;Ksqo=1;UseDeclareFetch=1;TextAsLongVarchar=1;UnknownsAsLongVarchar=0;BoolsAsChar=1;Parse=0;CancelAsFreeStmt=0;ExtraSysTablePrefixes=dd_;;LFConversion=1;UpdatableCursors=1;DisallowPremature=0;TrueIsMinus1=0;BI=0;ByteaAsLongVarBinary=0;UseServerSidePrepare=0;LowerCaseIdentifier=0;XaOpt=1'\n> [0.062]STATEMENT ERROR: func=set_statement_option, desc='', errnum=30,\n> errmsg='The option may be for MS SQL Server(Set)'\n> [0.062] \n> ------------------------------------------------------------\n> [0.062] hdbc=02DE3A70, stmt=02DE85C8, result=00000000\n> [0.062] prepare=0, internal=0\n> [0.062] bindings=00000000, bindings_allocated=0\n> [0.062] parameters=02DE8F48, parameters_allocated=1\n> [0.062] statement_type=-2, statement='(NULL)'\n> [0.062] stmt_with_params='(NULL)'\n> [0.062] data_at_exec=-1, current_exec_param=-1, put_data=0\n> [0.062] currTuple=-1, current_col=-1, lobj_fd=-1\n> [0.062] maxRows=0, rowset_size=1, keyset_size=0, cursor_type=0,\n> scroll_concurrency=1\n> [0.062] cursor_name=''\n> [0.062] ----------------QResult Info\n> -------------------------------\n> [0.062]CONN ERROR: func=set_statement_option, desc='', errnum=0, errmsg='(NULL)'\n> [0.062]\n> \n> \n> \n> Thanks,\n> Cl�udia.\n> \n> \n> \n>> Are you then trying to process the whole data set at once? I'm pretty\n>> certain the issue is your app, not pgsql, running out of memory.\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 2: Don't 'kill -9' the postmaster\n>>\n> \n> \n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 26 Nov 2007 09:39:23 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with PostGreSQL and Windows 2003" } ]
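Given that the psqlodbc log above already shows UseDeclareFetch=1 with Fetch=100, the driver is paging the result set, so the remaining suspect is the client side of the stack: an ADO recordset opened with a client-side cursor location will typically fetch and cache the whole four-million-row result in the application, which matches the ~2GB the Delphi process was seen holding. One way to take the client library out of the picture and confirm the server copes is to walk the table with an explicit server-side cursor; the cursor name and batch size below are just examples:

    BEGIN;

    DECLARE fato_cur CURSOR FOR
        SELECT "TIPO", "NUMDOC", "PREST", "DTVENC", "DTPAG",
               "PAGO_PAG", "ATRASADO", "CODCLI", "CODFORNEC", "DTEMISSAO"
        FROM fato_financeiro;

    -- fetch and process one batch at a time; repeat until no rows come back
    FETCH FORWARD 1000 FROM fato_cur;

    CLOSE fato_cur;
    COMMIT;

If the batches stream through without trouble, the "out of memory" is being raised by the application layer, not by PostgreSQL.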
[ { "msg_contents": "Hi all,\n\nI read that pg_dump can run while the database is being used and makes\n\"consistent backups\".\n\nI have a huge and *heavy* selected, inserted and updated database.\nCurrently I have a cron task that disconnect the database users, make a\nbackup using pg_dump and put the database online again. The problem is,\nnow there are too much information and everyday the database store more\nand more data, the backup process needs more and more time to run and I\nam thinking about to do the backup using a process that let me to do it\nwith the minimal interruptions for the users.\n\nI do not need a last second backup. I could the a backup with \"almost\nall\" the data but I need the information on it to be coherent. For\nexample, if the backup store information about an invoice it *must* to\nstore both header and items invoice information. I could live if the\nbackup does not store some invoices information when is ran, because\nthey ll be backuped the next time the backup process run. But I can not\nstore only a part of the invoices. That is I call a coherent backup.\n\nThe best for me is that the cron tab does a concurrent backup with all\nthe information until the time it starts to run while the clients are\nusing the database. Example: if the cron launch the backup process at\n12:30 AM, the backup moust be builded with all the information *until*\n12:30AM. So if I need to restore it I get a database coherent with the\nsame information like it was at 12:30AM. it does not matter if the\nprocess needs 4 hours to run.\n\nDoes the pg_dump create this kind of \"consistent backups\"? Or do I need\nto do the backups using another program?\n\nRegards\n\nPablo\n\n", "msg_date": "Sun, 25 Nov 2007 11:46:37 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "doubt with pg_dump and high concurrent used databases" }, { "msg_contents": "Pablo Alcaraz wrote:\n> I read that pg_dump can run while the database is being used and makes\n> \"consistent backups\".\n> \n> I have a huge and *heavy* selected, inserted and updated database.\n> Currently I have a cron task that disconnect the database users, make a\n> backup using pg_dump and put the database online again. The problem is,\n> now there are too much information and everyday the database store more\n> and more data, the backup process needs more and more time to run and I\n> am thinking about to do the backup using a process that let me to do it\n> with the minimal interruptions for the users.\n> \n> I do not need a last second backup. I could the a backup with \"almost\n> all\" the data but I need the information on it to be coherent. For\n> example, if the backup store information about an invoice it *must* to\n> store both header and items invoice information. I could live if the\n> backup does not store some invoices information when is ran, because\n> they ll be backuped the next time the backup process run. But I can not\n> store only a part of the invoices. That is I call a coherent backup.\n> \n> The best for me is that the cron tab does a concurrent backup with all\n> the information until the time it starts to run while the clients are\n> using the database. Example: if the cron launch the backup process at\n> 12:30 AM, the backup moust be builded with all the information *until*\n> 12:30AM. So if I need to restore it I get a database coherent with the\n> same information like it was at 12:30AM. 
it does not matter if the\n> process needs 4 hours to run.\n> \n> Does the pg_dump create this kind of \"consistent backups\"?\n\nYes, pg_dump is exactly what you need. It will dump the contents of the \ndatabase as they were when it started, regardless of how long it takes, \nand there's no need to shut down or disconnect concurrent users.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sun, 25 Nov 2007 18:17:59 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases" }, { "msg_contents": "On Nov 25, 2007, at 10:46 AM, Pablo Alcaraz wrote:\n\n> Hi all,\n>\n> I read that pg_dump can run while the database is being used and makes\n> \"consistent backups\".\n>\n> I have a huge and *heavy* selected, inserted and updated database.\n> Currently I have a cron task that disconnect the database users, \n> make a\n> backup using pg_dump and put the database online again. The problem \n> is,\n> now there are too much information and everyday the database store \n> more\n> and more data, the backup process needs more and more time to run \n> and I\n> am thinking about to do the backup using a process that let me to \n> do it\n> with the minimal interruptions for the users.\n>\n> I do not need a last second backup. I could the a backup with \"almost\n> all\" the data but I need the information on it to be coherent. For\n> example, if the backup store information about an invoice it *must* to\n> store both header and items invoice information. I could live if the\n> backup does not store some invoices information when is ran, because\n> they ll be backuped the next time the backup process run. But I can \n> not\n> store only a part of the invoices. That is I call a coherent backup.\n>\n> The best for me is that the cron tab does a concurrent backup with all\n> the information until the time it starts to run while the clients are\n> using the database. Example: if the cron launch the backup process at\n> 12:30 AM, the backup moust be builded with all the information *until*\n> 12:30AM. So if I need to restore it I get a database coherent with the\n> same information like it was at 12:30AM. it does not matter if the\n> process needs 4 hours to run.\n>\n> Does the pg_dump create this kind of \"consistent backups\"? Or do I \n> need\n> to do the backups using another program?\n\nYes, that is exactly what pg_dump does.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Sun, 25 Nov 2007 12:20:45 -0600", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases" }, { "msg_contents": "On 25/11/2007, Erik Jones <[email protected]> wrote:\n>\n> On Nov 25, 2007, at 10:46 AM, Pablo Alcaraz wrote:\n>\n> > Hi all,\n> >\n> > I read that pg_dump can run while the database is being used and makes\n> > \"consistent backups\".\n> >\n> > I have a huge and *heavy* selected, inserted and updated database.\n> > Currently I have a cron task that disconnect the database users,\n> > make a\n> > backup using pg_dump and put the database online again. 
The problem\n> > is,\n> > now there are too much information and everyday the database store\n> > more\n> > and more data, the backup process needs more and more time to run\n> > and I\n> > am thinking about to do the backup using a process that let me to\n> > do it\n> > with the minimal interruptions for the users.\n> >\n> > I do not need a last second backup. I could the a backup with \"almost\n> > all\" the data but I need the information on it to be coherent. For\n> > example, if the backup store information about an invoice it *must* to\n> > store both header and items invoice information. I could live if the\n> > backup does not store some invoices information when is ran, because\n> > they ll be backuped the next time the backup process run. But I can\n> > not\n> > store only a part of the invoices. That is I call a coherent backup.\n> >\n> > The best for me is that the cron tab does a concurrent backup with all\n> > the information until the time it starts to run while the clients are\n> > using the database. Example: if the cron launch the backup process at\n> > 12:30 AM, the backup moust be builded with all the information *until*\n> > 12:30AM. So if I need to restore it I get a database coherent with the\n> > same information like it was at 12:30AM. it does not matter if the\n> > process needs 4 hours to run.\n> >\n> > Does the pg_dump create this kind of \"consistent backups\"? Or do I\n> > need\n> > to do the backups using another program?\n>\n> Yes, that is exactly what pg_dump does.\n>\n>\nYes so long as you are using transactions correctly. Ie doing a begin before\neach invoice and a commit afterwards if your not bothering and using auto\ncommit you *may* have problems. pg_dump will show a constant state at the\ntime when the backup was started. If your database was not \"consistent\" at\nthat time you may have issues, But it will be constant from a database\npoint of view ie foreign keys, primary keys, check constraints, triggers\netc.\n\nIt all depends what you mean by consistent.\n\nPeter.\n\nOn 25/11/2007, Erik Jones <[email protected]> wrote:\nOn Nov 25, 2007, at 10:46 AM, Pablo Alcaraz wrote:> Hi all,>> I read that pg_dump can run while the database is being used and makes> \"consistent backups\".>> I have a huge and *heavy* selected, inserted and updated database.\n> Currently I have a cron task that disconnect the database users,> make a> backup using pg_dump and put the database online again. The problem> is,> now there are too much information and everyday the database store\n> more> and more data, the backup process needs more and more time to run> and I> am thinking about to do the backup using a process that let me to> do it> with the minimal interruptions for the users.\n>> I do not need a last second backup. I could the a backup with \"almost> all\" the data but I need the information on it to be coherent. For> example, if the backup store information about an invoice it *must* to\n> store both header and items invoice information. I could live if the> backup does not store some invoices information when is ran, because> they ll be backuped the next time the backup process run. But I can\n> not> store only a part of the invoices. That is I call a coherent backup.>> The best for me is that the cron tab does a concurrent backup with all> the information until the time it starts to run while the clients are\n> using the database. Example: if the cron launch the backup process at> 12:30 AM, the backup moust be builded with all the information *until*> 12:30AM. 
So if I need to restore it I get a database coherent with the\n> same information like it was at 12:30AM. it does not matter if the> process needs 4 hours to run.>> Does the pg_dump create this kind of \"consistent backups\"? Or do I> need\n> to do the backups using another program?Yes, that is exactly what pg_dump does.Yes so long as you are using transactions correctly. Ie doing a begin before each invoice and a commit afterwards if your not bothering and using auto commit you *may* have problems. pg_dump will show a constant state at the time when the backup was started. If your database was not \"consistent\"  at that time you may have issues, But it will be constant  from a  database point of view ie foreign keys, primary keys, check constraints, triggers etc.\nIt all depends what you mean by consistent.Peter.", "msg_date": "Sun, 25 Nov 2007 19:15:17 +0000", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases" }, { "msg_contents": "\"Peter Childs\" <[email protected]> writes:\n> On 25/11/2007, Erik Jones <[email protected]> wrote:\n>>> Does the pg_dump create this kind of \"consistent backups\"? Or do I\n>>> need to do the backups using another program?\n>> \n>> Yes, that is exactly what pg_dump does.\n>> \n> Yes so long as you are using transactions correctly. Ie doing a begin before\n> each invoice and a commit afterwards if your not bothering and using auto\n> commit you *may* have problems.\n\nI think you need to qualify that a bit more. What you're saying is that\nif an application has consistency requirements that are momentarily\nviolated during multi-statement updates, and it fails to wrap such\nupdates into a single transaction, then pg_dump could capture one of the\nintermediate states. That's true, but it's hardly pg_dump's fault.\nIf there were a system crash partway through such a sequence, the\nconsistency requirements would be violated afterwards, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Nov 2007 14:35:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases " }, { "msg_contents": "Tom Lane wrote:\n> \"Peter Childs\" <[email protected]> writes:\n> \n>> On 25/11/2007, Erik Jones <[email protected]> wrote:\n>> \n>>>> Does the pg_dump create this kind of \"consistent backups\"? Or do I\n>>>> need to do the backups using another program?\n>>>> \n>>> Yes, that is exactly what pg_dump does.\n>>>\n>>> \n>> Yes so long as you are using transactions correctly. Ie doing a begin before\n>> each invoice and a commit afterwards if your not bothering and using auto\n>> commit you *may* have problems.\n>> \n>\n> I think you need to qualify that a bit more. What you're saying is that\n> if an application has consistency requirements that are momentarily\n> violated during multi-statement updates, and it fails to wrap such\n> updates into a single transaction, then pg_dump could capture one of the\n> intermediate states. That's true, but it's hardly pg_dump's fault.\n> If there were a system crash partway through such a sequence, the\n> consistency requirements would be violated afterwards, too.\n>\n> \n\nAgree. In my case I define \"consistent database state\" like the state \nthe database has when the program that use it is stopped normally and \nwithout errors. In this \"state\" the program starts without troubles and \n\"everything looks fine\". 
I believe this behavior is because all the \ninserts and updates are made using transactions. Another things will be \na bug, it ll be fixed and it ll not be pg_dump fault.\n\nSo if pg_dump can capture a \"consistent state\" with all the data until \nthe start time, without all the pending open transaction updates/inserts \nin the same way that I did when I stopped the program before start \npg_dump, for me is usefull and enough to solve my problem.\n\nThanks to all!\n\nPablo\n", "msg_date": "Sun, 25 Nov 2007 17:37:45 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": true, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases" }, { "msg_contents": "On 25/11/2007, Pablo Alcaraz <[email protected]> wrote:\n>\n> Tom Lane wrote:\n> > \"Peter Childs\" <[email protected]> writes:\n> >\n> >> On 25/11/2007, Erik Jones <[email protected]> wrote:\n> >>\n> >>>> Does the pg_dump create this kind of \"consistent backups\"? Or do I\n> >>>> need to do the backups using another program?\n> >>>>\n> >>> Yes, that is exactly what pg_dump does.\n> >>>\n> >>>\n> >> Yes so long as you are using transactions correctly. Ie doing a begin\n> before\n> >> each invoice and a commit afterwards if your not bothering and using\n> auto\n> >> commit you *may* have problems.\n> >>\n> >\n> > I think you need to qualify that a bit more. What you're saying is that\n> > if an application has consistency requirements that are momentarily\n> > violated during multi-statement updates, and it fails to wrap such\n> > updates into a single transaction, then pg_dump could capture one of the\n> > intermediate states. That's true, but it's hardly pg_dump's fault.\n> > If there were a system crash partway through such a sequence, the\n> > consistency requirements would be violated afterwards, too.\n> >\n> >\n>\n> Agree. In my case I define \"consistent database state\" like the state\n> the database has when the program that use it is stopped normally and\n> without errors. In this \"state\" the program starts without troubles and\n> \"everything looks fine\". I believe this behavior is because all the\n> inserts and updates are made using transactions. Another things will be\n> a bug, it ll be fixed and it ll not be pg_dump fault.\n>\n> So if pg_dump can capture a \"consistent state\" with all the data until\n> the start time, without all the pending open transaction updates/inserts\n> in the same way that I did when I stopped the program before start\n> pg_dump, for me is usefull and enough to solve my problem.\n>\n> Thanks to all!\n>\n> Pablo\n>\n>\nGiven your long description over what you though was \"constant\" I thought it\nimportant that the answer yes but was given rather than just a plain yes.\nI've met quite a few apps that create inconstant databases when the\ndatabase its self is actually consistent.\n\nPeter\n\nOn 25/11/2007, Pablo Alcaraz <[email protected]> wrote:\nTom Lane wrote:> \"Peter Childs\" <[email protected]> writes:>>> On 25/11/2007, Erik Jones <[email protected]\n> wrote:>>>>>> Does the pg_dump create this kind of \"consistent backups\"? Or do I>>>> need to do the backups using another program?>>>>>>> Yes, that is exactly what pg_dump does.\n>>>>>>>> Yes so long as you are using transactions correctly. Ie doing a begin before>> each invoice and a commit afterwards if your not bothering and using auto>> commit you *may* have problems.\n>>>> I think you need to qualify that a bit more.  
What you're saying is that> if an application has consistency requirements that are momentarily> violated during multi-statement updates, and it fails to wrap such\n> updates into a single transaction, then pg_dump could capture one of the> intermediate states.  That's true, but it's hardly pg_dump's fault.> If there were a system crash partway through such a sequence, the\n> consistency requirements would be violated afterwards, too.>>Agree. In my case I define \"consistent database state\" like the statethe database has when the program that use it is stopped normally and\nwithout errors. In this \"state\" the program starts without troubles and\"everything looks fine\". I believe this behavior is because all theinserts and updates are made using transactions. Another things will be\na bug, it ll be fixed and it ll not be pg_dump fault.So if pg_dump can capture a \"consistent state\" with all the data untilthe start time, without all the pending open transaction updates/inserts\nin the same way that I did when I stopped the program before startpg_dump, for me is usefull and enough to solve my problem.Thanks to all!PabloGiven your long description over what you though was \"constant\" I thought it important that the answer yes but was given rather than just a plain yes. I've met quite a few apps that create inconstant databases when  the database its self is actually consistent.\nPeter", "msg_date": "Mon, 26 Nov 2007 10:41:24 +0000", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: doubt with pg_dump and high concurrent used databases" } ]
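As the replies above note, pg_dump takes its snapshot at the moment it starts and does not block concurrent users, but the dump is only application-consistent if related writes (such as an invoice header and its items) are committed in one transaction. A minimal sketch of both sides, using hypothetical table names, columns and paths rather than anything from the thread:

    -- application side: commit the header and its items atomically,
    -- so no dump (or crash) can ever capture half an invoice
    BEGIN;
    INSERT INTO invoice_header (id, customer) VALUES (1001, 'ACME');            -- hypothetical schema
    INSERT INTO invoice_item (invoice_id, line_no, amount) VALUES (1001, 1, 250);
    COMMIT;

    -- backup side, e.g. run from cron while users stay connected; the dump
    -- reflects the database as of the moment pg_dump started:
    --   pg_dump -Fc mydb > /backups/mydb-$(date +%Y%m%d).dump

With the writes wrapped like this, any snapshot pg_dump happens to take either contains the whole invoice or none of it, which is the "coherent backup" asked about at the start of the thread.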
[ { "msg_contents": "Hi all,\n\nI have a user who is looking to store 500+ GB of data in a database\n(and when all the indexes and metadata are factored in, it's going to\nbe more like 3-4 TB). He is wondering how well PostgreSQL scales with\nTB-sized databases and what can be done to help optimize them (mostly\nhardware and config parameters, maybe a little advocacy). I can't\nspeak on that since I don't have any DBs approaching that size.\n\nThe other part of this puzzle is that he's torn between MS SQL Server\n(running on Windows and unsupported by us) and PostgreSQL (running on\nLinux...which we would fully support). If any of you have ideas of how\nwell PostgreSQL compares to SQL Server, especially in TB-sized\ndatabases, that would be much appreciated.\n\nWe're running PG 8.2.5, by the way.\n\nPeter\n", "msg_date": "Mon, 26 Nov 2007 11:04:50 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "TB-sized databases" }, { "msg_contents": "Peter Koczan wrote:\n> Hi all,\n> \n> I have a user who is looking to store 500+ GB of data in a database\n> (and when all the indexes and metadata are factored in, it's going to\n> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n> TB-sized databases and what can be done to help optimize them (mostly\n> hardware and config parameters, maybe a little advocacy). I can't\n> speak on that since I don't have any DBs approaching that size.\n> \n> The other part of this puzzle is that he's torn between MS SQL Server\n> (running on Windows and unsupported by us) and PostgreSQL (running on\n> Linux...which we would fully support). If any of you have ideas of how\n> well PostgreSQL compares to SQL Server, especially in TB-sized\n> databases, that would be much appreciated.\n> \n> We're running PG 8.2.5, by the way.\n\nWell I can't speak to MS SQL-Server because all of our clients run \nPostgreSQL ;).. I can tell you we have many that are in the 500GB - \n1.5TB range.\n\nAll perform admirably as long as you have the hardware behind it and are \ndoing correct table structuring (such as table partitioning).\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Peter\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Mon, 26 Nov 2007 09:23:07 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "We have several TB database in production and it works well on\nHP rx1620 dual Itanium2, MSA 20, running Linux. It's read-only storage for\nastronomical catalogs with about 4-billions objects. We have custom\nindex for spherical coordinates which provide great performance.\n\nOleg\nOn Mon, 26 Nov 2007, Peter Koczan wrote:\n\n> Hi all,\n>\n> I have a user who is looking to store 500+ GB of data in a database\n> (and when all the indexes and metadata are factored in, it's going to\n> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n> TB-sized databases and what can be done to help optimize them (mostly\n> hardware and config parameters, maybe a little advocacy). I can't\n> speak on that since I don't have any DBs approaching that size.\n>\n> The other part of this puzzle is that he's torn between MS SQL Server\n> (running on Windows and unsupported by us) and PostgreSQL (running on\n> Linux...which we would fully support). 
If any of you have ideas of how\n> well PostgreSQL compares to SQL Server, especially in TB-sized\n> databases, that would be much appreciated.\n>\n> We're running PG 8.2.5, by the way.\n>\n> Peter\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 26 Nov 2007 20:34:01 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "I had a client that tried to use Ms Sql Server to run a 500Gb+ database. \nThe database simply colapsed. They switched to Teradata and it is \nrunning good. This database has now 1.5Tb+.\n\nCurrently I have clients using postgresql huge databases and they are \nhappy. In one client's database the biggest table has 237Gb+ (only 1 \ntable!) and postgresql run the database without problem using \npartitioning, triggers and rules (using postgresql 8.2.5).\n\nPablo\n\nPeter Koczan wrote:\n> Hi all,\n>\n> I have a user who is looking to store 500+ GB of data in a database\n> (and when all the indexes and metadata are factored in, it's going to\n> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n> TB-sized databases and what can be done to help optimize them (mostly\n> hardware and config parameters, maybe a little advocacy). I can't\n> speak on that since I don't have any DBs approaching that size.\n>\n> The other part of this puzzle is that he's torn between MS SQL Server\n> (running on Windows and unsupported by us) and PostgreSQL (running on\n> Linux...which we would fully support). If any of you have ideas of how\n> well PostgreSQL compares to SQL Server, especially in TB-sized\n> databases, that would be much appreciated.\n>\n> We're running PG 8.2.5, by the way.\n>\n> Peter\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n> \n\n", "msg_date": "Mon, 26 Nov 2007 13:44:23 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "I think either would work; both PostgreSQL and MS SQL Server have \nsuccess stories out there running VLDBs. It really depends on what you \nknow and what you have. If you have a lot of experience with Postgres \nrunning on Linux, and not much with SQL Server on Windows, of course the \nformer would be a better choice for you. You stand a much better chance \nworking with tools you know.\n\n\nPablo Alcaraz wrote:\n> I had a client that tried to use Ms Sql Server to run a 500Gb+ database. \n> The database simply colapsed. They switched to Teradata and it is \n> running good. This database has now 1.5Tb+.\n> \n> Currently I have clients using postgresql huge databases and they are \n> happy. In one client's database the biggest table has 237Gb+ (only 1 \n> table!) 
and postgresql run the database without problem using \n> partitioning, triggers and rules (using postgresql 8.2.5).\n> \n> Pablo\n> \n> Peter Koczan wrote:\n>> Hi all,\n>>\n>> I have a user who is looking to store 500+ GB of data in a database\n>> (and when all the indexes and metadata are factored in, it's going to\n>> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n>> TB-sized databases and what can be done to help optimize them (mostly\n>> hardware and config parameters, maybe a little advocacy). I can't\n>> speak on that since I don't have any DBs approaching that size.\n>>\n>> The other part of this puzzle is that he's torn between MS SQL Server\n>> (running on Windows and unsupported by us) and PostgreSQL (running on\n>> Linux...which we would fully support). If any of you have ideas of how\n>> well PostgreSQL compares to SQL Server, especially in TB-sized\n>> databases, that would be much appreciated.\n>>\n>> We're running PG 8.2.5, by the way.\n>>\n>> Peter\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>>\n>> http://archives.postgresql.org\n>>\n>> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Mon, 26 Nov 2007 14:16:28 -0500", "msg_from": "Stephen Cook <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Thanks all. This is just what I needed.\n\nOn Nov 26, 2007 1:16 PM, Stephen Cook <[email protected]> wrote:\n> I think either would work; both PostgreSQL and MS SQL Server have\n> success stories out there running VLDBs. It really depends on what you\n> know and what you have. If you have a lot of experience with Postgres\n> running on Linux, and not much with SQL Server on Windows, of course the\n> former would be a better choice for you. You stand a much better chance\n> working with tools you know.\n>\n>\n>\n> Pablo Alcaraz wrote:\n> > I had a client that tried to use Ms Sql Server to run a 500Gb+ database.\n> > The database simply colapsed. They switched to Teradata and it is\n> > running good. This database has now 1.5Tb+.\n> >\n> > Currently I have clients using postgresql huge databases and they are\n> > happy. In one client's database the biggest table has 237Gb+ (only 1\n> > table!) and postgresql run the database without problem using\n> > partitioning, triggers and rules (using postgresql 8.2.5).\n> >\n> > Pablo\n> >\n> > Peter Koczan wrote:\n> >> Hi all,\n> >>\n> >> I have a user who is looking to store 500+ GB of data in a database\n> >> (and when all the indexes and metadata are factored in, it's going to\n> >> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n> >> TB-sized databases and what can be done to help optimize them (mostly\n> >> hardware and config parameters, maybe a little advocacy). I can't\n> >> speak on that since I don't have any DBs approaching that size.\n> >>\n> >> The other part of this puzzle is that he's torn between MS SQL Server\n> >> (running on Windows and unsupported by us) and PostgreSQL (running on\n> >> Linux...which we would fully support). 
If any of you have ideas of how\n> >> well PostgreSQL compares to SQL Server, especially in TB-sized\n> >> databases, that would be much appreciated.\n> >>\n> >> We're running PG 8.2.5, by the way.\n> >>\n> >> Peter\n> >>\n> >> ---------------------------(end of broadcast)---------------------------\n> >> TIP 4: Have you searched our list archives?\n> >>\n> >> http://archives.postgresql.org\n> >>\n> >>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: don't forget to increase your free space map settings\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n", "msg_date": "Tue, 27 Nov 2007 14:18:51 -0600", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Tue, 2007-11-27 at 14:18 -0600, Peter Koczan wrote:\n\n> Thanks all. This is just what I needed.\n\nAll of those responses have cooked up quite a few topics into one. Large\ndatabases might mean text warehouses, XML message stores, relational\narchives and fact-based business data warehouses.\n\nThe main thing is that TB-sized databases are performance critical. So\nit all depends upon your workload really as to how well PostgreSQL, or\nanother other RDBMS vendor can handle them.\n\n\nAnyway, my reason for replying to this thread is that I'm planning\nchanges for PostgreSQL 8.4+ that will make allow us to get bigger and\nfaster databases. If anybody has specific concerns then I'd like to hear\nthem so I can consider those things in the planning stages.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Tue, 27 Nov 2007 20:57:25 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Simon Riggs wrote:\n> All of those responses have cooked up quite a few topics into one. Large\n> databases might mean text warehouses, XML message stores, relational\n> archives and fact-based business data warehouses.\n>\n> The main thing is that TB-sized databases are performance critical. So\n> it all depends upon your workload really as to how well PostgreSQL, or\n> another other RDBMS vendor can handle them.\n>\n>\n> Anyway, my reason for replying to this thread is that I'm planning\n> changes for PostgreSQL 8.4+ that will make allow us to get bigger and\n> faster databases. If anybody has specific concerns then I'd like to hear\n> them so I can consider those things in the planning stages\nit would be nice to do something with selects so we can recover a rowset \non huge tables using a criteria with indexes without fall running a full \nscan.\n\nIn my opinion, by definition, a huge database sooner or later will have \ntables far bigger than RAM available (same for their indexes). I think \nthe queries need to be solved using indexes enough smart to be fast on disk.\n\nPablo\n", "msg_date": "Tue, 27 Nov 2007 18:06:34 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Tue, 27 Nov 2007, Pablo Alcaraz wrote:\n> it would be nice to do something with selects so we can recover a rowset\n> on huge tables using a criteria with indexes without fall running a full\n> scan.\n\nYou mean: Be able to tell Postgres \"Don't ever do a sequential scan of\nthis table. It's silly. 
I would rather the query failed than have to wait\nfor a sequential scan of the entire table.\"\n\nYes, that would be really useful, if you have huge tables in your\ndatabase.\n\nMatthew\n\n-- \nTrying to write a program that can't be written is... well, it can be an\nenormous amount of fun! -- Computer Science Lecturer\n", "msg_date": "Wed, 28 Nov 2007 12:53:47 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "In response to Matthew <[email protected]>:\n\n> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:\n> > it would be nice to do something with selects so we can recover a rowset\n> > on huge tables using a criteria with indexes without fall running a full\n> > scan.\n> \n> You mean: Be able to tell Postgres \"Don't ever do a sequential scan of\n> this table. It's silly. I would rather the query failed than have to wait\n> for a sequential scan of the entire table.\"\n> \n> Yes, that would be really useful, if you have huge tables in your\n> database.\n\nIs there something wrong with:\nset enable_seqscan = off\n?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 28 Nov 2007 08:27:16 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 2007-11-28 at 08:27 -0500, Bill Moran wrote:\n> Is there something wrong with:\n> set enable_seqscan = off\n> ?\n\nNothing wrong with enable_seqscan = off except it is all or nothing type\nof thing... if you want the big table to never use seqscan, but a medium\ntable which is joined in should use it, then what you do ? And setting\nenable_seqscan = off will actually not mean the planner can't use a\nsequential scan for the query if no other alternative exist. In any case\nit doesn't mean \"please throw an error if you can't do this without a\nsequential scan\". \n\nIn fact an even more useful option would be to ask the planner to throw\nerror if the expected cost exceeds a certain threshold...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 28 Nov 2007 14:48:22 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "In response to Csaba Nagy <[email protected]>:\n\n> On Wed, 2007-11-28 at 08:27 -0500, Bill Moran wrote:\n> > Is there something wrong with:\n> > set enable_seqscan = off\n> > ?\n> \n> Nothing wrong with enable_seqscan = off except it is all or nothing type\n> of thing...\n\nIf that's true, then I have a bug report to file:\n\ntest=# set enable_seqscan=off;\nSET\ntest=# show enable_seqscan;\n enable_seqscan \n----------------\n off\n(1 row)\n\ntest=# set enable_seqscan=on;\nSET\ntest=# show enable_seqscan;\n enable_seqscan \n----------------\n on\n(1 row)\n\nIt looks to me to be session-alterable.\n\n> if you want the big table to never use seqscan, but a medium\n> table which is joined in should use it, then what you do ? And setting\n> enable_seqscan = off will actually not mean the planner can't use a\n> sequential scan for the query if no other alternative exist. In any case\n> it doesn't mean \"please throw an error if you can't do this without a\n> sequential scan\".\n\nTrue. 
It would still choose some other plan.\n\n> In fact an even more useful option would be to ask the planner to throw\n> error if the expected cost exceeds a certain threshold...\n\nInteresting concept.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 28 Nov 2007 08:54:41 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "\"Bill Moran\" <[email protected]> writes:\n\n> In response to Matthew <[email protected]>:\n>\n>> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:\n>> > it would be nice to do something with selects so we can recover a rowset\n>> > on huge tables using a criteria with indexes without fall running a full\n>> > scan.\n>> \n>> You mean: Be able to tell Postgres \"Don't ever do a sequential scan of\n>> this table. It's silly. I would rather the query failed than have to wait\n>> for a sequential scan of the entire table.\"\n>> \n>> Yes, that would be really useful, if you have huge tables in your\n>> database.\n>\n> Is there something wrong with:\n> set enable_seqscan = off\n> ?\n\nThis does kind of the opposite of what you would actually want here. What you\nwant is that if you give it a query which would be best satisfied by a\nsequential scan it should throw an error since you've obviously made an error\nin the query.\n\nWhat this does is it forces such a query to use an even *slower* method such\nas a large index scan. In cases where there isn't any other method it goes\nahead and does the sequential scan anyways.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Wed, 28 Nov 2007 13:55:02 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "In response to Gregory Stark <[email protected]>:\n\n> \"Bill Moran\" <[email protected]> writes:\n> \n> > In response to Matthew <[email protected]>:\n> >\n> >> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:\n> >> > it would be nice to do something with selects so we can recover a rowset\n> >> > on huge tables using a criteria with indexes without fall running a full\n> >> > scan.\n> >> \n> >> You mean: Be able to tell Postgres \"Don't ever do a sequential scan of\n> >> this table. It's silly. I would rather the query failed than have to wait\n> >> for a sequential scan of the entire table.\"\n> >> \n> >> Yes, that would be really useful, if you have huge tables in your\n> >> database.\n> >\n> > Is there something wrong with:\n> > set enable_seqscan = off\n> > ?\n> \n> This does kind of the opposite of what you would actually want here. What you\n> want is that if you give it a query which would be best satisfied by a\n> sequential scan it should throw an error since you've obviously made an error\n> in the query.\n> \n> What this does is it forces such a query to use an even *slower* method such\n> as a large index scan. In cases where there isn't any other method it goes\n> ahead and does the sequential scan anyways.\n\nAh. 
I misunderstood the intent of the comment.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 28 Nov 2007 08:59:35 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 2007-11-28 at 08:54 -0500, Bill Moran wrote:\n> > Nothing wrong with enable_seqscan = off except it is all or nothing type\n> > of thing...\n> \n> If that's true, then I have a bug report to file:\n[snip]\n> It looks to me to be session-alterable.\n\nI didn't mean that it can't be set per session, I meant that it is not\nfine grained enough to select the affected table but it affects _all_\ntables in a query... and big tables are rarely alone in a query.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Wed, 28 Nov 2007 15:03:44 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Matthew wrote:\n> On Tue, 27 Nov 2007, Pablo Alcaraz wrote:\n> \n>> it would be nice to do something with selects so we can recover a rowset\n>> on huge tables using a criteria with indexes without fall running a full\n>> scan.\n>> \n>\n> You mean: Be able to tell Postgres \"Don't ever do a sequential scan of\n> this table. It's silly. I would rather the query failed than have to wait\n> for a sequential scan of the entire table.\"\n>\n> Yes, that would be really useful, if you have huge tables in your\n> database.\n> \n\nThanks. That would be nice too. I want that Postgres does not fall so \neasy to do sequential scan if a field are indexed. if it concludes that \nthe index is *huge* and it does not fit in ram I want that Postgresql \nuses the index anyway because the table is *more than huge* and a \nsequential scan will take hours.\n\nI ll put some examples in a next mail.\n\nRegards\n\nPablo\n", "msg_date": "Wed, 28 Nov 2007 09:15:11 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 28 Nov 2007, Gregory Stark wrote:\n> > Is there something wrong with:\n> > set enable_seqscan = off\n> > ?\n>\n> This does kind of the opposite of what you would actually want here. What you\n> want is that if you give it a query which would be best satisfied by a\n> sequential scan it should throw an error since you've obviously made an error\n> in the query.\n>\n> What this does is it forces such a query to use an even *slower* method such\n> as a large index scan. In cases where there isn't any other method it goes\n> ahead and does the sequential scan anyways.\n\nThe query planner is not always right. I would like an option like\n\"set enable_seqscan = off\" but with the added effect of making Postgres\nreturn an error if there is no alternative to scanning the whole table,\nbecause I have obviously made a mistake setting up my indexes. 
I would\neffectively be telling Postgres \"For this table, I *know* that a full\ntable scan is dumb for all of my queries, even if the statistics say\notherwise.\"\n\nOf course, it would have to be slightly intelligent, because there are\ncircumstances where a sequential scan doesn't necessarily mean a full\ntable scan (for instance if there is a LIMIT), and where an index scan\n*does* mean a full table scan (for instance, selecting the whole table and\nordering by an indexed field).\n\nMatthew\n\n-- \nExistence is a convenient concept to designate all of the files that an\nexecutable program can potentially process. -- Fortran77 standard\n", "msg_date": "Wed, 28 Nov 2007 14:40:41 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Pablo Alcaraz wrote:\n> Simon Riggs wrote:\n>> All of those responses have cooked up quite a few topics into one. Large\n>> databases might mean text warehouses, XML message stores, relational\n>> archives and fact-based business data warehouses.\n>>\n>> The main thing is that TB-sized databases are performance critical. So\n>> it all depends upon your workload really as to how well PostgreSQL, or\n>> another other RDBMS vendor can handle them.\n>>\n>>\n>> Anyway, my reason for replying to this thread is that I'm planning\n>> changes for PostgreSQL 8.4+ that will make allow us to get bigger and\n>> faster databases. If anybody has specific concerns then I'd like to hear\n>> them so I can consider those things in the planning stages\n> it would be nice to do something with selects so we can recover a \n> rowset on huge tables using a criteria with indexes without fall \n> running a full scan.\n>\n> In my opinion, by definition, a huge database sooner or later will \n> have tables far bigger than RAM available (same for their indexes). I \n> think the queries need to be solved using indexes enough smart to be \n> fast on disk.\n>\n> Pablo\n\nI am dealing with a very huge database. I am not sure if all these \nthings could be solved with the current Postgres version using somes \nconfiguration parameters. I ll be happy to read your suggestions and \nideas about these queries.\n\nIn my opinion there are queries that I think they ll need to be tuned \nfor \"huge databases\" (huge databases = a database which relevant \ntables(indexes) are (will be) far bigger that all the ram available):\n\n-- example table\nCREATE TABLE homes (\n id bigserial,\n name text,\n location text,\n bigint money_win,\n int zipcode;\n);\nCREATE INDEX money_win_idx ON homes(money_win);\nCREATE INDEX zipcode_idx ON homes(zipcode);\n\n\nSELECT max( id) from homes;\nI think the information to get the max row quickly could be found using \nthe pk index. Idem min( id).\n\nSELECT max( id) from homes WHERE id > 8000000000;\nSame, but useful to find out the same thing in partitioned tables (using \nid like partition criteria). It would be nice if Postgres would not need \nthe WHERE clause to realize it does not need to scan every single \npartition, but only the last. Idem min(id).\n\nSELECT * from homes WHERE money_win = 1300000000;\nPostgres thinks too easily to solve these kind of queries that it must \nto do a sequential scan where the table (or the index) does not fix in \nmemory if the number of rows is not near 1 (example: if the query \nreturns 5000 rows). Same case with filters like 'WHERE money_win >= xx', \n'WHERE money_win BETWEEN xx AND yy'. 
But I do not know if this behavior \nis because I did a wrong posgresql's configuration or I missed something.\n\nSELECT count( *) from homes;\nit would be *cute* that Postgres stores this value and only recalculate \nif it thinks the stored value is wrong (example: after an anormal \nshutdown).\n\nSELECT zipcode, count( zipcode) as n from homes GROUP BY zipcode;\nit would be *very cute* that Postgres could store this value (or is this \nthere?) on the index or wherever and it only recalculates if it thinks \nthe stored value is wrong (example: after an anormal shutdown).\n\nIn my opinion, partitioned tables in \"huge databases\" would be the \nusual, not the exception. It would be important (for me at least) that \nthese queries could be fast solved when they run in partitioned tables.\n\nMaybe one or more of these queries could be solved using some kind of \noptimization. But I do not discover which ones (I ll be happy to read \nsuggestions :D). I am sure a lot/all these queries could be solved using \nsome kind of triggers/sequence to store information to solve the stuff. \nBut in general the information is there right now (is it there?) and the \nqueries only need that the server could look in the right place. A \ntrigger/function using some pgsql supported languages probably will \nconsume far more CPU resources to find out the same information that \nexist right now and we need to do it using transactions (more perfomance \ncosts) only to be sure we are fine if the server has an anormal shutdown.\n\nCurrently I have several 250Gb+ tables with billions of rows (little \nrows like the homes table example). I partitioned and distributed the \npartitions/index in different tablespaces, etc. I think \"I did not need\" \nso much partitions like I have right now (300+ for some tables and \ngrowing). I just would need enough partitions to distribute the tables \nin differents tablespaces. I did so much partitions because the \nperfomance with really big tables is not enough good for me when the \nprograms run these kind of queries and the insert/update speed is worst \nand worst with the time.\n\nI hope that a couple of tables will be 1Tb+ in a few months... buy more \nand more RAM is an option but not a solution because eventually the \ndatabase will be far bigger than ram available.\n\nLast but not least, it would be *excelent* that this kind of \noptimization would be posible without weird non standard sql sentences. \nI think that Postgresql would be better with huge databases if it can \nsolve for itself these kind of queries in the fastest way or at least we \nare abled to tell it to choice a different criteria. I could help it \nusing postgresql.conf to activate/deactivate some behavior or to use \nsome system table to tell the criteria I want with some tables (like \nautovacuum does right now with table exception vacuums) or using non \nstandard DDL to define that criteria.\n\nBut the thing is that the programmers must be able to use standard SQL \nfor selects/inserts/updates/deletes with 'where' and 'group by' clauses. \nIn my case the programs are builded with java + JPA, so standard SQL \n(but no DDL) is important to keep the things like they are. 
:)\n\nWell, that's my 2cents feedback.\n\nRegards\n\nPablo\n\nPD: Sorry my broken english.\n", "msg_date": "Wed, 28 Nov 2007 09:57:14 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 28 Nov 2007, Csaba Nagy wrote:\n\n> On Wed, 2007-11-28 at 08:27 -0500, Bill Moran wrote:\n>> Is there something wrong with:\n>> set enable_seqscan = off\n>> ?\n>\n> Nothing wrong with enable_seqscan = off except it is all or nothing type\n> of thing... if you want the big table to never use seqscan, but a medium\n> table which is joined in should use it, then what you do ? And setting\n> enable_seqscan = off will actually not mean the planner can't use a\n> sequential scan for the query if no other alternative exist. In any case\n> it doesn't mean \"please throw an error if you can't do this without a\n> sequential scan\".\n>\n> In fact an even more useful option would be to ask the planner to throw\n> error if the expected cost exceeds a certain threshold...\n\nand even better if the option can be overridden for a specific transaction \nor connection. that way it can be set relativly low for normal operations, \nbut when you need to do an expensive query you can change it for that \nquery.\n\nDavid Lang\n", "msg_date": "Wed, 28 Nov 2007 07:03:26 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Pablo Alcaraz escribi�:\n\n> In my opinion there are queries that I think they ll need to be tuned for \n> \"huge databases\" (huge databases = a database which relevant \n> tables(indexes) are (will be) far bigger that all the ram available):\n>\n> -- example table\n> CREATE TABLE homes (\n> id bigserial,\n> name text,\n> location text,\n> bigint money_win,\n> int zipcode;\n> );\n> CREATE INDEX money_win_idx ON homes(money_win);\n> CREATE INDEX zipcode_idx ON homes(zipcode);\n\nYour example does not work, so I created my own for your first item.\n\nalvherre=# create table test (a int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \"test_pkey\" for table \"test\"\nCREATE TABLE\nalvherre=# insert into test select * from generate_series(1, 100000);\nINSERT 0 100000\nalvherre=# analyze test;\nANALYZE\n\n> SELECT max( id) from homes;\n> I think the information to get the max row quickly could be found using the \n> pk index. Idem min( id).\n\nalvherre=# explain analyze select max(a) from test;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.03..0.04 rows=1 width=0) (actual time=0.054..0.057 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.03 rows=1 width=4) (actual time=0.041..0.043 rows=1 loops=1)\n -> Index Scan Backward using test_pkey on test (cost=0.00..3148.26 rows=100000 width=4) (actual time=0.034..0.034 rows=1 loops=1)\n Filter: (a IS NOT NULL)\n Total runtime: 0.143 ms\n(6 rows)\n\n\n> SELECT max( id) from homes WHERE id > 8000000000;\n> Same, but useful to find out the same thing in partitioned tables (using id \n> like partition criteria). It would be nice if Postgres would not need the \n> WHERE clause to realize it does not need to scan every single partition, \n> but only the last. 
Idem min(id).\n\nYeah, this could be improved.\n\n> SELECT * from homes WHERE money_win = 1300000000;\n> Postgres thinks too easily to solve these kind of queries that it must to \n> do a sequential scan where the table (or the index) does not fix in memory \n> if the number of rows is not near 1 (example: if the query returns 5000 \n> rows). Same case with filters like 'WHERE money_win >= xx', 'WHERE \n> money_win BETWEEN xx AND yy'. But I do not know if this behavior is because \n> I did a wrong posgresql's configuration or I missed something.\n\nThere are thresholds to switch from index scan to seqscans. It depends\non the selectivity of the clauses.\n\n> SELECT count( *) from homes;\n> it would be *cute* that Postgres stores this value and only recalculate if \n> it thinks the stored value is wrong (example: after an anormal shutdown).\n\nThis is not as easy as you put it for reasons that have been discussed\nat length. I'll only say that there are workarounds to make counting\nquick.\n\n> SELECT zipcode, count( zipcode) as n from homes GROUP BY zipcode;\n> it would be *very cute* that Postgres could store this value (or is this \n> there?) on the index or wherever and it only recalculates if it thinks the \n> stored value is wrong (example: after an anormal shutdown).\n\nSame as above.\n\n\n> Last but not least, it would be *excelent* that this kind of optimization \n> would be posible without weird non standard sql sentences.\n\nRight. If you can afford to sponsor development, it could make them a\nreality sooner.\n\n-- \nAlvaro Herrera Valdivia, Chile ICBM: S 39� 49' 18.1\", W 73� 13' 56.4\"\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n", "msg_date": "Wed, 28 Nov 2007 12:12:05 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:\n\n> In fact an even more useful option would be to ask the planner to throw\n> error if the expected cost exceeds a certain threshold...\n\nWell, I've suggested it before: \n\nstatement_cost_limit on pgsql-hackers, 1 March 2006\n\nWould people like me to re-write and resubmit this patch for 8.4?\n\nTom's previous concerns were along the lines of \"How would know what to\nset it to?\", given that the planner costs are mostly arbitrary numbers.\n\nAny bright ideas, or is it we want it and we don't care about the\npossible difficulties?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 28 Nov 2007 17:22:56 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Tue, 2007-11-27 at 18:06 -0500, Pablo Alcaraz wrote:\n> Simon Riggs wrote:\n> > All of those responses have cooked up quite a few topics into one. Large\n> > databases might mean text warehouses, XML message stores, relational\n> > archives and fact-based business data warehouses.\n> >\n> > The main thing is that TB-sized databases are performance critical. So\n> > it all depends upon your workload really as to how well PostgreSQL, or\n> > another other RDBMS vendor can handle them.\n> >\n> >\n> > Anyway, my reason for replying to this thread is that I'm planning\n> > changes for PostgreSQL 8.4+ that will make allow us to get bigger and\n> > faster databases. 
If anybody has specific concerns then I'd like to hear\n> > them so I can consider those things in the planning stages\n> it would be nice to do something with selects so we can recover a rowset \n> on huge tables using a criteria with indexes without fall running a full \n> scan.\n> \n> In my opinion, by definition, a huge database sooner or later will have \n> tables far bigger than RAM available (same for their indexes). I think \n> the queries need to be solved using indexes enough smart to be fast on disk.\n\nOK, I agree with this one. \n\nI'd thought that index-only plans were only for OLTP, but now I see they\ncan also make a big difference with DW queries. So I'm very interested\nin this area now.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Wed, 28 Nov 2007 17:28:28 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 28 Nov 2007, Simon Riggs wrote:\n> statement_cost_limit on pgsql-hackers, 1 March 2006\n>\n> Would people like me to re-write and resubmit this patch for 8.4?\n\nYes please. The more options, the better.\n\n> Tom's previous concerns were along the lines of \"How would know what to\n> set it to?\", given that the planner costs are mostly arbitrary numbers.\n>\n> Any bright ideas, or is it we want it and we don't care about the\n> possible difficulties?\n\nI think this is something that the average person should just knuckle down\nand work out.\n\nAt the moment on my work's system, we call EXPLAIN before queries to find\nout if it will take too long. This would improve performance by stopping\nus having to pass the query into the query planner twice.\n\nMatthew\n\n-- \nAn ant doesn't have a lot of processing power available to it. 
I'm not trying\nto be speciesist - I wouldn't want to detract you from such a wonderful\ncreature, but, well, there isn't a lot there, is there?\n -- Computer Science Lecturer\n", "msg_date": "Wed, 28 Nov 2007 17:34:37 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Wed, 28 Nov 2007, Simon Riggs wrote:\n\n> On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:\n>\n>> In fact an even more useful option would be to ask the planner to throw\n>> error if the expected cost exceeds a certain threshold...\n>\n> Well, I've suggested it before:\n>\n> statement_cost_limit on pgsql-hackers, 1 March 2006\n>\n> Would people like me to re-write and resubmit this patch for 8.4?\n>\n> Tom's previous concerns were along the lines of \"How would know what to\n> set it to?\", given that the planner costs are mostly arbitrary numbers.\n\narbitrary numbers are fine if they are relativly consistant with each \nother.\n\nwill a plan with a estimated cost of 1,000,000 take approximatly 100 times \nas long as one with a cost of 10,000?\n\nor more importantly, will a plan with an estimated cost of 2000 reliably \ntake longer then one with an estimated cost of 1000?\n\nDavid Lang\n\n> Any bright ideas, or is it we want it and we don't care about the\n> possible difficulties?\n>\n>\n", "msg_date": "Wed, 28 Nov 2007 13:55:23 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n\n> On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:\n>\n>> In fact an even more useful option would be to ask the planner to throw\n>> error if the expected cost exceeds a certain threshold...\n>\n> Well, I've suggested it before: \n>\n> statement_cost_limit on pgsql-hackers, 1 March 2006\n>\n> Would people like me to re-write and resubmit this patch for 8.4?\n>\n> Tom's previous concerns were along the lines of \"How would know what to\n> set it to?\", given that the planner costs are mostly arbitrary numbers.\n\nHm, that's only kind of true.\n\nSince 8.mumble seq_page_cost is itself configurable meaning you can adjust the\nbase unit and calibrate all the parameters to be time in whatever unit you\nchoose.\n\nBut even assuming you haven't so adjusted seq_page_cost and all the other\nparameters to match the numbers aren't entirely arbitrary. They represent time\nin units of \"however long a single sequential page read takes\".\n\nObviously few people know how long such a page read takes but surely you would\njust run a few sequential reads of large tables and set the limit to some\nmultiple of whatever you find.\n\nThis isn't going to precise to the level of being able to avoid executing any\nquery which will take over 1000ms. But it is going to be able to catch\nunconstrained cross joins or large sequential scans or such.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 29 Nov 2007 11:59:51 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> \"Simon Riggs\" <[email protected]> writes:\n>> Tom's previous concerns were along the lines of \"How would know what to\n>> set it to?\", given that the planner costs are mostly arbitrary numbers.\n\n> Hm, that's only kind of true.\n\nThe units are not the problem. 
The problem is that you are staking\nnon-failure of your application on the planner's estimates being\npretty well in line with reality. Not merely in line enough that\nit picks a reasonably cheap plan, but in line enough that if it\nthinks plan A is 10x more expensive than plan B, then the actual\nratio is indeed somewhere near 10.\n\nGiven that this list spends all day every day discussing cases where the\nplanner is wrong, I'd have to think that that's a bet I wouldn't take.\n\nYou could probably avoid this risk by setting the cutoff at something\nlike 100 or 1000 times what you really want to tolerate, but how\nuseful is it then?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2007 10:45:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:\n> Gregory Stark <[email protected]> writes:\n> > \"Simon Riggs\" <[email protected]> writes:\n> >> Tom's previous concerns were along the lines of \"How would know what to\n> >> set it to?\", given that the planner costs are mostly arbitrary numbers.\n> \n> > Hm, that's only kind of true.\n> \n> The units are not the problem. The problem is that you are staking\n> non-failure of your application on the planner's estimates being\n> pretty well in line with reality. Not merely in line enough that\n> it picks a reasonably cheap plan, but in line enough that if it\n> thinks plan A is 10x more expensive than plan B, then the actual\n> ratio is indeed somewhere near 10.\n> \n> Given that this list spends all day every day discussing cases where the\n> planner is wrong, I'd have to think that that's a bet I wouldn't take.\n\nI think you have a point, but the alternative is often much worse. \n\nIf an SQL statement fails because of too high cost, we can investigate\nthe problem and re-submit. If a website slows down because somebody\nallowed a very large query to execute then everybody is affected, not\njust the person who ran the bad query. Either way the guy that ran the\nquery loses, but without constraints in place one guy can kill everybody\nelse also.\n\n> You could probably avoid this risk by setting the cutoff at something\n> like 100 or 1000 times what you really want to tolerate, but how\n> useful is it then?\n\nStill fairly useful, as long as we understand its a blunt instrument.\n\nIf the whole performance of your system depends upon indexed access then\nrogue queries can have disastrous, unpredictable consequences. Many\nsites construct their SQL dynamically, so a mistake in a seldom used\ncode path can allow killer queries through. Even the best DBAs have been\nknown to make mistakes.\n\ne.g. An 80GB table has 8 million blocks in it. 
\n- So putting a statement_cost limit = 1 million would allow some fairly\nlarge queries but prevent anything that did a SeqScan (or worse).\n- Setting it 10 million is going to prevent things like sorting the\nwhole table without a LIMIT\n- Setting it at 100 million is going to prevent unconstrained product\njoins etc..\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 29 Nov 2007 16:14:26 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "* Simon Riggs ([email protected]) wrote:\n> On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:\n> > Given that this list spends all day every day discussing cases where the\n> > planner is wrong, I'd have to think that that's a bet I wouldn't take.\n> \n> I think you have a point, but the alternative is often much worse. \n\nI'm not convinced you've outlined the consequences of implementing a\nplan cost limit sufficiently.\n\n> If an SQL statement fails because of too high cost, we can investigate\n> the problem and re-submit. If a website slows down because somebody\n> allowed a very large query to execute then everybody is affected, not\n> just the person who ran the bad query. Either way the guy that ran the\n> query loses, but without constraints in place one guy can kill everybody\n> else also.\n\nIt's entirely possible (likely even) that most of the users accessing a\nwebpage are using the same queries and the same tables. If the estimates\nfor those tables ends up changing enough that PG adjusts the plan cost to\nbe above the plan cost limit then *all* of the users would be affected.\n\nThe plan cost isn't going to change for just one user if it's the same\nquery that a bunch of users are using. I'm not sure if handling the\ntrue 'rougue query' case with this limit would actually be a net\nimprovment overall in a website-based situation.\n\nI could see it being useful to set a 'notice_on_high_cost_query'\nvariable where someone working in a data warehouse situation would get a\nnotice if the query he's hand-crafting has a very high cost (in which\ncase he could ctrl-c it if he thinks something is wrong, rather than\nwaiting 5 hours before realizing he forgot a join clause), but the\nwebsite with the one rougue query run by one user seems a stretch.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 29 Nov 2007 11:42:53 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:\n> Given that this list spends all day every day discussing cases where the\n> planner is wrong, I'd have to think that that's a bet I wouldn't take.\n> \n> You could probably avoid this risk by setting the cutoff at something\n> like 100 or 1000 times what you really want to tolerate, but how\n> useful is it then?\n\nIt would still be useful in the sense that if the planner is taking\nwrong estimates you must correct it somehow... raise statistics target,\nrewrite query or other tweaking, you should do something. An error is\nsometimes better than gradually decreasing performance because of too\nlow statistics target for example. So if the error is thrown because of\nwrong estimate, it is still a valid error raising a signal that the DBA\nhas to do something about it.\n\nIt's still true that if the planner estimates too low, it will raise no\nerror and will take the resources. 
But that's just what we have now, so\nit wouldn't be a regression of any kind...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Thu, 29 Nov 2007 17:54:32 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Simon Riggs wrote:\n> On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:\n>\n> \n>> In fact an even more useful option would be to ask the planner to throw\n>> error if the expected cost exceeds a certain threshold...\n>> \n>\n> Well, I've suggested it before: \n>\n> statement_cost_limit on pgsql-hackers, 1 March 2006\n>\n> Would people like me to re-write and resubmit this patch for 8.4?\n>\n> Tom's previous concerns were along the lines of \"How would know what to\n> set it to?\", given that the planner costs are mostly arbitrary numbers.\n>\n> Any bright ideas, or is it we want it and we don't care about the\n> possible difficulties?\n>\n> \n\nKnowing how to set it is a problem - but a possibly bigger one is that \nmonster query crippling your DW system, so I'd say lets have it.\n\nCheers\n\nMark\n", "msg_date": "Fri, 30 Nov 2007 12:31:01 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Simon Riggs wrote:\n> On Tue, 2007-11-27 at 18:06 -0500, Pablo Alcaraz wrote:\n> \n>> Simon Riggs wrote:\n>> \n>>> All of those responses have cooked up quite a few topics into one. Large\n>>> databases might mean text warehouses, XML message stores, relational\n>>> archives and fact-based business data warehouses.\n>>>\n>>> The main thing is that TB-sized databases are performance critical. So\n>>> it all depends upon your workload really as to how well PostgreSQL, or\n>>> another other RDBMS vendor can handle them.\n>>>\n>>>\n>>> Anyway, my reason for replying to this thread is that I'm planning\n>>> changes for PostgreSQL 8.4+ that will make allow us to get bigger and\n>>> faster databases. If anybody has specific concerns then I'd like to hear\n>>> them so I can consider those things in the planning stages\n>>> \n>> it would be nice to do something with selects so we can recover a rowset \n>> on huge tables using a criteria with indexes without fall running a full \n>> scan.\n>>\n>> In my opinion, by definition, a huge database sooner or later will have \n>> tables far bigger than RAM available (same for their indexes). I think \n>> the queries need to be solved using indexes enough smart to be fast on disk.\n>> \n>\n> OK, I agree with this one. \n>\n> I'd thought that index-only plans were only for OLTP, but now I see they\n> can also make a big difference with DW queries. So I'm very interested\n> in this area now.\n>\n> \nIf that's true, then you want to get behind the work Gokulakannan \nSomasundaram \n(http://archives.postgresql.org/pgsql-hackers/2007-10/msg00220.php) has \ndone with relation to thick indexes. I would have thought that concept \nparticularly useful in DW. Only having to scan indexes on a number of \njoin tables would be a huge win for some of these types of queries.\n\nMy tiny point of view would say that is a much better investment than \nsetting up the proposed parameter. I can see the use of the parameter \nthough. Most of the complaints about indexes having visibility is about \nupdate /delete contention. I would expect in a DW that those things \naren't in the critical path like they are in many other applications. 
\nEspecially with partitioning and previous partitions not getting may \nupdates, I would think there could be great benefit. I would think that \nmany of Pablo's requests up-thread would get significant performance \nbenefit from this type of index. But as I mentioned at the start, \nthat's my tiny point of view and I certainly don't have the resources to \ndirect what gets looked at for PostgreSQL.\n\nRegards\n\nRussell Smith\n\n", "msg_date": "Fri, 30 Nov 2007 17:41:53 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Fri, 2007-11-30 at 17:41 +1100, Russell Smith wrote:\n> Simon Riggs wrote:\n> > On Tue, 2007-11-27 at 18:06 -0500, Pablo Alcaraz wrote:\n> > \n> >> Simon Riggs wrote:\n> >> \n> >>> All of those responses have cooked up quite a few topics into one. Large\n> >>> databases might mean text warehouses, XML message stores, relational\n> >>> archives and fact-based business data warehouses.\n> >>>\n> >>> The main thing is that TB-sized databases are performance critical. So\n> >>> it all depends upon your workload really as to how well PostgreSQL, or\n> >>> another other RDBMS vendor can handle them.\n> >>>\n> >>>\n> >>> Anyway, my reason for replying to this thread is that I'm planning\n> >>> changes for PostgreSQL 8.4+ that will make allow us to get bigger and\n> >>> faster databases. If anybody has specific concerns then I'd like to hear\n> >>> them so I can consider those things in the planning stages\n> >>> \n> >> it would be nice to do something with selects so we can recover a rowset \n> >> on huge tables using a criteria with indexes without fall running a full \n> >> scan.\n> >>\n> >> In my opinion, by definition, a huge database sooner or later will have \n> >> tables far bigger than RAM available (same for their indexes). I think \n> >> the queries need to be solved using indexes enough smart to be fast on disk.\n> >> \n> >\n> > OK, I agree with this one. \n> >\n> > I'd thought that index-only plans were only for OLTP, but now I see they\n> > can also make a big difference with DW queries. So I'm very interested\n> > in this area now.\n> >\n> > \n> If that's true, then you want to get behind the work Gokulakannan \n> Somasundaram \n> (http://archives.postgresql.org/pgsql-hackers/2007-10/msg00220.php) has \n> done with relation to thick indexes. I would have thought that concept \n> particularly useful in DW. Only having to scan indexes on a number of \n> join tables would be a huge win for some of these types of queries.\n\nHmm, well I proposed that in Jan/Feb, but I'm sure others have also.\n\nI don't think its practical to add visibility information to *all*\nindexes, but I like Heikki's Visibility Map proposal much better.\n\n> My tiny point of view would say that is a much better investment than \n> setting up the proposed parameter. \n\nThey are different things entirely, with dissimilar dev costs also. We\ncan have both.\n\n> I can see the use of the parameter \n> though. 
\n\nGood\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 30 Nov 2007 08:40:19 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On 11/29/07, Gregory Stark <[email protected]> wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> > On Wed, 2007-11-28 at 14:48 +0100, Csaba Nagy wrote:\n\n> >> In fact an even more useful option would be to ask the planner to throw\n> >> error if the expected cost exceeds a certain threshold...\n\n> > Tom's previous concerns were along the lines of \"How would know what to\n> > set it to?\", given that the planner costs are mostly arbitrary numbers.\n\n> Hm, that's only kind of true.\n\n> Obviously few people know how long such a page read takes but surely you would\n> just run a few sequential reads of large tables and set the limit to some\n> multiple of whatever you find.\n>\n> This isn't going to precise to the level of being able to avoid executing any\n> query which will take over 1000ms. But it is going to be able to catch\n> unconstrained cross joins or large sequential scans or such.\n\nIsn't that what statement_timeout is for? Since this is entirely based\non estimates, using arbitrary fuzzy numbers for this seems fine to me;\nprecision isn't really the goal.\n", "msg_date": "Fri, 30 Nov 2007 02:15:45 -0800", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "> Isn't that what statement_timeout is for? Since this is entirely based\n> on estimates, using arbitrary fuzzy numbers for this seems fine to me;\n> precision isn't really the goal.\n\nThere's an important difference to statement_timeout: this proposal\nwould avoid completely taking any resources if it estimates it can't be\nexecuted in proper time, but statement_timeout will allow a bad query to\nrun at least statement_timeout long...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 30 Nov 2007 11:29:43 +0100", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Hi Peter,\n\nIf you run into a scaling issue with PG (you will at those scales 1TB+), you\ncan deploy Greenplum DB which is PG 8.2.5 compatible. A large internet\ncompany (look for press soon) is in production with a 150TB database on a\nsystem capable of doing 400TB and we have others in production at 60TB,\n40TB, etc. We can provide references when needed - note that we had 20\nsuccessful customer references supporting Gartner's magic quadrant report on\ndata warehouses which put Greenplum in the \"upper visionary\" area of the\nmagic quadrant - which only happens if your customers can scale (see this:\nhttp://www.esj.com/business_intelligence/article.aspx?EditorialsID=8712)\n\nIn other words, no matter what happens you'll be able to scale up with your\nPostgres strategy.\n\n- Luke\n\n\nOn 11/26/07 10:44 AM, \"Pablo Alcaraz\" <[email protected]> wrote:\n\n> I had a client that tried to use Ms Sql Server to run a 500Gb+ database.\n> The database simply colapsed. They switched to Teradata and it is\n> running good. This database has now 1.5Tb+.\n> \n> Currently I have clients using postgresql huge databases and they are\n> happy. In one client's database the biggest table has 237Gb+ (only 1\n> table!) 
and postgresql run the database without problem using\n> partitioning, triggers and rules (using postgresql 8.2.5).\n> \n> Pablo\n> \n> Peter Koczan wrote:\n>> Hi all,\n>> \n>> I have a user who is looking to store 500+ GB of data in a database\n>> (and when all the indexes and metadata are factored in, it's going to\n>> be more like 3-4 TB). He is wondering how well PostgreSQL scales with\n>> TB-sized databases and what can be done to help optimize them (mostly\n>> hardware and config parameters, maybe a little advocacy). I can't\n>> speak on that since I don't have any DBs approaching that size.\n>> \n>> The other part of this puzzle is that he's torn between MS SQL Server\n>> (running on Windows and unsupported by us) and PostgreSQL (running on\n>> Linux...which we would fully support). If any of you have ideas of how\n>> well PostgreSQL compares to SQL Server, especially in TB-sized\n>> databases, that would be much appreciated.\n>> \n>> We're running PG 8.2.5, by the way.\n>> \n>> Peter\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Have you searched our list archives?\n>> \n>> http://archives.postgresql.org\n>> \n>> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Fri, 30 Nov 2007 07:45:27 -0800", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Thursday 29 November 2007 11:14, Simon Riggs wrote:\n> On Thu, 2007-11-29 at 10:45 -0500, Tom Lane wrote:\n> > Gregory Stark <[email protected]> writes:\n> > > \"Simon Riggs\" <[email protected]> writes:\n> > >> Tom's previous concerns were along the lines of \"How would know what\n> > >> to set it to?\", given that the planner costs are mostly arbitrary\n> > >> numbers.\n> > >\n> > > Hm, that's only kind of true.\n> >\n> > The units are not the problem. The problem is that you are staking\n> > non-failure of your application on the planner's estimates being\n> > pretty well in line with reality. Not merely in line enough that\n> > it picks a reasonably cheap plan, but in line enough that if it\n> > thinks plan A is 10x more expensive than plan B, then the actual\n> > ratio is indeed somewhere near 10.\n> >\n> > Given that this list spends all day every day discussing cases where the\n> > planner is wrong, I'd have to think that that's a bet I wouldn't take.\n>\n> I think you have a point, but the alternative is often much worse.\n>\n> If an SQL statement fails because of too high cost, we can investigate\n> the problem and re-submit. If a website slows down because somebody\n> allowed a very large query to execute then everybody is affected, not\n> just the person who ran the bad query. Either way the guy that ran the\n> query loses, but without constraints in place one guy can kill everybody\n> else also.\n>\n> > You could probably avoid this risk by setting the cutoff at something\n> > like 100 or 1000 times what you really want to tolerate, but how\n> > useful is it then?\n>\n> Still fairly useful, as long as we understand its a blunt instrument.\n>\n> If the whole performance of your system depends upon indexed access then\n> rogue queries can have disastrous, unpredictable consequences. Many\n> sites construct their SQL dynamically, so a mistake in a seldom used\n> code path can allow killer queries through. 
Even the best DBAs have been\n> known to make mistakes.\n>\n\nIf the whole performance of your system depends upon indexed access, then \nmaybe you need a database that gives you a way to force index access at the \nquery level? \n\n> e.g. An 80GB table has 8 million blocks in it.\n> - So putting a statement_cost limit = 1 million would allow some fairly\n> large queries but prevent anything that did a SeqScan (or worse).\n> - Setting it 10 million is going to prevent things like sorting the\n> whole table without a LIMIT\n> - Setting it at 100 million is going to prevent unconstrained product\n> joins etc..\n\nI think you're completly overlooking the effect of disk latency has on query \ntimes. We run queries all the time that can vary from 4 hours to 12 hours in \ntime based solely on the amount of concurrent load on the system, even though \nthey always plan with the same cost. \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Wed, 5 Dec 2007 15:07:21 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Nov 28, 2007, at 7:27 AM, Bill Moran wrote:\n> Is there something wrong with:\n> set enable_seqscan = off\n\n\nNote that in cases of very heavy skew, that won't work. It only adds \n10M to the cost estimate for a seqscan, and it's definitely possible \nto have an index scan that looks even more expensive.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 5 Dec 2007 18:07:58 -0600", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Robert,\n\nOn Wed, 2007-12-05 at 15:07 -0500, Robert Treat wrote:\n\n> If the whole performance of your system depends upon indexed access, then \n> maybe you need a database that gives you a way to force index access at the \n> query level? \n\nThat sounds like a request for hints, which is OT here, ISTM.\n\nThe issue is that if somebody issues a \"large query\" then it will be a\nproblem whichever plan the query takes. Forcing index scans can make a\nplan more expensive than a seq scan in many cases.\n\n> > e.g. An 80GB table has 8 million blocks in it.\n> > - So putting a statement_cost limit = 1 million would allow some fairly\n> > large queries but prevent anything that did a SeqScan (or worse).\n> > - Setting it 10 million is going to prevent things like sorting the\n> > whole table without a LIMIT\n> > - Setting it at 100 million is going to prevent unconstrained product\n> > joins etc..\n> \n> I think you're completly overlooking the effect of disk latency has on query \n> times. We run queries all the time that can vary from 4 hours to 12 hours in \n> time based solely on the amount of concurrent load on the system, even though \n> they always plan with the same cost. \n\nNot at all. If we had statement_cost_limit then it would be applied\nafter planning and before execution begins. The limit would be based\nupon the planner's estimate, not the likely actual execution time. \n\nSo yes a query may vary in execution time by a large factor as you\nsuggest, and it would be difficult to set the proposed parameter\naccurately. 
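As a rough sketch of how I'd expect someone to arrive at a setting in\npractice - working backwards from table sizes rather than from wall-clock\ntime, and treating the syntax as provisional since this is only a proposed\nparameter, not an existing GUC:\n\n  SET statement_cost_limit = 1000000;\n  -- with the 80GB example upthread (~8 million blocks) this still admits\n  -- ordinary indexed plans, but refuses anything the planner prices at a\n  -- full SeqScan or worse; you would then loosen or tighten the ceiling\n  -- after seeing which statements get refused.\n\n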
However, the same is also true of statement_timeout, which\nwe currently support, so I don't see this point as an blocker.\n\nWhich leaves us at the burning question: Would you use such a facility,\nor would the difficulty in setting it exactly prevent you from using it\nfor real?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 06 Dec 2007 09:38:16 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Thu, Dec 06, 2007 at 09:38:16AM +0000, Simon Riggs wrote:\n>The issue is that if somebody issues a \"large query\" then it will be a\n>problem whichever plan the query takes. Forcing index scans can make a\n>plan more expensive than a seq scan in many cases.\n\nOTOH, the planner can really screw up queries on really large databases. \nIIRC, the planner can use things like unique constraints to get some \nidea, e.g., of how many rows will result from a join. Unfortunately, \nthe planner can't apply those techniques to certain constructs common in \nreally large db's (e.g., partitioned tables--how do you do a unique \nconstraint on a partitioned table?) I've got some queries that the \nplanner thinks will return on the order of 10^30 rows for that sort of \nreason. In practice, the query may return 10^3 rows, and the difference \nbetween the seq scan and the index scan is the difference between a \nquery that takes a few seconds and a query that I will never run to \ncompletion. I know the goal would be to make the planner understand \nthose queries better, but for now the answer seems to be to craft the \nqueries very carefully and run explain first, making sure you see index \nscans in the right places.\n\nMike Stone\n", "msg_date": "Thu, 06 Dec 2007 10:42:14 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Michael Stone <[email protected]> writes:\n> OTOH, the planner can really screw up queries on really large databases. \n> IIRC, the planner can use things like unique constraints to get some \n> idea, e.g., of how many rows will result from a join. Unfortunately, \n> the planner can't apply those techniques to certain constructs common in \n> really large db's (e.g., partitioned tables--how do you do a unique \n> constraint on a partitioned table?) I've got some queries that the \n> planner thinks will return on the order of 10^30 rows for that sort of \n> reason. In practice, the query may return 10^3 rows, and the difference \n> between the seq scan and the index scan is the difference between a \n> query that takes a few seconds and a query that I will never run to \n> completion. I know the goal would be to make the planner understand \n> those queries better,\n\nIndeed, and if you've got examples where it's that far off, you should\nreport them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2007 11:13:18 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "On Thu, 6 Dec 2007, Tom Lane wrote:\n> Indeed, and if you've got examples where it's that far off, you should\n> report them.\n\nOo, oo, I have one!\n\nSo, this query bit us a while back. We had two tables being joined\ntogether in a query by a key column. The key column was an integer, and\nfor the first table it had a range from zero to a bazillion. 
For the\nsecond table, it had a range from half a bazillion to half a bazillion\nplus a hundred. The first table had a bazillion rows, and the second table\nhad only about a hundred. Both tables were well ordered on the key column.\nBoth tables had an index on the key column.\n\nSo, our query was like this:\n\nSELECT * FROM table1, table2 WHERE table1.key = table2.key LIMIT 1\n\n... because we wanted to find out if there were *any* hits between the two\ntables. The query took hours. To understand why, let's look at the query\nwithout the LIMIT. For this query, Postgres would perform a nested loop,\niterating over all rows in the small table, and doing a hundred index\nlookups in the big table. This completed very quickly. However, adding the\nLIMIT meant that suddenly a merge join was very attractive to the planner,\nas it estimated the first row to be returned within milliseconds, without\nneeding to sort either table.\n\nThe problem is that Postgres didn't know that the first hit in the big\ntable would be about half-way through, after doing a index sequential scan\nfor half a bazillion rows.\n\nWe fixed this query by changing it to:\n\nSELECT * FROM table1, table2 WHERE table1.key = table2.key\n AND table1.key >= (SELECT MIN(key) FROM table2)\n AND table1.key <= (SELECT MAX(key) FROM table2)\n\n... which artificially limited the index sequential scan of table2 to\nstart from the earliest possible hit and only continue to the last\npossible hit. This query completed quickly, as the min and max could be\nanswered quickly by the indexes.\n\nStill, it's a pity Postgres couldn't work that out for itself, having all\nthe information present in its statistics and indexes. AIUI the planner\ndoesn't peek into indexes - maybe it should.\n\nMatthew\n\n-- \nIn the beginning was the word, and the word was unsigned,\nand the main() {} was without form and void...\n", "msg_date": "Thu, 6 Dec 2007 17:46:35 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Matthew <[email protected]> writes:\n> ... For this query, Postgres would perform a nested loop,\n> iterating over all rows in the small table, and doing a hundred index\n> lookups in the big table. This completed very quickly. However, adding the\n> LIMIT meant that suddenly a merge join was very attractive to the planner,\n> as it estimated the first row to be returned within milliseconds, without\n> needing to sort either table.\n\n> The problem is that Postgres didn't know that the first hit in the big\n> table would be about half-way through, after doing a index sequential scan\n> for half a bazillion rows.\n\nHmm. IIRC, there are smarts in there about whether a mergejoin can\nterminate early because of disparate ranges of the two join variables.\nSeems like it should be straightforward to fix it to also consider\nwhether the time-to-return-first-row will be bloated because of\ndisparate ranges. I'll take a look --- but it's probably too late\nto consider this for 8.3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2007 12:55:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "On Thu, 6 Dec 2007, Tom Lane wrote:\n> Matthew <[email protected]> writes:\n> > ... For this query, Postgres would perform a nested loop,\n> > iterating over all rows in the small table, and doing a hundred index\n> > lookups in the big table. This completed very quickly. 
However, adding the\n> > LIMIT meant that suddenly a merge join was very attractive to the planner,\n> > as it estimated the first row to be returned within milliseconds, without\n> > needing to sort either table.\n>\n> > The problem is that Postgres didn't know that the first hit in the big\n> > table would be about half-way through, after doing a index sequential scan\n> > for half a bazillion rows.\n>\n> Hmm. IIRC, there are smarts in there about whether a mergejoin can\n> terminate early because of disparate ranges of the two join variables.\n> Seems like it should be straightforward to fix it to also consider\n> whether the time-to-return-first-row will be bloated because of\n> disparate ranges. I'll take a look --- but it's probably too late\n> to consider this for 8.3.\n\nVery cool. Would that be a planner cost estimate fix (so it avoids the\nmerge join), or a query execution fix (so it does the merge join on the\ntable subset)?\n\nMatthew\n\n-- \nI've run DOOM more in the last few days than I have the last few\nmonths. I just love debugging ;-) -- Linus Torvalds\n", "msg_date": "Thu, 6 Dec 2007 18:03:09 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Matthew <[email protected]> writes:\n> On Thu, 6 Dec 2007, Tom Lane wrote:\n>> Hmm. IIRC, there are smarts in there about whether a mergejoin can\n>> terminate early because of disparate ranges of the two join variables.\n\n> Very cool. Would that be a planner cost estimate fix (so it avoids the\n> merge join), or a query execution fix (so it does the merge join on the\n> table subset)?\n\nCost estimate fix. Basically what I'm thinking is that the startup cost\nattributed to a mergejoin ought to account for any rows that have to be\nskipped over before we reach the first join pair. In general this is\nhard to estimate, but for mergejoin it can be estimated using the same\ntype of logic we already use at the other end.\n\nAfter looking at the code a bit, I'm realizing that there's actually a\nbug in there as of 8.3: mergejoinscansel() is expected to be able to\nderive numbers for either direction of scan, but if it's asked to\ncompute numbers for a DESC-order scan, it looks for a pg_stats entry\nsorted with '>', which isn't gonna be there. It needs to know to\nlook for an '<' histogram and switch the min/max. So the lack of\nsymmetry here is causing an actual bug in logic that already exists.\nThat makes the case for fixing this now a bit stronger ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2007 13:34:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "On Thu, Dec 06, 2007 at 11:13:18AM -0500, Tom Lane wrote:\n>Indeed, and if you've got examples where it's that far off, you should\n>report them.\n\nYeah, the trick is to get it to a digestable test case. 
The basic \nscenario (there are more tables & columns in the actual case) is a set \nof tables partitioned by date with a number of columns in one table \nreferencing rows in the others:\n\nTable A (~5bn rows / 100's of partitions)\ntime Bkey1 Ckey1 Bkey2 Ckey2\n\nTable B (~1bn rows / 100's of partitions)\nBkey Bval\n\nTable C (~.5bn rows / 100's of partitions)\nCkey Cval\n\nBkey and Ckey are unique, but the planner doesn't know that.\n\nMike Stone\n", "msg_date": "Thu, 06 Dec 2007 14:50:33 -0500", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Tom Lane wrote:\n> Michael Stone <[email protected]> writes:\n>> OTOH, the planner can really screw up queries on really large databases. \n>> ... I've got some queries that the \n>> planner thinks will return on the order of 10^30 rows for that sort of \n>> reason. In practice, the query may return 10^3 rows....\n> \n> Indeed, and if you've got examples where it's that far off, you should\n> report them.\n\nIf I read this right, I've got quite a few cases where the planner\nexpects 1 row but gets over 2000.\n\nAnd within the plan, it looks like there's a step where it expects\n511 rows and gets 2390779 which seems to be off by a factor of 4600x.\n\nAlso shown below it seems that if I use \"OFFSET 0\" as a \"hint\"\nI can force a much (10x) better plan. I wonder if there's room for\na pgfoundry project for a patch set that lets us use more hints\nthan OFFSET 0.\n\n Ron\n\nlogs=# analyze;\nANALYZE\nlogs=# explain analyze select * from fact natural join d_ref natural join d_uag where ref_host = 'download.com.com' and ref_path = '/[path_removed].html' and useragent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=2827.72..398919.05 rows=1 width=242) (actual time=69175.963..141550.628 rows=2474 loops=1)\n Hash Cond: (fact.ref_id = d_ref.ref_id)\n -> Nested Loop (cost=2819.88..398908.65 rows=511 width=119) (actual time=3094.740..139361.235 rows=2390779 loops=1)\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual time=45.937..45.948 rows=1 loops=1)\n Index Cond: ((useragent)::text = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)'::text)\n -> Bitmap Heap Scan on fact (cost=2819.88..396449.49 rows=196223 width=32) (actual time=3048.770..135653.875 rows=2390779 loops=1)\n Recheck Cond: (fact.uag_id = d_uag.uag_id)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=1713.148..1713.148 rows=2390779 loops=1)\n Index Cond: (fact.uag_id = d_uag.uag_id)\n -> Hash (cost=7.83..7.83 rows=1 width=127) (actual time=62.841..62.841 rows=2 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=62.813..62.823 rows=2 loops=1)\n Index Cond: (((ref_path)::text = '[path_removed].html'::text) AND ((ref_host)::text = 'download.com.com'::text))\n Total runtime: 141563.733 ms\n(13 rows)\n\n\n\n############ using \"offset 0\" to force a better plan.\n\n\nlogs=# explain analyze select * from fact natural join (select * from d_ref natural join d_uag where ref_host = 'download.com.com' and ref_path = '/[path_removed].html' and useragent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)' offset 0) as a;\n QUERY 
PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=6465.12..7575.91 rows=367 width=2096) (actual time=2659.251..14703.343 rows=2474 loops=1)\n -> Limit (cost=0.00..14.22 rows=1 width=218) (actual time=114.968..115.140 rows=2 loops=1)\n -> Nested Loop (cost=0.00..14.22 rows=1 width=218) (actual time=114.964..115.127 rows=2 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=75.891..75.900 rows=2 loops=1)\n Index Cond: (((ref_path)::text = '[path_removed].html'::text) AND ((ref_host)::text = 'download.com.com'::text))\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual time=19.582..19.597 rows=1 loops=2)\n Index Cond: ((useragent)::text = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)'::text)\n -> Bitmap Heap Scan on fact (cost=6465.12..7556.18 rows=367 width=32) (actual time=2240.090..7288.145 rows=1237 loops=2)\n Recheck Cond: ((fact.uag_id = a.uag_id) AND (fact.ref_id = a.ref_id))\n -> BitmapAnd (cost=6465.12..6465.12 rows=367 width=0) (actual time=2221.539..2221.539 rows=0 loops=2)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=1633.032..1633.032 rows=2390779 loops=2)\n Index Cond: (fact.uag_id = a.uag_id)\n -> Bitmap Index Scan on i__fact__ref_id (cost=0.00..3581.50 rows=253913 width=0) (actual time=150.614..150.614 rows=77306 loops=2)\n Index Cond: (fact.ref_id = a.ref_id)\n Total runtime: 14710.870 ms\n(15 rows)\n\n", "msg_date": "Thu, 06 Dec 2007 17:28:24 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "I wrote:\n> Hmm. IIRC, there are smarts in there about whether a mergejoin can\n> terminate early because of disparate ranges of the two join variables.\n> Seems like it should be straightforward to fix it to also consider\n> whether the time-to-return-first-row will be bloated because of\n> disparate ranges.\n\nI've posted a proposed patch for this:\nhttp://archives.postgresql.org/pgsql-patches/2007-12/msg00025.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2007 20:55:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>\n>> Also shown below it seems that if I use \"OFFSET 0\" as a \"hint\"\n>> I can force a much (10x) better plan. 
I wonder if there's room for\n>> a pgfoundry project for a patch set that lets us use more hints\n>> than OFFSET 0.\n>>\n> There's something fishy about this --- given that that plan has a lower\n> cost estimate, it should've picked it without any artificial\n> constraints.\n\n\nI think the reason it's not picking it was discussed back in this thread\ntoo.\nhttp://archives.postgresql.org/pgsql-performance/2005-03/msg00675.php\nhttp://archives.postgresql.org/pgsql-performance/2005-03/msg00684.php\nMy offset 0 is forcing the outer join.\n[Edit: Ugh - meant cartesian join - which helps this kind of query.]\n\n> What PG version are you using?\n\n logs=# select version();\n\n version\n ----------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)\n (1 row)\n\n> Do you perhaps have a low setting for join_collapse_limit?\n\n\n logs=# show join_collapse_limit;\n join_collapse_limit\n ---------------------\n 8\n (1 row)\n\n Actually, IIRC back in that other thread, \"set join_collapse_limit =1;\"\n helped\n http://archives.postgresql.org/pgsql-performance/2005-03/msg00663.php\n\n", "msg_date": "Thu, 06 Dec 2007 19:55:17 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Tom Lane wrote:\n>> There's something fishy about this --- given that that plan has a lower\n>> cost estimate, it should've picked it without any artificial\n>> constraints.\n\n> I think the reason it's not picking it was discussed back in this thread\n> too.\n> http://archives.postgresql.org/pgsql-performance/2005-03/msg00675.php\n> http://archives.postgresql.org/pgsql-performance/2005-03/msg00684.php\n> My offset 0 is forcing the outer join.\n> [Edit: Ugh - meant cartesian join - which helps this kind of query.]\n\nAh; I missed the fact that the two relations you want to join first\ndon't have any connecting WHERE clause.\n\nThe concern I mentioned in the above thread was basically that I don't\nwant the planner to go off chasing Cartesian join paths in general ---\nthey're usually useless and would result in an exponential explosion\nin the number of join paths considered in many-table queries.\n\nHowever, in this case the reason that the Cartesian join might be\nrelevant is that both of them are needed in order to form an inner\nindexscan on the big table. I wonder if we could drive consideration\nof the Cartesian join off of noticing that. 
It'd take some rejiggering\naround best_inner_indexscan(), or somewhere in that general vicinity.\n\nWay too late for 8.3, but something to think about for next time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Dec 2007 23:14:10 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>> Tom Lane wrote:\n>>> ...given that that plan has a lower cost estimate, it \n>>> should've picked it without any artificialconstraints.\n> \n>>I think the reason it's not picking it was discussed back...\n>> http://archives.postgresql.org/pgsql-performance/2005-03/msg00675.php\n> ...\n> \n> The concern I mentioned in the above thread was basically that I don't\n> want the planner to go off chasing Cartesian join paths in general...\n> However, in this case the reason that the Cartesian join might be\n> relevant is that both of them are needed in order to form an inner\n> indexscan on the big table....\n\nInteresting.... I think Simon mentioned last time that this type of\nquery is quite common for standard star schema data warehouses.\nAnd it seem to me the Cartesian join on the dimension tables is\noften pretty harmless since each dimension table would often return\njust 1 row; and the size of the fact table is such that it's useful\nto touch it as little as possible.\n\n> Way too late for 8.3, but something to think about for next time.\n\nNo problem.. we've been working around it since that last\nthread in early '05 with early 8.0, IIRC. :-)\n\nThanks to the excellent postgres hints system (\"offset 0\" and\n\"set join_collapse_limit=1\") we can get the plans we want\npretty easily. :-)\n", "msg_date": "Thu, 06 Dec 2007 21:11:40 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Thursday 06 December 2007 04:38, Simon Riggs wrote:\n> Robert,\n>\n> On Wed, 2007-12-05 at 15:07 -0500, Robert Treat wrote:\n> > If the whole performance of your system depends upon indexed access, then\n> > maybe you need a database that gives you a way to force index access at\n> > the query level?\n>\n> That sounds like a request for hints, which is OT here, ISTM.\n>\n\nIf you want to eat peas, and someone suggests you use a knife, can I only \nargue the validity of using a knife? I'd rather just recommend a spoon. \n\n> > I think you're completly overlooking the effect of disk latency has on\n> > query times. We run queries all the time that can vary from 4 hours to\n> > 12 hours in time based solely on the amount of concurrent load on the\n> > system, even though they always plan with the same cost.\n>\n> Not at all. If we had statement_cost_limit then it would be applied\n> after planning and before execution begins. The limit would be based\n> upon the planner's estimate, not the likely actual execution time.\n>\n\nThis is nice, but it doesnt prevent \"slow queries\" reliably (which seemed to \nbe in the original complaints), since query time cannot be directly traced \nback to statement cost. \n\n> So yes a query may vary in execution time by a large factor as you\n> suggest, and it would be difficult to set the proposed parameter\n> accurately. 
However, the same is also true of statement_timeout, which\n> we currently support, so I don't see this point as an blocker.\n>\n> Which leaves us at the burning question: Would you use such a facility,\n> or would the difficulty in setting it exactly prevent you from using it\n> for real?\n\nI'm not sure. My personal instincts are that the solution is too fuzzy for me \nto rely on, and if it isnt reliable, it's not a good solution. If you look at \nall of the things people seem to think this will solve, I think I can raise \nan alternative option that would be a more definitive solution:\n\n\"prevent queries from taking longer than x\" -> statement_timeout.\n\n\"prevent planner from switching to bad plan\" -> hint system\n\n\"prevent query from consuming too many resources\" -> true resource \nrestrictions at the database level\n\nI'm not so much against the idea of a statement cost limit, but I think we \nneed to realize that it does not really solve as many problems as people \nthink, in cases where it will help it often will do so poorly, and that there \nare probably better solutions available to those problems. Of course if you \nback me into a corner I'll agree a poor solution is better than no solution, \nso... \n\n-- \nRobert Treat\nBuild A Brighter LAMP :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Fri, 07 Dec 2007 12:45:08 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>> Tom Lane wrote:\n>>> There's something fishy about this --- given that that plan has a lower\n>>> cost estimate, it should've picked it without any artificial\n>>> constraints.\n\nOne final thing I find curious about this is that the estimated\nnumber of rows is much closer in the \"offset 0\" form of the query.\n\nSince the logic itself is identical, I would have expected the\nestimated total number of rows for both forms of this query to\nbe identical.\n\nAny reason the two plans estimate a different total number of rows?\n\n\n\n(explain statements for the two forms of the same query\nfrom earlier in the thread here:\nhttp://archives.postgresql.org/pgsql-performance/2007-12/msg00088.php )\n", "msg_date": "Fri, 07 Dec 2007 12:46:21 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "On Fri, 2007-12-07 at 12:45 -0500, Robert Treat wrote:\n> On Thursday 06 December 2007 04:38, Simon Riggs wrote:\n\n> > > I think you're completly overlooking the effect of disk latency has on\n> > > query times. We run queries all the time that can vary from 4 hours to\n> > > 12 hours in time based solely on the amount of concurrent load on the\n> > > system, even though they always plan with the same cost.\n> >\n> > Not at all. If we had statement_cost_limit then it would be applied\n> > after planning and before execution begins. The limit would be based\n> > upon the planner's estimate, not the likely actual execution time.\n> >\n> \n> This is nice, but it doesnt prevent \"slow queries\" reliably (which seemed to \n> be in the original complaints), since query time cannot be directly traced \n> back to statement cost. 
\n\nHmm, well it can be directly traced, just not with the accuracy you\ndesire.\n\nWe can improve the accuracy, but then we would need to run the query\nfirst in order to find out it was killing us.\n\n> > So yes a query may vary in execution time by a large factor as you\n> > suggest, and it would be difficult to set the proposed parameter\n> > accurately. However, the same is also true of statement_timeout, which\n> > we currently support, so I don't see this point as an blocker.\n> >\n> > Which leaves us at the burning question: Would you use such a facility,\n> > or would the difficulty in setting it exactly prevent you from using it\n> > for real?\n> \n> I'm not sure. My personal instincts are that the solution is too fuzzy for me \n> to rely on, and if it isnt reliable, it's not a good solution. If you look at \n> all of the things people seem to think this will solve, I think I can raise \n> an alternative option that would be a more definitive solution:\n> \n> \"prevent queries from taking longer than x\" -> statement_timeout.\n> \n> \"prevent planner from switching to bad plan\" -> hint system\n> \n> \"prevent query from consuming too many resources\" -> true resource \n> restrictions at the database level\n\nI like and agree with your list, as an overview. I differ slightly on\nspecifics.\n\n> I'm not so much against the idea of a statement cost limit, but I think we \n> need to realize that it does not really solve as many problems as people \n> think, in cases where it will help it often will do so poorly, and that there \n> are probably better solutions available to those problems. Of course if you \n> back me into a corner I'll agree a poor solution is better than no solution, \n> so... \n\nstatement_cost_limit isn't a panacea for all performance ills, its just\none weapon in the armoury. I'm caught somewhat in that whatever I\npropose as a concrete next step, somebody says I should have picked\nanother. Oh well.\n\nOn specific points:\n\nWith hints I prefer a declarative approach, will discuss later in\nrelease cycle.\n\nThe true resource restrictions sound good, but its still magic numbers.\nHow many I/Os are allowed before you kill the query? How much CPU? Those\nare still going to be guessed at. How do we tell the difference between\na random I/O and a sequential I/O - there's no difference as far as\nPostgres is concerned in the buffer manager, but it can cause a huge\nperformance difference. Whether you use real resource limits or\nstatement cost limits you still need to work out the size of your table\nand then guess at appropriate limits.\n\nEvery other system I've seen uses resource limits, but the big problem\nis that they are applied after something has been running for a long\ntime. It's kinda like saying I'll treat the gangrene when it reaches my\nknee. I prefer to avoid the problem before it starts to hurt at all, so\nI advocate learning the lessons from other systems, not simply follow\nthem. But having said that, I'm not against having them; its up to the\nadministrator how they want to manage their database, not me.\n\nWhat resource limit parameters would you choose? (temp disk space etc..)\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Tue, 11 Dec 2007 23:37:57 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "\nI have _not_ added a TODO for this item. 
Let me know if one is needed.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n> > Tom Lane wrote:\n> >> There's something fishy about this --- given that that plan has a lower\n> >> cost estimate, it should've picked it without any artificial\n> >> constraints.\n> \n> > I think the reason it's not picking it was discussed back in this thread\n> > too.\n> > http://archives.postgresql.org/pgsql-performance/2005-03/msg00675.php\n> > http://archives.postgresql.org/pgsql-performance/2005-03/msg00684.php\n> > My offset 0 is forcing the outer join.\n> > [Edit: Ugh - meant cartesian join - which helps this kind of query.]\n> \n> Ah; I missed the fact that the two relations you want to join first\n> don't have any connecting WHERE clause.\n> \n> The concern I mentioned in the above thread was basically that I don't\n> want the planner to go off chasing Cartesian join paths in general ---\n> they're usually useless and would result in an exponential explosion\n> in the number of join paths considered in many-table queries.\n> \n> However, in this case the reason that the Cartesian join might be\n> relevant is that both of them are needed in order to form an inner\n> indexscan on the big table. I wonder if we could drive consideration\n> of the Cartesian join off of noticing that. It'd take some rejiggering\n> around best_inner_indexscan(), or somewhere in that general vicinity.\n> \n> Way too late for 8.3, but something to think about for next time.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 17 Mar 2008 19:47:12 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have _not_ added a TODO for this item. Let me know if one is needed.\n\nPlease do, I think it's an open issue.\n\n* Consider Cartesian joins when both relations are needed to form an\n indexscan qualification for a third relation\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Mar 2008 20:28:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > I have _not_ added a TODO for this item. Let me know if one is needed.\n> \n> Please do, I think it's an open issue.\n> \n> * Consider Cartesian joins when both relations are needed to form an\n> indexscan qualification for a third relation\n\nDone, with URL added.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 17 Mar 2008 20:43:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n>> I have _not_ added a TODO for this item. 
Let me know if one is needed.\n> \n> Please do, I think it's an open issue.\n> \n> * Consider Cartesian joins when both relations are needed to form an\n> indexscan qualification for a third relation\n> \n\n\nWould another possible condition for considering\nCartesian joins be be:\n\n * Consider Cartesian joins when a unique constraint can prove\n that at most one row will be pulled from one of the tables\n that would be part of this join?\n\nIn the couple cases where this happened to me it was\nin queries on a textbook star schema like this:\n\n select * from fact\n join dim1 using (dim1_id)\n join dim2 using (dim2_id)\n where dim1.value = 'something'\n and dim2.valuex = 'somethingelse'\n and dim2.valuey = 'more';\n\nand looking up all the IDs before hitting the huge\nfact table. Often in these cases the where clause\non the dimension tables are on values with a unique\nconstraint.\n\nIf I understand right - if the constraint can prove\nit'll return at most 1 row - that means the cartesian\njoin is provably safe from blowing up.\n\nNot sure if that's redundant with the condition you\nmentioned, or if it's yet a separate condition where\nwe might also want to consider cartesian joins.\n\nRon M\n\n", "msg_date": "Mon, 17 Mar 2008 22:48:35 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "All that helps to pgsql to perform good in a TB-sized database \nenviroment is a Good Think (r) :D\n\nPablo\n\nBruce Momjian wrote:\n> I have _not_ added a TODO for this item. Let me know if one is needed.\n", "msg_date": "Tue, 18 Mar 2008 10:07:13 -0300", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Would another possible condition for considering\n> Cartesian joins be be:\n\n> * Consider Cartesian joins when a unique constraint can prove\n> that at most one row will be pulled from one of the tables\n> that would be part of this join?\n\nWhat for? That would still lead us to consider large numbers of totally\nuseless joins.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Mar 2008 10:49:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases " }, { "msg_contents": "Tom Lane wrote:\n> Ron Mayer <[email protected]> writes:\n>> Would another possible condition for considering\n>> Cartesian joins be be:\n> \n>> * Consider Cartesian joins when a unique constraint can prove\n>> that at most one row will be pulled from one of the tables\n>> that would be part of this join?\n> \n> What for? 
That would still lead us to consider large numbers of totally\n> useless joins.\n> \n> \t\t\tregards, tom lane\n\nOften I get order-of-magnitude better queries by forcing the cartesian\njoin even without multi-column indexes.\n\nExplain analyze results below.\n\n\n\nHere's an example with your typical star schema.\n fact is the central fact table.\n d_ref is a dimension table for the referrer\n d_uag is a dimension table for the useragent.\n\nForcing the cartesan join using \"offset 0\" makes\nthe the query take 14 ms (estimated cost 7575).\n\nIf I don't force the cartesian join the query takes\nover 100ms (estimated cost 398919).\n\nIndexes are on each dimension; but no multi-column\nindexes (since the ad-hoc queries can hit any permutation\nof dimensions).\n\nlogs=# explain analyze select * from fact natural join (select * from d_ref natural join d_uag where ref_host = 'www.real.com' and ref_path = '/products/player/more_info/moreinfo.html' and ref_query = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV=' and useragent = 'Mozilla/4.08 [en] (WinNT; U ;Nav)' offset 0 ) as a;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=6465.12..7575.91 rows=367 width=2096) (actual time=14.152..14.192 rows=4 loops=1)\n -> Limit (cost=0.00..14.22 rows=1 width=218) (actual time=0.084..0.102 rows=1 loops=1)\n -> Nested Loop (cost=0.00..14.22 rows=1 width=218) (actual time=0.082..0.096 rows=1 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=0.056..0.058 rows=1 loops=1)\n Index Cond: (((ref_path)::text = '/products/player/more_info/moreinfo.html'::text) AND ((ref_host)::text = 'www.real.com'::text) AND ((ref_query)::text = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV='::text))\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual time=0.020..0.029 rows=1 loops=1)\n Index Cond: ((useragent)::text = 'Mozilla/4.08 [en] (WinNT; U ;Nav)'::text)\n -> Bitmap Heap Scan on fact (cost=6465.12..7556.18 rows=367 width=32) (actual time=14.053..14.066 rows=4 loops=1)\n Recheck Cond: ((fact.uag_id = a.uag_id) AND (fact.ref_id = a.ref_id))\n -> BitmapAnd (cost=6465.12..6465.12 rows=367 width=0) (actual time=14.016..14.016 rows=0 loops=1)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=2.258..2.258 rows=7960 loops=1)\n Index Cond: (fact.uag_id = a.uag_id)\n -> Bitmap Index Scan on i__fact__ref_id (cost=0.00..3581.50 rows=253913 width=0) (actual time=9.960..9.960 rows=13751 loops=1)\n Index Cond: (fact.ref_id = a.ref_id)\n Total runtime: 14.332 ms\n(15 rows)\n\nlogs=#\n\n\n\nlogs=# explain analyze select * from fact natural join (select * from d_ref natural join d_uag where ref_host = 'www.real.com' and ref_path = '/products/player/more_info/moreinfo.html' and ref_query = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV=' and useragent = 'Mozilla/4.08 [en] (WinNT; U ;Nav)' ) as a;\n QUERY 
PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=2827.72..398919.05 rows=1 width=242) (actual time=78.777..107.038 rows=4 loops=1)\n Hash Cond: (fact.ref_id = d_ref.ref_id)\n -> Nested Loop (cost=2819.88..398908.65 rows=511 width=119) (actual time=6.311..101.843 rows=7960 loops=1)\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual time=0.021..0.029 rows=1 loops=1)\n Index Cond: ((useragent)::text = 'Mozilla/4.08 [en] (WinNT; U ;Nav)'::text)\n -> Bitmap Heap Scan on fact (cost=2819.88..396449.49 rows=196223 width=32) (actual time=6.273..91.645 rows=7960 loops=1)\n Recheck Cond: (fact.uag_id = d_uag.uag_id)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=5.117..5.117 rows=7960 loops=1)\n Index Cond: (fact.uag_id = d_uag.uag_id)\n -> Hash (cost=7.83..7.83 rows=1 width=127) (actual time=0.069..0.069 rows=1 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=0.059..0.062 rows=1 loops=1)\n Index Cond: (((ref_path)::text = '/products/player/more_info/moreinfo.html'::text) AND ((ref_host)::text = 'www.real.com'::text) AND ((ref_query)::text = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV='::text))\n Total runtime: 107.193 ms\n(13 rows)\n", "msg_date": "Tue, 18 Mar 2008 10:33:26 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" }, { "msg_contents": "Ron Mayer wrote:\n> Tom Lane wrote:\n>> Ron Mayer <[email protected]> writes:\n>>> Would another possible condition for considering\n>>> Cartesian joins be be:\n>>> * Consider Cartesian joins when a unique constraint can prove\n>>> that at most one row will be pulled from one of the tables\n>>> that would be part of this join?\n>>\n>> What for? 
That would still lead us to consider large numbers of totally\n>> useless joins.\n> \n> Often I get order-of-magnitude better queries by forcing the cartesian\n> join even without multi-column indexes.\n\nAh - and sometimes even 2 order of magnitude improvements.\n\n1.1 seconds with Cartesian join, 200 seconds if it\ndoesn't use it.\n\n\n\nlogs=# explain analyze select * from fact natural join (select * from d_ref natural join d_uag where ref_host = 'www.real.com' and ref_path = '/products/player/more_info/moreinfo.html' and ref_query = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV=' and useragent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98)' offset 0 ) as a;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=6465.12..7575.91 rows=367 width=2096) (actual time=1118.741..1119.207 rows=122 loops=1)\n -> Limit (cost=0.00..14.22 rows=1 width=218) (actual time=0.526..0.542 rows=1 loops=1)\n -> Nested Loop (cost=0.00..14.22 rows=1 width=218) (actual time=0.524..0.537 rows=1 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=0.168..0.170 rows=1 loops=1)\n Index Cond: (((ref_path)::text = '/products/player/more_info/moreinfo.html'::text) AND ((ref_host)::text = 'www.real.com'::text) AND ((ref_query)::text = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV='::text))\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual time=0.347..0.355 rows=1 loops=1)\n Index Cond: ((useragent)::text = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98)'::text)\n -> Bitmap Heap Scan on fact (cost=6465.12..7556.18 rows=367 width=32) (actual time=1118.196..1118.491 rows=122 loops=1)\n Recheck Cond: ((fact.uag_id = a.uag_id) AND (fact.ref_id = a.ref_id))\n -> BitmapAnd (cost=6465.12..6465.12 rows=367 width=0) (actual time=1115.565..1115.565 rows=0 loops=1)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=813.859..813.859 rows=1183470 loops=1)\n Index Cond: (fact.uag_id = a.uag_id)\n -> Bitmap Index Scan on i__fact__ref_id (cost=0.00..3581.50 rows=253913 width=0) (actual time=8.667..8.667 rows=13751 loops=1)\n Index Cond: (fact.ref_id = a.ref_id)\n Total runtime: 1122.245 ms\n(15 rows)\n\nlogs=# explain analyze select * from fact natural join (select * from d_ref natural join d_uag where ref_host = 'www.real.com' and ref_path = '/products/player/more_info/moreinfo.html' and ref_query = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV=' and useragent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98)' ) as a;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=2827.72..398919.05 rows=1 width=242) (actual time=114138.193..200622.416 rows=122 loops=1)\n Hash Cond: (fact.ref_id = d_ref.ref_id)\n -> Nested Loop (cost=2819.88..398908.65 rows=511 width=119) (actual time=1524.600..199522.182 rows=1183470 loops=1)\n -> Index Scan using i_uag__val on d_uag (cost=0.00..6.38 rows=1 width=91) (actual 
time=0.023..0.033 rows=1 loops=1)\n Index Cond: ((useragent)::text = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows 98)'::text)\n -> Bitmap Heap Scan on fact (cost=2819.88..396449.49 rows=196223 width=32) (actual time=1524.562..197627.135 rows=1183470 loops=1)\n Recheck Cond: (fact.uag_id = d_uag.uag_id)\n -> Bitmap Index Scan on i__fact__uag_id (cost=0.00..2770.83 rows=196223 width=0) (actual time=758.888..758.888 rows=1183470 loops=1)\n Index Cond: (fact.uag_id = d_uag.uag_id)\n -> Hash (cost=7.83..7.83 rows=1 width=127) (actual time=0.067..0.067 rows=1 loops=1)\n -> Index Scan using i_ref__val on d_ref (cost=0.00..7.83 rows=1 width=127) (actual time=0.058..0.060 rows=1 loops=1)\n Index Cond: (((ref_path)::text = '/products/player/more_info/moreinfo.html'::text) AND ((ref_host)::text = 'www.real.com'::text) AND ((ref_query)::text = '?ID=370&DC=&LANG=&PN=RealOne%20Player&PV=6.0.11.818&PT=&OS=&CM=&CMV=&LS=&RE=&RA=&RV='::text))\n Total runtime: 200625.636 ms\n(13 rows)\n\nlogs=#\n", "msg_date": "Tue, 18 Mar 2008 18:17:22 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TB-sized databases" } ]
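A minimal, distilled sketch of the OFFSET 0 technique Ron Mayer uses in the plans above, written against an illustrative star schema (the fact/dim1/dim2 tables, their *_id keys and value columns are placeholders, not objects from the thread):

-- Plain form: with no WHERE clause linking dim1 and dim2, the planner does
-- not consider joining the two dimensions first, so the big fact table may
-- be probed on only one dimension id and filtered afterwards.
SELECT f.*
FROM fact f
JOIN dim1 d1 USING (dim1_id)
JOIN dim2 d2 USING (dim2_id)
WHERE d1.value = 'something'
  AND d2.value = 'somethingelse';

-- Forced form: OFFSET 0 keeps the subquery from being flattened, so the two
-- single-row dimension lookups are joined first (a tiny Cartesian product)
-- and the fact table is then hit with both ids at once, e.g. via a BitmapAnd
-- of the two single-column indexes, as in the plans shown above.
SELECT f.*
FROM fact f
JOIN (
    SELECT d1.dim1_id, d2.dim2_id
    FROM dim1 d1, dim2 d2
    WHERE d1.value = 'something'
      AND d2.value = 'somethingelse'
    OFFSET 0
) AS dims USING (dim1_id, dim2_id);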
[ { "msg_contents": "Hola amigos, les escribo por que necesito conocer si PostgreSQL es lo\nsuficientemente robusto para manejar una plataforma transaccional de 2000 tx\nper second. necesito conocer la manera de separar mi operacion transaccional\nde la aquella que es de consulta sabiendo que existe informacion comun para\nambos ambientes.\n\nLes pido su ayuda y un poco de su experticia en el tema.\n\n\nGracias a todos\n\nHola amigos, les escribo por que necesito conocer si PostgreSQL es lo suficientemente robusto para manejar una plataforma transaccional de 2000 tx per second. necesito conocer la manera de separar mi operacion transaccional de la aquella que es de consulta sabiendo que existe informacion comun para ambos ambientes.\nLes pido su ayuda y un poco de su experticia en el tema.Gracias a todos", "msg_date": "Mon, 26 Nov 2007 15:22:46 -0500", "msg_from": "\"Fabio Arias\" <[email protected]>", "msg_from_op": true, "msg_subject": "Base de Datos Transaccional" }, { "msg_contents": "Translaterating ...\n\n\"Also sprach Fabio Arias:\"\n> Hola amigos, les escribo por que necesito conocer si PostgreSQL es lo\n> suficientemente robusto para manejar una plataforma transaccional de 2000 tx\n> per second. necesito conocer la manera de separar mi operacion transaccional\n> de la aquella que es de consulta sabiendo que existe informacion comun para\n> ambos ambientes.\n\nHi peeps. I'm writing because I need to know if PSQL is sufficiently\nsolid nowadaysie to be able to run a transaction-based platform that hums\nalong at 2000 transactions/s [quite whether he means \"run\" (control) or\n\"run as\" (be) I'm not sure, and maybe neither is he ...]. I need to\nfigure how to properly separate/differentiate the transactional\noperations from the queries, given that there's some information common\nto both contexts [and what he means by that I don't know .. I think\nhe's asking how to design stuff so that there's a useful separation of\nconcerns, bearing in mind that most folks will be doing queries, and\nsome folks will be doing updates or other transactions, and ne'er the\ntwain should get mixed up].\n\n> Les pido su ayuda y un poco de su experticia en el tema.\n\nI'm asking to borrow some help and a lttle of your expertise in this\narea [very free transaltion endeth here].\n\n> Gracias a todos\n\nTa in advance.\n\nPTB\n\nPS ... I think he means \"is there anything in particular I should do\nwhen designing a system that has both readers and writers\"?\n\n\n", "msg_date": "Mon, 26 Nov 2007 21:36:18 +0100 (MET)", "msg_from": "\"Peter T. Breuer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Base de Datos Transaccional" }, { "msg_contents": "Si tenes el hardware necesario y planificas el deployment de la base de \ndatos apropiadamente sin dudas puede llegar a manejar esa carga.\n\nSaludos\n\nPablo\n\nFabio Arias wrote:\n> Hola amigos, les escribo por que necesito conocer si PostgreSQL es lo \n> suficientemente robusto para manejar una plataforma transaccional de \n> 2000 tx per second. necesito conocer la manera de separar mi operacion \n> transaccional de la aquella que es de consulta sabiendo que existe \n> informacion comun para ambos ambientes.\n>\n> Les pido su ayuda y un poco de su experticia en el tema.\n>\n>\n> Gracias a todos\n\n", "msg_date": "Mon, 26 Nov 2007 15:53:42 -0500", "msg_from": "Pablo Alcaraz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Base de Datos Transaccional" } ]
[ { "msg_contents": "Is there a source comparing PostgreSQL performance (say, using\npgbench) out of the box for various Linux distributions? Alternately,\nis there an analysis anywhere of the potential gains from building a\ncustom kernel and just what customizations are most relevant to a\nPostgreSQL server?\n\nSome background - in investigating the overhead of adopting OpenVZ\nvirtualization, I ran pgbench tests on PostgreSQL running in a virtual\nenvironment (VE) and compared to PostgreSQL running directly on the\nhardware node (HN) under the current stable OpenVZ kernel with no VE\nrunning. The results were roughly in line with expectations based on\nOpenVZ documentation (5% fewer transactions per second.)\n\nFor completeness, I then ran the same tests with the current stock\nFedora 8 kernel running natively on the same hardware (after all this\nis the true non-virtual alternative.) Surprisingly, this test\nperformed markedly worse than under the OpenVZ kernel (either on HN or\nin VE) even though the latter is from the 2.6.18 series and has added\nbaggage to support OpenVZ's OS virtualization. Multiple pgbench runs\narrive confirm this conclusion.\n\nThe PostgreSQL server version (8.2.5), configuration, hardware,\netc. are identical (actually same HD filesystem image mounted at\n/var/lib/pgsql) for each test. Similarly, other than the kernel, the\nOS is identical - stock Fedora 8 with up to date packages for each\ntest.\n\nI double-checked the kernel architecture via uname:\n\nFedora 8:\nLinux 2.6.23.1-49.fc8 #1 SMP Thu Nov 8 21:41:26 EST 2007 i686 i686 i386\nGNU/Linux\n\nOpenVZ:\nLinux 2.6.18-8.1.15.el5.028stab049.1 #1 SMP Thu Nov 8 16:23:12 MSK 2007\ni686 i686 i386 GNU/Linux\n\nSo, what's different between these tests? I'm seeing performance\ndifferences of between +65% to +90% transactions per second of the\nOpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\nthis reflective of different emphasis between RHEL and Fedora kernel\nbuilds? Some OpenVZ optimization on top of the RHEL5 build? Something\nelse? Where should I look?\n\nany insights much appreciated,\n\nDamon Hart\n\n\n", "msg_date": "Mon, 26 Nov 2007 17:50:08 -0500", "msg_from": "Damon Hart <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL performance on various distribution stock kernels" }, { "msg_contents": "On 11/26/07, Damon Hart <[email protected]> wrote:\n> So, what's different between these tests? I'm seeing performance\n> differences of between +65% to +90% transactions per second of the\n> OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> this reflective of different emphasis between RHEL and Fedora kernel\n> builds? Some OpenVZ optimization on top of the RHEL5 build? Something\n> else? Where should I look?\n\nA recent FreeBSD benchmark (which also tested Linux performance) found\nmajor performance differences between recent versions of the kernel,\npossibly attributable to the new so-called completely fair scheduler:\n\n http://archives.postgresql.org/pgsql-performance/2007-11/msg00132.php\n\nNo idea if it's relevant.\n\nAlexander.\n", "msg_date": "Tue, 27 Nov 2007 00:00:20 +0100", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on various distribution stock kernels" }, { "msg_contents": "On Nov 26, 2007 4:50 PM, Damon Hart <[email protected]> wrote:\n>\n> So, what's different between these tests? 
I'm seeing performance\n> differences of between +65% to +90% transactions per second of the\n> OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> this reflective of different emphasis between RHEL and Fedora kernel\n> builds? Some OpenVZ optimization on top of the RHEL5 build? Something\n> else? Where should I look?\n>\n> any insights much appreciated,\n\nHow many TPS are you seeing on each one? If you are running 10krpm\ndrives and seeing more than 166.66 transactions per second, then your\ndrives are likely lying to you and not actually fsyncing, and it could\nbe that fsync() on IDE / SATA has been implemented in later kernels\nand it isn't lying.\n\nHard to say for sure.\n\nWhat does vmstat 1 have to say on each system when it's under load?\n", "msg_date": "Mon, 26 Nov 2007 17:00:53 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on various distribution stock kernels" }, { "msg_contents": "Damon Hart <[email protected]> writes:\n> So, what's different between these tests? I'm seeing performance\n> differences of between +65% to +90% transactions per second of the\n> OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> this reflective of different emphasis between RHEL and Fedora kernel\n> builds? Some OpenVZ optimization on top of the RHEL5 build? Something\n> else? Where should I look?\n\nConsidering how raw Fedora 8 is, I think what you've probably found is a\nperformance bug that should be reported to the kernel hackers.\n\nJust to confirm: this *is* the same filesystem in both cases, right?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Nov 2007 18:06:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on various distribution stock kernels " }, { "msg_contents": "On Nov 26, 2007 5:00 PM, Alexander Staubo <[email protected]> wrote:\n> On 11/26/07, Damon Hart <[email protected]> wrote:\n> > So, what's different between these tests? I'm seeing performance\n> > differences of between +65% to +90% transactions per second of the\n> > OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> > this reflective of different emphasis between RHEL and Fedora kernel\n> > builds? Some OpenVZ optimization on top of the RHEL5 build? Something\n> > else? Where should I look?\n>\n> A recent FreeBSD benchmark (which also tested Linux performance) found\n> major performance differences between recent versions of the kernel,\n> possibly attributable to the new so-called completely fair scheduler:\n>\n> http://archives.postgresql.org/pgsql-performance/2007-11/msg00132.php\n\nYeah, I wondered about that too, but thought the completely fair\nscheduler was not on by default so didn't mention it. Hmmm. I\nwonder.\n", "msg_date": "Mon, 26 Nov 2007 17:06:39 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on various distribution stock kernels" }, { "msg_contents": "On Mon, 2007-11-26 at 17:00 -0600, Scott Marlowe wrote:\n> On Nov 26, 2007 4:50 PM, Damon Hart <[email protected]> wrote:\n> >\n> > So, what's different between these tests? I'm seeing performance\n> > differences of between +65% to +90% transactions per second of the\n> > OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> > this reflective of different emphasis between RHEL and Fedora kernel\n> > builds? Some OpenVZ optimization on top of the RHEL5 build? 
Something\n> > else? Where should I look?\n> >\n> > any insights much appreciated,\n> \n> How many TPS are you seeing on each one? If you are running 10krpm\n> drives and seeing more than 166.66 transactions per second, then your\n> drives are likely lying to you and not actually fsyncing, and it could\n> be that fsync() on IDE / SATA has been implemented in later kernels\n> and it isn't lying.\n> \n> Hard to say for sure.\n> \n> What does vmstat 1 have to say on each system when it's under load?\n\nI will have to repeat the tests to give you any vmstat info, but perhaps\na little more raw input might be useful.\n\nTest H/W:\n\nDell Precision 650 Dual \nIntel CPU: Dual XEON 2.4GHz 512k Cache \nRAM: 4GB of DDR ECC \nHard Drive: 4 x 36GB 10K 68Pin SCSI Hard Drive \n\npgbench\nscale: 50\nclients: 50\ntransactions per client: 100\n\nstats for 30 runs each kernel in TPS (excluding connections\nestablishing)\n\nOpenVZ (RHEL5 derived 2.6.18 series)\n\naverage: 446\nmaximum: 593\nminimum: 95\nstdev: 151\nmedian: 507\n\nstock Fedora 8 (2.6.23 series)\n\naverage: 270\nmaximum: 526\nminimum: 83\nstdev: 112\nmedian: 268\n\nDoes your 10K RPM drive 166 TPS ceiling apply in this arrangement with\nmultiple disks (the PostgreSQL volume spans three drives, segregated\nfrom the OS) and multiple pgbench clients? I'm fuzzy on whether these\nfactors even enter into that rule of thumb. At least as far as the\nPostgreSQL configuration is concerned, fsync has not been changed from\nthe default.\n\nDamon\n\n\n", "msg_date": "Mon, 26 Nov 2007 18:40:24 -0500", "msg_from": "Damon Hart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance on various distribution stock\n\tkernels" }, { "msg_contents": "On Mon, 2007-11-26 at 18:06 -0500, Tom Lane wrote:\n> Damon Hart <[email protected]> writes:\n> > So, what's different between these tests? I'm seeing performance\n> > differences of between +65% to +90% transactions per second of the\n> > OpenVZ kernel running on the HN over the stock Fedora 8 kernel. Is\n> > this reflective of different emphasis between RHEL and Fedora kernel\n> > builds? Some OpenVZ optimization on top of the RHEL5 build? Something\n> > else? Where should I look?\n> \n> Considering how raw Fedora 8 is, I think what you've probably found is a\n> performance bug that should be reported to the kernel hackers.\n> \n\nNot being a kernel hacker, any suggestions on how to provide more useful\nfeedback than just pgbench TPS comparison and hardware specs? 
What's the\nbest forum, presuming this does boil down to kernel issues.\n\n> Just to confirm: this *is* the same filesystem in both cases, right?\n> \n> \t\t\tregards, tom lane\n\nYes, same filesystem simply booting different kernels.\n\nDamon\n\n\n", "msg_date": "Mon, 26 Nov 2007 18:41:39 -0500", "msg_from": "Damon Hart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL performance on various distribution stock\n\tkernels" }, { "msg_contents": "On Mon, 26 Nov 2007, Damon Hart wrote:\n\n> Fedora 8:\n> Linux 2.6.23.1-49.fc8 #1 SMP Thu Nov 8 21:41:26 EST 2007 i686 i686 i386\n> GNU/Linux\n>\n> OpenVZ:\n> Linux 2.6.18-8.1.15.el5.028stab049.1 #1 SMP Thu Nov 8 16:23:12 MSK 2007\n> i686 i686 i386 GNU/Linux\n\n2.6.23 introduced a whole new scheduler: \nhttp://www.linux-watch.com/news/NS2939816251.html\nso it's rather different from earlier 2.6 releases, and so new that there \ncould easily be performance bugs.\n\n> Does your 10K RPM drive 166 TPS ceiling apply in this arrangement with \n> multiple disks\n\nNumber of disks has nothing to do with it; it depends only on the rate the \ndisk with the WAL volume is spinning at. But that's for a single client.\n\n> pgbench\n> scale: 50\n> clients: 50\n> transactions per client: 100\n\nWith this many clients, you can get far more transactions per second \ncommitted than the max for a single client (166@10K rpm). What you're \nseeing, somewhere around 500 per second, is reasonable.\n\nNote that you're doing two things that make pgbench less useful than it \ncan be:\n\n1) The number of transactions you're committing is trivial, which is one \nreason why your test runs have such a huge variation. Try 10000 \ntransactions/client if you want something that doesn't vary quite so much. \nIf it doesn't run for a couple of minutes, you're not going to get good \nrepeatability.\n\n2) The way pgbench works, it takes a considerable amount of resources to \nsimulate this many clients. You might get higher (and more realistic) \nnumbers if you run the pgbench client on another system than the server.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 26 Nov 2007 19:57:20 -0500 (EST)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL performance on various distribution stock\n kernels" } ]
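Two server-side sanity checks that go with the benchmarking advice in this thread; a sketch only, and the second query assumes the stats collector is enabled and that the pgbench database is literally named pgbench (adjust the name to your setup):

-- A drive that lies about write caching, or fsync turned off, can explain
-- unrealistically high pgbench TPS for a single client:
SHOW fsync;

-- Cross-check pgbench's reported TPS against the server's own commit
-- counter: sample this before and after a run and divide the difference
-- by the elapsed seconds.
SELECT xact_commit FROM pg_stat_database WHERE datname = 'pgbench';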
[ { "msg_contents": "Hello,\n\nI have ran into an interesting problem with 8.1 and i would like anybody\nto explain me if there's a problem with the planner or there's a problem\nwith myself. In both cases a solution is welcome. The following query:\n\nSELECT sum(qty) FROM\n\t_abi_main_pof_r ampr\n\tinner join _abi_main_items ami on ampr.itemid=ami.id\n\tinner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid\n\tinner join _abi_main_contacts amc on ampt.contactid=amc.codest\nWHERE\n\tampt.doctypeid in ('BCA','BSC')\n\tand amc.descr ilike '%SOCO%'\n\tand ampr.type=0;\n\nis explain analyze'd as follows\n\nAggregate (cost=1220.65..1220.66 rows=1 width=4) (actual \ntime=95937.824..95937.824 rows=1 loops=1)\n -> Nested Loop (cost=0.00..1220.65 rows=1 width=4) (actual \ntime=3695.250..95936.292 rows=1503 loops=1)\n -> Nested Loop (cost=0.00..1214.64 rows=1 width=8) (actual \ntime=3695.229..95924.697 rows=1503 loops=1)\n Join Filter: (\"inner\".poftid = \"outer\".poftid)\n -> Nested Loop (cost=0.00..79.16 rows=1 width=4) \n(actual time=0.039..22.547 rows=2437 loops=1)\n Join Filter: (\"inner\".contactid = \"outer\".codest)\n -> Seq Scan on _abi_main_contacts amc \n(cost=0.00..1.29 rows=1 width=4) (actual time=0.029..0.034 rows=1 loops=1)\n Filter: ((descr)::text ~~* '%SOCO%'::text)\n -> Seq Scan on _abi_main_pof_t ampt \n(cost=0.00..77.53 rows=27 width=8) (actual time=0.006..15.820 rows=2702 \nloops=1)\n Filter: (((doctypeid)::text = 'BCA'::text) OR \n((doctypeid)::text = 'BSC'::text))\n -> Seq Scan on _abi_main_pof_r ampr (cost=0.00..1132.81 \nrows=214 width=12) (actual time=0.034..35.986 rows=8271 loops=2437)\n Filter: (\"type\" = 0)\n -> Index Scan using _abi_docks_items_pkey on _abi_main_items \nami (cost=0.00..5.99 rows=1 width=4) (actual time=0.005..0.005 rows=1 \nloops=1503)\n Index Cond: (\"outer\".itemid = ami.id)\nTotal runtime: 95937.950 ms\n\n...The same query, but with a condition change as ampr.type != 1 instead\nof ampr.type=0\n\nSELECT sum(qty) FROM\n\t_abi_main_pof_r ampr\n\tinner join _abi_main_items ami on ampr.itemid=ami.id\n\tinner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid\n\tinner join _abi_main_contacts amc on ampt.contactid=amc.codest\nWHERE\n\tampt.doctypeid in ('BCA','BSC')\n\tand amc.descr ilike '%SOCO%'\n\tand ampr.type != 1;\n\n\nis explain analyze'd as follows:\n\nAggregate (cost=1446.13..1446.14 rows=1 width=4) (actual \ntime=81.609..81.609 rows=1 loops=1)\n -> Nested Loop (cost=77.60..1446.12 rows=2 width=4) (actual \ntime=22.597..80.944 rows=1503 loops=1)\n -> Nested Loop (cost=77.60..1434.12 rows=2 width=8) (actual \ntime=22.577..72.785 rows=1503 loops=1)\n Join Filter: (\"inner\".contactid = \"outer\".codest)\n -> Seq Scan on _abi_main_contacts amc (cost=0.00..1.29 \nrows=1 width=4) (actual time=0.030..0.036 rows=1 loops=1)\n Filter: ((descr)::text ~~* '%SOCO%'::text)\n -> Hash Join (cost=77.60..1427.52 rows=425 width=12) \n(actual time=22.536..69.034 rows=8271 loops=1)\n Hash Cond: (\"outer\".poftid = \"inner\".poftid)\n -> Seq Scan on _abi_main_pof_r ampr \n(cost=0.00..1132.81 rows=42571 width=12) (actual time=0.035..37.045 \nrows=8271 loops=1)\n Filter: (\"type\" <> 1)\n -> Hash (cost=77.53..77.53 rows=27 width=8) \n(actual time=22.471..22.471 rows=2702 loops=1)\n -> Seq Scan on _abi_main_pof_t ampt \n(cost=0.00..77.53 rows=27 width=8) (actual time=0.006..20.482 rows=2702 \nloops=1)\n Filter: (((doctypeid)::text = \n'BCA'::text) OR ((doctypeid)::text = 'BSC'::text))\n -> Index Scan using _abi_docks_items_pkey on _abi_main_items \nami 
(cost=0.00..5.99 rows=1 width=4) (actual time=0.003..0.004 rows=1 \nloops=1503)\n Index Cond: (\"outer\".itemid = ami.id)\nTotal runtime: 81.735 ms\n\n\n\nThe evidence is that with the first condition the query planner seems to\nchoose a 'lower cost' solution which lead to an execution time of 95\nseconds (!!!) but in the second 'higher cost' solution the time is 85\nmilliseconds. The result set in this case is exactly the same. It is\nevident to me that there must be a problem with this.\n\nAnybody ?\n\nTIA\n\nGianluca\n\n\n", "msg_date": "Tue, 27 Nov 2007 08:24:20 +0100", "msg_from": "Gianluca Alberici <[email protected]>", "msg_from_op": true, "msg_subject": "8.1 planner problem ?" }, { "msg_contents": "Gianluca Alberici wrote:\n> I have ran into an interesting problem with 8.1 and i would like anybody\n> to explain me if there's a problem with the planner or there's a problem\n> with myself. In both cases a solution is welcome. The following query:\n> \n> SELECT sum(qty) FROM\n> _abi_main_pof_r ampr\n> inner join _abi_main_items ami on ampr.itemid=ami.id\n> inner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid\n> inner join _abi_main_contacts amc on ampt.contactid=amc.codest\n> WHERE\n> ampt.doctypeid in ('BCA','BSC')\n> and amc.descr ilike '%SOCO%'\n> and ampr.type=0;\n> \n> is explain analyze'd as follows\n> \n> Aggregate (cost=1220.65..1220.66 rows=1 width=4) (actual \n> time=95937.824..95937.824 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..1220.65 rows=1 width=4) (actual \n> time=3695.250..95936.292 rows=1503 loops=1)\n> -> Nested Loop (cost=0.00..1214.64 rows=1 width=8) (actual \n> time=3695.229..95924.697 rows=1503 loops=1)\n> Join Filter: (\"inner\".poftid = \"outer\".poftid)\n> -> Nested Loop (cost=0.00..79.16 rows=1 width=4) (actual \n> time=0.039..22.547 rows=2437 loops=1)\n> Join Filter: (\"inner\".contactid = \"outer\".codest)\n> -> Seq Scan on _abi_main_contacts amc \n> (cost=0.00..1.29 rows=1 width=4) (actual time=0.029..0.034 rows=1 loops=1)\n> Filter: ((descr)::text ~~* '%SOCO%'::text)\n> -> Seq Scan on _abi_main_pof_t ampt \n> (cost=0.00..77.53 rows=27 width=8) (actual time=0.006..15.820 rows=2702 \n> loops=1)\n> Filter: (((doctypeid)::text = 'BCA'::text) OR \n> ((doctypeid)::text = 'BSC'::text))\n> -> Seq Scan on _abi_main_pof_r ampr (cost=0.00..1132.81 \n> rows=214 width=12) (actual time=0.034..35.986 rows=8271 loops=2437)\n> Filter: (\"type\" = 0)\n\nThe problem is in the estimate of that seq scan of _abi_main_pof_r \ntable. The planner estimates that there's 214 rows with type=0, but in \nreality there's 8271. 
That misestimate throws off the rest of the plan.\n\n> ...The same query, but with a condition change as ampr.type != 1 instead\n> of ampr.type=0\n> \n> SELECT sum(qty) FROM\n> _abi_main_pof_r ampr\n> inner join _abi_main_items ami on ampr.itemid=ami.id\n> inner join _abi_main_pof_t ampt on ampr.poftid=ampt.poftid\n> inner join _abi_main_contacts amc on ampt.contactid=amc.codest\n> WHERE\n> ampt.doctypeid in ('BCA','BSC')\n> and amc.descr ilike '%SOCO%'\n> and ampr.type != 1;\n> \n> \n> is explain analyze'd as follows:\n> \n> Aggregate (cost=1446.13..1446.14 rows=1 width=4) (actual \n> time=81.609..81.609 rows=1 loops=1)\n> -> Nested Loop (cost=77.60..1446.12 rows=2 width=4) (actual \n> time=22.597..80.944 rows=1503 loops=1)\n> -> Nested Loop (cost=77.60..1434.12 rows=2 width=8) (actual \n> time=22.577..72.785 rows=1503 loops=1)\n> Join Filter: (\"inner\".contactid = \"outer\".codest)\n> -> Seq Scan on _abi_main_contacts amc (cost=0.00..1.29 \n> rows=1 width=4) (actual time=0.030..0.036 rows=1 loops=1)\n> Filter: ((descr)::text ~~* '%SOCO%'::text)\n> -> Hash Join (cost=77.60..1427.52 rows=425 width=12) \n> (actual time=22.536..69.034 rows=8271 loops=1)\n> Hash Cond: (\"outer\".poftid = \"inner\".poftid)\n> -> Seq Scan on _abi_main_pof_r ampr \n> (cost=0.00..1132.81 rows=42571 width=12) (actual time=0.035..37.045 \n> rows=8271 loops=1)\n> Filter: (\"type\" <> 1)\n\nHere you can see that the planner is estimating that there's 42571 rows \nwith type <> 1, which is also way off the reality in the other \ndirection, but this time the plan it chooses happens to be better.\n\nRun ANALYZE. The planner needs to have statistics on how many rows of \neach type there is to come up with the proper plan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 27 Nov 2007 10:12:38 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.1 planner problem ?" } ]
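A concrete form of the ANALYZE advice above, as a sketch (the statistics target of 100 is just an example value):

-- Refresh the planner statistics for the misestimated table:
ANALYZE _abi_main_pof_r;

-- If the estimate for the "type" column is still far off after a plain
-- ANALYZE, a larger per-column statistics target gives the planner a finer
-- picture of the common values:
ALTER TABLE _abi_main_pof_r ALTER COLUMN "type" SET STATISTICS 100;
ANALYZE _abi_main_pof_r;

-- Then re-check the row estimate that was wrong (214 estimated vs. 8271 actual):
EXPLAIN ANALYZE SELECT count(*) FROM _abi_main_pof_r WHERE "type" = 0;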
[ { "msg_contents": "I have a query that takes about 7000 ms in average to complete the first \ntime it runs. Subsequent runs complete in only 50 ms. That is more than \na factor 100 faster! How can I make the query perform good in the first \nrun too?\n\nQuery and output from both first and second run of Explain Analyze is \npasted here:\n\nhttp://rafb.net/p/yrKyoA17.html\n", "msg_date": "Tue, 27 Nov 2007 17:33:36 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Query only slow on first run" }, { "msg_contents": "On Tue, Nov 27, 2007 at 05:33:36PM +0100, cluster wrote:\n> I have a query that takes about 7000 ms in average to complete the first \n> time it runs. Subsequent runs complete in only 50 ms. That is more than \n> a factor 100 faster! How can I make the query perform good in the first \n> run too?\n\nProbably by buying much faster disk hardware. You'll note that the query\nplans you posted are the same, except for the actual time it took to get the\nresults back. That tells me you have slow storage. On subsequent runs,\nthe data is cached, so it's fast.\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Tue, 27 Nov 2007 11:55:43 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "On Tuesday 27 November 2007 09:33:36 cluster wrote:\n> I have a query that takes about 7000 ms in average to complete the first\n> time it runs. Subsequent runs complete in only 50 ms. That is more than\n> a factor 100 faster! How can I make the query perform good in the first\n> run too?\n>\n> Query and output from both first and second run of Explain Analyze is\n> pasted here:\n>\n> http://rafb.net/p/yrKyoA17.html\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n\nThe query is faster the second run because the data pages were pulled into the \nbuffer pool during the first run. I would suggest using the explain plan from \nthe first run and test your changes on a recently started instance (or at \nleast on an instance where enough activity has transpired to effectively \nrotate the buffer pages).\n\n/Kevin\n", "msg_date": "Tue, 27 Nov 2007 09:57:07 -0700", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "Andrew Sullivan <[email protected]> writes:\n> On Tue, Nov 27, 2007 at 05:33:36PM +0100, cluster wrote:\n>> I have a query that takes about 7000 ms in average to complete the first \n>> time it runs. Subsequent runs complete in only 50 ms. That is more than \n>> a factor 100 faster! 
How can I make the query perform good in the first \n>> run too?\n\n> Probably by buying much faster disk hardware.\n\nOr buy more RAM, so that the data can stay cached.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2007 12:26:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run " }, { "msg_contents": ">> Probably by buying much faster disk hardware.\n> Or buy more RAM, so that the data can stay cached.\n\nSo the only problem here is lack of RAM and/or disk speed?\n", "msg_date": "Tue, 27 Nov 2007 19:21:38 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "In response to cluster <[email protected]>:\n\n> >> Probably by buying much faster disk hardware.\n> > Or buy more RAM, so that the data can stay cached.\n> \n> So the only problem here is lack of RAM and/or disk speed?\n\nNot automatically, but the chances that more RAM and/or faster disks will\nimprove this situation are probably 90% or better.\n\nOther things that could cause this problem are poor schema design, and\nunreasonable expectations.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 27 Nov 2007 13:55:50 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> -----Original Message-----\n> From: cluster\n> \n> >> Probably by buying much faster disk hardware.\n> > Or buy more RAM, so that the data can stay cached.\n> \n> So the only problem here is lack of RAM and/or disk speed?\n\nI don't think you can reach that conclusion yet. Like everybody said the\nreason the query was faster the second time was that the disk pages were\ncached in RAM, and pulling the data out of RAM is way faster than disk. If\nI were you, I would try to optimize the query for when the disk pages aren't\nin RAM. In order to test the query without having anything cached you need\nto clear out Postgres's shared buffers and the OS cache. That can be\ntricky, but it may be as easy as running a big select on another table.\n\nAs for optimizing the query, I noticed that all three joins are done by\nnested loops. I wonder if another join method would be faster. Have you\nanalyzed all the tables? You aren't disabling hash joins or merge joins are\nyou? If you aren't, then as a test I would try disabling nested loops by\ndoing \"set enable_nestloop=false\" and see if the query is any faster for\nyou. If it is faster without nested loops, then you might need to look into\nchanging some settings.\n\nDave\n\n", "msg_date": "Tue, 27 Nov 2007 13:04:28 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> As for optimizing the query, I noticed that all three joins are done by\n> nested loops. I wonder if another join method would be faster. Have you\n> analyzed all the tables?\n\nYes. I did a VACUUM FULL ANALYZE before running the test queries. 
Also I \nhave just performed an ANALYZE just to be sure everything was really \nanalyzed.\n\n> You aren't disabling hash joins or merge joins are\n> you?\n\nNope.\n\n> If you aren't, then as a test I would try disabling nested loops by\n> doing \"set enable_nestloop=false\" and see if the query is any faster for\n> you.\n\nIf I disable the nested loops, the query becomes *much* slower.\n\nA thing that strikes me is the following. As you can see I have the \nconstraint: q.status = 1. Only a small subset of the data set has this \nstatus. I have an index on q.status but for some reason this is not \nused. Instead the constraint are ensured with a \"Filter: (q.status = 1)\" \nin an index scan for the primary key in the \"q\" table. If the small \nsubset having q.status = 1 could be isolated quickly using an index, I \nwould expect the query to perform better. I just don't know why the \nplanner doesn't use the index on q.status.\n", "msg_date": "Tue, 27 Nov 2007 23:51:40 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "On Tue, Nov 27, 2007 at 11:51:40PM +0100, cluster wrote:\n> A thing that strikes me is the following. As you can see I have the \n> constraint: q.status = 1. Only a small subset of the data set has this \n> status. I have an index on q.status but for some reason this is not used. \n> Instead the constraint are ensured with a \"Filter: (q.status = 1)\" in an \n> index scan for the primary key in the \"q\" table. If the small subset having \n> q.status = 1 could be isolated quickly using an index, I would expect the \n> query to perform better. I just don't know why the planner doesn't use the \n> index on q.status.\n\nAn index scan (as opposed to a bitmap index scan) can only use one index at a\ntime, so it will choose the most selective one. Here it quite correctly\nrecognizes that there will only be one matching record for the given\nquestion_id, so it uses the primary key instead.\n\nYou could make an index on (question_id,status) (or a partial index on\nquestion id, with status=1 as the filter), but I'm not sure how much it would\nhelp you unless the questions table is extremely big. It doesn't appear to\nbe; in fact, it appears to be all in RAM, so that's not your bottleneck.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 28 Nov 2007 00:38:33 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> -----Original Message-----\n> From: cluster\n> \n> If I disable the nested loops, the query becomes *much* slower.\n> \n> A thing that strikes me is the following. As you can see I have the\n> constraint: q.status = 1. Only a small subset of the data set \n> has this status. I have an index on q.status but for some \n> reason this is not used. Instead the constraint are ensured \n> with a \"Filter: (q.status = 1)\" \n> in an index scan for the primary key in the \"q\" table. If the \n> small subset having q.status = 1 could be isolated quickly \n> using an index, I would expect the query to perform better. I \n> just don't know why the planner doesn't use the index on q.status.\n> \n\nWhat version of Postgres are you using? 
Do you know what your\njoin_collapse_limit is set to?\n\nYou might be able to force it to scan for questions with a status of 1 first\nto see if it helps by changing the FROM clause to: \n\nFROM posts p, question_tags qt, (SELECT * FROM questions WHERE status = 1\nOFFSET 0) q\n\nDave\n\n\n", "msg_date": "Tue, 27 Nov 2007 17:47:31 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> You could make an index on (question_id,status) (or a partial index on\n> question id, with status=1 as the filter), but I'm not sure how much it would\n> help you unless the questions table is extremely big. It doesn't appear to\n> be; in fact, it appears to be all in RAM, so that's not your bottleneck.\n\nWouldn't help, because the accesses to \"questions\" are not the problem.\nThe query's spending nearly all its time in the scan of \"posts\", and\nI'm wondering why --- doesn't seem like it should take 6400msec to fetch\n646 rows, unless perhaps the data is just horribly misordered relative\nto the index. Which may in fact be the case ... what exactly is that\n\"random_number\" column, and why are you desirous of ordering by it?\nFor that matter, if it is what it sounds like, why is it sane to group\nby it? You'll probably always get groups of one row ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Nov 2007 19:25:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run " }, { "msg_contents": "On Tue, Nov 27, 2007 at 07:25:54PM -0500, Tom Lane wrote:\n>> You could make an index on (question_id,status) (or a partial index on\n>> question id, with status=1 as the filter), but I'm not sure how much it would\n>> help you unless the questions table is extremely big. It doesn't appear to\n>> be; in fact, it appears to be all in RAM, so that's not your bottleneck.\n> Wouldn't help, because the accesses to \"questions\" are not the problem.\n\nYes, that was my point too. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 28 Nov 2007 01:28:18 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> The query's spending nearly all its time in the scan of \"posts\", and\n> I'm wondering why --- doesn't seem like it should take 6400msec to fetch\n> 646 rows, unless perhaps the data is just horribly misordered relative\n> to the index. Which may in fact be the case ...\n\nYes, they probably are. I use the random_number column in order to \nreceive a semi random sample subset from the large amount of rows. The \ntechnique is described in [1]. This subset is later used for some \nstatistical investigation, but this is somewhat irrelevant here. In \norder to receive the sample fast, I have made an index on the \nrandom_number column.\n\n> what exactly is that\n> \"random_number\" column\n\nA random float that is initialized when the row is created and never \nmodified afterwards. The physical row ordering will clearly not match \nthe random_number ordering. However, other queries uses a row ordering \nby the primary key so I don't think it would make much sense to make the \nindex on random_number a clustering index just in order to speed up this \nsingle query.\n\n> and why are you desirous of ordering by it?\n\nIn order to simulate a random pick of K rows. 
See [1].\n\n> For that matter, if it is what it sounds like, why is it sane to group\n> by it? You'll probably always get groups of one row ...\n\nFor each random_number, another table (question_tags) holds zero or more \nrows satisfying a number of constraints. I need to count(*) the number \nof corresponding question_tag rows for each random_number.\n\nWe have primarily two tables of interest here: questions (~100k rows) \nand posts (~400k rows). Each post refers to a question, but only the \n\"posts\" rows for which the corresponding \"question.status = 1\" are \nrelevant. This reduces the number of relevant question rows to about \n10k. Within the post rows corresponding to these 10k questions I would \nlike to pick a random sample of size K.\n\n[1] http://archives.postgresql.org/pgsql-general/2007-10/msg01240.php\n\n", "msg_date": "Wed, 28 Nov 2007 02:08:40 +0100", "msg_from": "tmp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "tmp wrote:\n>> what exactly is that\n>> \"random_number\" column\n> \n> A random float that is initialized when the row is created and never \n> modified afterwards. The physical row ordering will clearly not match \n> the random_number ordering. However, other queries uses a row ordering \n> by the primary key so I don't think it would make much sense to make the \n> index on random_number a clustering index just in order to speed up this \n> single query.\n> \n>> and why are you desirous of ordering by it?\n> \n> In order to simulate a random pick of K rows. See [1].\n\nA trick that I used is to sample the random column once, and create a much smaller table of the first N rows, where N is the sample size you want, and use that.\n\nIf you need a different N samples each time, you can create a temporary table, put your random N rows into that, do an ANALYZE, and then join to this smaller table. The overall performance can be MUCH faster even though you're creating and populating a whole table, than the plan that Postgres comes up with. This seems wrong-headed (why shouldn't Postgres be able to be as efficient on its own?), but it works.\n\nCraig\n\n", "msg_date": "Wed, 28 Nov 2007 06:56:33 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> -----Original Message-----\n> From: tmp\n> We have primarily two tables of interest here: questions \n> (~100k rows) and posts (~400k rows). Each post refers to a \n> question, but only the \"posts\" rows for which the \n> corresponding \"question.status = 1\" are relevant. This \n> reduces the number of relevant question rows to about 10k. \n\nEarlier you said only a small subset of questions have a status of 1, so I\nassumed you meant like 100 not 10k :) According to the explain analyze\nthere are only 646 rows in posts which match your criteria, so it does seem\nlike scanning posts first might be the right thing to do. \n\n", "msg_date": "Wed, 28 Nov 2007 10:00:05 -0600", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "\"Dave Dutcher\" <[email protected]> writes:\n> ... According to the explain analyze\n> there are only 646 rows in posts which match your criteria, so it does seem\n> like scanning posts first might be the right thing to do. \n\nNo, that's not right. 
What the output actually shows is that only 646\nposts rows were needed to produce the first 200 aggregate rows, which was\nenough to satisfy the LIMIT. The planner is evidently doing things this\nway in order to exploit the presence of the LIMIT --- if it had to\ncompute all the aggregate results it would likely have picked a\ndifferent plan.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2007 12:12:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run " }, { "msg_contents": ">> I'm wondering why --- doesn't seem like it should take 6400msec to fetch\n>> 646 rows, unless perhaps the data is just horribly misordered relative\n>> to the index. Which may in fact be the case ...\n\nHmm, actually I still don't understand why it takes 6400 ms to fetch the \nrows. As far as I can see the index used is \"covering\" so that real row \nlookups shouldn't be necessary. Also, only the the random_numbers \ninduces by questions with status = 1 should be considered - and this \npart is a relatively small subset.\n\nIn general, I don't understand why the query is so I/O dependant as it \napparently is.\n", "msg_date": "Wed, 28 Nov 2007 21:16:08 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "On Wed, Nov 28, 2007 at 09:16:08PM +0100, cluster wrote:\n> Hmm, actually I still don't understand why it takes 6400 ms to fetch the \n> rows. As far as I can see the index used is \"covering\" so that real row \n> lookups shouldn't be necessary.\n\nThe indexes don't contain visibility information, so Postgres has to look up\nthe row on disk to verify it isn't dead.\n\n> Also, only the the random_numbers induces by questions with status = 1\n> should be considered - and this part is a relatively small subset.\n\nAgain, you'll need to have a combined index if you want this to help you any.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 28 Nov 2007 21:34:42 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> The indexes don't contain visibility information, so Postgres has to look up\n> the row on disk to verify it isn't dead.\n\nI guess this fact drastically decreases the performance. :-(\nThe number of rows with a random_number will just grow over time while \nthe number of questions with status = 1 will always be somewhat constant \nat about 10.000 or most likely much less.\n\nI could really use any kind of suggestion on how to improve the query in \norder to make it scale better for large data sets The 6-7000 ms for a \nclean run is really a showstopper. Need to get it below 70 ms somehow.\n", "msg_date": "Wed, 28 Nov 2007 22:15:59 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "cluster wrote:\n>> The indexes don't contain visibility information, so Postgres has to \n>> look up the row on disk to verify it isn't dead.\n> \n> I guess this fact drastically decreases the performance. 
:-( The number\n> of rows with a random_number will just grow over time while the number of\n> questions with status = 1 will always be somewhat constant at about\n> 10.000 or most likely much less.\n> \n> I could really use any kind of suggestion on how to improve the query in \n> order to make it scale better for large data sets The 6-7000 ms for a \n> clean run is really a showstopper. Need to get it below 70 ms somehow.\n> \nHere is a suggestion that I have not tried. This might not make sense,\ndepending on how often you do this.\n\nMake two tables whose DDL is almost the same. In one, put all the rows with\nstatus = 1, and in the other put all the rows whose status != 1.\n\nNow all the other queries you run would probably need to join both tables,\nso maybe you make a hash index on the right fields so that would go fast.\n\nNow for the status = 1 queries, you just look at that smaller table. This\nwould obviously be faster.\n\nFor the other queries, you would get stuck with the join. You would have to\nweigh the overall performance issue vs. the performance of this special query.\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 16:55:01 up 2 days, 22:43, 0 users, load average: 4.31, 4.32, 4.20\n", "msg_date": "Wed, 28 Nov 2007 17:03:29 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "cluster <[email protected]> writes:\n> I could really use any kind of suggestion on how to improve the query in \n> order to make it scale better for large data sets The 6-7000 ms for a \n> clean run is really a showstopper. Need to get it below 70 ms somehow.\n\nBuy a faster disk?\n\nYou're essentially asking for a random sample of data that is not\ncurrently in memory. You're not going to get that without some I/O.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2007 17:24:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run " }, { "msg_contents": "On Nov 28, 2007 3:15 PM, cluster <[email protected]> wrote:\n> > The indexes don't contain visibility information, so Postgres has to look up\n> > the row on disk to verify it isn't dead.\n>\n> I guess this fact drastically decreases the performance. :-(\n> The number of rows with a random_number will just grow over time while\n> the number of questions with status = 1 will always be somewhat constant\n> at about 10.000 or most likely much less.\n\nHave you tried a partial index?\n\ncreate index xyz on tablename (random) where status = 1\n\n> I could really use any kind of suggestion on how to improve the query in\n> order to make it scale better for large data sets The 6-7000 ms for a\n> clean run is really a showstopper. Need to get it below 70 ms somehow.\n\nAlso, look into clustering the table on status or random every so often.\n\nMore importantly, you might need to research a faster way to get your\nrandom results\n", "msg_date": "Thu, 29 Nov 2007 00:49:41 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "> You're essentially asking for a random sample of data that is not\n> currently in memory. You're not going to get that without some I/O.\n\nNo, that sounds reasonable enough. 
But do you agree with the statement \nthat my query will just get slower and slower over time as the number of \nposts increases while the part having status = 1 is constant?\n(Therefore, as the relevant fraction becomes smaller over time, the \n\"Filter: status = 1\" operation becomes slower.)\n", "msg_date": "Thu, 29 Nov 2007 12:07:37 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query only slow on first run" }, { "msg_contents": "cluster <[email protected]> writes:\n>> You're essentially asking for a random sample of data that is not\n>> currently in memory. You're not going to get that without some I/O.\n\n> No, that sounds reasonable enough. But do you agree with the statement \n> that my query will just get slower and slower over time as the number of \n> posts increases while the part having status = 1 is constant?\n\nNo, not as long as it sticks to that plan. The time's basically\ndetermined by the number of aggregate rows the LIMIT asks for,\ntimes the average number of \"post\" rows per aggregate group.\nAnd as far as you said the latter number is not going to increase.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Nov 2007 10:32:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query only slow on first run " } ]
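A sketch of two of the concrete suggestions made in this thread. The table and column names (questions.status, posts.question_id, posts.random_number) follow the discussion but are assumptions about the poster's schema, and the sample size of 200 is only an example:

-- Partial index idea: index just the status = 1 subset of questions, so the
-- relevant question ids can be picked up without filtering the full table:
CREATE INDEX questions_status1_idx ON questions (question_id) WHERE status = 1;

-- Temp-table sampling idea: materialize a small random sample once, ANALYZE
-- it, and join the rest of the query against this small table instead of
-- ordering the big posts table by random_number on every run:
CREATE TEMP TABLE sample_posts AS
    SELECT question_id, random_number
    FROM posts
    ORDER BY random_number
    LIMIT 200;
ANALYZE sample_posts;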
[ { "msg_contents": "\nHi all.\n\nI'm wanting to write a new GiST index system, to improve performance on\nsome queries I am running. I have had quite a look through the docs and\ncode, and I'm not convinced that it is possible to do what I want. This is\nwhat I am wanting to index:\n\nCREATE INDEX range_index ON table(a, b) USING fancy_new_index;\n\nand then:\n\nSELECT * FROM table WHERE a > 1 AND b < 4;\n\nand have that answered by the index.\n\nNow, generating an index format that can answer that particular\narrangement of constraints is easy. I can do that. However, getting\nmultiple values into the GiST functions is something I don't know how to\ndo. As far as I can see, I would need to create a composite type and index\nthat, like the contrib package seg does. This would change my SQL to:\n\nCREATE INDEX range_index ON table(fancy_type(a, b)) USING fancy_index;\n\nSELECT * FROM table WHERE fancy_type(a, b) &^£@! fancy_type(1, 4);\n\nwhich I don't want to do.\n\nSo, has this problem been solved before? Is there an already-existing\nindex that will speed up my query? Is there a way to get more than one\nvalue into a GiST index?\n\nThanks,\n\nMatthew\n\n-- \nIf you let your happiness depend upon how somebody else feels about you,\nnow you have to control how somebody else feels about you. -- Abraham Hicks\n", "msg_date": "Tue, 27 Nov 2007 18:28:23 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "GiST indexing tuples" }, { "msg_contents": "On Tue, Nov 27, 2007 at 06:28:23PM +0000, Matthew wrote:\n> SELECT * FROM table WHERE a > 1 AND b < 4;\n\nThis sounds like something an R-tree can do.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 27 Nov 2007 19:38:55 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexing tuples" }, { "msg_contents": "On Tue, 27 Nov 2007, Steinar H. Gunderson wrote:\n> On Tue, Nov 27, 2007 at 06:28:23PM +0000, Matthew wrote:\n> > SELECT * FROM table WHERE a > 1 AND b < 4;\n>\n> This sounds like something an R-tree can do.\n\nI *know* that. However, Postgres (as far as I can see) doesn't provide a\nsimple R-tree index that will index two integers. This means I have to\nwrite one myself. I'm asking whether it is possible to get two values into\na GiST index, which would allow me to implement this.\n\nMatthew\n\n-- \nIt is better to keep your mouth closed and let people think you are a fool\nthan to open it and remove all doubt. -- Mark Twain\n", "msg_date": "Wed, 28 Nov 2007 13:08:04 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexing tuples" }, { "msg_contents": "\"Matthew\" <[email protected]> writes:\n\n> On Tue, 27 Nov 2007, Steinar H. Gunderson wrote:\n>> On Tue, Nov 27, 2007 at 06:28:23PM +0000, Matthew wrote:\n>> > SELECT * FROM table WHERE a > 1 AND b < 4;\n>>\n>> This sounds like something an R-tree can do.\n>\n> I *know* that. However, Postgres (as far as I can see) doesn't provide a\n> simple R-tree index that will index two integers. This means I have to\n> write one myself. 
I'm asking whether it is possible to get two values into\n> a GiST index, which would allow me to implement this.\n\nThe database is capable of determining that a>1 and b<4 are both conditions\nwhich a single index can satisfy.\n\nHowever GIST itself treats each column of the index independently applying the\nfirst column then the second one and so on like a traditional btree index, so\nit doesn't really do what you would want.\n\nI did propose a while back that GIST should consider all columns\nsimultaneously in the same style as rtree. \n\nHowever this would require making GIST somewhat less general in another sense.\nCurrently page splits can be handled arbitrarily but if you have to be able to\ncombine different datatypes it would mean having to come up with a standard\nalgorithm which would work everywhere. (I suggested making everything work in\nterms of \"distance\" and then using the n-space vector distance (ie\nsqrt((a1-b1)^2+(a2-b2)^2+...).) This means GIST wouldn't be as general as\nit is now but it would allow us to handle cases like yours automatically.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Wed, 28 Nov 2007 13:36:48 +0000", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexing tuples" }, { "msg_contents": "Matthew <[email protected]> writes:\n>> This sounds like something an R-tree can do.\n\n> I *know* that. However, Postgres (as far as I can see) doesn't provide a\n> simple R-tree index that will index two integers. This means I have to\n> write one myself. I'm asking whether it is possible to get two values into\n> a GiST index, which would allow me to implement this.\n\nHave you looked at contrib/seg/ ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Nov 2007 10:34:52 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexing tuples " }, { "msg_contents": "On Wed, 28 Nov 2007, Tom Lane wrote:\n> Have you looked at contrib/seg/ ?\n\nYes, I had a pretty good look at that. However, I believe that in order to\nuse seg's indexes, I would need to put my data into seg's data type, and\nreformat my query, as I stated in my original message. What I'm looking\nfor is a general R-tree (or similar) index that will index multiple\ncolumns of normal data types.\n\nFor instance, the normal B-tree index on (a, b) is able to answer queries\nlike \"a = 5 AND b > 1\" or \"a > 5\". An R-tree would be able to index these,\nplus queries like \"a > 5 AND b < 1\".\n\nAs far as I can see, it is not possible at the moment to write such an\nindex system for GiST, which is a shame because the actual R-tree\nalgorithm is very simple. It's just a matter of communicating both values\nfrom the query to the index code.\n\nMatthew\n\n-- \nI have an inferiority complex. But it's not a very good one.\n", "msg_date": "Wed, 28 Nov 2007 16:08:51 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexing tuples " }, { "msg_contents": "Matthew wrote:\n> For instance, the normal B-tree index on (a, b) is able to answer queries\n> like \"a = 5 AND b > 1\" or \"a > 5\". An R-tree would be able to index these,\n> plus queries like \"a > 5 AND b < 1\".\n\nSorry in advance if this is a stupid question, but how is this better \nthan two index, one on \"a\" and one on \"b\"? 
I supposed there could be a \nspace savings but beyond that?\n\n", "msg_date": "Thu, 29 Nov 2007 15:23:10 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexing tuples" }, { "msg_contents": "On Thu, Nov 29, 2007 at 03:23:10PM -0500, Matthew T. O'Connor wrote:\n> Sorry in advance if this is a stupid question, but how is this better than \n> two index, one on \"a\" and one on \"b\"? I supposed there could be a space \n> savings but beyond that?\n\nYou could index on both columns simultaneously without a bitmap index scan.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 29 Nov 2007 21:30:24 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST indexing tuples" }, { "msg_contents": "On Thu, 29 Nov 2007, Matthew T. O'Connor wrote:\n> Matthew wrote:\n> > For instance, the normal B-tree index on (a, b) is able to answer queries\n> > like \"a = 5 AND b > 1\" or \"a > 5\". An R-tree would be able to index these,\n> > plus queries like \"a > 5 AND b < 1\".\n>\n> Sorry in advance if this is a stupid question, but how is this better\n> than two index, one on \"a\" and one on \"b\"? I supposed there could be a\n> space savings but beyond that?\n\nImagine you have a table with columns \"a\" and \"b\". The table has a\nbazillion rows, and the values of \"a\" and \"b\" both range from a negative\nbazillion to a positive bazillion. (Note this is exactly the case in our\ndatabase, for some value of a bazillion.) Then, you run the query:\n\nSELECT * FROM table WHERE a > 5 AND b < 1;\n\nSo, an index on \"a\" will return half a bazillion results for the\nconstraint \"a > 5\". Likewise, the index on \"b\" will return half a\nbazillion results for the constraint \"b < 1\". However, the intersection of\nthese two constraints could be just a few rows. (Note this is exactly the\ncase in our database.)\n\nNow, Postgres has two options. It could use just the one index and filter\nhalf a bazillion rows (which is what it used to do), or it could create a\nbitmap with a bazillion bits from each index, and do a logical AND\noperation on them to create a new bitmap with just a few bits set (which\nit now can do under some circumstances). Either way, it's going to be a\nheavy operation.\n\nAn R-tree index on \"a, b\" would instantly return just those few rows,\nwithout using significant amounts of memory or time.\n\nHope that helps,\n\nMatthew\n\n-- \nPatron: \"I am looking for a globe of the earth.\"\nLibrarian: \"We have a table-top model over here.\"\nPatron: \"No, that's not good enough. Don't you have a life-size?\"\nLibrarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Fri, 30 Nov 2007 13:45:41 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST indexing tuples" } ]
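For completeness, a workaround that was available on the 8.2-era servers discussed here is to index a derived geometric value with the stock GiST box opclass, at the cost of exactly the query rewrite Matthew wanted to avoid. Everything below (table tbl, integer columns a and b, the ±1000000000 stand-ins for "a bazillion") is illustrative only:

    CREATE INDEX tbl_ab_box_idx
        ON tbl USING gist (box(point(a, b), point(a, b)));

    -- "a > 5 AND b < 1" phrased as box containment (<@ is the 8.2+ spelling
    -- of the older @ operator). Containment is inclusive, hence the 6 / 0
    -- corners for the strict integer comparisons:
    SELECT *
      FROM tbl
     WHERE box(point(a, b), point(a, b))
           <@ box(point(6, -1000000000), point(1000000000, 0));

The expression in the WHERE clause has to match the indexed expression exactly for the index to be considered.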
[ { "msg_contents": "Hi All,\n\nWe are having a table whose data we need to bucketize and show. This is\na continuously growing table (archival is a way to trim it to size).\nWe are facing 2 issues here:\n\n1. When the records in the table are in the range of 10K, it works fine\nfor some time after starting postgres server. But as time passes, the\nentire machine becomes slower and slower - to the extent that we need to\ngo for a restart. Though taskmgr does not show any process consuming\nextra-ordinary amount of CPU / Memory. After a restart of postgres\nserver, things come back to normal. What may be going wrong here?\n\n2. When the records cross 200K, the queries (even \"select count(*) from\n_TABLE_\") start taking minutes, and sometimes does not return back at\nall. We were previously using MySql and at least this query used to work\nOK there. [Our queries are of the form \"select sum(col1), sum(col2),\ncount(col3) ... where .... group by ... \" ]. Any suggestions ... \n\nBelow is the tuning parameter changes thet we did with the help from\ninternet:\n\nWe are starting postgres with the options [-o \"-B 4096\"], later we added\n\na \"-S 1024\" as well - without any visible improvement.\nMachine has 1GB RAM.\n\nshadkam\n", "msg_date": "Wed, 28 Nov 2007 15:00:55 +0530", "msg_from": "\"Shadkam Islam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Windows XP selects are very slow" }, { "msg_contents": "Shadkam Islam wrote:\n> Hi All,\n> \n> We are having a table whose data we need to bucketize and show. This is\n> a continuously growing table (archival is a way to trim it to size).\n> We are facing 2 issues here:\n> \n> 1. When the records in the table are in the range of 10K, it works fine\n> for some time after starting postgres server. But as time passes, the\n> entire machine becomes slower and slower - to the extent that we need to\n> go for a restart. Though taskmgr does not show any process consuming\n> extra-ordinary amount of CPU / Memory. After a restart of postgres\n> server, things come back to normal. What may be going wrong here?\n\nDo you have any connections sat \"idle in transaction\"?\nAre you happy that vacuuming is happening?\nAre you happy that the configuration values are sensible for your hardware?\n\n> 2. When the records cross 200K, the queries (even \"select count(*) from\n> _TABLE_\") start taking minutes, and sometimes does not return back at\n> all. We were previously using MySql and at least this query used to work\n> OK there. [Our queries are of the form \"select sum(col1), sum(col2),\n> count(col3) ... where .... group by ... \" ]. Any suggestions ... \n\nWell, \"SELECT count(*) FROM TABLE\" *is* slow in PG, because it needs to \ncheck visibility of each row and hence scan the table. Shouldn't be \nminutes though, not unless you've turned vacuuming off. A table of \n200,000 rows isn't particularly large.\n\nCan you give an example of a particular query that's too slow and the \nEXPLAIN ANALYSE to go with it? Oh, and the schema and sizes for the \ntables involved if possible.\n\n> Below is the tuning parameter changes thet we did with the help from\n> internet:\n\nJust \"the internet\" in general, or any particular pages?\n\n> We are starting postgres with the options [-o \"-B 4096\"], later we added\n> \n> a \"-S 1024\" as well - without any visible improvement.\n> Machine has 1GB RAM.\n\nWhy on earth are you fiddling with PG's command-line options? 
You can \nset all of this stuff in the postgresql.conf file, and I recommend you \ndo so.\n\nSo that's 8k*4096 or 32MB of shared buffers and 1MB of sort memory. If \nyour queries are doing lots of sorting and sum()ing then that's probably \nnot enough.\n\nYou might want to try issuing \"SET work_mem=...\" for various values \nbefore each query and see if there's a good value for your workload.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 28 Nov 2007 12:28:21 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows XP selects are very slow" }, { "msg_contents": "PG generally comes with very basic default settings, one *start* maybe \nthis page for you\n\nhttp://www.webservices.uiuc.edu/postgresql/\n\nThen obviously you will need to work though your query plans and iterate.\n\nShadkam Islam wrote:\n> Hi All,\n>\n> We are having a table whose data we need to bucketize and show. This is\n> a continuously growing table (archival is a way to trim it to size).\n> We are facing 2 issues here:\n>\n> 1. When the records in the table are in the range of 10K, it works fine\n> for some time after starting postgres server. But as time passes, the\n> entire machine becomes slower and slower - to the extent that we need to\n> go for a restart. Though taskmgr does not show any process consuming\n> extra-ordinary amount of CPU / Memory. After a restart of postgres\n> server, things come back to normal. What may be going wrong here?\n>\n> 2. When the records cross 200K, the queries (even \"select count(*) from\n> _TABLE_\") start taking minutes, and sometimes does not return back at\n> all. We were previously using MySql and at least this query used to work\n> OK there. [Our queries are of the form \"select sum(col1), sum(col2),\n> count(col3) ... where .... group by ... \" ]. Any suggestions ... \n>\n> Below is the tuning parameter changes thet we did with the help from\n> internet:\n>\n> We are starting postgres with the options [-o \"-B 4096\"], later we added\n>\n> a \"-S 1024\" as well - without any visible improvement.\n> Machine has 1GB RAM.\n>\n> shadkam\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n", "msg_date": "Wed, 28 Nov 2007 21:15:09 +0500", "msg_from": "\"Usama Munir Dar\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Windows XP selects are very slow" } ]
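A sketch of Richard's advice in runnable form. The query shape and column names just echo the placeholders already used in the thread, and the values are starting points for a 1 GB machine rather than recommendations:

    -- try a larger per-sort budget for one session first
    SET work_mem = '16MB';   -- pre-8.2 servers take plain kB integers: SET work_mem = 16384;
    SELECT sum(col1), sum(col2), count(col3)
      FROM some_table
     WHERE col4 = 42
     GROUP BY col5;

If that helps, the equivalent belongs in postgresql.conf rather than in the -o "-B 4096" / -S startup flags, e.g. shared_buffers = 4096 (the 32 MB the -B flag was already giving) and work_mem = 16MB, followed by a restart of the Windows service.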
[ { "msg_contents": "Hi folks,\n\nAn apparent optimizer regression between 8.2.1 & 8.2.3 ? : \n\n\tselect pk,... from tbl where tsv @@ to_tsquery(...) order by pk limit 10 \n\ndisadvantageously uses PK index scan against a 2.5 million row (vacuum analysed) table whenever limit<=16 , leading to an increase in query time from sub 100ms to 4 seconds typically.\n\nWith an identical freshly vacuum analysed table, 8.2.1 does the same only when limit <= 3\n\nAlthough it's not a difference in principle, the latter behaviour is more problematic as it is much more likely to be encountered in practice as part of a results paging scheme (with OFFSET N) \n\nChanging the ORDER BY clause to pk ||'' seems to get around the problem without any substantial execution overhead.\n\nAnyone aware of any alternate workaround or info on likely behaviour in 8.3 ?\n\n\nBrendan\n\n* ** *** ** * ** *** ** * ** *** ** *\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed.\nAny views or opinions presented are solely those of the author, and do not necessarily\nrepresent those of ESB.\nIf you have received this email in error please notify the sender.\n\nAlthough ESB scans e-mail and attachments for viruses, it does not guarantee\nthat either are virus-free and accepts no liability for any damage sustained\nas a result of viruses.\n\nCompany Registration Information: http://www.esb.ie/companies\n* ** *** ** * ** *** ** * ** *** ** *", "msg_date": "Wed, 28 Nov 2007 15:30:22 -0000", "msg_from": "\"Brendan McMahon\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer regression 8.2.1 -> 8.2.3 on TSEARCH2 queries with ORDER BY\n\tand LIMIT" } ]
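Spelled out, the workaround above looks like the following (tbl, pk and tsv are the poster's placeholders; the tsquery contents are invented). One caveat: pk || '' makes the sort key text, so with a numeric primary key the paging order changes to lexicographic (1, 10, 11, 2, ...); ordering by pk + 0 hides the column from the planner in the same way while keeping numeric order:

    SELECT pk
      FROM tbl
     WHERE tsv @@ to_tsquery('some & words')
     ORDER BY pk + 0        -- or pk || '' as in the original workaround
     LIMIT 10 OFFSET 100;

Either expression stops the planner from treating the primary-key index as a cheap way to produce the first few ordered rows, which is the plan that was misfiring here.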
[ { "msg_contents": "I have a legacy system still on 7.4 (I know, I know...the upgrade is\ncoming soon).\n\nI have a fairly big spike happening once a day, every day, at the same\ntime. It happens during a checkpoint, no surprise there. I know the\nsolution to the problem (upgrade to a modern version), but what I'm\nlooking for as an explanation as to why one particular checkpoint would\nbe so bad on a low volume system, so I can appease certain management\nconcerns. \n\nThis is a _really _low volume system, less than 500 writes/hour. Normal\noperation sees checkpoint related spikes of around 200-300 milliseconds.\nWe always checkpoint at the checkpoint timeout (every 5 minutes).\nDuring this one checkpoint, I'm seeing transactions running 2-3 seconds.\nDuring this time, writes are < 5/minute.\n\nRelevant settings:\nshared_buffers = 10000\n\ncheckpoint_segments = 30\ncheckpoint_timeout = 300\n\nWhat gives?\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Thu, 29 Nov 2007 10:10:54 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "7.4 Checkpoint Question" }, { "msg_contents": "On Thu, Nov 29, 2007 at 10:10:54AM -0500, Brad Nicholson wrote:\n> This is a _really _low volume system, less than 500 writes/hour. Normal\n> operation sees checkpoint related spikes of around 200-300 milliseconds.\n> We always checkpoint at the checkpoint timeout (every 5 minutes).\n> During this one checkpoint, I'm seeing transactions running 2-3 seconds.\n> During this time, writes are < 5/minute.\n\n> What gives?\n\npg_dump? Remember that it has special locks approximately equivalent\n(actually eq? I forget) with SERIALIZABLE mode, which makes things rather\ndifferent.\n\nA\n\n-- \nAndrew Sullivan\nOld sigs will return after re-constitution of blue smoke\n", "msg_date": "Thu, 29 Nov 2007 11:02:19 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 Checkpoint Question" }, { "msg_contents": "On Thu, 2007-11-29 at 10:10 -0500, Brad Nicholson wrote:\n> I have a legacy system still on 7.4 (I know, I know...the upgrade is\n> coming soon).\n> \n> I have a fairly big spike happening once a day, every day, at the same\n> time. It happens during a checkpoint, no surprise there. I know the\n> solution to the problem (upgrade to a modern version), but what I'm\n> looking for as an explanation as to why one particular checkpoint would\n> be so bad on a low volume system, so I can appease certain management\n> concerns. \n> \n> This is a _really _low volume system, less than 500 writes/hour. Normal\n> operation sees checkpoint related spikes of around 200-300 milliseconds.\n> We always checkpoint at the checkpoint timeout (every 5 minutes).\n> During this one checkpoint, I'm seeing transactions running 2-3 seconds.\n> During this time, writes are < 5/minute.\n> \n> Relevant settings:\n> shared_buffers = 10000\n> \n> checkpoint_segments = 30\n> checkpoint_timeout = 300\n> \n> What gives?\n\nIf the timing is regular, its most likely a human-initiated action\nrather then a behavioural characteristic.\n\nVACUUM runs in background at that time, updates loads of blocks which\nneed to be written out at checkpoint time. 
That slows queries down at\nthat time but not others.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 29 Nov 2007 16:14:21 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 7.4 Checkpoint Question" }, { "msg_contents": "\nOn Thu, 2007-11-29 at 16:14 +0000, Simon Riggs wrote:\n> On Thu, 2007-11-29 at 10:10 -0500, Brad Nicholson wrote:\n> > I have a legacy system still on 7.4 (I know, I know...the upgrade is\n> > coming soon).\n> > \n> > I have a fairly big spike happening once a day, every day, at the same\n> > time. It happens during a checkpoint, no surprise there. I know the\n> > solution to the problem (upgrade to a modern version), but what I'm\n> > looking for as an explanation as to why one particular checkpoint would\n> > be so bad on a low volume system, so I can appease certain management\n> > concerns. \n> > \n> > This is a _really _low volume system, less than 500 writes/hour. Normal\n> > operation sees checkpoint related spikes of around 200-300 milliseconds.\n> > We always checkpoint at the checkpoint timeout (every 5 minutes).\n> > During this one checkpoint, I'm seeing transactions running 2-3 seconds.\n> > During this time, writes are < 5/minute.\n> > \n> > Relevant settings:\n> > shared_buffers = 10000\n> > \n> > checkpoint_segments = 30\n> > checkpoint_timeout = 300\n> > \n> > What gives?\n> \n> If the timing is regular, its most likely a human-initiated action\n> rather then a behavioural characteristic.\n> \n> VACUUM runs in background at that time, updates loads of blocks which\n> need to be written out at checkpoint time. That slows queries down at\n> that time but not others.\n\nBingo. Big vacuum daily vacuum completes shortly before this chckpoint.\n\nThanks. \n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n\n", "msg_date": "Thu, 29 Nov 2007 11:18:39 -0500", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 7.4 Checkpoint Question" } ]
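Since 7.4 has no way to spread a checkpoint out, one low-tech mitigation, assuming the nightly vacuum is a script under your control, is to absorb the write burst inside the maintenance window itself instead of at whichever five-minute checkpoint happens to follow it. A sketch of the job's SQL:

    VACUUM ANALYZE;   -- the existing nightly vacuum (run per database)
    CHECKPOINT;       -- flush the pages VACUUM just dirtied, at a quiet time,
                      -- so the next timed checkpoint is back to the usual 200-300 ms

CHECKPOINT requires superuser rights, which a maintenance job normally has anyway.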
[ { "msg_contents": "Does anyone have any white papers or basic guides for a large RAM \nserver?\n\nWe are consolidating two databases to enable better data-mining that \ncurrently run on a 4 GB and 2 GB machine. The data issues on the 4 \nGB machine are numerous, things like \"create index\" fail and update \nqueries fail from out of memory issues. Re-factoring the data is \nhelping, but isn't finishing the job.\n\nThe new machine will have 48 GB of RAM, so figuring out starting \npoints for the Shared Buffers and Work_mem/Maintenance_work_mem is \ngoing to be a crap shoot, since the defaults still seem to be based \nupon 256MB of RAM or less.\n\nUsage:\n Most of the time, the database is being hit with a handle of \npoorly written and unoptimized queries from a Ruby on Rails app that \nis being refactored as a simple Ruby-DBI app since we need to support \nour legacy code but don't need the Rails environment, just a lot of \nSQL writes. Some stored procedures should streamline this. However, \neach transaction will do about 5 or 6 writes.\n Common Usage: we have a reporting tool that is also being \nrefactored, but does a lot of aggregate queries. None of these take \nmore than 500 ms after indexing on the 2 GB database, so assuming \nthat more RAM should help and eliminate the problems.\n Problem Usage: we have a 20GB table with 120m rows that we are \nsplitting into some sub-tables. Generally, we do large data pulls \nfrom here, 1 million - 4 million records at a time, stored in a new \ntable for export. These queries are problematic because we are \nunable to index the database for the queries that we run because we \nget out of memory errors. Most of my cleanup has restored to FOR-IN \nloops via pl-pgsql to manage the data one row at a time. This is \nproblematic because many of these scripts are taking 4-5 days to run.\n Other usage: we will import between 10k and 10m rows at one time \nout of CSVs into the big database table. I got my gig here because \nthis was all failing and the data was becoming worthless. These \nimports involve a lot of writes.\n\n Our simultaneous queries are small, and currently run \nacceptably. It's the big imports, data-mining pulls, and system \nmanipulation were we routinely wait days on the query that we are \nlooking to speed up.\n\nThanks,\nAlex\n", "msg_date": "Thu, 29 Nov 2007 13:28:35 -0500", "msg_from": "Alex Hochberger <[email protected]>", "msg_from_op": true, "msg_subject": "Configuring a Large RAM PostgreSQL Server" }, { "msg_contents": "Alex Hochberger wrote:\n> Does anyone have any white papers or basic guides for a large RAM server?\n> \n> We are consolidating two databases to enable better data-mining that \n> currently run on a 4 GB and 2 GB machine. The data issues on the 4 GB \n> machine are numerous, things like \"create index\" fail and update queries \n> fail from out of memory issues.\n\n> Problem Usage: we have a 20GB table with 120m rows that we are \n> splitting into some sub-tables. Generally, we do large data pulls from \n> here, 1 million - 4 million records at a time, stored in a new table for \n> export. 
These queries are problematic because we are unable to index \n> the database for the queries that we run because we get out of memory \n> errors.\n\nWould it not make sense to find out why you are getting these errors first?\n\nIt's not normal to get \"out of memory\" when rebuilding an index.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 29 Nov 2007 19:15:15 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring a Large RAM PostgreSQL Server" }, { "msg_contents": "It's not on rebuilding the index, it's on CREATE INDEX.\n\nI attribute it to wrong setting, Ubuntu bizarre-ness, and general \nproblems.\n\nWe need new hardware, the servers are running on aging \ninfrastructure, and we decided to get a new system that will last us \nthe next 3-4 years all at once.\n\nBut many large queries are getting Out of Memory errors.\n\nAlex\n\nOn Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:\n\n> Alex Hochberger wrote:\n>> Does anyone have any white papers or basic guides for a large RAM \n>> server?\n>> We are consolidating two databases to enable better data-mining \n>> that currently run on a 4 GB and 2 GB machine. The data issues on \n>> the 4 GB machine are numerous, things like \"create index\" fail and \n>> update queries fail from out of memory issues.\n>\n>> Problem Usage: we have a 20GB table with 120m rows that we are \n>> splitting into some sub-tables. Generally, we do large data pulls \n>> from here, 1 million - 4 million records at a time, stored in a \n>> new table for export. These queries are problematic because we \n>> are unable to index the database for the queries that we run \n>> because we get out of memory errors.\n>\n> Would it not make sense to find out why you are getting these \n> errors first?\n>\n> It's not normal to get \"out of memory\" when rebuilding an index.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n", "msg_date": "Thu, 29 Nov 2007 14:21:10 -0500", "msg_from": "Alex Hochberger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuring a Large RAM PostgreSQL Server" }, { "msg_contents": "> On Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:\n\n>> Alex Hochberger wrote:\n>>> Problem Usage: we have a 20GB table with 120m rows that we are \n>>> splitting into some sub-tables. Generally, we do large data pulls from \n>>> here, 1 million - 4 million records at a time, stored in a new table for \n>>> export. These queries are problematic because we are unable to index the \n>>> database for the queries that we run because we get out of memory errors.\n>>\n>> Would it not make sense to find out why you are getting these errors \n>> first?\n\nAlex Hochberger wrote:\n> It's not on rebuilding the index, it's on CREATE INDEX.\n>\n> I attribute it to wrong setting, Ubuntu bizarre-ness, and general problems.\n\nPlease do not top-post. 
I reformatted your message for clarity.\n\nRichard is still correct: it is not normal to get out-of-memory errors\nduring index building, regardless of age of servers and Linux distro.\nPerhaps you just have a maintenance_work_mem setting that's too large\nfor your server.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n", "msg_date": "Thu, 29 Nov 2007 16:29:19 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring a Large RAM PostgreSQL Server" }, { "msg_contents": "Alex,\n\n> The new machine will have 48 GB of RAM, so figuring out starting  \n> points for the Shared Buffers and Work_mem/Maintenance_work_mem is  \n> going to be a crap shoot, since the defaults still seem to be based  \n> upon 256MB of RAM or less.\n\nWhy a crap shoot?\n\nSet shared_buffers to 12GB. Set work_mem to 20GB / # of concurrent active \nconnections (check your logs). Set Maint_mem to 2GB (I don't think we can \nactually use more). Then tune from there.\n\nAlso, use 8.2 or later, and you'd better compile 64-bit.\n\n-- \nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Thu, 29 Nov 2007 21:50:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring a Large RAM PostgreSQL Server" } ]
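On the failing CREATE INDEX specifically, a bounded per-session setting is usually a safer first step than a huge global value. A sketch (the table and index names are invented, and the '1GB' unit syntax assumes 8.2 or later, which is recommended above anyway):

    SET maintenance_work_mem = '1GB';   -- older releases take kB integers, e.g. 1048576
    CREATE INDEX big_table_batch_idx ON big_table (batch_id);
    RESET maintenance_work_mem;

If the build still runs out of memory at a modest setting, the problem is likely somewhere else (a 32-bit build, ulimits, or kernel overcommit settings) rather than in postgresql.conf.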
[ { "msg_contents": "How can I clear the pg_stats views without restarting PostgreSQL? I\nthought there was a function.\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu", "msg_date": "Thu, 29 Nov 2007 15:46:42 -0600", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "clear pg_stats" }, { "msg_contents": "Campbell, Lance wrote:\n> How can I clear the pg_stats views without restarting PostgreSQL? I\n> thought there was a function.\n\nSELECT pg_stat_reset();\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 29 Nov 2007 21:54:08 +0000", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear pg_stats" }, { "msg_contents": "Campbell, Lance wrote:\n> How can I clear the pg_stats views without restarting PostgreSQL? I\n> thought there was a function.\n\npg_stat_reset()\n\n//Magnus\n", "msg_date": "Thu, 29 Nov 2007 22:57:25 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: clear pg_stats" } ]
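A small usage sketch to go with the answers above: pg_stat_reset() clears the statistics-collector counters (the pg_stat_* views) for the current database only, and is unrelated to the planner's pg_stats view, which is maintained by ANALYZE:

    SELECT pg_stat_reset();

    -- the collector counters now accumulate from this point on
    SELECT relname, seq_scan, idx_scan, n_tup_ins, n_tup_upd
      FROM pg_stat_user_tables
     ORDER BY seq_scan DESC
     LIMIT 10;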
[ { "msg_contents": "Can anyone explain the following odd behavior?\nI have a query that completes in about 90 ms. If I append LIMIT to the \nvery end, eg. \"LIMIT 500\" the evaluation time increases to about 800 ms.\nHow can performance get *worse* by giving the database the option to \nstop the evaluation earlier (when it reaches the output 500 rows)?\n\nI have pasted both queries together with output from explain analyze here:\n http://pastebin.com/m3c0d1896\n", "msg_date": "Fri, 30 Nov 2007 12:16:08 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Appending \"LIMIT\" to query drastically decreases performance" }, { "msg_contents": "On Fri, 30 Nov 2007, cluster wrote:\n> Can anyone explain the following odd behavior?\n> I have a query that completes in about 90 ms. If I append LIMIT to the\n> very end, eg. \"LIMIT 500\" the evaluation time increases to about 800 ms.\n> How can performance get *worse* by giving the database the option to\n> stop the evaluation earlier (when it reaches the output 500 rows)?\n\nThe planner doesn't always get it right. Simple.\n\nHave you done a \"VACUUM FULL ANALYSE\" recently?\n\nMatthew\n\n-- \nIt is better to keep your mouth closed and let people think you are a fool\nthan to open it and remove all doubt. -- Mark Twain\n", "msg_date": "Fri, 30 Nov 2007 13:30:28 +0000 (GMT)", "msg_from": "Matthew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Appending \"LIMIT\" to query drastically decreases\n performance" }, { "msg_contents": "In response to Matthew <[email protected]>:\n\n> On Fri, 30 Nov 2007, cluster wrote:\n> > Can anyone explain the following odd behavior?\n> > I have a query that completes in about 90 ms. If I append LIMIT to the\n> > very end, eg. \"LIMIT 500\" the evaluation time increases to about 800 ms.\n> > How can performance get *worse* by giving the database the option to\n> > stop the evaluation earlier (when it reaches the output 500 rows)?\n> \n> The planner doesn't always get it right. Simple.\n> \n> Have you done a \"VACUUM FULL ANALYSE\" recently?\n\nI don't know about the \"FULL\" part ... but certainly an ANALYZE.\n\nPlease post EXPLAIN ANALYZE output for the two queries.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. 
The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Fri, 30 Nov 2007 09:20:33 -0500", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Appending \"LIMIT\" to query drastically decreases\n performance" }, { "msg_contents": "> Please post EXPLAIN ANALYZE output for the two queries.\n\nAs I wrote in my first post, I pasted this together with the two queries \nat pastebin.com:\n http://pastebin.com/m3c0d1896\n", "msg_date": "Fri, 30 Nov 2007 17:01:11 +0100", "msg_from": "cluster <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Appending \"LIMIT\" to query drastically decreases performance" } ]
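Since the EXPLAIN ANALYZE output only survives at the pastebin link, here is a generic diagnostic sketch with an invented table: compare the two plans, then check whether the plan the LIMIT switches to really is cheaper by disabling it for a single transaction; if the forced plan wins, raising the relevant column's statistics target is the usual next step:

    EXPLAIN ANALYZE SELECT * FROM results ORDER BY created_at DESC;
    EXPLAIN ANALYZE SELECT * FROM results ORDER BY created_at DESC LIMIT 500;

    BEGIN;
    SET LOCAL enable_indexscan = off;   -- or whichever node type the LIMIT plan picked
    EXPLAIN ANALYZE SELECT * FROM results ORDER BY created_at DESC LIMIT 500;
    ROLLBACK;

    -- if the forced plan is faster, give the planner better estimates:
    ALTER TABLE results ALTER COLUMN created_at SET STATISTICS 200;
    ANALYZE results;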