[ { "msg_contents": "Hi. We are running Postgres 8.3.7 on an eight-processor Linux system.\nBecause the machine has very little local disk, the database files are on\na file system running GPFS.\n\nThe machine is mostly dedicated to processing images. After the images\nare processed, the image attributes and processing parameters are\nwritten to the database. This is repeated nightly.\n\nGenerally, unless image processing is taking place, queries are pretty\nfast - ms to seconds. But deletes are very slow from the largest tables,\nwhich currently have 12 M rows: on the order of four minutes for 60 rows.\nWe don't have to do a lot of deletions, but do need to be able to do some\nfrom time to time, and generally several thousand at a time.\n\nWe also don't have many users - generally no more than one to five\nconnections at a time.\n\nWhile waiting for some deletions, I went to search.postgresql.org and\ntyped \"slow delete\" in the search field.\n\nPer what I read I tried \"explain analyze delete...\":\n> subtest=> explain analyze delete from table1 where id > 11592550;\n> QUERY\n> PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using table1_pkey on table1 (cost=0.00..2565.25 rows=1136\n> width=6) (actual time=77.819..107.476 rows=50 loops=1)\n> Index Cond: (id > 11592550)\n> Trigger for constraint table2_table1_id_fkey: time=198484.158 calls=50\n> Total runtime: 198591.894 ms\n> (4 rows)\nwhich immediately showed me that I had forgotten about the foreign key in\nanother table that references the primary key in the table where\nI am trying to do the deletions: table2.table1_id -> table1.id.\n\nThe posts I read and the time above suggest that I should create an\nindex on\nthe foreign key constraint field in table2 , so I am waiting for that\nindex to be\ncreated.\n\nMy questions are:\n(1) is my interpretation of the posts correct, i.e., if I am deleting\nrows from\ntable1, where the pkey of table 1 is a fkey in table 2, then do I need\nto create an\nindex on the fkey field in table 2?\n(2) do you have any suggestions on how I can determine why it is taking\nseveral hours to create an index on a field in a table with 12 M rows? does\nthat seem like a reasonable amount of time? I have maintenance_work_mem\nset to 512MB - is that too low, or is that the wrong config parameter to\nchange?\n[ps aux shows \"CREATE INDEX waiting\"; there is nothing (no image processing)\nrunning on the machine at this time]\n(3) would I be better off dropping the foreign keys? in general, is it\nworkable to\nhave foreign keys on tables with > 100 M rows (assuming I create all of\nthe 'right'\nindexes)?\n\nThank you,\nJanet\n\n\n\n\n\n\n", "msg_date": "Thu, 25 Jun 2009 19:33:11 -0700", "msg_from": "Janet Jacobsen <[email protected]>", "msg_from_op": true, "msg_subject": "slow DELETE on 12 M row table" }, { "msg_contents": "On Fri, Jun 26, 2009 at 3:33 AM, Janet Jacobsen<[email protected]> wrote:\n> (1) is my interpretation of the posts correct, i.e., if I am deleting\n> rows from\n> table1, where the pkey of table 1 is a fkey in table 2, then do I need\n> to create an\n> index on the fkey field in table 2?\n\nExactly right. 
The index on the table2 is optional but deletes and\nupdates on table1 will be very slow without it as it has to do a full\ntable scan of table2 to ensure no references remain.\n\n> (2) do you have any suggestions on how I can determine why it is taking\n> several hours to create an index on a field in a table with 12 M rows?  does\n> that seem like a reasonable amount of time?  I have maintenance_work_mem\n> set to 512MB - is that too low, or is that the wrong config parameter to\n> change?\n> [ps aux shows \"CREATE INDEX waiting\"; there is nothing (no image processing)\n> running on the machine at this time]\n\n512MB is a perfectly reasonable maintenance_work_mem. Larger than that\nis overkill.\n\n\"waiting\" means it's blocked trying to acquire a lock. Some open\ntransaction has the table you're trying to index locked. Look in\npg_locks and pg_stat_activity to find out who.\n\n\n> (3) would I be better off dropping the foreign keys?  in general, is it\n> workable to\n> have foreign keys on tables with > 100 M rows (assuming I create all of\n> the 'right'\n> indexes)?\n\nIf you have the right indexes then the size of the table shouldn't be\na large factor. The number of transactions per second being processed\nare perhaps more of a factor but even on very busy systems, most of\nthe time foreign key constraints aren't a problem to keep.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 26 Jun 2009 04:17:49 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "Greg Stark wrote:\n> \"waiting\" means it's blocked trying to acquire a lock. Some open\n> transaction has the table you're trying to index locked. Look in\n> pg_locks and pg_stat_activity to find out who.\n\nOr you might find CREATE INDEX CONCURRENTLY fits your situation.\n\nhttp://www.postgresql.org/docs/8.3/static/sql-createindex.html#SQL-CREATEINDEX-CONCURRENTLY\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 26 Jun 2009 06:47:41 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "Thank you for the answers. Very helpful.\n\nBetween the time that I sent my original post and saw your reply,\nI tried to drop a couple of foreign key constraints. The alter\ntable statements also showed up as \"waiting\" when I ran ps aux. \n\nI took your suggestion to run pg_locks and pg_stat_activity.\npg_stat_activity showed that I had three statements that were\nwaiting, and that there was one user whose query was given\nas \"<insufficient privilege>\". I killed the process associated\nwith that user, and my three waiting statements executed\nimmediately. \n\nI assume that killing the user's process released the lock on the\ntable. This user has only SELECT privileges. Under what\nconditions would a SELECT lock a table. The user connects\nto the database via a (Python?) script that runs on another\nmachine. Would this way of connecting to the database result\nin a lock?\n\nThanks,\nJanet\n\n\nOn 25/06/2009 08:17 p.m., Greg Stark wrote:\n> On Fri, Jun 26, 2009 at 3:33 AM, Janet Jacobsen<[email protected]> wrote:\n> \n>> (1) is my interpretation of the posts correct, i.e., if I am deleting\n>> rows from\n>> table1, where the pkey of table 1 is a fkey in table 2, then do I need\n>> to create an\n>> index on the fkey field in table 2?\n>> \n>\n> Exactly right. 
The index on the table2 is optional but deletes and\n> updates on table1 will be very slow without it as it has to do a full\n> table scan of table2 to ensure no references remain.\n>\n> \n>> (2) do you have any suggestions on how I can determine why it is taking\n>> several hours to create an index on a field in a table with 12 M rows? does\n>> that seem like a reasonable amount of time? I have maintenance_work_mem\n>> set to 512MB - is that too low, or is that the wrong config parameter to\n>> change?\n>> [ps aux shows \"CREATE INDEX waiting\"; there is nothing (no image processing)\n>> running on the machine at this time]\n>> \n>\n> 512MB is a perfectly reasonable maintenance_work_mem. Larger than that\n> is overkill.\n>\n> \"waiting\" means it's blocked trying to acquire a lock. Some open\n> transaction has the table you're trying to index locked. Look in\n> pg_locks and pg_stat_activity to find out who.\n>\n>\n> \n>> (3) would I be better off dropping the foreign keys? in general, is it\n>> workable to\n>> have foreign keys on tables with > 100 M rows (assuming I create all of\n>> the 'right'\n>> indexes)?\n>> \n>\n> If you have the right indexes then the size of the table shouldn't be\n> a large factor. The number of transactions per second being processed\n> are perhaps more of a factor but even on very busy systems, most of\n> the time foreign key constraints aren't a problem to keep.\n>\n> \n\n\n\n\n\n\n\nThank you for the answers.  Very helpful.\n\nBetween the time that I sent my original post and saw your reply,\nI tried to drop a couple of foreign key constraints.  The alter\ntable statements also showed up as \"waiting\" when I ran ps aux.  \n\nI took your suggestion to run pg_locks and pg_stat_activity.\npg_stat_activity showed that I had three statements that were\nwaiting, and that there was one user whose query was given \nas \"<insufficient privilege>\".  I killed the process associated\nwith that user, and my three waiting statements executed\nimmediately.  \n\nI assume that killing the user's process released the lock on the\ntable.  This user has only SELECT privileges.  Under what\nconditions would a SELECT lock a table.  The user connects\nto the database via a (Python?) script that runs on another\nmachine.  Would this way of connecting to the database result\nin a lock?\n\nThanks,\nJanet\n\n\nOn 25/06/2009 08:17 p.m., Greg Stark wrote:\n\nOn Fri, Jun 26, 2009 at 3:33 AM, Janet Jacobsen<[email protected]> wrote:\n \n\n(1) is my interpretation of the posts correct, i.e., if I am deleting\nrows from\ntable1, where the pkey of table 1 is a fkey in table 2, then do I need\nto create an\nindex on the fkey field in table 2?\n \n\n\nExactly right. The index on the table2 is optional but deletes and\nupdates on table1 will be very slow without it as it has to do a full\ntable scan of table2 to ensure no references remain.\n\n \n\n(2) do you have any suggestions on how I can determine why it is taking\nseveral hours to create an index on a field in a table with 12 M rows?  does\nthat seem like a reasonable amount of time?  I have maintenance_work_mem\nset to 512MB - is that too low, or is that the wrong config parameter to\nchange?\n[ps aux shows \"CREATE INDEX waiting\"; there is nothing (no image processing)\nrunning on the machine at this time]\n \n\n\n512MB is a perfectly reasonable maintenance_work_mem. Larger than that\nis overkill.\n\n\"waiting\" means it's blocked trying to acquire a lock. Some open\ntransaction has the table you're trying to index locked. 
Look in\npg_locks and pg_stat_activity to find out who.\n\n\n \n\n(3) would I be better off dropping the foreign keys?  in general, is it\nworkable to\nhave foreign keys on tables with > 100 M rows (assuming I create all of\nthe 'right'\nindexes)?\n \n\n\nIf you have the right indexes then the size of the table shouldn't be\na large factor. The number of transactions per second being processed\nare perhaps more of a factor but even on very busy systems, most of\nthe time foreign key constraints aren't a problem to keep.", "msg_date": "Fri, 26 Jun 2009 00:34:07 -0700", "msg_from": "Janet Jacobsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "On Fri, Jun 26, 2009 at 9:34 AM, Janet Jacobsen<[email protected]> wrote:\n\n> I assume that killing the user's process released the lock on the\n> table.  This user has only SELECT privileges.  Under what\n> conditions would a SELECT lock a table.  The user connects\n> to the database via a (Python?) script that runs on another\n> machine.  Would this way of connecting to the database result\n> in a lock?\n\nWas this process 'idle in transaction' perhaps? Does this Python\nscript use any ORM, like SQLAlchemy? If not, which library does it use\nto connect? If it's psycopg2, which isolation level (autocommit, read\ncommitted, serializable) is set?\n\nRegards,\nMarcin\n", "msg_date": "Fri, 26 Jun 2009 12:40:40 +0200", "msg_from": "=?UTF-8?Q?Marcin_St=C4=99pnicki?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "Hi. The user in question is using psycopg2, which he uses\npsycopg2:\n> import psycopg2\n> conn = psycopg2.connect(\"dbname=%s user=%s host=%s password=%s port=%s\" ...)\n> pg_cursor = conn.cursor()\n> pg_cursor.execute(<select string>)\n> rows = pg_cursor.fetchall()\nNote that\n(1) he said that he does not set an isolation level, and\n(2) he does not close the database connection after the\nfetchall - instead he has a Python sleep command, so\nhe is checking the database every 60 s to see whether\nnew entries have been added to a given table. (His\ncode is part of the analysis pipeline - we process the\nimage data and load it into the database, and other\ngroups fetch the data from the database and do some\nanalyses.)\n\nYes, it is the case that the user's process shows up in\nps aux as \"idle in transaction\".\n\nWhat would you recommend in this case? Should the\nuser set the isolation_level for psycopg, and if so to what?\n\nIs there any Postgres configuration parameter that I\nshould set?\n\nShould the user close the database connection after\nevery fetchall?\n\nThank you for any help you can give.\n\nJanet\n\n\nMarcin Stępnicki wrote\n> On Fri, Jun 26, 2009 at 9:34 AM, Janet Jacobsen<[email protected]> wrote:\n>\n> \n>> I assume that killing the user's process released the lock on the\n>> table. This user has only SELECT privileges. Under what\n>> conditions would a SELECT lock a table. The user connects\n>> to the database via a (Python?) script that runs on another\n>> machine. Would this way of connecting to the database result\n>> in a lock?\n>> \n>\n> Was this process 'idle in transaction' perhaps? Does this Python\n> script use any ORM, like SQLAlchemy? If not, which library does it use\n> to connect? 
If it's psycopg2, which isolation level (autocommit, read\n> committed, serializable) is set?\n>\n> Regards,\n> Marcin\n>\n> \n\n", "msg_date": "Fri, 26 Jun 2009 18:16:57 -0700", "msg_from": "Janet Jacobsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "2009/6/26 Janet Jacobsen <[email protected]>:\n> Hi.  The user in question is using psycopg2, which he uses\n> psycopg2:\n>> import psycopg2\n>> conn = psycopg2.connect(\"dbname=%s  user=%s host=%s password=%s port=%s\" ...)\n>> pg_cursor = conn.cursor()\n>> pg_cursor.execute(<select string>)\n>> rows = pg_cursor.fetchall()\n> Note that\n> (1) he said that he does not set an isolation level, and\n> (2) he does not close the database connection after the\n> fetchall - instead he has a Python sleep command, so\n> he is checking the database every 60 s to see whether\n> new entries have been added to a given table.  (His\n> code is part of the analysis pipeline - we process the\n> image data and load it into the database, and other\n> groups fetch the data from the database and do some\n> analyses.)\n>\n> Yes, it is the case that the user's process shows up in\n> ps aux as \"idle in transaction\".\n>\n> What would you recommend in this case?  Should the\n> user set the isolation_level for psycopg, and if so to what?\n>\n> Is there any Postgres configuration parameter that I\n> should set?\n>\n> Should the user close the database connection after\n> every fetchall?\n\nYou need to COMMIT or ROLLBACK the in-process transaction and then not\nstart a new transaction until you're ready to execute the next query.\nPossibly calling .commit() after executing your query might be all you\nneed to do, but never having used psycopg2 I couldn't say. You might\ntry asking on the psycopg mailing list.\n\n...Robert\n", "msg_date": "Fri, 26 Jun 2009 21:36:42 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "2009/6/27 Scott Carey <[email protected]>:\n> In addition to the above, note that long lived transactions cause all sorts\n> of other problems in the database.  In particular, table and index bloat can\n> become severe due to this sort of poor client behavior if there is a lot of\n> update or delete activity.  You can find out with \"vacuum analyze verbose\"\n> on tables of interest whether there are a high ratio of dead tuples in the\n> tables and indexes.\n\nYes indeed... by the by, I understand Alvaro Herrera has improved\nthis situation considerably for the forthcoming 8.4 release.\n\n...Robert\n", "msg_date": "Sat, 27 Jun 2009 22:09:36 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "Hi. I posted a question about a very slow DELETE on a table\nwith 12 M rows last week, and I wanted to (1) thank everyone\nwho provided a reply since each clue helped to find the solution,\nand (2) give the solution.\n\nThe slow DELETE was due to another user having a lock on\nthe table - which several people on this list pointed out must\nbe the case. Since the user was only running SELECT on\nthe table (no inserts, deletes, or updates), it wasn't obvious at\nfirst whether or how his process was locking the table.\n\nRobert suggested the need for a commit or rollback, as well as\nposting to the psycopg list. 
Pasted below is the response that\nI got from Federico Di Gregorio.\n\nThe user added a conn.rollback() to his script, and that solved\nthe problem. Now it is possible to delete rows, create indexes,\netc. without having to kill the user's process.\n\nMany thanks,\nJanet\n\n\nRobert Haas wrote:\n> 2009/6/26 Janet Jacobsen <[email protected]>:\n> \n>> Hi. The user in question is using psycopg2, which he uses\n>> psycopg2:\n>> \n>>> import psycopg2\n>>> conn = psycopg2.connect(\"dbname=%s user=%s host=%s password=%s port=%s\" ...)\n>>> pg_cursor = conn.cursor()\n>>> pg_cursor.execute(<select string>)\n>>> rows = pg_cursor.fetchall()\n>>> \n>> Note that\n>> (1) he said that he does not set an isolation level, and\n>> (2) he does not close the database connection after the\n>> fetchall - instead he has a Python sleep command, so\n>> he is checking the database every 60 s to see whether\n>> new entries have been added to a given table. (His\n>> code is part of the analysis pipeline - we process the\n>> image data and load it into the database, and other\n>> groups fetch the data from the database and do some\n>> analyses.)\n>>\n>> Yes, it is the case that the user's process shows up in\n>> ps aux as \"idle in transaction\".\n>>\n>> What would you recommend in this case? Should the\n>> user set the isolation_level for psycopg, and if so to what?\n>>\n>> Is there any Postgres configuration parameter that I\n>> should set?\n>>\n>> Should the user close the database connection after\n>> every fetchall?\n>> \n>\n> You need to COMMIT or ROLLBACK the in-process transaction and then not\n> start a new transaction until you're ready to execute the next query.\n> Possibly calling .commit() after executing your query might be all you\n> need to do, but never having used psycopg2 I couldn't say. You might\n> try asking on the psycopg mailing list.\n>\n> ...Robert\n> \n\n> Il giorno lun, 29/06/2009 alle 12.26 -0700, Janet Jacobsen ha scritto:\n> [snip]\n> \n>> > The user told me that he does not close the database connection\n>> > after the fetchall - instead he has a Python sleep command, so that \n>> > he is checking the database every 60 s to see whether new entries\n>> > have been added to a given table\n>> > His code is part of an analysis pipeline, whereas the part of the\n>> > database that I work on is loading processed data into the\n>> > database.\n>> > Is there something missing from his code sample, like a commit or\n>> > a set_isolation_level, that if added would prevent the \"idle in\n>> > transaction\" from happening? \n>> \n>\n> The user is wrong and you're right, the \"idle in transaction\" can be\n> avoided by both a commit() (or rollback()) before going to sleep or by\n> setting the transaction mode to \"autocommit\":\n>\n> conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)\n>\n> Hope this helps,\n> federico\n>\n> -- Federico Di Gregorio http://people.initd.org/fog Debian GNU/Linux\n> Developer [email protected] INIT.D Developer [email protected] Sei una\n> bergogna. Vergonga. Vergogna. -- Valentina\n", "msg_date": "Thu, 02 Jul 2009 15:48:23 -0700", "msg_from": "Janet Jacobsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow DELETE on 12 M row table" }, { "msg_contents": "On Fri, Jun 26, 2009 at 1:34 AM, Janet Jacobsen<[email protected]> wrote:\n> Thank you for the answers.  Very helpful.\n>\n> Between the time that I sent my original post and saw your reply,\n> I tried to drop a couple of foreign key constraints.  
The alter\n> table statements also showed up as \"waiting\" when I ran ps aux.\n>\n> I took your suggestion to run pg_locks and pg_stat_activity.\n> pg_stat_activity showed that I had three statements that were\n> waiting, and that there was one user whose query was given\n> as \"<insufficient privilege>\".  I killed the process associated\n> with that user, and my three waiting statements executed\n> immediately.\n\nFYI, that means you, the user, don't have sufficient privileges to\nview their query.\n", "msg_date": "Thu, 2 Jul 2009 18:21:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow DELETE on 12 M row table" } ]
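A minimal SQL sketch of the two fixes that come out of the thread above, using the example names given in it (table1.id referenced by table2.table1_id). The index name and the exact catalog query are illustrative rather than quoted from the thread, and the catalog column names are the 8.3-era ones (procpid/current_query); newer releases call them pid/query.

  -- 1. Index the referencing column so the FK trigger fired by DELETE/UPDATE
  --    on table1 can probe table2 instead of seq-scanning it for each row.
  CREATE INDEX CONCURRENTLY table2_table1_id_idx ON table2 (table1_id);

  -- 2. Find who is holding locks on table2 (e.g. a client left sitting
  --    "idle in transaction" after a SELECT).
  SELECT l.pid, a.usename, a.waiting, l.mode, l.granted, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
   WHERE l.relation = 'table2'::regclass;

As the thread shows, the stuck DDL only proceeded once the idle-in-transaction session ended; having the client COMMIT or ROLLBACK promptly (or run in autocommit mode) avoids the blockage in the first place.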
[ { "msg_contents": "\nI have a partitioned table with a multi-column unique index.  The table is partitioned on a timestamp with time zone column.  (I realize this has nothing to do with the unique index.)  The original unique index was in the order (timestamptz, varchar, text, text) and most queries against it were slow.  I changed the index order to (varchar, text, timestamptz, text) and queries now fly, but loading data (via copy from stdin) in the table is 2-4 times slower.  The unique index is required during the load.  \n\nThe original index is in the same order as the table's columns (2,3,4,5), while the changed index is in column order (3,5,2,4).  I've tested this several times and the effect is repeatable.  It does not seem the column order in the table matters to the insert/index performance, just the column order in the index.\n\nWhy would changing the column order on a unique index cause data loading or index servicing to slow down? Page splits in the b-tree, maybe?\n\nThanks in advance for any advice.\n\n\n\n \n", "msg_date": "Fri, 26 Jun 2009 10:25:29 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Insert performance and multi-column index order" }, { "msg_contents": "On Fri, 26 Jun 2009, [email protected] wrote:\n\n> The original unique index was in the order (timestamptz, varchar, text, \n> text) and most queries against it were slow.� I changed the index order \n> to (varchar, text, timestamptz, text) and queries now fly, but loading \n> data (via copy from stdin) in the table is 2-4 times slower.\n\nIs the input data closer to being sorted by the timestamptz field than the \nvarchar field? What you might be seeing is that the working set of index \npages needed to keep building the varchar index are bigger or have more of \na random access component to them as they spill in and out of the buffer \ncache. Usually you can get a better idea what the difference is by \ncomparing the output from vmstat while the two are loading. 
More random \nread/write requests in the mix will increase the waiting for I/O \npercentage while not increasing the total amount read/written per second.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 27 Jun 2009 01:08:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance and multi-column index order" } ]
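One way to test Greg Smith's hypothesis above (incoming rows roughly timestamp-ordered, so the timestamp-leading unique index gets cheap append-style inserts while the varchar-leading one scatters them across the whole tree) is to look at pg_stats.correlation after a load. The thread never names the table or its columns, so the names below are invented purely for illustration.

  -- Hypothetical partition standing in for the thread's table:
  --   events_2009_06 (ts timestamptz, code varchar, a text, b text)

  -- The two orderings being compared in the thread:
  CREATE UNIQUE INDEX events_ts_first   ON events_2009_06 (ts, code, a, b);   -- loads fast, queries slowly
  CREATE UNIQUE INDEX events_code_first ON events_2009_06 (code, a, ts, b);   -- queries fast, loads slowly

  -- After COPY and ANALYZE: correlation near +/-1 means the column's values
  -- arrive in roughly insert order, so a b-tree led by that column mostly
  -- appends near one edge; values near 0 mean inserts land all over the
  -- index, churning far more pages through the buffer cache.
  SELECT attname, correlation
    FROM pg_stats
   WHERE schemaname = 'public' AND tablename = 'events_2009_06';

Watching vmstat or iostat during the two loads, as suggested above, should then show the low-correlation leading column producing more random I/O and a higher I/O-wait percentage for the same amount of data written.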
[ { "msg_contents": "Hello, all.\n\nI'm finding that write performance of a certain stored procedure is\nabysmal. I need to be able to sustain approximately 20 calls to this\nprocedure per second, but am finding that, on the average, each call\ntakes 2 seconds in itself, in addition to pegging a single processor\nat 100% for the duration of the call. Additionally, while the stored\nprocedure calls are being made a single worker does a full-table scan\nonce every half-hours.\n\nBeing a software developer more than a DBA I hope those on this list\nwill be kind enough to help me troubleshoot and correct this issue. I\ndo not know what information would be exactly pertinent, but I have\nincluded table definitions, configurations and the function in\nquestion below. I am using PostgreSQL 8.3 on a Linux Intel Core Duo\nsystem with 2GB of RAM and am running Postgres on XFS. Here are the\nrelevant settings of my postgresql.conf:\n\n max_connections = 25\n shared_buffers = 512MB\n max_fsm_pages = 153600\n fsync = off\n synchronous_commit = off\n wal_writer_delay = 10000ms\n commit_delay = 100000\n commit_siblings = 100\n checkpoint_segments = 64\n checkpoint_completion_target = 0.9\n effective_cache_size = 1024MB\n track_activities = on\n track_counts = on\n update_process_title = on\n autovacuum = on\n log_autovacuum_min_duration = 1000\n autovacuum_vacuum_threshold = 50\n autovacuum_analyze_threshold = 50\n\nHere is the relevant table definition:\n\n DROP TABLE IF EXISTS amazon_items CASCADE;\n CREATE TABLE amazon_items (\n asin char(10) PRIMARY KEY,\n locale varchar(10) NOT NULL DEFAULT 'US',\n currency_code char(3) DEFAULT 'USD',\n isbn char(13),\n sales_rank integer,\n offers text,\n offer_pages integer DEFAULT 10,\n offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n UNIQUE (asin, locale)\n );\n\nThe stored procedure in question, plus supporting procedures:\n\n CREATE OR REPLACE FUNCTION item_data_insert(\n iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\niweight INTEGER,\n ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n RETURNS VOID AS\n $$\n DECLARE\n y integer[];\n BEGIN\n y[1] := iwidth;\n y[2] := ilength;\n y[3] := iheight;\n y[4] := iweight;\n BEGIN\n INSERT INTO item_details\n (isbn, title, author, binding, list_price, dimensions)\n VALUES\n (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n EXCEPTION WHEN unique_violation THEN\n UPDATE item_details SET\n title = ititle,\n author = iauthor,\n binding = ibinding,\n list_price = ilist_price,\n dimensions = y\n WHERE isbn = iisbn;\n END;\n BEGIN\n INSERT INTO amazon_items\n (asin, sales_rank, offers, offer_pages, isbn)\n VALUES\n (iasin, isales_rank, crunch(ioffers), ioffer_pages, iisbn);\n EXCEPTION WHEN unique_violation THEN\n IF isales_rank IS NOT NULL THEN\n UPDATE amazon_items SET\n sales_rank = isales_rank\n WHERE asin = iasin;\n END IF;\n IF ioffers IS NOT NULL THEN\n UPDATE amazon_items SET\n offers = crunch(ioffers),\n offers_last_updated = CURRENT_TIMESTAMP,\n offer_pages = ioffer_pages\n WHERE asin = iasin;\n END IF;\n END;\n END;\n $$\n LANGUAGE plpgsql;\n\n CREATE OR REPLACE FUNCTION crunch(text)\n RETURNS text AS\n $$\n BEGIN\n RETURN encode(text2bytea($1), 'base64');\n END;\n $$\n LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n\n CREATE OR REPLACE FUNCTION text2bytea(text)\n RETURNS bytea AS\n $$\n BEGIN\n RETURN $1;\n END;\n $$\n LANGUAGE 'plpgsql' IMMUTABLE 
STRICT;\n\nThanks,\nBrian\n", "msg_date": "Fri, 26 Jun 2009 12:30:40 -0700", "msg_from": "Brian Troutwine <[email protected]>", "msg_from_op": true, "msg_subject": "Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Friday 26 June 2009, Brian Troutwine <[email protected]> wrote:\n> CREATE TABLE amazon_items (\n> asin char(10) PRIMARY KEY,\n> locale varchar(10) NOT NULL DEFAULT 'US',\n> currency_code char(3) DEFAULT 'USD',\n> isbn char(13),\n> sales_rank integer,\n> offers text,\n> offer_pages integer DEFAULT 10,\n> offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n> UNIQUE (asin, locale)\n> );\n>\n\nIndexes are good things. Try them. Particularly on the isbn field.\n\n-- \nOvershoot = http://www.theoildrum.com/files/evoltuion_timeline.JPG\n", "msg_date": "Fri, 26 Jun 2009 12:40:32 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Fri, Jun 26, 2009 at 3:30 PM, Brian\nTroutwine<[email protected]> wrote:\n> Hello, all.\n>\n> I'm finding that write performance of a certain stored procedure is\n> abysmal. I need to be able to sustain approximately 20 calls to this\n> procedure per second, but am finding that, on the average, each call\n> takes 2 seconds in itself, in addition to pegging a single processor\n> at 100% for the duration of the call. Additionally, while the stored\n> procedure calls are being made a single worker does a full-table scan\n> once every half-hours.\n>\n> Being a software developer more than a DBA I hope those on this list\n> will be kind enough to help me troubleshoot and correct this issue. I\n> do not know what information would be exactly pertinent, but I have\n> included table definitions, configurations and the function in\n> question below. I am using PostgreSQL 8.3 on a Linux Intel Core Duo\n> system with 2GB of RAM and am running Postgres on XFS. 
Here are the\n> relevant settings of my postgresql.conf:\n>\n>  max_connections       = 25\n>  shared_buffers = 512MB\n>  max_fsm_pages = 153600\n>  fsync =       off\n>  synchronous_commit = off\n>  wal_writer_delay = 10000ms\n>  commit_delay = 100000\n>  commit_siblings = 100\n>  checkpoint_segments = 64\n>  checkpoint_completion_target = 0.9\n>  effective_cache_size = 1024MB\n>  track_activities = on\n>  track_counts = on\n>  update_process_title = on\n>  autovacuum = on\n>  log_autovacuum_min_duration = 1000\n>  autovacuum_vacuum_threshold = 50\n>  autovacuum_analyze_threshold = 50\n>\n> Here is the relevant table definition:\n>\n>  DROP TABLE IF EXISTS amazon_items CASCADE;\n>  CREATE TABLE amazon_items (\n>        asin         char(10) PRIMARY KEY,\n>        locale       varchar(10) NOT NULL DEFAULT 'US',\n>        currency_code char(3) DEFAULT 'USD',\n>        isbn         char(13),\n>        sales_rank   integer,\n>        offers       text,\n>        offer_pages  integer DEFAULT 10,\n>        offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>        UNIQUE (asin, locale)\n>  );\n>\n> The stored procedure in question, plus supporting procedures:\n>\n>  CREATE OR REPLACE FUNCTION item_data_insert(\n>        iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n>        iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\n> iweight INTEGER,\n>        ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n>        ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n>  RETURNS VOID AS\n>  $$\n>  DECLARE\n>         y  integer[];\n>  BEGIN\n>         y[1] := iwidth;\n>         y[2] := ilength;\n>         y[3] := iheight;\n>         y[4] := iweight;\n>  BEGIN\n>         INSERT INTO item_details\n>                 (isbn, title, author, binding, list_price, dimensions)\n>                 VALUES\n>                 (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n>         EXCEPTION WHEN unique_violation THEN\n>                 UPDATE item_details SET\n>                        title = ititle,\n>                        author = iauthor,\n>                        binding = ibinding,\n>                        list_price = ilist_price,\n>                        dimensions = y\n>                 WHERE isbn = iisbn;\n>         END;\n>         BEGIN\n>                 INSERT INTO amazon_items\n>                 (asin, sales_rank, offers, offer_pages, isbn)\n>                 VALUES\n>                 (iasin, isales_rank, crunch(ioffers), ioffer_pages, iisbn);\n>         EXCEPTION WHEN unique_violation THEN\n>                 IF isales_rank IS NOT NULL THEN\n>                    UPDATE amazon_items SET\n>                        sales_rank = isales_rank\n>                    WHERE asin = iasin;\n>                 END IF;\n>                 IF ioffers IS NOT NULL THEN\n>                    UPDATE amazon_items SET\n>                           offers = crunch(ioffers),\n>                           offers_last_updated = CURRENT_TIMESTAMP,\n>                           offer_pages = ioffer_pages\n>                    WHERE asin = iasin;\n>                 END IF;\n>         END;\n>  END;\n>  $$\n>  LANGUAGE plpgsql;\n>\n>  CREATE OR REPLACE FUNCTION crunch(text)\n>  RETURNS text AS\n>  $$\n>  BEGIN\n>     RETURN encode(text2bytea($1), 'base64');\n>  END;\n>  $$\n>  LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n>\n>  CREATE OR REPLACE FUNCTION text2bytea(text)\n>  RETURNS bytea AS\n>  $$\n>  BEGIN\n>       RETURN $1;\n>  END;\n>  $$\n>  LANGUAGE 'plpgsql' IMMUTABLE 
STRICT;\n\nsome general tips:\n*) use indexes to optimize where and join conditions. for example,\nupdate yadda set yadda where foo = bar, make sure that there is an\nindex on foo. As alan noted this is almost definitely your problem.\n\n*) prefer '_' to 'i' to prefix arguments (more readable and less\nchance for error).\n\n*) use varchar, not char (always).\n\nmerlin\n", "msg_date": "Fri, 26 Jun 2009 15:48:46 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "> Indexes are good things. Try them. Particularly on the isbn field.\n\nI'm not sure why amazon_items.isbn should be given an index.\nitem_details.isbn is used in a WHERE clause and is given an index\naccordingly, but not amazon_items.isbn.\n\nBrian\n\n\n\nOn Fri, Jun 26, 2009 at 12:40 PM, Alan Hodgson<[email protected]> wrote:\n> On Friday 26 June 2009, Brian Troutwine <[email protected]> wrote:\n>>  CREATE TABLE amazon_items (\n>>         asin         char(10) PRIMARY KEY,\n>>         locale       varchar(10) NOT NULL DEFAULT 'US',\n>>         currency_code char(3) DEFAULT 'USD',\n>>         isbn         char(13),\n>>         sales_rank   integer,\n>>         offers       text,\n>>         offer_pages  integer DEFAULT 10,\n>>         offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>         UNIQUE (asin, locale)\n>>  );\n>>\n>\n> Indexes are good things. Try them. Particularly on the isbn field.\n>\n> --\n> Overshoot = http://www.theoildrum.com/files/evoltuion_timeline.JPG\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 26 Jun 2009 13:35:00 -0700", "msg_from": "Brian Troutwine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "> Turn commit delay and commit siblings off.\n\nWhy?\n\nBrian\n\nOn Fri, Jun 26, 2009 at 1:06 PM, Scott Mead<[email protected]> wrote:\n> -- sorry for the top-post and short response.\n>\n> Turn commit delay and commit siblings off.\n>\n> --Scott\n>\n> On 6/26/09, Brian Troutwine <[email protected]> wrote:\n>> Hello, all.\n>>\n>> I'm finding that write performance of a certain stored procedure is\n>> abysmal. I need to be able to sustain approximately 20 calls to this\n>> procedure per second, but am finding that, on the average, each call\n>> takes 2 seconds in itself, in addition to pegging a single processor\n>> at 100% for the duration of the call. Additionally, while the stored\n>> procedure calls are being made a single worker does a full-table scan\n>> once every half-hours.\n>>\n>> Being a software developer more than a DBA I hope those on this list\n>> will be kind enough to help me troubleshoot and correct this issue. I\n>> do not know what information would be exactly pertinent, but I have\n>> included table definitions, configurations and the function in\n>> question below. I am using PostgreSQL 8.3 on a Linux Intel Core Duo\n>> system with 2GB of RAM and am running Postgres on XFS. 
Here are the\n>> relevant settings of my postgresql.conf:\n>>\n>>  max_connections       = 25\n>>  shared_buffers = 512MB\n>>  max_fsm_pages = 153600\n>>  fsync =       off\n>>  synchronous_commit = off\n>>  wal_writer_delay = 10000ms\n>>  commit_delay = 100000\n>>  commit_siblings = 100\n>>  checkpoint_segments = 64\n>>  checkpoint_completion_target = 0.9\n>>  effective_cache_size = 1024MB\n>>  track_activities = on\n>>  track_counts = on\n>>  update_process_title = on\n>>  autovacuum = on\n>>  log_autovacuum_min_duration = 1000\n>>  autovacuum_vacuum_threshold = 50\n>>  autovacuum_analyze_threshold = 50\n>>\n>> Here is the relevant table definition:\n>>\n>>  DROP TABLE IF EXISTS amazon_items CASCADE;\n>>  CREATE TABLE amazon_items (\n>>         asin         char(10) PRIMARY KEY,\n>>         locale       varchar(10) NOT NULL DEFAULT 'US',\n>>         currency_code char(3) DEFAULT 'USD',\n>>         isbn         char(13),\n>>         sales_rank   integer,\n>>         offers       text,\n>>         offer_pages  integer DEFAULT 10,\n>>         offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>         UNIQUE (asin, locale)\n>>  );\n>>\n>> The stored procedure in question, plus supporting procedures:\n>>\n>>  CREATE OR REPLACE FUNCTION item_data_insert(\n>>         iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n>>         iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\n>> iweight INTEGER,\n>>         ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n>>         ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n>>  RETURNS VOID AS\n>>  $$\n>>  DECLARE\n>>          y  integer[];\n>>  BEGIN\n>>          y[1] := iwidth;\n>>          y[2] := ilength;\n>>          y[3] := iheight;\n>>          y[4] := iweight;\n>>  BEGIN\n>>          INSERT INTO item_details\n>>                  (isbn, title, author, binding, list_price, dimensions)\n>>                  VALUES\n>>                  (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n>>          EXCEPTION WHEN unique_violation THEN\n>>                  UPDATE item_details SET\n>>                         title = ititle,\n>>                         author = iauthor,\n>>                         binding = ibinding,\n>>                         list_price = ilist_price,\n>>                         dimensions = y\n>>                  WHERE isbn = iisbn;\n>>          END;\n>>          BEGIN\n>>                  INSERT INTO amazon_items\n>>                  (asin, sales_rank, offers, offer_pages, isbn)\n>>                  VALUES\n>>                  (iasin, isales_rank, crunch(ioffers), ioffer_pages, iisbn);\n>>          EXCEPTION WHEN unique_violation THEN\n>>                  IF isales_rank IS NOT NULL THEN\n>>                     UPDATE amazon_items SET\n>>                         sales_rank = isales_rank\n>>                     WHERE asin = iasin;\n>>                  END IF;\n>>                  IF ioffers IS NOT NULL THEN\n>>                     UPDATE amazon_items SET\n>>                            offers = crunch(ioffers),\n>>                            offers_last_updated = CURRENT_TIMESTAMP,\n>>                            offer_pages = ioffer_pages\n>>                     WHERE asin = iasin;\n>>                  END IF;\n>>          END;\n>>  END;\n>>  $$\n>>  LANGUAGE plpgsql;\n>>\n>>  CREATE OR REPLACE FUNCTION crunch(text)\n>>  RETURNS text AS\n>>  $$\n>>  BEGIN\n>>      RETURN encode(text2bytea($1), 'base64');\n>>  END;\n>>  $$\n>>  LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n>>\n>>  
CREATE OR REPLACE FUNCTION text2bytea(text)\n>>  RETURNS bytea AS\n>>  $$\n>>  BEGIN\n>>        RETURN $1;\n>>  END;\n>>  $$\n>>  LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n>>\n>> Thanks,\n>> Brian\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> --\n> Sent from my mobile device\n>\n> --\n> Scott Mead\n> Sr. Systems Engineer\n> EnterpriseDB\n>\n> [email protected]\n> C: 607 765 1395\n> www.enterprisedb.com\n>\n", "msg_date": "Fri, 26 Jun 2009 13:36:54 -0700", "msg_from": "Brian Troutwine <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Fri, Jun 26, 2009 at 4:36 PM, Brian Troutwine\n<[email protected]>wrote:\n\n> > Turn commit delay and commit siblings off.\n>\n> Why?\n\n\n Sorry about the short and sweet, was driving:\n\n Having those settings enabled basically does the following:\n\n \" Do not complete the I/O for a commit until you have either\ncommit_siblings commits also ready, or you have waited .55 seconds.\"\n\n Basically, if you make 1 commit, you will sit there waiting until either\n99 other commits take place, or ~ 1/2 second goes by. This is really\ndesigned to alleviate the i/o involved with the commit process, and since\nyou've turned fsync off anyway (which means when I commit, don't write to\ndisk, just to memory), you're waiting around for 99 of your best buddies to\ncome along for 1/2 second for basically... nothing.\n\nI will note btw, that fsync=off is really only recommended when you aren't\nconcerned about your data in the event of disk / power / general node\nfailure. With fsync=off, your journal (REDO / xlog / WAL whatever you want\nto call it) is not consistent with the latest changes to your database,\nrisking data loss in the event of failure.\n\nTest it out, let me know how it goes.\n\n--SCott\n\n>\n>\n> Brian\n>\n> On Fri, Jun 26, 2009 at 1:06 PM, Scott Mead<[email protected]>\n> wrote:\n> > -- sorry for the top-post and short response.\n> >\n> > Turn commit delay and commit siblings off.\n> >\n> > --Scott\n> >\n> > On 6/26/09, Brian Troutwine <[email protected]> wrote:\n> >> Hello, all.\n> >>\n> >> I'm finding that write performance of a certain stored procedure is\n> >> abysmal. I need to be able to sustain approximately 20 calls to this\n> >> procedure per second, but am finding that, on the average, each call\n> >> takes 2 seconds in itself, in addition to pegging a single processor\n> >> at 100% for the duration of the call. Additionally, while the stored\n> >> procedure calls are being made a single worker does a full-table scan\n> >> once every half-hours.\n> >>\n> >> Being a software developer more than a DBA I hope those on this list\n> >> will be kind enough to help me troubleshoot and correct this issue. I\n> >> do not know what information would be exactly pertinent, but I have\n> >> included table definitions, configurations and the function in\n> >> question below. I am using PostgreSQL 8.3 on a Linux Intel Core Duo\n> >> system with 2GB of RAM and am running Postgres on XFS. 
Here are the\n> >> relevant settings of my postgresql.conf:\n> >>\n> >> max_connections = 25\n> >> shared_buffers = 512MB\n> >> max_fsm_pages = 153600\n> >> fsync = off\n> >> synchronous_commit = off\n> >> wal_writer_delay = 10000ms\n> >> commit_delay = 100000\n> >> commit_siblings = 100\n> >> checkpoint_segments = 64\n> >> checkpoint_completion_target = 0.9\n> >> effective_cache_size = 1024MB\n> >> track_activities = on\n> >> track_counts = on\n> >> update_process_title = on\n> >> autovacuum = on\n> >> log_autovacuum_min_duration = 1000\n> >> autovacuum_vacuum_threshold = 50\n> >> autovacuum_analyze_threshold = 50\n> >>\n> >> Here is the relevant table definition:\n> >>\n> >> DROP TABLE IF EXISTS amazon_items CASCADE;\n> >> CREATE TABLE amazon_items (\n> >> asin char(10) PRIMARY KEY,\n> >> locale varchar(10) NOT NULL DEFAULT 'US',\n> >> currency_code char(3) DEFAULT 'USD',\n> >> isbn char(13),\n> >> sales_rank integer,\n> >> offers text,\n> >> offer_pages integer DEFAULT 10,\n> >> offers_last_updated timestamp NOT NULL DEFAULT\n> CURRENT_TIMESTAMP,\n> >> UNIQUE (asin, locale)\n> >> );\n> >>\n> >> The stored procedure in question, plus supporting procedures:\n> >>\n> >> CREATE OR REPLACE FUNCTION item_data_insert(\n> >> iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n> >> iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\n> >> iweight INTEGER,\n> >> ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n> >> ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n> >> RETURNS VOID AS\n> >> $$\n> >> DECLARE\n> >> y integer[];\n> >> BEGIN\n> >> y[1] := iwidth;\n> >> y[2] := ilength;\n> >> y[3] := iheight;\n> >> y[4] := iweight;\n> >> BEGIN\n> >> INSERT INTO item_details\n> >> (isbn, title, author, binding, list_price, dimensions)\n> >> VALUES\n> >> (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n> >> EXCEPTION WHEN unique_violation THEN\n> >> UPDATE item_details SET\n> >> title = ititle,\n> >> author = iauthor,\n> >> binding = ibinding,\n> >> list_price = ilist_price,\n> >> dimensions = y\n> >> WHERE isbn = iisbn;\n> >> END;\n> >> BEGIN\n> >> INSERT INTO amazon_items\n> >> (asin, sales_rank, offers, offer_pages, isbn)\n> >> VALUES\n> >> (iasin, isales_rank, crunch(ioffers), ioffer_pages,\n> iisbn);\n> >> EXCEPTION WHEN unique_violation THEN\n> >> IF isales_rank IS NOT NULL THEN\n> >> UPDATE amazon_items SET\n> >> sales_rank = isales_rank\n> >> WHERE asin = iasin;\n> >> END IF;\n> >> IF ioffers IS NOT NULL THEN\n> >> UPDATE amazon_items SET\n> >> offers = crunch(ioffers),\n> >> offers_last_updated = CURRENT_TIMESTAMP,\n> >> offer_pages = ioffer_pages\n> >> WHERE asin = iasin;\n> >> END IF;\n> >> END;\n> >> END;\n> >> $$\n> >> LANGUAGE plpgsql;\n> >>\n> >> CREATE OR REPLACE FUNCTION crunch(text)\n> >> RETURNS text AS\n> >> $$\n> >> BEGIN\n> >> RETURN encode(text2bytea($1), 'base64');\n> >> END;\n> >> $$\n> >> LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n> >>\n> >> CREATE OR REPLACE FUNCTION text2bytea(text)\n> >> RETURNS bytea AS\n> >> $$\n> >> BEGIN\n> >> RETURN $1;\n> >> END;\n> >> $$\n> >> LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n> >>\n> >> Thanks,\n> >> Brian\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected])\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n> >>\n> >\n> > --\n> > Sent from my mobile device\n> >\n> > --\n> > Scott Mead\n> > Sr. 
Systems Engineer\n> > EnterpriseDB\n> >\n> > [email protected]\n> > C: 607 765 1395\n> > www.enterprisedb.com\n> >\n>\n\nOn Fri, Jun 26, 2009 at 4:36 PM, Brian Troutwine <[email protected]> wrote:\n> Turn commit delay and commit siblings off.\n\nWhy?  Sorry about the short and sweet, was driving:    Having those settings enabled basically does the following:   \" Do not complete the I/O for a commit until you have either commit_siblings commits also ready, or you have waited .55 seconds.\"\n  Basically, if you make 1 commit, you will sit there waiting until either 99 other commits take place, or ~ 1/2 second goes by.  This is really designed to alleviate the i/o involved with the commit process, and since you've turned fsync off anyway (which means when I commit, don't write to disk, just to memory), you're waiting around for 99 of your best buddies to come along for 1/2 second for basically... nothing.\nI will note btw, that fsync=off is really only recommended when you aren't concerned about your data in the event of disk / power / general node failure.  With fsync=off, your journal (REDO / xlog / WAL whatever you want to call it) is not consistent with the latest changes to your database, risking data loss in the event of failure.\nTest it out, let me know how it goes.--SCott\n\nBrian\n\nOn Fri, Jun 26, 2009 at 1:06 PM, Scott Mead<[email protected]> wrote:\n> -- sorry for the top-post and short response.\n>\n> Turn commit delay and commit siblings off.\n>\n> --Scott\n>\n> On 6/26/09, Brian Troutwine <[email protected]> wrote:\n>> Hello, all.\n>>\n>> I'm finding that write performance of a certain stored procedure is\n>> abysmal. I need to be able to sustain approximately 20 calls to this\n>> procedure per second, but am finding that, on the average, each call\n>> takes 2 seconds in itself, in addition to pegging a single processor\n>> at 100% for the duration of the call. Additionally, while the stored\n>> procedure calls are being made a single worker does a full-table scan\n>> once every half-hours.\n>>\n>> Being a software developer more than a DBA I hope those on this list\n>> will be kind enough to help me troubleshoot and correct this issue. I\n>> do not know what information would be exactly pertinent, but I have\n>> included table definitions, configurations and the function in\n>> question below. I am using PostgreSQL 8.3 on a Linux Intel Core Duo\n>> system with 2GB of RAM and am running Postgres on XFS. 
Here are the\n>> relevant settings of my postgresql.conf:\n>>\n>>  max_connections       = 25\n>>  shared_buffers = 512MB\n>>  max_fsm_pages = 153600\n>>  fsync =       off\n>>  synchronous_commit = off\n>>  wal_writer_delay = 10000ms\n>>  commit_delay = 100000\n>>  commit_siblings = 100\n>>  checkpoint_segments = 64\n>>  checkpoint_completion_target = 0.9\n>>  effective_cache_size = 1024MB\n>>  track_activities = on\n>>  track_counts = on\n>>  update_process_title = on\n>>  autovacuum = on\n>>  log_autovacuum_min_duration = 1000\n>>  autovacuum_vacuum_threshold = 50\n>>  autovacuum_analyze_threshold = 50\n>>\n>> Here is the relevant table definition:\n>>\n>>  DROP TABLE IF EXISTS amazon_items CASCADE;\n>>  CREATE TABLE amazon_items (\n>>         asin         char(10) PRIMARY KEY,\n>>         locale       varchar(10) NOT NULL DEFAULT 'US',\n>>         currency_code char(3) DEFAULT 'USD',\n>>         isbn         char(13),\n>>         sales_rank   integer,\n>>         offers       text,\n>>         offer_pages  integer DEFAULT 10,\n>>         offers_last_updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,\n>>         UNIQUE (asin, locale)\n>>  );\n>>\n>> The stored procedure in question, plus supporting procedures:\n>>\n>>  CREATE OR REPLACE FUNCTION item_data_insert(\n>>         iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n>>         iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\n>> iweight INTEGER,\n>>         ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n>>         ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n>>  RETURNS VOID AS\n>>  $$\n>>  DECLARE\n>>          y  integer[];\n>>  BEGIN\n>>          y[1] := iwidth;\n>>          y[2] := ilength;\n>>          y[3] := iheight;\n>>          y[4] := iweight;\n>>  BEGIN\n>>          INSERT INTO item_details\n>>                  (isbn, title, author, binding, list_price, dimensions)\n>>                  VALUES\n>>                  (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n>>          EXCEPTION WHEN unique_violation THEN\n>>                  UPDATE item_details SET\n>>                         title = ititle,\n>>                         author = iauthor,\n>>                         binding = ibinding,\n>>                         list_price = ilist_price,\n>>                         dimensions = y\n>>                  WHERE isbn = iisbn;\n>>          END;\n>>          BEGIN\n>>                  INSERT INTO amazon_items\n>>                  (asin, sales_rank, offers, offer_pages, isbn)\n>>                  VALUES\n>>                  (iasin, isales_rank, crunch(ioffers), ioffer_pages, iisbn);\n>>          EXCEPTION WHEN unique_violation THEN\n>>                  IF isales_rank IS NOT NULL THEN\n>>                     UPDATE amazon_items SET\n>>                         sales_rank = isales_rank\n>>                     WHERE asin = iasin;\n>>                  END IF;\n>>                  IF ioffers IS NOT NULL THEN\n>>                     UPDATE amazon_items SET\n>>                            offers = crunch(ioffers),\n>>                            offers_last_updated = CURRENT_TIMESTAMP,\n>>                            offer_pages = ioffer_pages\n>>                     WHERE asin = iasin;\n>>                  END IF;\n>>          END;\n>>  END;\n>>  $$\n>>  LANGUAGE plpgsql;\n>>\n>>  CREATE OR REPLACE FUNCTION crunch(text)\n>>  RETURNS text AS\n>>  $$\n>>  BEGIN\n>>      RETURN encode(text2bytea($1), 'base64');\n>>  END;\n>>  $$\n>>  LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n>>\n>>  
CREATE OR REPLACE FUNCTION text2bytea(text)\n>>  RETURNS bytea AS\n>>  $$\n>>  BEGIN\n>>        RETURN $1;\n>>  END;\n>>  $$\n>>  LANGUAGE 'plpgsql' IMMUTABLE STRICT;\n>>\n>> Thanks,\n>> Brian\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> --\n> Sent from my mobile device\n>\n> --\n> Scott Mead\n> Sr. Systems Engineer\n> EnterpriseDB\n>\n> [email protected]\n> C: 607 765 1395\n> www.enterprisedb.com\n>", "msg_date": "Fri, 26 Jun 2009 17:03:30 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Fri, 26 Jun 2009, Scott Mead wrote:\n\n> �� �Having those settings enabled basically does the following:\n> �� \" Do not complete the I/O for a commit until you have either commit_siblings commits also ready, or you have waited .55 seconds.\"\n> \n> ��Basically, if you make 1 commit, you will sit there waiting until either 99 other commits take place, or ~ 1/2 second goes by.\n\nYou're right that it should be removed, but this explanation is wrong. \nThe behavior as configured is actually \"if there are >=100 other \ntransactions in progress, wait 0.1 second before committing after the \nfirst one gets committed\", in hopes that one of the other 100 might also \njoin along in the disk write.\n\nSince in this case max_connections it set to 100, it's actually impossible \nfor the commit_delay/commit_siblings behavior to trigger give this \nconfiguration. That's one reason it should be removed. The other is that \ni general, if you don't exactly what you're doing, you shouldn't be \ntouching either parameters; they don't do what people expect them to and \nit's extremely unlikely you'll encounter any of the rare use cases where \nthey might help.\n\nI don't think any of the sync or write parameters have anything to do with \nthis problem though, it seems like a problem with the referential bits \ntaking too long to execute.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n>From [email protected] Fri Jun 26 20:19:10 2009\nReceived: from maia.hub.org (unknown [200.46.204.183])\n\tby mail.postgresql.org (Postfix) with ESMTP id 6AF6A632BE8\n\tfor <[email protected]>; Fri, 26 Jun 2009 20:19:10 -0300 (ADT)\nReceived: from mail.postgresql.org ([200.46.204.86])\n by maia.hub.org (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)\n with ESMTP id 16732-02\n for <[email protected]>;\n Fri, 26 Jun 2009 20:19:07 -0300 (ADT)\nX-Greylist: from auto-whitelisted by SQLgrey-1.7.6\nReceived: from sss.pgh.pa.us (sss.pgh.pa.us [66.207.139.130])\n\tby mail.postgresql.org (Postfix) with ESMTP id 2F383631873\n\tfor <[email protected]>; Fri, 26 Jun 2009 20:19:07 -0300 (ADT)\nReceived: from sss2.sss.pgh.pa.us (tgl@localhost [127.0.0.1])\n\tby sss.pgh.pa.us (8.14.2/8.14.2) with ESMTP id n5QNJ5Tu011765;\n\tFri, 26 Jun 2009 19:19:05 -0400 (EDT)\nTo: [email protected]\ncc: [email protected]\nSubject: Re: Insert performance and multi-column index order \nIn-reply-to: <[email protected]> \nReferences: <[email protected]>\nComments: In-reply-to [email protected]\n\tmessage dated \"Fri, 26 Jun 2009 10:25:29 -0700\"\nDate: Fri, 26 Jun 2009 19:19:05 -0400\nMessage-ID: <[email protected]>\nFrom: Tom Lane <[email protected]>\nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=0 tagged_above=0 required=5 tests=none\nX-Spam-Level: 
\nX-Archive-Number: 200906/371\nX-Sequence-Number: 34615\n\[email protected] writes:\n> Why would changing the column order on a unique index cause data loading or index servicing to slow down? Page splits in the b-tree, maybe?\n\nYeah, perhaps. Tell us about the data distributions in the columns?\nIs there any ordering to the keys that're being inserted?\n\nIt's not in the least surprising that different column orders might be\nbetter or worse suited for particular queries. I'm mildly interested\nin the question of why the bulk load speed is different, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 26 Jun 2009 18:34:20 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": ">\n> You're right that it should be removed, but this explanation is wrong. The\n> behavior as configured is actually \"if there are >=100 other transactions in\n> progress, wait 0.1 second before committing after the first one gets\n> committed\", in hopes that one of the other 100 might also join along in the\n> disk write.\n\n\n Thanks for the correction. My question is how you're getting .1 seconds\nfrom his commit_delay?\n\nif (CommitDelay > 0 && enableFsync &&\n CountActiveBackends() >= CommitSiblings)\n pg_usleep(CommitDelay);\n\n Wouldn't this actually be 1 second based on a commit_delay of 100000?\n\n\n\n>\n>\n> Since in this case max_connections it set to 100, it's actually impossible\n> for the commit_delay/commit_siblings behavior to trigger give this\n> configuration. That's one reason it should be removed. The other is that i\n> general, if you don't exactly what you're doing, you shouldn't be touching\n> either parameters; they don't do what people expect them to and it's\n> extremely unlikely you'll encounter any of the rare use cases where they\n> might help.\n\n\n After looking, I agree, thanks again for the correction Greg.\n\n--Scott\n\n \nYou're right that it should be removed, but this explanation is wrong. The behavior as configured is actually \"if there are >=100 other transactions in progress, wait 0.1 second before committing after the first one gets committed\", in hopes that one of the other 100 might also join along in the disk write.\n  Thanks for the correction.  My question is how you're getting .1 seconds from his commit_delay?if (CommitDelay > 0 && enableFsync &&    CountActiveBackends() >= CommitSiblings)\n         pg_usleep(CommitDelay);  Wouldn't this actually be 1 second based on a commit_delay of 100000? \n\n\nSince in this case max_connections it set to 100, it's actually impossible for the commit_delay/commit_siblings behavior to trigger give this configuration.  That's one reason it should be removed.  The other is that i general, if you don't exactly what you're doing, you shouldn't be touching either parameters; they don't do what people expect them to and it's extremely unlikely you'll encounter any of the rare use cases where they might help.\n   After looking, I agree, thanks again for the correction Greg. --Scott", "msg_date": "Mon, 29 Jun 2009 08:54:40 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Fri, Jun 26, 2009 at 4:36 PM, Brian\nTroutwine<[email protected]> wrote:\n>> *) use indexes to optimize where and join conditions.  
for example,\n>> update yadda set yadda where foo = bar, make sure that there is an\n>> index on foo.  As alan noted this is almost definitely your problem.\n>\n> To my knowledge, I have. amazon_items.isbn does not have an index but\n> it is not used, unless I'm overlooking something, in a where\n> condition. item_details.isbn is and does, however.\n>\n>> *) use varchar, not char (always).\n>\n> Why?\n>\n\nchar(n) included the padding up to 'n' both on disk and in data\nreturned. It's slower and can be wasteful.\n\nDid you figure out your issue? I'm pretty sure its an index issue or\nsome other basic optimization problem.\n\nmerlin\n", "msg_date": "Mon, 29 Jun 2009 11:00:36 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Terrible Write Performance of a Stored Procedure" }, { "msg_contents": "On Jun 26, 9:30 pm, [email protected] (Brian Troutwine) wrote:\n> Hello, all.\n>\n>  CREATE OR REPLACE FUNCTION item_data_insert(\n>         iasin TEXT, iauthor TEXT, ibinding TEXT, icurrency_code TEXT,\n>         iisbn TEXT, iheight INTEGER, iwidth INTEGER, ilength INTEGER,\n> iweight INTEGER,\n>         ilist_price INTEGER, iproduct_group TEXT, isales_rank INTEGER,\n>         ititle TEXT, ioffer_pages INTEGER, ioffers TEXT)\n>  RETURNS VOID AS\n>  $$\n>  DECLARE\n>          y  integer[];\n>  BEGIN\n>          y[1] := iwidth;\n>          y[2] := ilength;\n>          y[3] := iheight;\n>          y[4] := iweight;\n>  BEGIN\n>          INSERT INTO item_details\n>                  (isbn, title, author, binding, list_price, dimensions)\n>                  VALUES\n>                  (iisbn, ititle, iauthor, ibinding, ilist_price, y);\n>          EXCEPTION WHEN unique_violation THEN\n>                  UPDATE item_details SET\n>                         title = ititle,\n>                         author = iauthor,\n>                         binding = ibinding,\n>                         list_price = ilist_price,\n>                         dimensions = y\n>                  WHERE isbn = iisbn;\n>          END;\n>          BEGIN\n>                  INSERT INTO amazon_items\n>                  (asin, sales_rank, offers, offer_pages, isbn)\n>                  VALUES\n>                  (iasin, isales_rank, crunch(ioffers), ioffer_pages, iisbn);\n>          EXCEPTION WHEN unique_violation THEN\n>                  IF isales_rank IS NOT NULL THEN\n>                     UPDATE amazon_items SET\n>                         sales_rank = isales_rank\n>                     WHERE asin = iasin;\n>                  END IF;\n>                  IF ioffers IS NOT NULL THEN\n>                     UPDATE amazon_items SET\n>                            offers = crunch(ioffers),\n>                            offers_last_updated = CURRENT_TIMESTAMP,\n>                            offer_pages = ioffer_pages\n>                     WHERE asin = iasin;\n>                  END IF;\n>          END;\n>  END;\n>  $$\n>  LANGUAGE plpgsql;\n>\n\nHi, did the index on isbn field help?\n\nAnother note, that is more fine tuning actually, then the real cause\nof the slow execution of your procedure. If you are expecting to\nupdate more, then insert, then you probably should not wait for the\nexception to be thrown as all the BEGIN EXCEPTION END blocks are more\nexpensive to execute, then simple calls. 
Have a look here:\nhttp://www.postgresql.org/docs/8.3/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING\nAlso note that if you UPDATE first, and then try to INSERT only when\nUPDATE could not find anything to update, you really HAVE to expect\nINSERT to fail and then retry updating, as another, parallel\ntransaction, could be fast enough to INSERT a record after you tried\nto update and before your transaction starts to insert.\n\nWith best regards,\n\n-- Valentine Gogichashvili\n", "msg_date": "Tue, 30 Jun 2009 01:46:31 -0700 (PDT)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Terrible Write Performance of a Stored Procedure" } ]
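To make the last two suggestions concrete, here is a minimal sketch of the UPDATE-first upsert pattern (it mirrors the merge_db example in the PostgreSQL documentation) together with an index on the column the function looks rows up by. The trimmed column list and the name upsert_item_details are illustrative only and not taken from the thread, and the retry loop is still needed because a concurrent session can insert the same row between the UPDATE and the INSERT, as the last message points out:

    -- Index the lookup column used by the WHERE clauses in the function.
    CREATE INDEX amazon_items_isbn_idx ON amazon_items (isbn);

    -- Upsert that avoids paying for an EXCEPTION block on the common (update) path.
    CREATE OR REPLACE FUNCTION upsert_item_details(iisbn text, ititle text)
    RETURNS void AS
    $$
    BEGIN
        LOOP
            -- Try the cheap path first: most calls are expected to be updates.
            UPDATE item_details SET title = ititle WHERE isbn = iisbn;
            IF FOUND THEN
                RETURN;
            END IF;
            -- Row not there yet, so insert it; only this rare path pays for the
            -- exception-handling subtransaction.
            BEGIN
                INSERT INTO item_details (isbn, title) VALUES (iisbn, ititle);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- another session inserted the same isbn first; retry the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;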
[ { "msg_contents": "I have a table about 50 million rows. There are a few writers to pump\ndata into the table at the rate of 40000 row/hours. Most the time, the\nSELECT is less than 100 ms. However sometime it is very slow, from 30\nseconds to 500 seconds. The database is vacuum analyze regularly.\n\nOne months ago, this type of slow query happened about a few time per\nday. But recently, the slow query happens more frequent at the rate of\nonce every 10 minutes or less. There seesm not relation to the\ndatabase loading or the type of query. If I manually execute these\nquery, it is returns in less than 1 seconds.\n\nI just wonder where should I start to look?\n\nThanks\n\nShawn.\n", "msg_date": "Mon, 29 Jun 2009 09:33:40 -0400", "msg_from": "Sean Ma <[email protected]>", "msg_from_op": true, "msg_subject": "random slow query" }, { "msg_contents": "On 06/29/2009 03:33 PM, Sean Ma wrote:\n> I have a table about 50 million rows. There are a few writers to pump\n> data into the table at the rate of 40000 row/hours. Most the time, the\n> SELECT is less than 100 ms. However sometime it is very slow, from 30\n> seconds to 500 seconds. The database is vacuum analyze regularly.\n>\n> One months ago, this type of slow query happened about a few time per\n> day. But recently, the slow query happens more frequent at the rate of\n> once every 10 minutes or less. There seesm not relation to the\n> database loading or the type of query. If I manually execute these\n> query, it is returns in less than 1 seconds.\n>\n> I just wonder where should I start to look?\nThe slow queries could be waiting for locks - so you could enable \nlog_lock_waits to see if that is the issue.\n\nAndres\n\n", "msg_date": "Mon, 29 Jun 2009 15:36:05 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Sean Ma <[email protected]> wrote: \n> I have a table about 50 million rows. There are a few writers to\n> pump data into the table at the rate of 40000 row/hours. Most the\n> time, the SELECT is less than 100 ms. However sometime it is very\n> slow, from 30 seconds to 500 seconds. The database is vacuum analyze\n> regularly.\n \nWhat version of PostgreSQL is this? On what OS? What hardware?\n \nWe had similar problems on some of our servers under 8.2 and earlier\ndue to the tendency of PostgreSQL to build up a very large set of\ndirty pages and then throw them all at the drives with an immediate\nfsync. The RAID controller queued up the requests, and fast reads got\nstuck in the queue behind all those writes. You may want to look at\nthis excellent coverage of the topic by Greg Smith:\n \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm\n \nWe got around the problem by keeping the checkpoint interval and\nshared buffer size fairly small, and making the background writer\nfairly aggressive. What works for you, if this is your problem, may\nbe different. I've heard that some have had to tune their OS caching\nconfiguration.\n \n-Kevin\n", "msg_date": "Mon, 29 Jun 2009 09:53:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Hi Sean,\n\nSean Ma wrote:\n> One months ago, this type of slow query happened about a few time per\n> day. But recently, the slow query happens more frequent at the rate of\n> once every 10 minutes or less. 
There seesm not relation to th\n\nWhat is your hardware (memory, CPU type and such)?\n\nThis seems like a cache issue to me, but I can't tell for sure without \nsome additional information on your system:\n\n1) What is the amount of a) available memory b) free memory and c) \nmemory available to i/o buffers?\n\n2) What is the swap usage if any?\n\n3) What is the CPU load? Any noticeable patterns in CPU load?\n\nYou can use /usr/bin/top to obtain most of this information.\n\nMike\n\n\n", "msg_date": "Mon, 29 Jun 2009 16:26:30 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "top - 10:18:58 up 224 days, 15:10, 2 users, load average: 6.27, 7.33, 6\nTasks: 239 total, 1 running, 238 sleeping, 0 stopped, 0 zombie\nCpu(s): 5.0%us, 0.7%sy, 0.0%ni, 61.5%id, 32.7%wa, 0.0%hi, 0.1%si, 0\nMem: 32962804k total, 32802612k used, 160192k free, 325360k buffers\nSwap: 8193140k total, 224916k used, 7968224k free, 30829456k cached\n\nDidn't really see the pattern, typical the cpu load is only about 40%\n\nOn Mon, Jun 29, 2009 at 7:26 PM, Mike Ivanov<[email protected]> wrote:\n> Hi Sean,\n>\n> Sean Ma wrote:\n>>\n>> One months ago, this type of slow query happened about a few time per\n>> day. But recently, the slow query happens more frequent at the rate of\n>> once every 10 minutes or less. There seesm not relation to th\n>\n> What is your hardware (memory, CPU type and such)?\n>\n> This seems like a cache issue to me, but I can't tell for sure without some\n> additional information on your system:\n>\n> 1) What is the amount of a) available memory b) free memory and c) memory\n> available to i/o buffers?\n>\n> 2) What is the swap usage if any?\n>\n> 3) What is the CPU load? Any noticeable patterns in CPU load?\n>\n> You can use /usr/bin/top to obtain most of this information.\n>\n> Mike\n>\n>\n>\n", "msg_date": "Tue, 30 Jun 2009 10:21:14 -0400", "msg_from": "Sean Ma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: random slow query" }, { "msg_contents": "Hi Sean,\n\nWell, the overall impression is your machine is badly overloaded. Look:\n\n> top - 10:18:58 up 224 days, 15:10, 2 users, load average: 6.27, 7.33, 6\n> \nThe load average of 6.5 means there are six and a half processes \ncompeting for the same CPU (and this system apparently has only one). \nThis approximately equals to 500% overload.\n\nRecommendation: either add more CPU's or eliminate process competition \nby moving them to other boxes.\n\n> Tasks: 239 total, 1 running, 238 sleeping, 0 stopped, 0 zombie\n> \nThis supports what I said above. There are only 92 processes running on \nmy laptop and I think it is too much. Do you have Apache running on the \nsame machine?\n\n> Cpu(s): 5.0%us, 0.7%sy, 0.0%ni, 61.5%id, 32.7%wa, 0.0%hi, 0.1%si, 0\n> \nWaiting time (wa) is rather high, which means processes wait on locks or \nfor IO, another clue for concurrency issues on this machine.\n\n> Mem: 32962804k total, 32802612k used, 160192k free, 325360k buffers\n> \nBuffers are about 10% of all the memory which is OK, but I tend to give \nbuffers some more room.\n\nRecommendation: eliminate unneeded processes, decrease (yes, decrease) \nthe Postgres cache buffers if they are set too high.\n\n> Swap: 8193140k total, 224916k used, 7968224k free, 30829456k cached\n> \n200M paged out. It should be zero except of an emergency. 3G of cached \nswap is a sign of some crazy paging activity in thepast. 
Those \nunexplainable slowdowns are very likely caused by that.\n\n> Didn't really see the pattern, typical the cpu load is only about 40%\n> \n40% is too much, really. I start worrying when it is above 10%.\n\nConclusion:\n\n- the system bears more load than it can handle\n- the machine needs an upgrade\n- Postges is competing with something (presumably Apache) - separate them.\n\nThat should help.\n\nCheers,\nMike\n\n", "msg_date": "Tue, 30 Jun 2009 10:23:35 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tue, Jun 30, 2009 at 11:23 AM, Mike Ivanov<[email protected]> wrote:\n> Hi Sean,\n>\n> Well, the overall impression is your machine is badly overloaded. Look:\n>\n>> top - 10:18:58 up 224 days, 15:10,  2 users,  load average: 6.27, 7.33, 6\n>>\n>\n> The load average of 6.5 means there are six and a half processes competing\n> for the same CPU (and this system apparently has only one). This\n> approximately equals to 500% overload.\n>\n> Recommendation: either add more CPU's or eliminate process competition by\n> moving them to other boxes.\n\nWell, we can't be sure OP's only got one core. However, given that\nthe OPs posting shows mostly idle and wait state, the real issue isn't\nthe number of cores, it's the IO subsystem is too slow for the load.\nMore cores wouldn't fix that.\n\n>> Tasks: 239 total,   1 running, 238 sleeping,   0 stopped,   0 zombie\n>>\n>\n> This supports what I said above. There are only 92 processes running on my\n> laptop and I think it is too much. Do you have Apache running on the same\n> machine?\n\nMy production PG server that runs ONLY pg has 222 processes on it.\nIt's no big deal. Unless they're all trying to get cpu time, which\ngenerally isn't the case.\n\n>> Cpu(s):  5.0%us,  0.7%sy,  0.0%ni, 61.5%id, 32.7%wa,  0.0%hi,  0.1%si,  0\n>>\n>\n> Waiting time (wa) is rather high, which means processes wait on locks or for\n> IO, another clue for concurrency issues on this machine.\n\nMore likely just a slow IO subsystem. Like a single drive or\nsomething. adding drives in a RAID-1 or RAID-10 etc usually helps.\n\n>> Mem:  32962804k total, 32802612k used,   160192k free,   325360k buffers\n>>\n>\n> Buffers are about 10% of all the memory which is OK, but I tend to give\n> buffers some more room.\n\nThis is kernel buffers, not pg buffers. It's set by the OS\nsemi-automagically. In this case it's 325M out of 32 Gig, so it's\nwell under 10%, which is typical.\n\n>> Swap:  8193140k total,   224916k used,  7968224k free, 30829456k cached\n>>\n>\n> 200M paged out. It should be zero except of an emergency.\n\nNot true. Linux will happily swap out seldom used processes to make\nroom in memory for more kernel cache etc. You can adjust this\ntendency by setting swappiness.\n\n\n> 3G of cached swap\n> is a sign of some crazy paging activity in thepast. Those unexplainable\n> slowdowns are very likely caused by that.\n\nNo, they're not. It's 30G btw, and it's not swap that's cached, it's\nthe kernel using extra memory to cache data to / from the hard drives.\n It's normal, and shouldn't worry anybody. In fact it's a good sign\nthat you're not using way too much memory for any one process.\n\n>> Didn't really see the pattern, typical the cpu load is only about 40%\n>>\n>\n> 40% is too much, really. I start worrying when it is above 10%.\n\nReally? I have eight cores on my production servers and many batch\njobs I run put all 8 cores at 90% for extended periods. 
Since that\nmachine is normally doing a lot of smaller cached queries, it hardly\neven notices.\n\n> Conclusion:\n>\n> - the system bears more load than it can handle\n\nYes, too much IO load. I agree on that.\n\n> - the machine needs an upgrade\n\nYes, more hard drives / better caching RAID controller.\n", "msg_date": "Tue, 30 Jun 2009 11:46:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Hi Mike,\n\nThanks for the details. Yes, besides another mysql server running on\nthe same server, there is also an homegrown application that frequent\nread/write the file system.\n\nThe postgres shared cache is at 4G, is that too big?\n\nThanks\n\nSean\n\nOn Tue, Jun 30, 2009 at 1:23 PM, Mike Ivanov<[email protected]> wrote:\n> Hi Sean,\n>\n> Well, the overall impression is your machine is badly overloaded. Look:\n>\n>> top - 10:18:58 up 224 days, 15:10,  2 users,  load average: 6.27, 7.33, 6\n>>\n>\n> The load average of 6.5 means there are six and a half processes competing\n> for the same CPU (and this system apparently has only one). This\n> approximately equals to 500% overload.\n>\n> Recommendation: either add more CPU's or eliminate process competition by\n> moving them to other boxes.\n>\n>> Tasks: 239 total,   1 running, 238 sleeping,   0 stopped,   0 zombie\n>>\n>\n> This supports what I said above. There are only 92 processes running on my\n> laptop and I think it is too much. Do you have Apache running on the same\n> machine?\n>\n>> Cpu(s):  5.0%us,  0.7%sy,  0.0%ni, 61.5%id, 32.7%wa,  0.0%hi,  0.1%si,  0\n>>\n>\n> Waiting time (wa) is rather high, which means processes wait on locks or for\n> IO, another clue for concurrency issues on this machine.\n>\n>> Mem:  32962804k total, 32802612k used,   160192k free,   325360k buffers\n>>\n>\n> Buffers are about 10% of all the memory which is OK, but I tend to give\n> buffers some more room.\n>\n> Recommendation: eliminate unneeded processes, decrease (yes, decrease) the\n> Postgres cache buffers if they are set too high.\n>\n>> Swap:  8193140k total,   224916k used,  7968224k free, 30829456k cached\n>>\n>\n> 200M paged out. It should be zero except of an emergency. 3G of cached swap\n> is a sign of some crazy paging activity in thepast. Those unexplainable\n> slowdowns are very likely caused by that.\n>\n>> Didn't really see the pattern, typical the cpu load is only about 40%\n>>\n>\n> 40% is too much, really. I start worrying when it is above 10%.\n>\n> Conclusion:\n>\n> - the system bears more load than it can handle\n> - the machine needs an upgrade\n> - Postges is competing with something (presumably Apache) - separate them.\n>\n> That should help.\n>\n> Cheers,\n> Mike\n>\n>\n", "msg_date": "Tue, 30 Jun 2009 13:49:24 -0400", "msg_from": "Sean Ma <[email protected]>", "msg_from_op": true, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tue, Jun 30, 2009 at 11:49 AM, Sean Ma<[email protected]> wrote:\n> Hi Mike,\n>\n> Thanks for the details. 
Yes, besides another mysql server running on\n> the same server, there is also an homegrown application that frequent\n> read/write the file system.\n>\n> The postgres shared cache is at 4G, is that too big?\n\nNot for a machine with 32Gig of ram.\n", "msg_date": "Tue, 30 Jun 2009 11:59:52 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Sean,\n\n> Yes, besides another mysql server running on\n> the same server, \nWhich is a really bad idea :-)\n\n> The postgres shared cache is at 4G, is that too big?\n> \nOK, I have misread the total memory amount which was 32G, and I thought \nit was 3G. Thanks to Scott Marlow who pointed that out. In this case 4G \nfor shared buffers is good.\n\nActually, I take back my words on swap, too. 200M swapped is less \nimportant when you have a plenty of memory.\n\nRegards,\nMike\n\n", "msg_date": "Tue, 30 Jun 2009 11:00:07 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Scott Marlowe wrote:\n>>\n>> The postgres shared cache is at 4G, is that too big?\n>> \n>\n> Not for a machine with 32Gig of ram.\n>\n> \n\nHe could even add some more.\n\nMike\n\n", "msg_date": "Tue, 30 Jun 2009 11:01:59 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tue, Jun 30, 2009 at 12:01 PM, Mike Ivanov<[email protected]> wrote:\n> Scott Marlowe wrote:\n>>>\n>>> The postgres shared cache is at 4G, is that too big?\n>>>\n>>\n>> Not for a machine with 32Gig of ram.\n>>\n>>\n>\n> He could even add some more.\n\nDefinitely. Really depends on how big his data set is, and how well\npgsql is at caching it versus the kernel. I've found that with a\nreally big dataset, like 250G to 1T range, the kernel is almost always\nbetter at caching a lot of it, and if you're operating on a few\nhundred meg at a time anyway, then smaller shared_buffers helps.\n\nOTOH, if you're working on a 5G data set, it's often helpful to turn\nup shared_buffers enough to cover that.\n\nOTOH, if you're running a busy transaction oriented db (lots of small\nupdates) larger shared_buffers will slow you down quite a bit.\n", "msg_date": "Tue, 30 Jun 2009 12:06:33 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Hi Scott,\n\n> Well, we can't be sure OP's only got one core. \n\nIn fact, we can, Sean posted what top -b -n 1 says. There was only one \nCPU line.\n\n> the number of cores, it's the IO subsystem is too slow for the load.\n> More cores wouldn't fix that.\n> \n\nWhile I agree on the IO, more cores would definitely help to improve \n~6.5 load average.\n\n> My production PG server that runs ONLY pg has 222 processes on it.\n> It's no big deal. Unless they're all trying to get cpu time, which\n> generally isn't the case.\n> \n222 / 8 cores = ridiculous 27 processes per core, while the OP has 239.\n\n> More likely just a slow IO subsystem. Like a single drive or\n> something. adding drives in a RAID-1 or RAID-10 etc usually helps.\n> \n\nAbsolutely.\n\n> This is kernel buffers, not pg buffers. It's set by the OS\n> semi-automagically. In this case it's 325M out of 32 Gig, so it's\n> well under 10%, which is typical.\n> \n\nYou can control the FS buffers indirectly by not allowing running \nprocesses to take too much memory. 
If you have like 40% free, there are \ngood chances the system will use that memory for buffers. If you let \nthem eat up 90% and swap out some more, there is no room for buffers and \nthe system will have to swap out something when it really needs it.\n\n> Not true. Linux will happily swap out seldom used processes to make\n> room in memory for more kernel cache etc. You can adjust this\n> tendency by setting swappiness.\n> \n\nThis is fine until one of those processes wakes up. Then your FS cache \nis dumped.\n\n> It's 30G btw, \n\nYeah, I couldn't believe my eyes :-)\n\n> > 3G of cached swap\n> and it's not swap that's cached, it's\n> the kernel using extra memory to cache data to / from the hard drives.\n> \n\nOh please.. it *is*: \nhttp://www.linux-tutorial.info/modules.php?name=MContent&pageid=314\n\n> It's normal, and shouldn't worry anybody. In fact it's a good sign\n> that you're not using way too much memory for any one process.\n> \n\nIt says exactly the opposite.\n\n> Really? I have eight cores on my production servers and many batch\n> jobs I run put all 8 cores at 90% for extended periods. Since that\n> machine is normally doing a lot of smaller cached queries, it hardly\n> even notices.\n> \n\nThe OP's machine is doing a lot of write ops, which is different.\n\n> Yes, more hard drives / better caching RAID controller.\n> \n+1\n\nBTW, nearly full file system can be another source of problems.\n\nCheers,\nMike\n\n\n", "msg_date": "Tue, 30 Jun 2009 11:22:00 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tue, Jun 30, 2009 at 12:22 PM, Mike Ivanov<[email protected]> wrote:\n> Hi Scott,\n>\n>> Well, we can't be sure OP's only got one core.\n>\n> In fact, we can, Sean posted what top -b -n 1 says. There was only one CPU\n> line.\n\nMissed that.\n\n>\n>> the number of cores, it's the IO subsystem is too slow for the load.\n>> More cores wouldn't fix that.\n>>\n>\n> While I agree on the IO, more cores would definitely help to improve ~6.5\n> load average.\n\nNo, it won't. You can have 1000 cores, and if they're all waiting on\nIO, you'll have the same load.\n\n>> My production PG server that runs ONLY pg has 222 processes on it.\n>> It's no big deal.  Unless they're all trying to get cpu time, which\n>> generally isn't the case.\n>>\n>\n> 222 / 8 cores = ridiculous 27 processes per core, while the OP has 239.\n\nBut most of those processes are asleep and doing nothing. My\nproduction machine is an RHEL 5.2 machine doing only one thing really,\nand it's got that many processes on it. It's fine.\n\n\n>> More likely just a slow IO subsystem.  Like a single drive or\n>> something.  adding drives in a RAID-1 or RAID-10 etc usually helps.\n>>\n>\n> Absolutely.\n>\n>> This is kernel buffers, not pg buffers.  It's set by the OS\n>> semi-automagically.  In this case it's 325M out of 32 Gig, so it's\n>> well under 10%, which is typical.\n>>\n>\n> You can control the FS buffers indirectly by not allowing running processes\n> to take too much memory. If you have like 40% free, there are good chances\n> the system will use that memory for buffers. If you let them eat up 90% and\n> swap out some more, there is no room for buffers and the system will have to\n> swap out something when it really needs it.\n\nClose, but it'll use that memory for cache. Large buffers are not\ntypical in linux, large kernel caches are.\n\n>> Not true.  
Linux will happily swap out seldom used processes to make\n>> room in memory for more kernel cache etc.  You can adjust this\n>> tendency by setting swappiness.\n>>\n>\n> This is fine until one of those processes wakes up. Then your FS cache is\n> dumped.\n\nYep.\n\n>> > 3G of cached swap\n>> and it's not swap that's cached, it's\n>> the kernel using extra memory to cache data to / from the hard drives.\n>\n> Oh please.. it *is*:\n> http://www.linux-tutorial.info/modules.php?name=MContent&pageid=314\n\nIf that tutorial says that, then that tutorial is wrong. I'm guessing\nwhat that tutorial is talking about, and what top is saying are two\nvery different things though.\n\n>>  It's normal, and shouldn't worry anybody.  In fact it's a good sign\n>> that you're not using way too much memory for any one process.\n>>\n>\n> It says exactly the opposite.\n\nSorry, but you are wrong here. Look up a better tutorial on what the\ncache entry for top means. It's most assuredly not about swap cache,\nit's kernel cache.\n\n>> Yes, more hard drives / better caching RAID controller.\n>>\n>\n> +1\n>\n> BTW, nearly full file system can be another source of problems.\n\nYeah, ran into that a while back, causes lots of fragmentation.\n", "msg_date": "Tue, 30 Jun 2009 12:30:21 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tue, Jun 30, 2009 at 12:22 PM, Mike Ivanov<[email protected]> wrote:\n>> > 3G of cached swap\n>> and it's not swap that's cached, it's\n>> the kernel using extra memory to cache data to / from the hard drives.\n>>\n>\n> Oh please.. it *is*:\n> http://www.linux-tutorial.info/modules.php?name=MContent&pageid=314\n\nAlso think about it, the OP has 8G of swap and 30Gig cached. How /\nwhy would you be caching 30Gigs worth of data when there's only 8G to\ncache anyway?\n", "msg_date": "Tue, 30 Jun 2009 12:31:54 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "On Tuesday 30 June 2009, Mike Ivanov <[email protected]> wrote:\n> Hi Scott,\n>\n> > Well, we can't be sure OP's only got one core.\n>\n> In fact, we can, Sean posted what top -b -n 1 says. There was only one\n> CPU line.\n>\n\nRecent versions of top on Linux (on RedHat 5 anyway) may show only one \ncombined CPU line unless you break them out with an option.\n\n> > the number of cores, it's the IO subsystem is too slow for the load.\n> > More cores wouldn't fix that.\n>\n> While I agree on the IO, more cores would definitely help to improve\n> ~6.5 load average.\n\nNo, I agree with the previous poster. His load is entirely due to IO wait. \nOnly one of those processes was trying to do anything. IO wait shows up as \nhigh load averages.\n", "msg_date": "Tue, 30 Jun 2009 11:47:20 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Scott Marlowe wrote:\n> Also think about it, the OP has 8G of swap and 30Gig cached. 
How /\n> why would you be caching 30Gigs worth of data when there's only 8G to\n> cache anyway?\n> \n\nYou're right, I have misread it again :-)\n\nCheers,\nMike\n\n\n\n", "msg_date": "Tue, 30 Jun 2009 12:02:07 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Alan Hodgson wrote:\n> On Tuesday 30 June 2009, Mike Ivanov <[email protected]> wrote:\n>> Hi Scott,\n>>\n>>> Well, we can't be sure OP's only got one core.\n>> In fact, we can, Sean posted what top -b -n 1 says. There was only one\n>> CPU line.\n>>\n> \n> Recent versions of top on Linux (on RedHat 5 anyway) may show only one \n> combined CPU line unless you break them out with an option.\n\nI have not noticed that to be the case. I ran RHEL3 from early 2004 until a\nlittle after RHEL5 came out. I now run that (updated whenever updates come\nout), and I do not recall ever setting any flag to get it to split the CPU\ninto 4 pieces.\n\nI know the flag is there, but I do not recall ever setting it.\n> \n>>> the number of cores, it's the IO subsystem is too slow for the load.\n>>> More cores wouldn't fix that.\n>> While I agree on the IO, more cores would definitely help to improve\n>> ~6.5 load average.\n> \n> No, I agree with the previous poster. His load is entirely due to IO wait. \n> Only one of those processes was trying to do anything. IO wait shows up as \n> high load averages.\n> \nIf you run xosview, you can see all that stuff broken out, in my case at\none-second intervals. It shows user, nice, system, idle, wait, hardware\ninterrupt, software interrupt.\n\nIt also shows disk read, write, and idle time.\n\nLots of other stuff too.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 14:55:01 up 12 days, 1:44, 3 users, load average: 4.34, 4.36, 4.41\n", "msg_date": "Tue, 30 Jun 2009 15:06:05 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Scott Marlowe wrote:\n> Close, but it'll use that memory for cache. Large buffers are not\n> typical in linux, large kernel caches are.\n> \nOK, we're talking about different things. You're right.\n\n> If that tutorial says that, then that tutorial is wrong. I'm guessing\n> what that tutorial is talking about, and what top is saying are two\n> very different things though.\n> \nThen it is an amazingly common misconception. I guess it first appeared \nin some book and then reproduced by zillion blogs. Essentially this is \nwhat Goolgle brings you on 'swap cache' query.\n\nThanks for clearing that out.\n\n>>> It's normal, and shouldn't worry anybody. In fact it's a good sign\n>>> that you're not using way too much memory for any one process\n>> It says exactly the opposite.\n>> \n\nThis time I agree :-)\n\nCheers,\nMike\n\n", "msg_date": "Tue, 30 Jun 2009 12:10:06 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Well, this is going to be a bit redundant but:\n\n\nOn 6/30/09 11:22 AM, \"Mike Ivanov\" <[email protected]> wrote:\n\n> Hi Scott,\n> \n>> Well, we can't be sure OP's only got one core.\n> \n> In fact, we can, Sean posted what top -b -n 1 says. There was only one\n> CPU line.\n\nI do not believe that setting means what you think it means. 
Here is the\nsame output for a machine with two quad-core cpus.\n\n$ top -b -n 1\ntop - 12:43:06 up 264 days, 1:47, 5 users, load average: 0.24, 0.25, 0.71\nTasks: 253 total, 1 running, 252 sleeping, 0 stopped, 0 zombie\nCpu(s): 5.1%us, 0.5%sy, 0.0%ni, 93.9%id, 0.5%wa, 0.0%hi, 0.1%si,\n0.0%st\nMem: 16432232k total, 13212684k used, 3219548k free, 5992k buffers\nSwap: 2040244k total, 180k used, 2040064k free, 7775732k cached\n\n From the man page:\n\nWhen you see ¹Cpu(s):¹ in the summary area, the ¹1¹ toggle is On and all\ncpu information is gathered in a single\n line. Otherwise, each cpu is displayed separately as: ¹Cpu0,\nCpu1, ...¹\n\n\n\n> \n>> the number of cores, it's the IO subsystem is too slow for the load.\n>> More cores wouldn't fix that.\n>> \n> \n> While I agree on the IO, more cores would definitely help to improve\n> ~6.5 load average.\n> \n\nLoad average is one of the more useless values to look at on a system unless\nyou are looking at a DELTA of the load average from one condition to\nanother. All alone, it doesn't say much.\n\nThe CPU was 60% idle, and ~35% in io wait. If those processes were waiting\non CPU resources to be available, the idle % would be very low. Or, the OS\nscheduler is broken.\n\n>> My production PG server that runs ONLY pg has 222 processes on it.\n>> It's no big deal. Unless they're all trying to get cpu time, which\n>> generally isn't the case.\n>> \n> 222 / 8 cores = ridiculous 27 processes per core, while the OP has 239.\n> \n\nThat's not rediculous at all. Modern OS's handle thousands of idle\nprocesses just fine.\n\n\n>> This is kernel buffers, not pg buffers. It's set by the OS\n>> semi-automagically. In this case it's 325M out of 32 Gig, so it's\n>> well under 10%, which is typical.\n>> \n> \n> You can control the FS buffers indirectly by not allowing running\n> processes to take too much memory. If you have like 40% free, there are\n> good chances the system will use that memory for buffers. If you let\n> them eat up 90% and swap out some more, there is no room for buffers and\n> the system will have to swap out something when it really needs it.\n> \n\nOr you can control the behavior with the following kenrnel params:\nvm.swappiness\nvm.dirty_ratio\nvm.dirty_background ratio\n\n\n>> Not true. Linux will happily swap out seldom used processes to make\n>> room in memory for more kernel cache etc. You can adjust this\n>> tendency by setting swappiness.\n>> \n> \n> This is fine until one of those processes wakes up. Then your FS cache\n> is dumped.\n\nActually, no. When a process wakes up only the pages that are needed are\naccessed. For most idle processes that wake up from time to time, a small\nbit of work is done, then they go back to sleep. This initial allocation\ndoes NOT come from the page cache, but from the \"buffers\" line in top. The\nos tries to keep some ammount of free buffers not allocated to processes or\npages available, so that allocation demands can be met without having to\nsynchronously decide which buffers from page cache to eject.\n\n>>> 3G of cached swap\n>> and it's not swap that's cached, it's\n>> the kernel using extra memory to cache data to / from the hard drives.\n>> \n> \n> Oh please.. it *is*:\n> http://www.linux-tutorial.info/modules.php?name=MContent&pageid=314\n> \n\nThere is no such thing as \"cached swap\". What would there be to cache? 
A\nprocess' page is either in RAM or swap, and a file is either in buffer cache\nor not.\nThat line entry is the size of the file page cache.\nRead about 'free' and compare the values to top.\n\n>> It's normal, and shouldn't worry anybody. In fact it's a good sign\n>> that you're not using way too much memory for any one process.\n>> \n> \n> It says exactly the opposite.\n\nIt says a ton of space is used caching files.\n\n> \n>> Really? I have eight cores on my production servers and many batch\n>> jobs I run put all 8 cores at 90% for extended periods. Since that\n>> machine is normally doing a lot of smaller cached queries, it hardly\n>> even notices.\n>> \n> \n> The OP's machine is doing a lot of write ops, which is different.\n> \n\nNot when it comes to CPU use percentage. The overlap with disk I/O and CPU\non linux shows up in time spent by the kernel (system time), and often\nkswapd processor time (shows up as system time). Everything else is i/o\nwait.\n\n\nThe OP has a I/O bottleneck. Suggestions other than new hardware:\n\n* Put the xlogs on a separate partition and if ext3 mount with\ndata=writeback or use ext2.\n* Use the deadline scheduler.\n\nIf queries are intermittently causing problems, it might be due to\ncheckpoints. Make sure that the kernel parameters for\ndirty_background_ratio is 5 or less, and dirty_ratio is 10 or less.\n\nSearch this group for information about tuning postgres checkpoints.\n\nIf using a hardware RAID card with a battery back-up, make sure its cache\nmode is set to write-back.\n\nA larger shared_buffers size can help if sequential scans are infrequent and\nkick out pages from the OS page cache.\nPostgres does not let sequential scans kick out index pages or pages\naccessed randomly from its buffer cache, but the OS (Linux) is more prone to\nthat.\n\nWhether larger or smaller shared_buffers will help is HIGHLY load and use\ncase dependant.\n\n", "msg_date": "Tue, 30 Jun 2009 13:08:12 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "\nOn 6/30/09 12:06 PM, \"Jean-David Beyer\" <[email protected]> wrote:\n\n> Alan Hodgson wrote:\n>> On Tuesday 30 June 2009, Mike Ivanov <[email protected]> wrote:\n>>> Hi Scott,\n>>> \n>>>> Well, we can't be sure OP's only got one core.\n>>> In fact, we can, Sean posted what top -b -n 1 says. There was only one\n>>> CPU line.\n>>> \n>> \n>> Recent versions of top on Linux (on RedHat 5 anyway) may show only one\n>> combined CPU line unless you break them out with an option.\n> \n> I have not noticed that to be the case. I ran RHEL3 from early 2004 until a\n> little after RHEL5 came out. I now run that (updated whenever updates come\n> out), and I do not recall ever setting any flag to get it to split the CPU\n> into 4 pieces.\n> \n> I know the flag is there, but I do not recall ever setting it.\n\nTop now has storable defaults so how it behaves depends on what the user has\nstored for their defaults. For example: go to interactive mode by just\ntyping top.\nNow, hit \"1\". Or hit \"c\", or try \"M\". Now, toggle the '1' flag until it\nshows one (and reports Cpu(s) not Cpu0, Cpu1, etc) and hit shift-w.\nNow your defaults are changed and it will spit out one line for all cpus\nunless you tell it not to.\n\nThe output can be highly customized and your preferences stored. 
Hit 'h'\nfor more info.\n\nAnother way to put it, is that Linux' top has mostly caught up to the\nproprietary commmand line interactive tools on Solaris and AIX that used to\nbe light-years ahead.\n\n", "msg_date": "Tue, 30 Jun 2009 13:46:40 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "\n\n\nOn 6/30/09 1:08 PM, \"Scott Carey\" <[email protected]> wrote:\n> \n> A larger shared_buffers size can help if sequential scans are infrequent and\n> kick out pages from the OS page cache.\n> Postgres does not let sequential scans kick out index pages or pages\n> accessed randomly from its buffer cache, but the OS (Linux) is more prone to\n> that.\n\nLet me qualify the above:\nPostgres 8.3+ doesn't let full page scans push out pages from its\nshared_buffers. It uses a ring buffer for full page scans and vacuums.\n\n\n> \n> Whether larger or smaller shared_buffers will help is HIGHLY load and use\n> case dependant.\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 30 Jun 2009 14:20:04 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Scott Carey wrote:\n>> 222 / 8 cores = ridiculous 27 processes per core, while the OP has 239\n> That's not rediculous at all. Modern OS's handle thousands of idle\n> processes just fine.\n>\n> \nI meant that 27 was a ridiculously small number.\n\n> Or you can control the behavior with the following kenrnel params:\n> vm.swappiness\n> vm.dirty_ratio\n> vm.dirty_background ratio\n> \nThanks for pointing that out!\n\n> Actually, no. When a process wakes up only the pages that are needed are\n> accessed. For most idle processes that wake up from time to time, a small\n> bit of work is done, then they go back to sleep. This initial allocation\n> does NOT come from the page cache, but from the \"buffers\" line in top. The\n> os tries to keep some ammount of free buffers not allocated to processes or\n> pages available, so that allocation demands can be met without having to\n> synchronously decide which buffers from page cache to eject.\n> \nWait a second, I'm trying to understand that :-)\nDid you mean that FS cache pages are first allocated from the buffer \npages or that process memory being paged out to swap is first written to \nbuffers? Could you clarify please?\n\n> If queries are intermittently causing problems, it might be due to\n> checkpoints. Make sure that the kernel parameters for\n> dirty_background_ratio is 5 or less, and dirty_ratio is 10 or less.\n> \nScott, isn't dirty_ratio supposed to be less than \ndirty_background_ratio? I've heard that system would automatically set \ndirty_ratio = dirty_background_ratio / 2 if that's not the case. Also, \nhow dirty_ratio could be less than 5 if 5 is the minimal value?\n\nRegards,\nMike\n\n", "msg_date": "Tue, 30 Jun 2009 14:39:50 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "\nOn 6/30/09 2:39 PM, \"Mike Ivanov\" <[email protected]> wrote:\n\n> Scott Carey wrote:\n>>> 222 / 8 cores = ridiculous 27 processes per core, while the OP has 239\n>> That's not rediculous at all. 
Modern OS's handle thousands of idle\n>> processes just fine.\n>> \n>> \n> I meant that 27 was a ridiculously small number.\n> \n>> Or you can control the behavior with the following kenrnel params:\n>> vm.swappiness\n>> vm.dirty_ratio\n>> vm.dirty_background ratio\n>> \n> Thanks for pointing that out!\n> \n>> Actually, no. When a process wakes up only the pages that are needed are\n>> accessed. For most idle processes that wake up from time to time, a small\n>> bit of work is done, then they go back to sleep. This initial allocation\n>> does NOT come from the page cache, but from the \"buffers\" line in top. The\n>> os tries to keep some ammount of free buffers not allocated to processes or\n>> pages available, so that allocation demands can be met without having to\n>> synchronously decide which buffers from page cache to eject.\n>> \n> Wait a second, I'm trying to understand that :-)\n> Did you mean that FS cache pages are first allocated from the buffer\n> pages or that process memory being paged out to swap is first written to\n> buffers? Could you clarify please?\n> \n\nThere are some kernel parameters that control how much RAM the OS tries to\nkeep in a state that is not allocated to page cache or processes. I've\nforgotten what these are exactly.\n\nBut the purpose is to prevent the virtual memory system from having to make\nthe decision on what memory to kick out of the page cache, or what pages to\nswap to disk, when memory is allocated. Rather, it can do this in the\nbackground most of the time. So, the first use of this is when a process\nallocates memory. Pulling a swapped page off disk probably uses this too\nbut I'm not sure. It would make sense. Pages being written to swap go\ndirectly to swap and deallocated.\nFile pages are either on disk or in the page cache. Process pages are\neither in memory or swap.\nBut when either of these is first put in memory (process allocation,\npage-in, file read), the OS can either quickly allocate to the process or\nthe page cache from the free buffers, or more slowly take from the page\ncache, or even more slowly page out a process page.\n\n>> If queries are intermittently causing problems, it might be due to\n>> checkpoints. Make sure that the kernel parameters for\n>> dirty_background_ratio is 5 or less, and dirty_ratio is 10 or less.\n>> \n> Scott, isn't dirty_ratio supposed to be less than\n> dirty_background_ratio? I've heard that system would automatically set\n> dirty_ratio = dirty_background_ratio / 2 if that's not the case. Also,\n> how dirty_ratio could be less than 5 if 5 is the minimal value?\n> \n\ndirty_ratio is the percentage of RAM that can be in the page cache and not\nyet written to disk before all writes in the system block.\ndirty_background_ratio is the percentage of RAM that can be filled with\ndirty file pages before a background thread is started by the OS to start\nflushing to disk. Flushing to disk also occurs on timed intervals or other\ntriggers.\n\nBy default, Linux 2.6.18 (RHEL5/Centos5, etc) has the former at 40 and the\nlatter at 10, which on a 32GB system means over 13GB can be in memory and\nnot yet on disk! Sometime near 2.6.22 or so the default became 10 and 5,\nrespectively. For some systems, this is still too much.\n\nI like to use the '5 second rule'. 
dirty_background_ratio should be sized\nso that it takes about 5 seconds to flush to disk in optimal conditions.\ndirty_ratio should be 2x to 5x this depending on your application's needs --\nfor a system with well tuned postgres checkpoints, smaller tends to be\nbetter to limit stalls while waiting for the checkpoint fsync to finish.\n\n> Regards,\n> Mike\n> \n> \n\n", "msg_date": "Tue, 30 Jun 2009 17:11:19 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" }, { "msg_contents": "Scott Carey wrote:\n> the OS can either quickly allocate to the process or\n> the page cache from the free buffers, or more slowly take from the page\n> cache, or even more slowly page out a process page.\n> \n\nAha, now it all makes sense.\n\n> I like to use the '5 second rule'. dirty_background_ratio should be sized\n> so that it takes about 5 seconds to flush to disk in optimal conditions.\n> dirty_ratio should be 2x to 5x this depending on your application's needs --\n> for a system with well tuned postgres checkpoints, smaller tends to be\n> better to limit stalls while waiting for the checkpoint fsync to finish.\n> \n\nThanks a lot, this is invaluable information.\n\n\nRegards,\nMike\n\n\n", "msg_date": "Tue, 30 Jun 2009 17:58:46 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random slow query" } ]
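The concrete knobs mentioned across this thread amount to a fairly small configuration change. The values below are illustrative only (none are quoted from the posts) and should be sized against the machine's actual I/O capacity, following the "5 second rule" and the checkpoint-tuning advice above:

    # postgresql.conf (8.3) -- make the stalls visible before tuning further
    log_lock_waits = on                  # log queries that block on a lock (first reply)
    log_checkpoints = on                 # show whether slowdowns line up with checkpoints
    checkpoint_completion_target = 0.9   # spread checkpoint writes over the interval

    # /etc/sysctl.conf -- cap how much dirty data the kernel may hold back
    vm.dirty_background_ratio = 5        # start background writeback early
    vm.dirty_ratio = 10                  # hard limit before writers are forced to block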
[ { "msg_contents": "Good morning.\n\n \n\nI have developed a function call that schedules patient appointments\nwithin a day based on several resource constraints. The algorithm has\nbeen mentioned on here before and I have managed to tweak it down to 6-9\nseconds from the original 27 seconds. \n\n \n\nOf course, I want it to be faster still. The function throttles one of\nmy CPUs to 100% (shown as 50% in Task Manager) and leaves the other one\nsitting pretty. Is there any way to use both CPUs?\n\n \n\nThanks,\n\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n\n \n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGood morning.\n \nI have developed a function call that schedules patient\nappointments within a day based on several resource constraints. The algorithm\nhas been mentioned on here before and I have managed to tweak it down to 6-9\nseconds from the original 27 seconds. \n \nOf course, I want it to be faster still. The function\nthrottles one of my CPUs to 100% (shown as 50% in Task Manager) and leaves the\nother one sitting pretty. Is there any way to use both CPUs?\n \nThanks,\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294", "msg_date": "Mon, 29 Jun 2009 10:26:40 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": true, "msg_subject": "Utilizing multiple cores in a function call." }, { "msg_contents": "Hartman, Matthew wrote:\n> Good morning.\n> \n> \n> \n> I have developed a function call that schedules patient appointments \n> within a day based on several resource constraints. The algorithm has \n> been mentioned on here before and I have managed to tweak it down to 6-9 \n> seconds from the original 27 seconds.\n> \nTo speed up the execution of processes, I heartily recommend the book,\n\"Writing Efficient Programs\" by Jon Louis Bentley, Prentice-Hall, 1982.\n\nThere are many important steps. The most important is usually to refine the\nalgorithm itself. I once speeded up a program that would have required\nseveral weeks on a main frame running 24/7 to 6 minutes by improving the\nbasic algorithm of the thing. Only then would it have made sense to optimize\nthe actual code.\n\nNext, you need to profile the code to see where the hot spots are. There is\nlittle point to examining code in other parts of the program.\n> \n> Of course, I want it to be faster still. The function throttles one of \n> my CPUs to 100% (shown as 50% in Task Manager) and leaves the other one \n> sitting pretty. Is there any way to use both CPUs?\n> \nYou could write your algorithm as a separate process -- a server.\nThen in you SQL program, you invoke a trivial function that just hands the\narguments off to the server. Thus, your SQL program would normally run on\none processor and the time-consuming algorithm would run on the other.\n\nIf you are not careful, this would not benefit you at all because your SQL\nprocess would wait until the server returns its answer. So you would need to\nmodify your SQL program so that it could do other things while the server\nprocess did its thing.\n\nMy guess is that you need a more efficient algorithm before you go to the\ntrouble of optimizing the execution of your current one. As far as making it\nrun on multiple processors, it depends critically on the nature of your\nalgorithm. A few can easily be modified to run on multiple processors. 
Some\ncannot run on multiple processors at all.\n> \n> \n> Thanks,\n> \n> \n> Matthew Hartman\n> Programmer/Analyst\n> Information Management, ICP\n> Kingston General Hospital\n> (613) 549-6666 x4294\n> \n> \n> \n> \n> \n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 10:40:01 up 10 days, 21:29, 3 users, load average: 4.19, 4.22, 4.19\n", "msg_date": "Mon, 29 Jun 2009 10:52:58 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "I'm pretty much at that point where I've chewed the fat off of the\nalgorithm, or at least at my personal limits. Occasionally a new idea\npops into my head and yields an improvement but it's in the order of\n100-250ms.\n\nGoogle came back with \"no sir\". It seems PostgreSQL is limited to one\nCPU per query unless I spawn a master/controller like you suggested.\nShame..\n\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jean-David\nBeyer\nSent: Monday, June 29, 2009 10:53 AM\nTo: pgsql performance\nSubject: Re: [PERFORM] Utilizing multiple cores in a function call.\n\nHartman, Matthew wrote:\n> Good morning.\n> \n> \n> \n> I have developed a function call that schedules patient appointments \n> within a day based on several resource constraints. The algorithm has \n> been mentioned on here before and I have managed to tweak it down to\n6-9 \n> seconds from the original 27 seconds.\n> \nTo speed up the execution of processes, I heartily recommend the book,\n\"Writing Efficient Programs\" by Jon Louis Bentley, Prentice-Hall, 1982.\n\nThere are many important steps. The most important is usually to refine\nthe\nalgorithm itself. I once speeded up a program that would have required\nseveral weeks on a main frame running 24/7 to 6 minutes by improving the\nbasic algorithm of the thing. Only then would it have made sense to\noptimize\nthe actual code.\n\nNext, you need to profile the code to see where the hot spots are. There\nis\nlittle point to examining code in other parts of the program.\n> \n> Of course, I want it to be faster still. The function throttles one of\n\n> my CPUs to 100% (shown as 50% in Task Manager) and leaves the other\none \n> sitting pretty. Is there any way to use both CPUs?\n> \nYou could write your algorithm as a separate process -- a server.\nThen in you SQL program, you invoke a trivial function that just hands\nthe\narguments off to the server. Thus, your SQL program would normally run\non\none processor and the time-consuming algorithm would run on the other.\n\nIf you are not careful, this would not benefit you at all because your\nSQL\nprocess would wait until the server returns its answer. So you would\nneed to\nmodify your SQL program so that it could do other things while the\nserver\nprocess did its thing.\n\nMy guess is that you need a more efficient algorithm before you go to\nthe\ntrouble of optimizing the execution of your current one. As far as\nmaking it\nrun on multiple processors, it depends critically on the nature of your\nalgorithm. 
A few can easily be modified to run on multiple processors.\nSome\ncannot run on multiple processors at all.\n> \n> \n> Thanks,\n> \n> \n> Matthew Hartman\n> Programmer/Analyst\n> Information Management, ICP\n> Kingston General Hospital\n> (613) 549-6666 x4294\n> \n> \n> \n> \n> \n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 10:40:01 up 10 days, 21:29, 3 users, load average: 4.19, 4.22,\n4.19\n\n-- \nSent via pgsql-performance mailing list\n([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 29 Jun 2009 11:01:57 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "Hartman, Matthew wrote:\n> I'm pretty much at that point where I've chewed the fat off of the\n> algorithm, or at least at my personal limits. Occasionally a new idea\n> pops into my head and yields an improvement but it's in the order of\n> 100-250ms.\n> \n> Google came back with \"no sir\". It seems PostgreSQL is limited to one\n> CPU per query unless I spawn a master/controller like you suggested.\n> Shame..\n\nAlthough I have never done it myself, you might try using PL/R to \nperform the algo in R, and make use of snow package to run parallel \ntasks -- see:\n http://cran.r-project.org/web/views/HighPerformanceComputing.html\n\nJoe\n\n", "msg_date": "Mon, 29 Jun 2009 11:20:21 -0700", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "On Mon, 29 Jun 2009, Hartman, Matthew wrote:\n\n> The function throttles one of my CPUs to 100% (shown as 50% in Task \n> Manager) and leaves the other one sitting pretty. Is there any way to \n> use both CPUs?\n\nNot easily. Potential techniques:\n\n-Rewrite the function or its time critical portion in some other language \nthat allows using two processes usefully\n\n-Write a \"worker server\" that you prompt to pick up work from a table and \nwrite its output to another that you can ask to handle part of the job. \nYou might communicate with the worker using the LISTEN/NOTIFY mechanism in \nthe database.\n\n-Some combination of these two techniques. One popular way to speed up \nthings that are running slowly is to run some part of them in a C UDF, so \nthat you could use \"select my_big_computation(x,y,z)\" and get faster \nexecution.\n\nIf you were hoping for a quick answer, no such thing. I suspect you'd get \nbetter help talking about what your function does and see if there's a \nspecific part somebody else is familiar with optimizing.\n\nFor example, I've seen >10:1 speedups just be rewriting one small portion \nof a computationally expensive mathematical function in C before, keeping \nthe rest of the logic on the database side. You don't necessarily have to \nrewrite the whole thing.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 29 Jun 2009 14:42:18 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." 
}, { "msg_contents": "On Mon, Jun 29, 2009 at 10:26 AM, Hartman,\nMatthew<[email protected]> wrote:\n> Good morning.\n>\n>\n>\n> I have developed a function call that schedules patient appointments within\n> a day based on several resource constraints. The algorithm has been\n> mentioned on here before and I have managed to tweak it down to 6-9 seconds\n> from the original 27 seconds.\n>\n>\n>\n> Of course, I want it to be faster still. The function throttles one of my\n> CPUs to 100% (shown as 50% in Task Manager) and leaves the other one sitting\n> pretty. Is there any way to use both CPUs?\n\nYour best bet at using multiple cores on a cpu bound problem is to try\nand divide up the work logically into separate pools and to attack the\nwork with multiple function calls. This is probably what the database\nwould do for you if it had 'in-query multi threading', only the\ndatabase could attack it on a much finer grained level.\n\nIn your particular case, I think the answer is to attack the problem\nin an entirely new direction, although your matrix query is one of the\ncoolest queries i've seen in a while.\n\nThe first thought that jumped out at me was to try and treat your\nnurses and stations as incrementing numbers so that if you allocate\nthree hours of nurse x's time, you increment some number by three in\nthe nurse's table. This would lay on top of a kind of a time\ncalculation system that would convert that number to actual time based\non the nurses schedule, etc. On top of _that_, you would need some\nkind of resolution system to handle canceled appointments, nurse\nno-shows, etc.\n\nThe stations would operate on a similar principle...you imagine all\nthe available hours for the station stretched to infinity on a number\nline and keep a fixed allocation point which always moves forwards,\nplus a 'number line time' -> real time converter and a freestore list\nto pick up unexpectedly freed time.\n\nmerlin\n", "msg_date": "Mon, 29 Jun 2009 17:21:32 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "On Mon, 2009-06-29 at 14:42 -0400, Greg Smith wrote:\n\n> -Write a \"worker server\" that you prompt to pick up work from a table and \n> write its output to another that you can ask to handle part of the job. \n> You might communicate with the worker using the LISTEN/NOTIFY mechanism in \n> the database.\n> \n> -Some combination of these two techniques. One popular way to speed up \n> things that are running slowly is to run some part of them in a C UDF, so \n> that you could use \"select my_big_computation(x,y,z)\" and get faster \n> execution.\n\nThe trouble here is that the backend may not like having threads\nsuddenly introduced into its execution environment.\n\nIf properly written, I don't really see why a C UDF that used pthreads\ncouldn't spawn two worker threads that _NEVER_ touched _ANY_ PostgreSQL\nAPIs, talked to the SPI, etc, and let them run while blocking the main\nthread until they complete.\n\nThen again, I know relatively little about Pg's guts, and for all I know\niniting the pthread environment could completely mess up the backend.\n\n\nPersonally I'd want to do it out-of-process, using a SECURITY DEFINER\nPL/PgSQL function owned by a role that also owned some otherwise private\nqueue and result tables for your worker server. 
As Greg Smith noted,\nLISTEN/NOTIFY would allow your worker server to avoid polling and\ninstead sleep when there's nothing in the queue, and would also let your\nwaiting clients avoid polling the result table.\n\n> For example, I've seen >10:1 speedups just be rewriting one small portion \n> of a computationally expensive mathematical function in C before, keeping \n> the rest of the logic on the database side. You don't necessarily have to \n> rewrite the whole thing.\n\nA useful dirty trick is to use Psyco in Python. It's a specializing\ncompiler that can get massive performance boosts out of Python code\nwithout any code changes, and it seems to work with PL/Python. Just:\n\ntry:\n import psyco\n psyco.full()\nexcept:\n # Enabing Pysco failed; don't care\n pass\n\nin your function should get you a pretty serious boost. This will NOT,\nhowever, allow your code to use two cores at once; you'll need threading\nor multiple processes for that.\n\n-- \nCraig Ringer\n\n", "msg_date": "Tue, 30 Jun 2009 13:54:41 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "I have tried to wrap my brain around different approaches but I'm still\nstuck with this one so far. Your approach is interesting but the problem\nis more complicated than that. Let me break it down a bit more.\n\nThe chemotherapy treatment room is divided into groupings of chairs,\ncalled pods. Pod 1 could have three chairs, pod 2 could have two, and so\nforth. Every day can have a unique number of pods, chairs, and groupings\nof chairs to pods. Furthermore, every day can have a unique number of\nnurses, and nurses are assigned to one or more pods. A single nurse\ncould be assigned to cover three pods for example. On top of that, pods\nhave a start/end time as well as nurses. Every pod and nurse can have\nunique start/end times.\n\nChemotherapy regimens have a required chair time and a required nurse\ntime. The required nurse time represents how long it takes a nurse to\nstart the treatment. To schedule an appointment, both the chair and\nnurse have to be available for the required times at the same time,\nwhile also respecting the pod/chair and pod/nurse assignments. It's more\nthan incrementing/decrementing the total available time.\n\nThanks,\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n \n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: Monday, June 29, 2009 5:22 PM\nTo: Hartman, Matthew\nCc: [email protected]\nSubject: Re: [PERFORM] Utilizing multiple cores in a function call.\n\nThe first thought that jumped out at me was to try and treat your\nnurses and stations as incrementing numbers so that if you allocate\nthree hours of nurse x's time, you increment some number by three in\nthe nurse's table. This would lay on top of a kind of a time\ncalculation system that would convert that number to actual time based\non the nurses schedule, etc. 
On top of _that_, you would need some\nkind of resolution system to handle canceled appointments, nurse\nno-shows, etc.\n\nThe stations would operate on a similar principle...you imagine all\nthe available hours for the station stretched to infinity on a number\nline and keep a fixed allocation point which always moves forwards,\nplus a 'number line time' -> real time converter and a freestore list\nto pick up unexpectedly freed time.\n\nmerlin\n\n", "msg_date": "Tue, 30 Jun 2009 08:30:24 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "On Tue, Jun 30, 2009 at 8:30 AM, Hartman,\nMatthew<[email protected]> wrote:\n> I have tried to wrap my brain around different approaches but I'm still\n> stuck with this one so far. Your approach is interesting but the problem\n> is more complicated than that. Let me break it down a bit more.\n>\n> The chemotherapy treatment room is divided into groupings of chairs,\n> called pods. Pod 1 could have three chairs, pod 2 could have two, and so\n> forth. Every day can have a unique number of pods, chairs, and groupings\n> of chairs to pods. Furthermore, every day can have a unique number of\n> nurses, and nurses are assigned to one or more pods. A single nurse\n> could be assigned to cover three pods for example. On top of that, pods\n> have a start/end time as well as nurses. Every pod and nurse can have\n> unique start/end times.\n>\n> Chemotherapy regimens have a required chair time and a required nurse\n> time. The required nurse time represents how long it takes a nurse to\n> start the treatment. To schedule an appointment, both the chair and\n> nurse have to be available for the required times at the same time,\n> while also respecting the pod/chair and pod/nurse assignments. It's more\n> than incrementing/decrementing the total available time.\n\nI take it then that the char time and the nurse time are not the same\nduration. Does the nurse time always have to be the same portion of\nthe chair time (say, at the beginning?), or is their some more\ncomplicated definition of how the nurse time overlays on top the chair\ntime during the treatment?\n\nmerlin\n", "msg_date": "Tue, 30 Jun 2009 09:19:11 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Utilizing multiple cores in a function call." }, { "msg_contents": "> I take it then that the char time and the nurse time are not the same\n> duration. Does the nurse time always have to be the same portion of\n> the chair time (say, at the beginning?), or is their some more\n> complicated definition of how the nurse time overlays on top the chair\n> time during the treatment?\n\nThe nurse time is equal to or often less than the chair time, and always\nat the beginning. They've asked to be able to specify nurse time at the\nend as well but I've stuck with \"no\" so far. :)\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n(613) 549-6666 x4294 \n\n\n", "msg_date": "Tue, 30 Jun 2009 09:29:32 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Utilizing multiple cores in a function call." } ]
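A minimal sketch of the queue-table-plus-LISTEN/NOTIFY worker pattern that Greg Smith and Craig Ringer describe in the thread above. The table, channel, and column names (schedule_jobs, schedule_results, job_ready, and so on) are invented for illustration; the only real point is that each external worker opens its own database connection, so several scheduling jobs can run on separate cores at once while the front end waits on the results table.

-- Work queue filled by the scheduling front end.
CREATE TABLE schedule_jobs (
    job_id     serial PRIMARY KEY,
    payload    text NOT NULL,              -- appointment request parameters
    claimed_by text,                       -- NULL until a worker claims it
    done       boolean NOT NULL DEFAULT false
);

CREATE TABLE schedule_results (
    job_id integer PRIMARY KEY REFERENCES schedule_jobs,
    result text NOT NULL
);

-- Wake sleeping workers whenever new work arrives.
CREATE FUNCTION notify_job_ready() RETURNS trigger AS $$
BEGIN
    NOTIFY job_ready;
    RETURN NULL;   -- return value of an AFTER statement trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER schedule_jobs_notify
    AFTER INSERT ON schedule_jobs
    FOR EACH STATEMENT EXECUTE PROCEDURE notify_job_ready();

-- Each worker session runs "LISTEN job_ready;", sleeps until notified, then
-- claims an unclaimed job, computes the schedule, writes schedule_results and
-- sets done = true.  Two or three such workers keep two or three cores busy.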
[ { "msg_contents": "\nGreg,\n\nThanks for the mental prod! Yes, the original data is more closely sorted by the timestamptz column, since they represent events coming into the collection system in real time. As for the distribution of data values, it goes without saying the timestamptz value is monotonically increasing, with roughly 1300 entries having the same timestamptz value. The other three columns' values are essentially reference data, with 400 values for the varchar, 680 for the first text column, and 60 for the second text column. The distribution is fairly even, with some small spikes but nothing significant.\n\nThe \"duh\" moment came for me when you pointed out the implicit sort order of the data. After resorting the data into the new index column order the insert performance was largely restored. I didn't monitor the process with vmstat, however - the end result is good enough for me. I believe that the index maintenance of page splitting, etc., that you describe below was exactly the culprit, and that presorting the data solved that problem. \n\nI call it my \"duh\" moment since I've presorted data for Sybase and Oracle for exactly the same reason, but forgot to apply the lesson to PostgreSQL.\n\nBTW, this is PG 8.2.1 and 8.3.7 running on SLES 10.3, although I don't think it matters.\n\nThanks for the help, Greg and Tom!\n\n--- On Sat, 6/27/09, Greg Smith <[email protected]> wrote:\n\n> From: Greg Smith <[email protected]>\n> Subject: Re: [PERFORM] Insert performance and multi-column index order\n> To: [email protected]\n> Cc: [email protected]\n> Date: Saturday, June 27, 2009, 1:08 AM\n> On Fri, 26 Jun 2009, [email protected]\n> wrote:\n> \n> > The original unique index was in the order\n> (timestamptz, varchar, text, text) and most queries against\n> it were slow.  I changed the index order to (varchar, text,\n> timestamptz, text) and queries now fly, but loading data\n> (via copy from stdin) in the table is 2-4 times slower.\n> \n> Is the input data closer to being sorted by the timestamptz\n> field than the varchar field?  What you might be seeing\n> is that the working set of index pages needed to keep\n> building the varchar index are bigger or have more of a\n> random access component to them as they spill in and out of\n> the buffer cache.  Usually you can get a better idea\n> what the difference is by comparing the output from vmstat\n> while the two are loading.  More random read/write\n> requests in the mix will increase the waiting for I/O\n> percentage while not increasing the total amount\n> read/written per second.\n> \n> --\n> * Greg Smith [email protected]\n> http://www.gregsmith.com Baltimore, MD\n\n\n \n", "msg_date": "Tue, 30 Jun 2009 05:31:43 -0700 (PDT)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance and multi-column index order" } ]
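For anyone wanting to reproduce the presorting trick Bob describes without re-sorting the input files themselves, one option is to COPY into an unindexed staging table and then insert in index order. The table and column names below are invented; the only real point is that the ORDER BY matches the column order of the reordered unique index.

-- Hypothetical target table whose unique index is declared as
-- (varchar_col, text_col1, ts_col, text_col2).
CREATE TEMP TABLE load_stage (LIKE event_log);  -- LIKE copies columns, not indexes

-- COPY load_stage FROM STDIN;  ... bulk data ...

INSERT INTO event_log
SELECT *
FROM load_stage
ORDER BY varchar_col, text_col1, ts_col, text_col2;
-- Rows reach the index in key order, so index page splits stay localized
-- instead of touching random pages throughout the index.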
[ { "msg_contents": "HI Gurus ,\n\ni have this query (i think is a simple one)\nit takes me 1,7s to run it, it's not to long, but considering it takes 1,7s\nto return 71lines makes me wonder... is there anyother way to do this, on a\nfirst look??\n\nany sugestion would be largely appreciated.\n\nSELECT distinct on (bien.uid) bien.uid , bien.date_creation ,\nbien.date_modification , bien.nom , bien.numero_voie , bien.mer ,\nbien.proximite , bien.nom_voie , bien.type_voie , bien.lieudit ,\nbien.arrondissement , bien.montagne , bien.complement_adresse , bien.xy_geo\n, bien.ref_type_avancement , bien.ref_agence , bien.acces_handicape ,\nbien.surface_totale , bien.ref_type_transaction , bien.reference_bien ,\n bien.ref_type_bien , bien.bien_exception ,\nbien.video_online , bien.geom , habitation.nombre_de_chambres,\nhabitation.nombre_de_wc ,\n prix.montant , ville.nom ,ville.abreviation ,\nville.code_insee , ville.code_postal ,\n freguesia_ville.code_insee , freguesia_ville.code_postal\n, freguesia_ville.ref_freguesia , freguesia_ville.ref_ville ,\n freguesia.nom , freguesia.numero , departement.nom ,\ndepartement.numero , region.nom ,region.numero , zone.zone_public ,\ntype_transaction.nom, mandat.numero_mandat_pt\nFROM bien\nLEFT outer JOIN prix ON prix.ref_bien = bien.uid AND prix.ref_type_prix in\n(2,9) and prix.montant !=0 LEFT outer JOIN habitation on habitation.uid =\nbien.uid\nLEFT outer JOIN ville ON ville.uid = bien.ref_ville LEFT outer JOIN\nfreguesia_ville ON freguesia_ville.ref_ville =ville.uid\nLEFT outer JOIN freguesia ON freguesia.uid = freguesia_ville.ref_freguesia\nLEFT outer JOIN departement ON departement.uid =ville.ref_departement LEFT\nouter JOIN region ON region.uid = departement.ref_region\nLEFT outer JOIN zone ON zone.ref_bien = bien.uid JOIN imagebien ON\nimagebien.ref_bien = bien.uid left outer join mandat on\nmandat.ref_bien=bien.uid\nLEFT outer JOIN type_transaction ON type_transaction.uid =\nbien.ref_type_transaction\nLEFT OUTER JOIN agence on agence.uid = bien.ref_agence\nWHERE imagebien.uid IS NOT NULL AND bien.statut = 0 and\nbien.visible_internet = 1 and bien.ref_agence = XXXXXXX\n\n\n\n\nthanks.\n\nRC\n\nHI Gurus ,i have this query (i think is a simple one)it takes me 1,7s to run it, it's not to long, but considering it takes 1,7s to return 71lines makes me wonder... 
is there anyother way to do this, on a first look??\nany sugestion would be largely appreciated.SELECT distinct on (bien.uid) bien.uid , bien.date_creation , bien.date_modification , bien.nom ,  bien.numero_voie , bien.mer , bien.proximite ,  bien.nom_voie , bien.type_voie , bien.lieudit ,  bien.arrondissement , bien.montagne , bien.complement_adresse , bien.xy_geo , bien.ref_type_avancement ,   bien.ref_agence , bien.acces_handicape , bien.surface_totale , bien.ref_type_transaction ,  bien.reference_bien ,\n                    bien.ref_type_bien ,  bien.bien_exception , bien.video_online , bien.geom , habitation.nombre_de_chambres, habitation.nombre_de_wc ,                    prix.montant , ville.nom ,ville.abreviation , ville.code_insee , ville.code_postal ,\n                    freguesia_ville.code_insee , freguesia_ville.code_postal , freguesia_ville.ref_freguesia , freguesia_ville.ref_ville ,                    freguesia.nom , freguesia.numero , departement.nom , departement.numero , region.nom ,region.numero , zone.zone_public , type_transaction.nom, mandat.numero_mandat_pt\nFROM bienLEFT outer JOIN prix ON prix.ref_bien = bien.uid  AND prix.ref_type_prix in (2,9) and prix.montant !=0  LEFT outer JOIN habitation on habitation.uid = bien.uidLEFT outer JOIN ville ON ville.uid = bien.ref_ville LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\nLEFT outer JOIN freguesia ON freguesia.uid = freguesia_ville.ref_freguesiaLEFT outer JOIN departement ON departement.uid =ville.ref_departement LEFT outer JOIN region ON region.uid = departement.ref_regionLEFT outer JOIN zone ON zone.ref_bien = bien.uid JOIN imagebien ON imagebien.ref_bien = bien.uid left outer join mandat on mandat.ref_bien=bien.uid\nLEFT outer JOIN type_transaction ON type_transaction.uid = bien.ref_type_transactionLEFT OUTER JOIN agence on agence.uid = bien.ref_agenceWHERE imagebien.uid IS NOT NULL AND bien.statut = 0 and bien.visible_internet = 1 and bien.ref_agence = XXXXXXX\nthanks.RC", "msg_date": "Wed, 1 Jul 2009 10:40:02 +0100", "msg_from": "Rui Carvalho <[email protected]>", "msg_from_op": true, "msg_subject": "- Slow Query" }, { "msg_contents": "Hi Rui,\n> i have this query (i think is a simple one)\n\nCould you EXPLAIN ANALYZE the query and show the results please?\n\nThanks,\nMike\n\n\n", "msg_date": "Wed, 01 Jul 2009 09:39:05 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "Rui Carvalho wrote:\n> SELECT distinct on (bien.uid) bien.uid , bien.date_creation , \n> bien.date_modification , bien.nom , bien.numero_voie , bien.mer , \n> bien.proximite , bien.nom_voie , bien.type_voie , bien.lieudit , \n> bien.arrondissement , bien.montagne , bien.complement_adresse , \n> bien.xy_geo , bien.ref_type_avancement , bien.ref_agence , \n> bien.acces_handicape , bien.surface_totale , bien.ref_type_transaction \n> , bien.reference_bien ,\n> bien.ref_type_bien , bien.bien_exception , \n> bien.video_online , bien.geom , habitation.nombre_de_chambres, \n> habitation.nombre_de_wc ,\n> prix.montant , ville.nom ,ville.abreviation , \n> ville.code_insee , ville.code_postal ,\n> freguesia_ville.code_insee , \n> freguesia_ville.code_postal , freguesia_ville.ref_freguesia , \n> freguesia_ville.ref_ville ,\n> freguesia.nom , freguesia.numero , departement.nom \n> , departement.numero , region.nom ,region.numero , zone.zone_public , \n> type_transaction.nom, mandat.numero_mandat_pt\n> FROM bien\n> LEFT outer JOIN prix ON prix.ref_bien = 
bien.uid AND \n> prix.ref_type_prix in (2,9) and prix.montant !=0 LEFT outer JOIN \n> habitation on habitation.uid = bien.uid\n> LEFT outer JOIN ville ON ville.uid = bien.ref_ville LEFT outer JOIN \n> freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n> LEFT outer JOIN freguesia ON freguesia.uid = freguesia_ville.ref_freguesia\n> LEFT outer JOIN departement ON departement.uid =ville.ref_departement \n> LEFT outer JOIN region ON region.uid = departement.ref_region\n> LEFT outer JOIN zone ON zone.ref_bien = bien.uid JOIN imagebien ON \n> imagebien.ref_bien = bien.uid left outer join mandat on \n> mandat.ref_bien=bien.uid\n> LEFT outer JOIN type_transaction ON type_transaction.uid = \n> bien.ref_type_transaction\n> LEFT OUTER JOIN agence on agence.uid = bien.ref_agence\n> WHERE imagebien.uid IS NOT NULL AND bien.statut = 0 and \n> bien.visible_internet = 1 and bien.ref_agence = XXXXXXX\n>\n\nYou need to run explain analyze on the query, and post the results \nThis will tell us where the time is getting eaten up and other problems \nthat might be in the query. \nAlso need to know the version of Postgresql???\n", "msg_date": "Wed, 01 Jul 2009 12:41:15 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": " > Merge Join (cost=111885.70..319492.88 rows=13016048 width=620)\n\nThe outermost merge join has to go through 13 million rows. If you \nremove \"distinct on (bien.uid)\", you'll see that.\n\n > LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n > LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n\nThis is not enough. You have to add this condition as well:\n\nAND bien.ref_ville = freguesia_ville.ref_ville\n\nIn other words, when you link three tables by a common field, all three \nrelationships should be explicitly expressed, otherwise you'll have this \ntype of explosive row multiplication.\n\nAlthough I don't quite understand the purpose of the query, I don't \nthink you need all those OUTER joins.\n\nRegards,\nMike\n\n", "msg_date": "Wed, 01 Jul 2009 10:12:23 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "hum thanks a lot for the quick answer,\n\nif is not abuse of your patience\n\nwhat is the best alternative to the LEFT OUTER JOINS?\n\n\nRC\n\nOn Wed, Jul 1, 2009 at 6:12 PM, Mike Ivanov <[email protected]> wrote:\n\n> > Merge Join (cost=111885.70..319492.88 rows=13016048 width=620)\n>\n> The outermost merge join has to go through 13 million rows. If you remove\n> \"distinct on (bien.uid)\", you'll see that.\n>\n> > LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n> > LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n>\n> This is not enough. 
You have to add this condition as well:\n>\n> AND bien.ref_ville = freguesia_ville.ref_ville\n>\n> In other words, when you link three tables by a common field, all three\n> relationships should be explicitly expressed, otherwise you'll have this\n> type of explosive row multiplication.\n>\n> Although I don't quite understand the purpose of the query, I don't think\n> you need all those OUTER joins.\n>\n> Regards,\n> Mike\n>\n>\n\nhum thanks a lot for the quick answer,if is not abuse of your patiencewhat is the best alternative to the LEFT OUTER JOINS?RCOn Wed, Jul 1, 2009 at 6:12 PM, Mike Ivanov <[email protected]> wrote:\n>  Merge Join (cost=111885.70..319492.88 rows=13016048 width=620)\n\nThe outermost merge join has to go through 13 million rows. If you remove \"distinct on (bien.uid)\", you'll see that.\n\n> LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n> LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n\nThis is not enough. You have to add this condition as well:\n\nAND bien.ref_ville = freguesia_ville.ref_ville\n\nIn other words, when you link three tables by a common field, all three relationships should be explicitly expressed, otherwise you'll have this type of explosive row multiplication.\n\nAlthough I don't quite understand the purpose of the query, I don't think you need all those OUTER joins.\n\nRegards,\nMike", "msg_date": "Wed, 1 Jul 2009 18:37:30 +0100", "msg_from": "Rui Carvalho <[email protected]>", "msg_from_op": true, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "On Wed, Jul 1, 2009 at 11:37 AM, Rui Carvalho<[email protected]> wrote:\n> hum thanks a lot for the quick answer,\n>\n> if is not abuse of your patience\n>\n> what is the best alternative to the LEFT OUTER JOINS?\n\nHard to say. Generally, when you really do need a left, right, or\nfull outer join, you need it, and there's not a lot of alternatives.\nSometimes putting a where clause portion into the on clause helps.\nlike:\n\nselect * from a left join b on (a.id=b.id) where a.somefield=2\n\nmight run faster with\n\nselect * from a left join b on (a.id=bid. and a.somefield=2);\n\nbut it's hard to say. I'd definitely post it to the list and see who\nknows what.\n", "msg_date": "Wed, 1 Jul 2009 11:42:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> Sometimes putting a where clause portion into the on clause helps.\n> like:\n> select * from a left join b on (a.id=b.id) where a.somefield=2\n> might run faster with\n> select * from a left join b on (a.id=bid. and a.somefield=2);\n> but it's hard to say.\n\nUh, those are not the same query ... they will give different results\nfor rows with a.somefield different from 2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Jul 2009 13:52:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query " }, { "msg_contents": "Rui Carvalho wrote:\n> hum thanks a lot for the quick answer,\n>\n> if is not abuse of your patience\n>\n> what is the best alternative to the LEFT OUTER JOINS?\nI meant I wasn't sure whether you really meant *outer* joins. 
Too many \nof them looked kinda suspicious :-)\n\nIf you *do* need them, then there is no alternative, as Scott said.\n\nMike\n\n", "msg_date": "Wed, 01 Jul 2009 10:53:21 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "On Wed, Jul 1, 2009 at 11:52 AM, Tom Lane<[email protected]> wrote:\n> Scott Marlowe <[email protected]> writes:\n>> Sometimes putting a where clause portion into the on clause helps.\n>> like:\n>> select * from a left join b on (a.id=b.id) where a.somefield=2\n>> might run faster with\n>> select * from a left join b on (a.id=bid. and a.somefield=2);\n>> but it's hard to say.\n>\n> Uh, those are not the same query ... they will give different results\n> for rows with a.somefield different from 2.\n\nHow so? Neither should return any rows with a.somefield <> 2. Or are\nyou talking where a.somefield is null?\n", "msg_date": "Wed, 1 Jul 2009 12:00:06 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Wed, Jul 1, 2009 at 11:52 AM, Tom Lane<[email protected]> wrote:\n>> Scott Marlowe <[email protected]> writes:\n>>> Sometimes putting a where clause portion into the on clause helps.\n>>> like:\n>>> select * from a left join b on (a.id=b.id) where a.somefield=2\n>>> might run faster with\n>>> select * from a left join b on (a.id=bid. and a.somefield=2);\n>>> but it's hard to say.\n>> \n>> Uh, those are not the same query ... they will give different results\n>> for rows with a.somefield different from 2.\n\n> How so? Neither should return any rows with a.somefield <> 2.\n\nWrong. The second will return rows with somefield <> 2, null-extended\n(whether or not there is any match on id).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Jul 2009 14:07:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query " }, { "msg_contents": "2009/7/1 Mike Ivanov <[email protected]>\n\n>\n>\n> > LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n> > LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n>\n> This is not enough. You have to add this condition as well:\n>\n> AND bien.ref_ville = freguesia_ville.ref_ville\n>\n> In other words, when you link three tables by a common field, all three\n> relationships should be explicitly expressed, otherwise you'll have this\n> type of explosive row multiplication.\n>\n\nWhy so? Is not changing \"freguesia_ville.ref_ville =ville.uid\" to\n\"freguesia_ville.ref_ville =bien.uid\" enough (to prevent cases when\nville.uid is null as result of join)?\n\n2009/7/1 Mike Ivanov <[email protected]>\n\n\n> LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n> LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n\nThis is not enough. You have to add this condition as well:\n\nAND bien.ref_ville = freguesia_ville.ref_ville\n\nIn other words, when you link three tables by a common field, all three relationships should be explicitly expressed, otherwise you'll have this type of explosive row multiplication.\nWhy so? 
Is not changing \"freguesia_ville.ref_ville =ville.uid\" to \"freguesia_ville.ref_ville =bien.uid\" enough (to prevent cases when ville.uid is null as result of join)?", "msg_date": "Fri, 3 Jul 2009 14:22:35 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" }, { "msg_contents": "Sorry, it was an error in previous letter.\n\n3 липня 2009 р. 14:22 Віталій Тимчишин <[email protected]> написав:\n\n>\n>\n> 2009/7/1 Mike Ivanov <[email protected]>\n>\n>>\n>>\n>> > LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n>> > LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n>>\n>> This is not enough. You have to add this condition as well:\n>>\n>> AND bien.ref_ville = freguesia_ville.ref_ville\n>>\n>> In other words, when you link three tables by a common field, all three\n>> relationships should be explicitly expressed, otherwise you'll have this\n>> type of explosive row multiplication.\n>>\n>\n> Why so? Is not changing \"freguesia_ville.ref_ville =ville.uid\" to\n> \"freguesia_ville.ref_ville =bien.ref_ville\" enough (to prevent cases when\n> ville.uid is null as result of join)?\n>\n>\n>\n\nSorry, it was an error in previous letter.3 липня 2009 р. 14:22 Віталій Тимчишин <[email protected]> написав:\n2009/7/1 Mike Ivanov <[email protected]>\n\n\n> LEFT outer JOIN ville ON ville.uid = bien.ref_ville\n> LEFT outer JOIN freguesia_ville ON freguesia_ville.ref_ville =ville.uid\n\nThis is not enough. You have to add this condition as well:\n\nAND bien.ref_ville = freguesia_ville.ref_ville\n\nIn other words, when you link three tables by a common field, all three relationships should be explicitly expressed, otherwise you'll have this type of explosive row multiplication.\nWhy so? Is not changing \"freguesia_ville.ref_ville =ville.uid\" to \"freguesia_ville.ref_ville =bien.ref_ville\" enough (to prevent cases when ville.uid is null as result of join)?", "msg_date": "Fri, 3 Jul 2009 14:29:28 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: - Slow Query" } ]
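A tiny self-contained demonstration of the difference Tom Lane points out above: with a LEFT JOIN, a filter in the WHERE clause removes rows, while the same filter in the ON clause only controls which rows get a join partner. The tables here are invented purely to show the behaviour.

CREATE TEMP TABLE a (id int, somefield int);
CREATE TEMP TABLE b (id int);
INSERT INTO a VALUES (1, 2), (2, 5);
INSERT INTO b VALUES (1), (2);

-- Filter in WHERE: rows of "a" with somefield <> 2 are discarded.
SELECT * FROM a LEFT JOIN b ON (a.id = b.id) WHERE a.somefield = 2;
--  id | somefield | id
--   1 |         2 |  1

-- Filter in ON: every row of "a" survives; non-matching rows are null-extended.
SELECT * FROM a LEFT JOIN b ON (a.id = b.id AND a.somefield = 2);
--  id | somefield | id
--   1 |         2 |  1
--   2 |         5 |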
[ { "msg_contents": "\n8.4 from CVS HEAD:\nEXPLAIN ANALYZE select * from (select n, 1 as r from generate_series(1, 100000) as n union all select n, 2 from generate_series(1, 100000) as n) as x where r = 3;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..30.00 rows=10 width=36) (actual time=90.723..90.723 rows=0 loops=1)\n -> Append (cost=0.00..30.00 rows=10 width=36) (actual time=90.720..90.720 rows=0 loops=1)\n -> Function Scan on generate_series n (cost=0.00..15.00 rows=5 width=36) (actual time=45.191..45.191 rows=0 loops=1)\n Filter: (1 = 3)\n -> Function Scan on generate_series n (cost=0.00..15.00 rows=5 width=36) (actual time=45.522..45.522 rows=0 loops=1)\n Filter: (2 = 3)\n Total runtime: 118.709 ms\n(7 rows)\n\n8.3.7:\nEXPLAIN ANALYZE select * from (select n, 1 as r from generate_series(1, 100000) as n union all select n, 2 from generate_series(1, 100000) as n) as x where r = 3;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Result (cost=0.00..25.02 rows=2 width=8) (actual time=0.005..0.005 rows=0 loops=1)\n -> Append (cost=0.00..25.02 rows=2 width=8) (actual time=0.004..0.004 rows=0 loops=1)\n -> Result (cost=0.00..12.50 rows=1 width=4) (actual time=0.001..0.001 rows=0 loops=1)\n One-Time Filter: false\n -> Function Scan on generate_series n (cost=0.00..12.50 rows=1 width=4) (never executed)\n -> Result (cost=0.00..12.50 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1)\n One-Time Filter: false\n -> Function Scan on generate_series n (cost=0.00..12.50 rows=1 width=4) (never executed)\n Total runtime: 0.053 ms\n(9 rows)\n\nIs it right ?\n\n-- \nSergey Burladyan\n", "msg_date": "Thu, 02 Jul 2009 04:08:14 +0400", "msg_from": "Sergey Burladyan <[email protected]>", "msg_from_op": true, "msg_subject": "regression ? 
8.4 do not apply One-Time Filter to subquery" }, { "msg_contents": "On Wed, Jul 1, 2009 at 8:08 PM, Sergey Burladyan<[email protected]> wrote:\n>\n> 8.4 from CVS HEAD:\n> EXPLAIN ANALYZE select * from (select n, 1 as r from generate_series(1, 100000) as n union all select n, 2 from generate_series(1, 100000) as n) as x where r = 3;\n>                                                           QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n>  Result  (cost=0.00..30.00 rows=10 width=36) (actual time=90.723..90.723 rows=0 loops=1)\n>   ->  Append  (cost=0.00..30.00 rows=10 width=36) (actual time=90.720..90.720 rows=0 loops=1)\n>         ->  Function Scan on generate_series n  (cost=0.00..15.00 rows=5 width=36) (actual time=45.191..45.191 rows=0 loops=1)\n>               Filter: (1 = 3)\n>         ->  Function Scan on generate_series n  (cost=0.00..15.00 rows=5 width=36) (actual time=45.522..45.522 rows=0 loops=1)\n>               Filter: (2 = 3)\n>  Total runtime: 118.709 ms\n> (7 rows)\n>\n> 8.3.7:\n> EXPLAIN ANALYZE select * from (select n, 1 as r from generate_series(1, 100000) as n union all select n, 2 from generate_series(1, 100000) as n) as x where r = 3;\n>                                                QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------\n>  Result  (cost=0.00..25.02 rows=2 width=8) (actual time=0.005..0.005 rows=0 loops=1)\n>   ->  Append  (cost=0.00..25.02 rows=2 width=8) (actual time=0.004..0.004 rows=0 loops=1)\n>         ->  Result  (cost=0.00..12.50 rows=1 width=4) (actual time=0.001..0.001 rows=0 loops=1)\n>               One-Time Filter: false\n>               ->  Function Scan on generate_series n  (cost=0.00..12.50 rows=1 width=4) (never executed)\n>         ->  Result  (cost=0.00..12.50 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1)\n>               One-Time Filter: false\n>               ->  Function Scan on generate_series n  (cost=0.00..12.50 rows=1 width=4) (never executed)\n>  Total runtime: 0.053 ms\n> (9 rows)\n>\n> Is it right ?\n\nThis might be related to this fix by Tom.\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\n...Robert\n", "msg_date": "Wed, 22 Jul 2009 14:19:52 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: regression ? 8.4 do not apply One-Time Filter to\n\tsubquery" } ]
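Where this plan difference tends to matter in practice is hand-partitioned UNION ALL views, where the constant column is supposed to let the planner skip whole branches. A reduced illustration (all names invented) that can be used to check which behaviour a given build exhibits:

CREATE TEMP TABLE sales_2008 (id int, amount numeric);
CREATE TEMP TABLE sales_2009 (id int, amount numeric);

CREATE TEMP VIEW all_sales AS
    SELECT 2008 AS yr, * FROM sales_2008
    UNION ALL
    SELECT 2009 AS yr, * FROM sales_2009;

-- With the constant folded away (the 8.3 plan shown above), the sales_2008
-- branch appears as "One-Time Filter: false" and is never executed; without
-- that, both tables are scanned and filtered row by row.
EXPLAIN SELECT * FROM all_sales WHERE yr = 2009;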
[ { "msg_contents": "Hi,\n\n As far as I recall postgres does not have built-in support for \"insert or\nreplace\" feature.\n But there is a lot of ways to obtain the same result.\n The problem is that performance of these techniques is quite bad.\n It is about two times slower than simple insert.\n\n I tried the following ways:\n1) Add the following rule on insert:\nCREATE RULE replace_dummy AS\n ON INSERT TO dummy\n WHERE\n EXISTS(SELECT 1 FROM dummy WHERE key = NEW.key)\n DO INSTEAD\n (UPDATE dummy SET value = NEW.value);\n\n2) Use the function:\nCREATE FUNCTION merge_dummy(ikey int, ivalue text) RETURNS VOID AS\n$$\nBEGIN\n UPDATE dummy SET value = ivalue WHERE key = ikey;\n IF found THEN\n RETURN;\n END IF;\n INSERT INTO dummy VALUES (ikey, ivalue);\n RETURN;\nEND;\n$$\nLANGUAGE plpgsql;\n3) Last the most effective in a short period, but seems produces a lot of\nwork for vacuum.\n Add extra column, (time int) into table. I can guarantee that next\ninsert has time greater than previous.\n And use the following rule:\ncreate rule dummy_insert as on insert to dummy do also\n delete from dummy\n where key == NEW.key and time != NEW.time;\n\n Please comment these ways and propose effective ways to simulate \"insert\nor replace\" behavior.\n Also in may case I'm making a lot of inserts in a batch.\n\n Note: insert or replace I meant.\n Suggest we have:\n dummy with columns: key int, value text.\n Filled with:\n insert into dummy values (1, \"one\"), (2, \"two\"), (3, \"three\")\n When user tries to \"insert or replace\" pair into this table then in should\nbe inserted if there is no row with the same key.\n Otherwise value of appropriate row is updated.\n\nBest Regards,\n Sergei\n\nHi,  As far as I recall postgres does not have built-in support for \"insert or replace\" feature.  But there is a lot of ways to obtain the same result.  The problem is that performance of these techniques is quite bad.\n  It is about two times slower than simple insert.  I tried the following ways:1) Add the following rule on insert:CREATE RULE replace_dummy AS  ON INSERT TO dummy  WHERE     EXISTS(SELECT 1 FROM dummy WHERE key = NEW.key) \n  DO INSTEAD      (UPDATE dummy SET value = NEW.value);2) Use the function:CREATE FUNCTION merge_dummy(ikey int, ivalue text) RETURNS VOID AS$$BEGIN    UPDATE dummy SET value = ivalue WHERE key = ikey;\n    IF found THEN        RETURN;    END IF;    INSERT INTO dummy VALUES (ikey, ivalue);    RETURN;END;$$LANGUAGE plpgsql;3) Last the most effective in a short period, but seems produces a lot of work for vacuum.\n    Add extra column, (time int) into table. I can guarantee that next insert has time greater than previous.    And use the following rule:create rule dummy_insert as on insert to dummy do also    delete from dummy\n        where key == NEW.key and time != NEW.time;  Please comment these ways and propose effective ways to simulate \"insert or replace\" behavior.  Also in may case I'm making a lot of inserts in a batch.\n  Note: insert or replace I meant.  Suggest we have:  dummy with columns: key int, value text.  Filled with:  insert into dummy values (1, \"one\"), (2, \"two\"), (3, \"three\")\n  When user tries to \"insert or replace\" pair into this table then in should be inserted if there is no row with the same key.  
Otherwise value of appropriate row is updated.Best Regards,  Sergei", "msg_date": "Fri, 3 Jul 2009 15:06:13 +0400", "msg_from": "Sergei Politov <[email protected]>", "msg_from_op": true, "msg_subject": "Most effective insert or replace" }, { "msg_contents": "On Fri, 3 Jul 2009, Sergei Politov wrote:\n>   As far as I recall postgres does not have built-in support for \"insert or replace\" feature.\n\n>   Please comment these ways and propose effective ways to simulate \"insert or replace\" behavior.\n>   Also in may case I'm making a lot of inserts in a batch.\n\nA few years ago I researched this, and came up with the following method \nas seeming the fastest:\n\nBEGIN;\nDELETE FROM table WHERE id IN (big long list);\nCOPY table FROM STDIN BINARY;\nCOMMIT;\n\nHowever, our circumstances may not be the same as yours for the following \nreasons:\n\n1. We are updating whole rows indexed by primary key, not just a single\n field in each row.\n2. We are able to use the COPY command - indeed we wrote a fair amount of\n Java to enable batching, background writing, and COPY support. See\n http://www.flymine.org/api/index.html?org/intermine/sql/writebatch/Batch.html\n and http://www.intermine.org/\n3. HOT has been invented since then, and it won't play well with this\n method.\n\nMatthew\n\n-- \n Trying to write a program that can't be written is... well, it can be an\n enormous amount of fun! -- Computer Science Lecturer", "msg_date": "Fri, 3 Jul 2009 12:20:50 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Most effective insert or replace" } ]
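One caveat on option 2 above: if two sessions can try to insert the same key concurrently, the plain update-then-insert function can still raise a duplicate-key error. A common variant, essentially the retry-loop example from the PostgreSQL documentation adapted to this dummy table, assumes dummy.key carries a unique or primary key constraint:

CREATE OR REPLACE FUNCTION merge_dummy(ikey int, ivalue text) RETURNS void AS
$$
BEGIN
    LOOP
        UPDATE dummy SET value = ivalue WHERE key = ikey;
        IF found THEN
            RETURN;
        END IF;
        BEGIN
            INSERT INTO dummy(key, value) VALUES (ikey, ivalue);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- another session inserted the same key first; loop and update it
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;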
[ { "msg_contents": "\nI use poker software (HoldemManager) to keep track of the statistics (and\nshow nice graphs) of millions of poker hand histories.\nThis software (also PokerTracker 3) imports all the poker hands in\nPostgreSQL. The software runs on Windows) only.\nAll of its users have NORMAL PCs. From single-core laptops, to a quadcore\ndesktop at best.\n\nQuestions:\n\n-1 [quote] \"POSTGRESQL uses a multi-process model. Because of this, all\nmulti-cpu operating systems can spread multiple database connections among\nthe available CPUs. \nHowever, if only a single database connection is active, it can only use one\nCPU. POSTGRESQL does not use multi-threading to allow a single process to\nuse multiple CPUs.\"[/quote]\n \nI can see two databases in my pgAdmin: postgres and HoldemManager. All the\npoker data (about 30 GB of data) is in the HoldemManager database.\nDoes the quote above (if true?) means, having a 2 Ghz single core or a Xeon\n2x quadcore (8x 2 Ghz cores) will make no real difference for my\nperformance? \nAnd the real performance increase is only for professional servers running\nmultiple databases? Will I greatly benefit from having quad instead of a\nsingle-core system?\n\n-2 In the recent 8.3 vs 8.4 benchmarks, 8.4. was much faster than 8.3\nrunning on a 16 and 32 core server (with 64GB RAM).\nWith 8 cores, they were about the same speed. Does this mean on a normal\nsingle core computer, there will be NO NOTICABLE performance increase in 8.3\nvs 8.4 and even 8.2?\n\n-3 [quote] \"With PostgreSQL, you could easily have more than 1GB per backend\n(if necessary) without running out of memory, which significantly pushes\naway the point when you need to go to 64-bit.\nIn some cases it may actually be better to run a 32-bit build of PostgreSQL\nto reduce memory usage. In a 64-bit server, every pointer and every integer\nwill take twice as much space as in a 32bit server. That overhead can be\nsignificant, and is most likely unnecessary.\" [/quote] \n\nI have no idea what the maximum amount of RAM is, my database uses. But what\nexactly \"will take twice as much space\"?\nDoes this mean a simple database uses double the amount of RAM on a 64 bit\nsystem? And it's probably better for my 30 GB database to\nrun a 32-bit build of PostgreSQL to reduce memory usage?\n\n-4 One a scale from 1 to 10, how significant are the following on\nperformance increase: \n-[ ] Getting a faster harddisk (RAID or a SSD)\n-[ ] Getting a faster CPU \n-[ ] Upgrading PostgreSQL (8.2 and 8.3) to 8.4\n-[ ] Tweaking PostgreSQL (increasing # shared_buffers, wal_buffers,\neffective_cache_size, etc.)\n-[10!] Something else? \n-[ ] Does NOT effect me, but I was wondering what a switch from Windows to\nLINUX/Solaris does for professional server users in terms of performance.\n\n\n-5 The IO operations/s performance of your harddisk vs read/write speeds vs\naccess time? What is more important?\nWith 4 regular harddisks in RAID0 you get great read/write speeds, but the\nSSDs excel in IO/s and a 0.1ms access time.\nWhat is the most usefull for which situations?\n\n\n-6 The 8.4.0-1 one-click installer automatically set the encoding to UTF8.\nWith the other installers, I was able to \nchange the encoding to SQL_ASCII during the installation process. How do I\nsolve this after I've installed 8.4.0-1? 
\n(I was unable to delete the postgres database, so I couldn't create a new\none with the right encoding in 8.4.0-1)\n-- \nView this message in context: http://www.nabble.com/Six-PostgreSQL-questions-from-a-pokerplayer-tp24337072p24337072.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Sat, 4 Jul 2009 11:51:48 -0700 (PDT)", "msg_from": "Patvs <[email protected]>", "msg_from_op": true, "msg_subject": "Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Sat, 2009-07-04 at 11:51 -0700, Patvs wrote:\n\n> I can see two databases in my pgAdmin: postgres and HoldemManager. All the\n> poker data (about 30 GB of data) is in the HoldemManager database.\n> Does the quote above (if true?) means, having a 2 Ghz single core or a Xeon\n> 2x quadcore (8x 2 Ghz cores) will make no real difference for my\n> performance? \n\nWhat matters isn't the number of databases, but the number of\nconnections. Any given connection can use at most one full core.\n\nIf you have only one actively working connection you will still gain a\nbit of performance from having a second core that can do other misc work\nfor the OS, I/O management and general housekeeping so that the first\ncore can be fully dedicated to the active pg backend. More than that\nprobably won't gain you anything.\n\nIf you want to improve performance, first learn about where your code is\nbottlenecked. Is it even CPU-limited? Often databases are really limited\nby disk I/O performance rather than CPU time.\n\nIf it is CPU-limited, you might gain from having fewer faster cores,\nand/or significantly faster RAM. If it's not CPU-limited, you'd be\nwasting time effort and money upgrading those parts.\n\n> -2 In the recent 8.3 vs 8.4 benchmarks, 8.4. was much faster than 8.3\n> running on a 16 and 32 core server (with 64GB RAM).\n> With 8 cores, they were about the same speed. Does this mean on a normal\n> single core computer, there will be NO NOTICABLE performance increase in 8.3\n> vs 8.4 and even 8.2?\n\nBenchmark it and see. It'll be rather workload-dependent.\n\n> I have no idea what the maximum amount of RAM is, my database uses. But what\n> exactly \"will take twice as much space\"?\n> Does this mean a simple database uses double the amount of RAM on a 64 bit\n> system?\n\nAbsolutely not. Certain data structures take up more room because of\nalignment/padding concerns, pointer size increases, etc. That does mean\nthat you can fit fewer of them into a given amount of memory, but it's\nnot a simple doubling by any stretch.\n\nWhat that does mean, though, is that if you don't have significantly\nmore RAM than a 32-bit machine can address (say, 6 to 8 GB), you should\nstick with 32-bit binaries.\n\n> -4 One a scale from 1 to 10, how significant are the following on\n> performance increase: \n> -[ ] Getting a faster harddisk (RAID or a SSD)\n> -[ ] Getting a faster CPU \n> -[ ] Upgrading PostgreSQL (8.2 and 8.3) to 8.4\n> -[ ] Tweaking PostgreSQL (increasing # shared_buffers, wal_buffers,\n> effective_cache_size, etc.)\n> -[10!] Something else? \n\nVery workload dependent. Analyse what parts of your system are busiest\nand which are largely idle while Pg is working hard, then consider\nupgrading the busy bits.\n\nTweaking Pg again depends a lot on workload. 
Sometimes you won't gain\nmuch, sometimes you'll see incredible gains (say, if you increase\nsort/working memory\\ so a sort that used to spill to disk can instead be\ndone in RAM).\n\nIf you have very few connections and they do really complex queries, you\nmight benefit from dramatically increasing work mem etc.\n\n> -[ ] Does NOT effect me, but I was wondering what a switch from Windows to\n> LINUX/Solaris does for professional server users in terms of performance.\n\nNot a bad plan, honestly. Pg is just more mature on UNIX/Linux at this\npoint.\n\n> -5 The IO operations/s performance of your harddisk vs read/write speeds vs\n> access time? What is more important?\n\nDepends on workload. If you're doing lots of sequential scans, you want\nreally fast sequential reads. If you're doing lots of index scans etc,\nyou will benefit from both sequential read speed and access time.\n\nIf you have particular queries you note are slow, consider running them\nwith EXPLAIN ANALYZE to see what their query plans are. What disk access\npatterns are the queries resulting in? Do they have sorts spilling to\ndisk? etc.\n\n> With 4 regular harddisks in RAID0 you get great read/write speeds, but the\n> SSDs excel in IO/s and a 0.1ms access time.\n\n... but are often really, really, really, really slow at writing. The\nfancier ones are fast at writing but generally slow down over time.\n\n> What is the most usefull for which situations?\n\nDepends on your workload, see above.\n\n-- \nCraig Ringer\n\n", "msg_date": "Mon, 06 Jul 2009 13:33:29 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "Craig Ringer wrote:\n> On Sat, 2009-07-04 at 11:51 -0700, Patvs wrote:\n>\n>\n> \n>> With 4 regular harddisks in RAID0 you get great read/write speeds, but the\n>> SSDs excel in IO/s and a 0.1ms access time.\n>> \n>\n> ... but are often really, really, really, really slow at writing. The\n> fancier ones are fast at writing but generally slow down over time.\n>\n> \n\nAlso, (probably pointing out the obvious here) to be on the safe side \nyou should avoid RAID0 for any data that is important to you - as it's \npretty easy to get one bad disk straight from new!\n\nWith respect to SSD's one option for a small sized database is 2xSSD in \nRAID1 - provided they are the *right* SSD that is, which at this point \nin time seems to be the Intel X25E. Note that I have not benchmarked \nthis configuration, so no guarantees that it (or the Intel SSDs \nthemselves) are as good as the various on-the-web tests indicate!\n\nregards\n\nMark\n", "msg_date": "Mon, 06 Jul 2009 18:13:12 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "\n\nOn 7/5/09 11:13 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n\n> Craig Ringer wrote:\n>> On Sat, 2009-07-04 at 11:51 -0700, Patvs wrote:\n>> \n>> \n>> \n>>> With 4 regular harddisks in RAID0 you get great read/write speeds, but the\n>>> SSDs excel in IO/s and a 0.1ms access time.\n>>> \n>> \n>> ... but are often really, really, really, really slow at writing. 
The\n>> fancier ones are fast at writing but generally slow down over time.\n>> \n>> \n> \n> Also, (probably pointing out the obvious here) to be on the safe side\n> you should avoid RAID0 for any data that is important to you - as it's\n> pretty easy to get one bad disk straight from new!\n> \n> With respect to SSD's one option for a small sized database is 2xSSD in\n> RAID1 - provided they are the *right* SSD that is, which at this point\n> in time seems to be the Intel X25E. Note that I have not benchmarked\n> this configuration, so no guarantees that it (or the Intel SSDs\n> themselves) are as good as the various on-the-web tests indicate!\n\nThere is no reason to go RAID 1 with SSD's if this is an end-user box and\nthe data is recoverable. Unlike a hard drive, a decent SSD isn't expected\nto go bad. I have deployed over 150 Intel X25-M's and they all work\nflawlessly. Some had the 'slowdown' problem due to how they were written\nto, but the recent firmware fixed that. At this point, I consider a single\nhigh quality SSD as more fault tolerant than software raid-1.\n\nUnless there are lots of writes going on (I'm guessing its mostly read,\ngiven the description) a single X25-M will make the DB go very fast\nregardless of random or sequential access.\n\nIf the system is CPU bound, then getting a SSD like that won't help as much.\nBut I'd be willing to bet that in a normal PC or workstation I/O is the\nlimiting factor. Some tuning of work_mem and shared_buffers might help\nsome too.\n\nUse some monitoring tools (PerfMon 'Physical Disk' stats on windows) to see\nif normal use is causing a lot of disk access. If so, and especially if its\nmostly reads, an Intel X-25M will make a huge difference. If there is lots\nof writes, an X-25E will do but its 40% the space for the same price.\n\n> \n> regards\n> \n> Mark\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Mon, 6 Jul 2009 01:43:01 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "\nOn 7/6/09 1:43 AM, \"Scott Carey\" <[email protected]> wrote:\n\n> \n> \n> \n> On 7/5/09 11:13 PM, \"Mark Kirkwood\" <[email protected]> wrote:\n> \n>> Craig Ringer wrote:\n>>> On Sat, 2009-07-04 at 11:51 -0700, Patvs wrote:\n>>> \n> There is no reason to go RAID 1 with SSD's if this is an end-user box and\n> the data is recoverable. Unlike a hard drive, a decent SSD isn't expected\n> to go bad. \n\nClarification -- normal hard drives are expected to have a chance of dying\nwithin the first few months, or days. SSD's are expected to wear down\nslowly and die eventually -- but better ones will do so by entering a\nread-only state.\n\n", "msg_date": "Mon, 6 Jul 2009 01:47:08 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Sat, Jul 4, 2009 at 7:51 PM, Patvs<[email protected]> wrote:\n> -4 One a scale from 1 to 10, how significant are the following on\n> performance increase:\n> -[ ] Getting a faster harddisk (RAID or a SSD)\n> -[ ] Getting a faster CPU\n> -[ ] Upgrading PostgreSQL (8.2 and 8.3) to 8.4\n> -[ ] Tweaking PostgreSQL (increasing # shared_buffers, wal_buffers,\n> effective_cache_size, etc.)\n> -[10!] 
Something else?\n\nIt sounds like you have specific performance problems you're trying to\naddress. Given the use case it seems surprising that you're looking at\nsuch heavy-duty hardware. It seems more likely that\nPokerTracker/Holdem Manager is missing some indexes in its schema or\nthat some queries could be tweaked to run more efficiently.\n\nPerhaps if you set log_statement_duration and send any slow queries\nhere we would find a problem that could be fixed.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Mon, 6 Jul 2009 10:40:19 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "* Craig Ringer ([email protected]) wrote:\n> What that does mean, though, is that if you don't have significantly\n> more RAM than a 32-bit machine can address (say, 6 to 8 GB), you should\n> stick with 32-bit binaries.\n\nI'm not sure this is always true since on the amd64/em64t platforms\nyou'll get more registers and whatnot in 64-bit mode which can offset\nthe pointer size increases.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Mon, 6 Jul 2009 06:23:10 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Sat, 4 Jul 2009, Patvs wrote:\n\n> I use poker software (HoldemManager) to keep track of the statistics (and\n> show nice graphs) of millions of poker hand histories.\n> This software (also PokerTracker 3) imports all the poker hands in\n> PostgreSQL.\n\nI've got about 200MB of PokerTracker data myself in a PostgreSQL database, \npretty familiar with what you're doing.\n\n1) I don't think there's much that software does that will take advantage \nof multiple cores. You might get better real-time performance while \nplaying in that case, because you can have database/hand history \nprogram/table processes all doing their own thing at once, but the \ndatabase itself isn't going to benefit from more cores.\n\n2) The main performance benefit of 8.4 kicks in when you're deleting data. \nSince that's not happening in your hand history database, I wouldn't \nexpect that to run any better than 8.3. Eventually you might see the \nsoftware rewritten to take advantage of the new programming features added \nin 8.4, that might give the newer version a significant advantage \neventually; until then, 8.3 will run at about the same speed.\n\n3) There's not much reason for you to consider running in 64 bits, you \nwould need to be on something other than Windows to fully take advantage \nof that. The database server doesn't support it yet on that platform \npartly because there's so little to gain: \nhttp://wiki.postgresql.org/wiki/64bit_Windows_port\n\n4) None of your options are the right first step. The best thing you \ncould do to improve performance here is add significantly more RAM to your \nserver, so that more hand data could be stored there. That will help you \nout more than adding more cores, and you'll need a 64-bit Windows to fully \ntake advantage of it--but you don't need to give that memory directly to a \n64-bit database to see that gain. If you're not running with at least 8GB \nor RAM, nothing else you can do will give you as much bang for your buck \nas upgrading to there (pretty easy on a lot of desktops, harder to get \ninto a portable). 
Along with that, you might as well follow the basic \ntuning guide at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and get some \nof the basics done correctly. You may find correctly setting \neffective_cache_size, default_statistics_target, and work_mem in \nparticular could give you better results when running queries against the \ndatabase; a modest bump to shared_buffers might help too, but you can't go \ncrazy there on Windows. The defaults really aren't set well for as much \ndata as you've got in a small number of tables.\n\n5) It's hard to imagine your use case involving anything but random I/O, \nparticularly if you have a decent amount of memory in the system, so a SSD \nshould be significantly better than your other disk options here. That \nwould be the third area for improvement after getting the memory and basic \ndatabase parameters are set correctly if I were tuning your system.\n\n6) Normally to change the locale you have to shutdown the database, delete \nits data directory, and then run the \"initdb\" command with appropriate \noptions to use an alternate locale. I thought the one-click installer \nhandled that though--the screen shots at \nhttp://www.enterprisedb.com/learning/pginst_guide.do show the \"Advanced \nOptions\" page allowing one to set the locale. This is really the wrong \nlist for that questions--if you still have trouble there, try sending \nsomething with *just* that one to the pgsql-general list instead. From \nthe replies you've gotten here you can see everyone is fixed on the \nperformance questions, and this one is buried at the bottom of your long \nmessage.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 6 Jul 2009 09:26:14 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Mon, Jul 6, 2009 at 2:26 PM, Greg Smith<[email protected]> wrote:\n\n> 6) Normally to change the locale you have to shutdown the database, delete\n> its data directory, and then run the \"initdb\" command with appropriate\n> options to use an alternate locale.  I thought the one-click installer\n> handled that though--the screen shots at\n> http://www.enterprisedb.com/learning/pginst_guide.do show the \"Advanced\n> Options\" page allowing one to set the locale.  This is really the wrong list\n> for that questions--if you still have trouble there, try sending something\n> with *just* that one to the pgsql-general list instead.  From the replies\n> you've gotten here you can see everyone is fixed on the performance\n> questions, and this one is buried at the bottom of your long message.\n\nOn Windows, the installer will always use utf-8, as it's the only\nencoding we know should work with any locale on that platform (and\nthere's no easy way of figuring out other combinations without trying\nthem). We intentionally don't make SQL_ASCII available, as we consider\nthat to be an 'expert' choice which regularly gets misused. 
To get\nround that if you really need to, either manually init a new cluster\nusing initdb, or do something like:\n\nCREATE DATABASE foo WITH ENCODING 'SQL_ASCII' TEMPLATE template0;\n\nto get a single database in SQL_ASCII.\n\n-- \nDave Page\nEnterpriseDB UK: http://www.enterprisedb.com\n", "msg_date": "Mon, 6 Jul 2009 14:48:59 +0100", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On 07/06/2009 06:23 AM, Stephen Frost wrote:\n> * Craig Ringer ([email protected]) wrote:\n> \n>> What that does mean, though, is that if you don't have significantly\n>> more RAM than a 32-bit machine can address (say, 6 to 8 GB), you should\n>> stick with 32-bit binaries.\n>> \n>\n> I'm not sure this is always true since on the amd64/em64t platforms\n> you'll get more registers and whatnot in 64-bit mode which can offset\n> the pointer size increases.\n> \n\nWhich leads to other things like faster calling conventions...\n\nEven if you only have 4 GB of RAM, the 32-bit kernel needs to fight with \n\"low memory\" vs \"high memory\", whereas 64-bit has a clean address space.\n\nAll things being equal, I recommend 64-bit.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n\n\n\n\n\n\nOn 07/06/2009 06:23 AM, Stephen Frost wrote:\n\n* Craig Ringer ([email protected]) wrote:\n \n\nWhat that does mean, though, is that if you don't have significantly\nmore RAM than a 32-bit machine can address (say, 6 to 8 GB), you should\nstick with 32-bit binaries.\n \n\n\nI'm not sure this is always true since on the amd64/em64t platforms\nyou'll get more registers and whatnot in 64-bit mode which can offset\nthe pointer size increases.\n \n\n\nWhich leads to other things like faster calling conventions...\n\nEven if you only have 4 GB of RAM, the 32-bit kernel needs to fight\nwith \"low memory\" vs \"high memory\", whereas 64-bit has a clean address\nspace.\n\nAll things being equal, I recommend 64-bit.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Mon, 06 Jul 2009 15:27:15 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Mon, 2009-07-06 at 15:27 -0400, Mark Mielke wrote:\n\n> Even if you only have 4 GB of RAM, the 32-bit kernel needs to fight\n> with \"low memory\" vs \"high memory\", whereas 64-bit has a clean address\n> space.\n\nThat's a good point. The cutoff is probably closer to 2G or at most 3G.\nCertainly it's madness to use hacks like PAE to gain access to the RAM\nbehind the PCI address space rather than just going 64-bit ... unless\nyou have a really pressing reason, at least.\n\nIt's also nice that on a 64 bit machine, there's no 2G/2G or 3G/1G\nuserspace/kernelspace address mapping split to limit your app's memory\nuse. I seem to recall that Windows uses 2G/2G which can be painfully\nlimiting for memory-hungry applications.\n\nPersonally, I'd probably go 64-bit on any reasonably modern machine that\ncould be expected to have more than 2 or 3 GB of RAM. Then again, I\ncan't imagine willingly building a production database server for any\nnon-trivial (ie > a couple of gigs) database with less than 8GB of RAM\nwith RAM prices so absurdly low. 
Skip-lunch-to-afford-more-RAM low.\n\n-- \nCraig Ringer\n\n", "msg_date": "Tue, 07 Jul 2009 12:51:14 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" }, { "msg_contents": "On Mon, Jul 6, 2009 at 10:51 PM, Craig\nRinger<[email protected]> wrote:\n>\n> Personally, I'd probably go 64-bit on any reasonably modern machine that\n> could be expected to have more than 2 or 3 GB of RAM. Then again, I\n> can't imagine willingly building a production database server for any\n> non-trivial (ie > a couple of gigs) database with less than 8GB of RAM\n> with RAM prices so absurdly low. Skip-lunch-to-afford-more-RAM low.\n\nExactly, I was pricing out a new db server at work, and the difference\nin cost on a $7000 or so machine was something like $250 or so to go\nfrom 16G to 32G of RAM.\n\nI also can't imagine running a large pgsql server on windows, even 64\nbit windows.\n", "msg_date": "Mon, 6 Jul 2009 23:51:32 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Six PostgreSQL questions from a pokerplayer" } ]
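A rough sketch of the tuning knobs discussed above, for anyone who wants to try them; the values are illustrative assumptions for a desktop with a few GB of RAM, not recommendations from the thread:

    -- Inspect the current settings first (any 8.3 server, via psql):
    SHOW shared_buffers;              -- changing this one needs postgresql.conf + restart
    SHOW effective_cache_size;
    SHOW work_mem;
    SHOW default_statistics_target;

    -- These can be trialled per session before editing postgresql.conf:
    SET effective_cache_size = '2GB';     -- assumed: roughly the size of the OS file cache
    SET work_mem = '32MB';                -- assumed: memory per sort/hash operation
    SET default_statistics_target = 100;
    ANALYZE;                              -- re-gather planner statistics with the new target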
[ { "msg_contents": "Hi,\n\nWe are bundling PostgreSQL 8.3.7 with our Java based application.\nWe observe that in some systems the Database access becomes very slow after\nrunning it for couple of days.\n\nWe understand that postgresql.conf needs to be adjusted as per the system\nspecification where postgreSQL is running.\n\nIs there a utility that we can use that can check the system specification\nand change the required parameters in postgresql.conf accordingly?\n\nThanks,\nSaurabh\n\nHi,We are bundling PostgreSQL 8.3.7 with our Java based application.We observe that in some systems the Database access becomes very slow after running it for couple of days.We understand that postgresql.conf needs to be adjusted as per the system specification where postgreSQL is running.\nIs there a utility that we can use that can check the system specification and change the required parameters in postgresql.conf accordingly?Thanks,Saurabh", "msg_date": "Mon, 6 Jul 2009 11:18:24 +0530", "msg_from": "Saurabh Dave <[email protected]>", "msg_from_op": true, "msg_subject": "Bundling postgreSQL with my Java application" }, { "msg_contents": "On 07/06/2009 01:48 AM, Saurabh Dave wrote:\n> We are bundling PostgreSQL 8.3.7 with our Java based application.\n> We observe that in some systems the Database access becomes very slow \n> after running it for couple of days.\n>\n> We understand that postgresql.conf needs to be adjusted as per the \n> system specification where postgreSQL is running.\n>\n> Is there a utility that we can use that can check the system \n> specification and change the required parameters in postgresql.conf \n> accordingly?\n\nHi Saurabh:\n\nNo offense intended - but have you looked at the documentation for \npostgresql.conf?\n\nIf you are going to include PostgreSQL in your application, I'd highly \nrecommend you understand what you are including. :-)\n\nPostgreSQL 8.4 comes with significantly improved \"out of the box\" \nconfiguration. I think that is what you are looking for. Specifically, \nyou are probably looking for \"autovacuum\" to be enabled.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Mon, 06 Jul 2009 02:00:24 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "Mark Mielke <mark 'at' mark.mielke.cc> writes:\n\n> On 07/06/2009 01:48 AM, Saurabh Dave wrote:\n>> We are bundling PostgreSQL 8.3.7 with our Java based application.\n\n[...]\n\n> PostgreSQL 8.4 comes with significantly improved \"out of the box\"\n> configuration. I think that is what you are looking for. Specifically,\n> you are probably looking for \"autovacuum\" to be enabled.\n\nautovacuum is enabled by default on PG 8.3 as well.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Mon, 06 Jul 2009 09:07:50 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": ">No offense intended - but have you looked at the documentation for\npostgresql.conf?\n\n>If you are going to include PostgreSQL in your application, I'd highly\nrecommend you >understand what you are including. 
:-)\n\nI had a look into the documentation of postgres.conf, and tried a lot with\nchanging paramters I thought would improve the performance, but in vain.\nAutovaccum is enabled by default in 8.3.7 , but i reduced the nap time so\nthat it happens more frequently.\n\nMy personal opinion is that certain parameters in postgres.conf are simply\ntoo technical in nature for a application developer like me, it becomes more\nof a trial and error kind of frustrating process.\n\nIf there a utility that understands the system specification on which\npostgres is going to run and change the paramters accordingly, that would\nhelp.\n\nThanks,\nSaurabh\n\nOn Mon, Jul 6, 2009 at 12:37 PM, Guillaume Cottenceau <[email protected]> wrote:\n\n> Mark Mielke <mark 'at' mark.mielke.cc> writes:\n>\n> > On 07/06/2009 01:48 AM, Saurabh Dave wrote:\n> >> We are bundling PostgreSQL 8.3.7 with our Java based application.\n>\n> [...]\n>\n> > PostgreSQL 8.4 comes with significantly improved \"out of the box\"\n> > configuration. I think that is what you are looking for. Specifically,\n> > you are probably looking for \"autovacuum\" to be enabled.\n>\n> autovacuum is enabled by default on PG 8.3 as well.\n>\n> --\n> Guillaume Cottenceau\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n>No offense intended - but have you looked at the documentation for postgresql.conf?\n\n>If you are going to include PostgreSQL in your application, I'd\nhighly recommend you >understand what you are including. :-)I\nhad a look into the documentation of postgres.conf, and tried a lot\nwith changing paramters I thought would improve the performance, but in\nvain.\nAutovaccum is enabled by default in 8.3.7 , but i reduced the nap time so that it happens more frequently.My\npersonal opinion is that certain parameters in postgres.conf are simply\ntoo technical in nature for a application developer like me, it becomes\nmore of a trial and error kind of frustrating process.\nIf there a utility that understands the system specification on\nwhich postgres is going to run and change the paramters accordingly,\nthat would help.Thanks,SaurabhOn Mon, Jul 6, 2009 at 12:37 PM, Guillaume Cottenceau <[email protected]> wrote:\nMark Mielke <mark 'at' mark.mielke.cc> writes:\n\n> On 07/06/2009 01:48 AM, Saurabh Dave wrote:\n>> We are bundling PostgreSQL 8.3.7 with our Java based application.\n\n[...]\n\n> PostgreSQL 8.4 comes with significantly improved \"out of the box\"\n> configuration. I think that is what you are looking for. Specifically,\n> you are probably looking for \"autovacuum\" to be enabled.\n\nautovacuum is enabled by default on PG 8.3 as well.\n\n--\nGuillaume Cottenceau\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 6 Jul 2009 12:47:55 +0530", "msg_from": "Saurabh Dave <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "On 07/06/2009 03:17 AM, Saurabh Dave wrote:\n> >No offense intended - but have you looked at the documentation for \n> postgresql.conf?\n>\n> >If you are going to include PostgreSQL in your application, I'd \n> highly recommend you >understand what you are including. 
:-)\n>\n> I had a look into the documentation of postgres.conf, and tried a lot \n> with changing paramters I thought would improve the performance, but \n> in vain.\n> Autovaccum is enabled by default in 8.3.7 , but i reduced the nap time \n> so that it happens more frequently.\n>\n> My personal opinion is that certain parameters in postgres.conf are \n> simply too technical in nature for a application developer like me, it \n> becomes more of a trial and error kind of frustrating process.\n>\n> If there a utility that understands the system specification on which \n> postgres is going to run and change the paramters accordingly, that \n> would help.\n\nIf autovacuum is on - then I suspect your problem would not be addressed \nby tweaking postgresql.conf. You'll have to analyze the queries that are \ntaking longer than you expect. Run them with \"explain analyze\" in front \nand post the results. Provide table structure information.\n\nIt's possible tweaking postgresql.conf would give you a performance \nboost - but it would probably be temporary. That is, getting 1 x 5% \nspeedup here and 1 x 10% there will be useless if the actual query is \nbecoming slower by 100% every few days.\n\nThe problem needs to be understood.\n\nFor what it's worth, we have some fairly busy systems that have used \nPostgreSQL 8.0 / 8.1 out of the box, the administrators forgot to run \nvacuum / analyze, and the system *still* performed well months later. \nPostgreSQL is pretty good even without non-optimal configuration and \neven without database maintenance. If autovacuum is really running for \nyou - I would look as to whether you have the right indexes defined \nand/or whether you are actually using them?\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Mon, 06 Jul 2009 04:27:30 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "\n\n\n\n\n\nSaurabh Dave wrote:\n\n>No offense intended - but have you looked at the\ndocumentation for postgresql.conf?\n\n>If you are going to include PostgreSQL in your application, I'd\nhighly recommend you >understand what you are including. :-)\n\n\nI\nhad a look into the documentation of postgres.conf, and tried a lot\nwith changing paramters I thought would improve the performance, but in\nvain.\nAutovaccum is enabled by default in 8.3.7 , but i reduced the nap time\nso that it happens more frequently.\n\nAs others have pointed tuning is not a caned answer  hence all the\nconfig options to start with.  But to change the configuration to\nsomething a bench mark must be made.  The only way to do that is\nidentify the common SQL commands sent to the server then run explain\nanalyze  so you know what the server is doing.  Then post the the\nresults along with Config file and we can make suggestions \n\nThere is  http://wiki.postgresql.org/wiki/Performance_Optimization\n\nGreg Smith is working on a tuner \nhttp://notemagnet.blogspot.com/2008/11/automating-initial-postgresqlconf.html\n\nBut thats a monumental undertaking as one configuration setting for one\ntype of work load can be ruinousness to another work load.\n\nThe one common theme is know the workload so the configuration\nmatches.  \n\nMy\npersonal opinion is that certain parameters in postgres.conf are simply\ntoo technical in nature for a application developer like me, it becomes\nmore of a trial and error kind of frustrating process.\n\nThis boils down to know the  work load. 
\ndifferent kinds of work loads:  \n    A: more writing with very few  reads.\n    B: more reads that are simple queries and few complex quiers with\nvery few writes.  There is a ratio to look at in my case 10000 reads\noccur before next write So we have lots of indexes aimed at those\ncommon queries.  \n    C: Complex queries taking minutes to hours to run on data warehouse\ncovering  millions of records.\n    D: equal work load between writes and reads.  \n\nThere are many kinds of workloads requiring different configurations.  \n\nIf there a utility that understands the system specification on\nwhich postgres is going to run and change the paramters accordingly,\nthat would help.\n\nThanks,\nSaurabh\n<snip>\n\n\n\n", "msg_date": "Mon, 06 Jul 2009 19:16:21 -0400", "msg_from": "justin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "Thanks all for your valuable comments, as I gather, what I need to do is to\ncheck the queries that are slow and do a vacuum analyze and share the\nresults along with postgresql.conf being used.\n\nI will work on that.\n\nThanks again,\nSaurabh\n\n\nOn Tue, Jul 7, 2009 at 4:46 AM, justin <[email protected]> wrote:\n\n> Saurabh Dave wrote:\n>\n> >No offense intended - but have you looked at the documentation for\n> postgresql.conf?\n>\n> >If you are going to include PostgreSQL in your application, I'd highly\n> recommend you >understand what you are including. :-)\n>\n> I had a look into the documentation of postgres.conf, and tried a lot with\n> changing paramters I thought would improve the performance, but in vain.\n> Autovaccum is enabled by default in 8.3.7 , but i reduced the nap time so\n> that it happens more frequently.\n>\n> As others have pointed tuning is not a caned answer hence all the config\n> options to start with. But to change the configuration to something a bench\n> mark must be made. The only way to do that is identify the common SQL\n> commands sent to the server then run explain analyze so you know what the\n> server is doing. Then post the the results along with Config file and we\n> can make suggestions\n>\n> There is http://wiki.postgresql.org/wiki/Performance_Optimization\n>\n> Greg Smith is working on a tuner\n> http://notemagnet.blogspot.com/2008/11/automating-initial-postgresqlconf.html\n>\n> But thats a monumental undertaking as one configuration setting for one\n> type of work load can be ruinousness to another work load.\n>\n> The one common theme is know the workload so the configuration matches.\n>\n>\n> My personal opinion is that certain parameters in postgres.conf are simply\n> too technical in nature for a application developer like me, it becomes more\n> of a trial and error kind of frustrating process.\n>\n> This boils down to know the work load.\n> different kinds of work loads:\n> A: more writing with very few reads.\n> B: more reads that are simple queries and few complex quiers with very\n> few writes. 
There is a ratio to look at in my case 10000 reads occur before\n> next write So we have lots of indexes aimed at those common queries.\n> C: Complex queries taking minutes to hours to run on data warehouse\n> covering millions of records.\n> D: equal work load between writes and reads.\n>\n> There are many kinds of workloads requiring different configurations.\n>\n>\n> If there a utility that understands the system specification on which\n> postgres is going to run and change the paramters accordingly, that would\n> help.\n>\n> Thanks,\n> Saurabh\n>\n> <snip>\n>\n>\n\nThanks all for your valuable comments, as I gather, what I need to do is to check the queries that are slow and do a vacuum analyze and share the results along with postgresql.conf being used.I will work on that.\nThanks again,SaurabhOn Tue, Jul 7, 2009 at 4:46 AM, justin <[email protected]> wrote:\n\nSaurabh Dave wrote:\n\n>No offense intended - but have you looked at the\ndocumentation for postgresql.conf?\n\n>If you are going to include PostgreSQL in your application, I'd\nhighly recommend you >understand what you are including. :-)\n\n\nI\nhad a look into the documentation of postgres.conf, and tried a lot\nwith changing paramters I thought would improve the performance, but in\nvain.\nAutovaccum is enabled by default in 8.3.7 , but i reduced the nap time\nso that it happens more frequently.\n\nAs others have pointed tuning is not a caned answer  hence all the\nconfig options to start with.  But to change the configuration to\nsomething a bench mark must be made.  The only way to do that is\nidentify the common SQL commands sent to the server then run explain\nanalyze  so you know what the server is doing.  Then post the the\nresults along with Config file and we can make suggestions \n\nThere is  http://wiki.postgresql.org/wiki/Performance_Optimization\n\nGreg Smith is working on a tuner \nhttp://notemagnet.blogspot.com/2008/11/automating-initial-postgresqlconf.html\n\nBut thats a monumental undertaking as one configuration setting for one\ntype of work load can be ruinousness to another work load.\n\nThe one common theme is know the workload so the configuration\nmatches.  \n\nMy\npersonal opinion is that certain parameters in postgres.conf are simply\ntoo technical in nature for a application developer like me, it becomes\nmore of a trial and error kind of frustrating process.\n\nThis boils down to know the  work load. \ndifferent kinds of work loads:  \n    A: more writing with very few  reads.\n    B: more reads that are simple queries and few complex quiers with\nvery few writes.  There is a ratio to look at in my case 10000 reads\noccur before next write So we have lots of indexes aimed at those\ncommon queries.  \n    C: Complex queries taking minutes to hours to run on data warehouse\ncovering  millions of records.\n    D: equal work load between writes and reads.  \n\nThere are many kinds of workloads requiring different configurations.  
\n\nIf there a utility that understands the system specification on\nwhich postgres is going to run and change the paramters accordingly,\nthat would help.\n\nThanks,\nSaurabh\n<snip>", "msg_date": "Tue, 7 Jul 2009 10:41:14 +0530", "msg_from": "Saurabh Dave <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "On Sun, Jul 5, 2009 at 11:48 PM, Saurabh Dave<[email protected]> wrote:\n> Hi,\n>\n> We are bundling PostgreSQL 8.3.7 with our Java based application.\n> We observe that in some systems the Database access becomes very slow after\n> running it for couple of days.\n>\n> We understand that postgresql.conf needs to be adjusted as per the system\n> specification where postgreSQL is running.\n>\n> Is there a utility that we can use that can check the system specification\n> and change the required parameters in postgresql.conf accordingly?\n\nAssuming autovacuum is enabled still (it is by default) it is likely\nthat your updates are big enough that you're blowing out the free\nspace map. Easy to check, take a db that's slowed down and run vacuum\nverbose as a super user on any of the dbs in it (postgres is a good\nchoice) and see what the last 20 or so lines have to say about how\nmany slots you have and how many you're using. If you need more\nslots, then adjust the free space map settings (max slots and max\nrelations) so they're large enough to keep the db from bloating in the\nfuture. On larger datasets 1M to 10M slots is not uncommon, and since\nit only uses 6 bytes per slot, even 10M is only 60M of shared memory.\n\n8.4 has a LOT of improvements in this area, as I understand the whole\nFSM stuff has been automated on that version. Note that I haven't\ntested 8.4 yet, so I'm just going by what I read.\n", "msg_date": "Mon, 6 Jul 2009 23:59:19 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" }, { "msg_contents": "Saurabh Dave <[email protected]> wrote:\n \n> what I need to do is to check the queries that are slow and do a\n> vacuum analyze and share the results along with postgresql.conf\n> being used.\n \nHopefully just a typo there. While you need to ensure that adequate\nVACUUM and ANALYZE maintenance is being performed, and a suggestion\nwas made to use VACUUM VERBOSE for diagnostic purposes, what you need\nto do with the slow queries you identify is to run them with EXPLAIN\nANALYZE in front (not VACUUM ANALYZE as your post stated).\n \n-Kevin\n", "msg_date": "Tue, 07 Jul 2009 08:40:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bundling postgreSQL with my Java application" } ]
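To make the checks suggested above concrete, a minimal sketch; the table name and WHERE clause at the end are invented placeholders:

    -- On 8.3, the tail of the output reports how many free-space-map page
    -- slots are needed versus allocated (run database-wide as a superuser):
    VACUUM VERBOSE;

    -- If more slots are needed, raise these in postgresql.conf and restart;
    -- both settings disappear in 8.4, which tracks free space automatically:
    --   max_fsm_pages = 1000000
    --   max_fsm_relations = 10000

    -- For each slow statement, capture the plan with EXPLAIN ANALYZE
    -- (not VACUUM ANALYZE) and post it along with postgresql.conf:
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE some_column = 42;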
[ { "msg_contents": "\nHi,\n\nI'm using both the IN and ANY() operators extensively in my application. Can\nanybody answer the following questions for me:\n\t1) Which operator has the advantage over the other, in terms of performance?\n\t2) If I've indexed these columns, will both operators make use of index\nscanning?\n\t3) I also read in the PostgreSQL documentation that there is a limit on the\nnumber of values passed to the IN operator. As far as I remember, it is 1000, but I\ndon't have the web link justifying this handy right now. Either way, is it\napplicable to the ANY operator also?\n\t4) Is there any difference between them at the query planner level?\n\nRegards,\nGnanam\n-- \nView this message in context: http://www.nabble.com/Performance-difference-between-IN%28...%29-and-ANY%28...%29-operator-tp24386330p24386330.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 7 Jul 2009 23:50:10 -0700 (PDT)", "msg_from": "Gnanam <[email protected]>", "msg_from_op": true, "msg_subject": "Performance difference between IN(...) and ANY(...)\n operator" }, { "msg_contents": "On Wed, Jul 8, 2009 at 2:50 AM, Gnanam<[email protected]> wrote:\n> I'm using both the IN and ANY() operators extensively in my application.  Can\n> anybody answer the following questions for me:\n>        1) Which operator has the advantage over the other, in terms of performance?\n>        2) If I've indexed these columns, will both operators make use of index\n> scanning?\n>        3) I also read in the PostgreSQL documentation that there is a limit on the\n> number of values passed to the IN operator.  As far as I remember, it is 1000, but I\n> don't have the web link justifying this handy right now.  Either way, is it\n> applicable to the ANY operator also?\n>        4) Is there any difference between them at the query planner level?\n\nYou might want to have a look at this email, and Tom Lane's reply:\n\nhttp://archives.postgresql.org/message-id/[email protected]\n\n...Robert\n", "msg_date": "Thu, 23 Jul 2009 19:22:41 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance difference between IN(...) and ANY(...)\n\toperator" } ]
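For what it's worth, the relationship between the two spellings is easy to see from the plans: since 8.2 an IN list of constants is rewritten into an = ANY(array) condition, so the two forms below should be planned identically (table, column and values are made up for illustration):

    EXPLAIN SELECT * FROM orders WHERE customer_id IN (10, 20, 30);
    EXPLAIN SELECT * FROM orders WHERE customer_id = ANY (ARRAY[10, 20, 30]);
    -- Both typically show the same condition in the plan, e.g.
    --   (customer_id = ANY ('{10,20,30}'::integer[]))
    -- and both can use an index on customer_id the same way.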
[ { "msg_contents": "Hi All,\n I would like to know whether there is a limit on the size of an XML\ndocument that can be stored in a table.\n Do you think inserting a large XML document, say one with 100 elements, will\nbe a problem?\n Waiting for your reply.\n Franclin.\n", "msg_date": "Wed, 08 Jul 2009 17:27:32 +0100", "msg_from": "Franclin Foping <[email protected]>", "msg_from_op": true, "msg_subject": "Maximum size of an XML document" } ]
[ { "msg_contents": "Hello everybody,\nI have a simple query which selects data from not very large table (\n434161 rows) and takes far more time than I'd expect. I believe it's\ndue to a poor disk performance because when I execute the very same\nquery for a second time I get much better results (caching kicks in?).\nCan you please confirm my theory or do you see any other possible\nexplanation?\n\nThank you in advance\n\nMartin\n\n\n# explain analyze select * from\n\"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" where variable_id=7553\nand ts > '2009-07-01 17:00:00' and ts < now() order by ts limit 20000;\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3924.13..3928.91 rows=1912 width=206) (actual\ntime=3687.661..3705.546 rows=2161 loops=1)\n -> Sort (cost=3924.13..3928.91 rows=1912 width=206) (actual\ntime=3687.654..3693.864 rows=2161 loops=1)\n Sort Key: ts\n Sort Method: quicksort Memory: 400kB\n -> Bitmap Heap Scan on\n\"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" (cost=76.75..3819.91\nrows=1912 width=206) (actual time=329.416..3677.521 rows=2161 loops=1)\n Recheck Cond: ((variable_id = 7553) AND (ts >\n'2009-07-01 17:00:00'::timestamp without time zone) AND (ts < now()))\n -> Bitmap Index Scan on pokusny_index\n(cost=0.00..76.27 rows=1912 width=0) (actual time=304.160..304.160\nrows=2687 loops=1)\n Index Cond: ((variable_id = 7553) AND (ts >\n'2009-07-01 17:00:00'::timestamp without time zone) AND (ts < now()))\n Total runtime: 3711.488 ms\n(9 rows)\n\n# explain analyze select * from\n\"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" where variable_id=7553\nand ts > '2009-07-01 17:00:00' and ts < now() order by ts limit 20000;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3924.13..3928.91 rows=1912 width=206) (actual\ntime=18.135..35.140 rows=2161 loops=1)\n -> Sort (cost=3924.13..3928.91 rows=1912 width=206) (actual\ntime=18.127..24.064 rows=2161 loops=1)\n Sort Key: ts\n Sort Method: quicksort Memory: 400kB\n -> Bitmap Heap Scan on\n\"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" (cost=76.75..3819.91\nrows=1912 width=206) (actual time=1.616..10.369 rows=2161 loops=1)\n Recheck Cond: ((variable_id = 7553) AND (ts >\n'2009-07-01 17:00:00'::timestamp without time zone) AND (ts < now()))\n -> Bitmap Index Scan on pokusny_index\n(cost=0.00..76.27 rows=1912 width=0) (actual time=1.352..1.352\nrows=2687 loops=1)\n Index Cond: ((variable_id = 7553) AND (ts >\n'2009-07-01 17:00:00'::timestamp without time zone) AND (ts < now()))\n Total runtime: 40.971 ms\n(9 rows)\n", "msg_date": "Thu, 9 Jul 2009 12:29:24 +0200", "msg_from": "Martin Chlupac <[email protected]>", "msg_from_op": true, "msg_subject": "Data caching" }, { "msg_contents": "Martin Chlupac wrote:\n> Hello everybody,\n> I have a simple query which selects data from not very large table (\n> 434161 rows) and takes far more time than I'd expect. 
I believe it's\n> due to a poor disk performance because when I execute the very same\n> query for a second time I get much better results (caching kicks in?).\n> Can you please confirm my theory or do you see any other possible\n> explanation?\n\nYep - it's the difference between fetching from memory and from disk.\n\n> -> Bitmap Heap Scan on\n> \"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" (cost=76.75..3819.91\n> rows=1912 width=206) (actual time=329.416..3677.521 rows=2161 loops=1)\n\n> -> Bitmap Heap Scan on\n> \"records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac\" (cost=76.75..3819.91\n> rows=1912 width=206) (actual time=1.616..10.369 rows=2161 loops=1)\n\nThe plan scans the index, and builds up a bitmap of which disk-blocks \ncontain (potential) matches. It then has to read the blocks (the heap \nscan above), confirm they match and then return the rows. If you look at \nthe \"actual time\" above you can see about 90% of the slow query is spent \ndoing this.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 09 Jul 2009 11:52:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data caching" } ]
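One way to confirm the caching explanation, rather than inferring it from timings alone, is to note the block counters for the table before and after each run:

    -- Needs track_counts = on (the 8.3 default); the counters are cumulative.
    -- heap_blks_hit counts blocks found in shared_buffers, heap_blks_read
    -- counts blocks requested from the OS (which may still serve them from
    -- its own cache).
    SELECT heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
    FROM pg_statio_user_tables
    WHERE relname = 'records_f4f23ca0-9c35-43ac-bb0d-1ef3784399ac';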
[ { "msg_contents": "\nI noticed a bit of a performance regression in embedded sql queries when\nmoving from the client libraries in verison 8.2.4 to 8.3.7. My\napplication does a whole lot of queries, many of which don't return any\ndata. When we moved to the new libraries the time of running a query\n(from the application point of view) went from about 550 usec to 800\nusec. In both cases this was against a server running 8.3.7.\nI turned on log_statement_stats and noticed that the behaviour is\nslightly different, and the 8.3.7 version sends the statement to the\nserver twice, while 8.2.4 only sends it once.\n\n const char *SQL_text = \"select * from foo\"; (not always the same\nquery)\n exec sql prepare s_1ab from :SQL_text; <---- [*1]\n exec sql declare c_1ab cursor for s_1ab;\n exec sql open c_1ab; <---- [*2]\n\nAt [*1], with the 8.3.7 libraries, I see in the server log:\nSTATEMENT: select * from foo\n\nWith 8.2.4, nothing is logged. Both versions send the statement to\ndeclare the cursor:\nSTATEMENT: declare c_1ab cursor for select * from foo\n\nSuggestions?\n\neric\n", "msg_date": "Thu, 9 Jul 2009 11:31:24 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": true, "msg_subject": "embedded sql regression from 8.2.4 to 8.3.7" }, { "msg_contents": "Eric Haszlakiewicz wrote:\n> I noticed a bit of a performance regression in embedded sql queries when\n> moving from the client libraries in verison 8.2.4 to 8.3.7. My\n> application does a whole lot of queries, many of which don't return any\n> data. When we moved to the new libraries the time of running a query\n> (from the application point of view) went from about 550 usec to 800\n> usec. In both cases this was against a server running 8.3.7.\n> I turned on log_statement_stats and noticed that the behaviour is\n> slightly different, and the 8.3.7 version sends the statement to the\n> server twice, while 8.2.4 only sends it once.\n> \n> const char *SQL_text = \"select * from foo\"; (not always the same query)\n> exec sql prepare s_1ab from :SQL_text; <---- [*1]\n> exec sql declare c_1ab cursor for s_1ab;\n> exec sql open c_1ab; <---- [*2]\n> \n> At [*1], with the 8.3.7 libraries, I see in the server log:\n> STATEMENT: select * from foo\n> \n> With 8.2.4, nothing is logged. Both versions send the statement to\n> declare the cursor:\n> STATEMENT: declare c_1ab cursor for select * from foo\n\nThe log is misleading; the first statement is not really executed,\nit is only prepared (parsed). If you set the log level to DEBUG2, it\nwill look like:\n\n DEBUG: parse s_1ab: select * from empsalary\n STATEMENT: select * from empsalary\n LOG: statement: begin transaction\n LOG: statement: declare c_1ab cursor for select * from empsalary\n\nThe difference to 8.2 is that since 8.3, EXEC SQL PREPARE will result\nin a PREPARE statement on the server. 
In 8.2, no named prepared\nstatement was created on the server, so nothing is logged in 8.2.\n\nThe change in the source was here:\nhttp://archives.postgresql.org/pgsql-committers/2007-08/msg00185.php\n\nMaybe it is the additional PREPARE that slows your program.\nAre your queries complex enough that the PREPARE consumes\nsignificant time?\n\nMaybe you could use something like this to avoid the\nextra PREPARE:\n\n EXEC SQL BEGIN DECLARE SECTION;\n const char *SQL_text = \"declare c_1ab cursor for select * from foo\";\n const char *fetch = \"fetch from c_1ab\";\n int i;\n EXEC SQL END DECLARE SECTION;\n\n ....\n exec sql execute immediate :SQL_text;\n exec sql prepare fetch from :fetch;\n exec sql execute fetch into :i;\n\nIt avoids the extra PREPARE, but looks pretty ugly.\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 10 Jul 2009 11:05:24 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: embedded sql regression from 8.2.4 to 8.3.7" }, { "msg_contents": ">-----Original Message-----\n>From: Albe Laurenz [mailto:[email protected]] \n>> \n>> const char *SQL_text = \"select * from foo\"; (not always \n>the same query)\n>> exec sql prepare s_1ab from :SQL_text; <---- [*1]\n>> exec sql declare c_1ab cursor for s_1ab;\n>> exec sql open c_1ab; <---- [*2]\n>> \n>> At [*1], with the 8.3.7 libraries, I see in the server log:\n>> STATEMENT: select * from foo\n>> \n>> With 8.2.4, nothing is logged. Both versions send the statement to\n>> declare the cursor:\n>> STATEMENT: declare c_1ab cursor for select * from foo\n>\n>The log is misleading; the first statement is not really executed,\n>it is only prepared (parsed). If you set the log level to DEBUG2, it\n>will look like:\n\nYes, but it's still incurring the overhead of sending the message to the\nserver, isn't it?\n\n>The difference to 8.2 is that since 8.3, EXEC SQL PREPARE will result\n>in a PREPARE statement on the server. In 8.2, no named prepared\n>statement was created on the server, so nothing is logged in 8.2.\n>\n>The change in the source was here:\n>http://archives.postgresql.org/pgsql-committers/2007-08/msg00185.php\n>\n>Maybe it is the additional PREPARE that slows your program.\n>Are your queries complex enough that the PREPARE consumes\n>significant time?\n\nNo, the queries aren't complex, but we prepare and excute hundred of\nqueries, so it seems like the overhead of the extra message sent to the\nserver adds up.\n\n>Maybe you could use something like this to avoid the\n>extra PREPARE:\n> EXEC SQL BEGIN DECLARE SECTION;\n> const char *SQL_text = \"declare c_1ab cursor for select * \n>from foo\";\n> const char *fetch = \"fetch from c_1ab\";\n> int i;\n> EXEC SQL END DECLARE SECTION;\n> ....\n> exec sql execute immediate :SQL_text;\n> exec sql prepare fetch from :fetch;\n> exec sql execute fetch into :i;\n>\n>It avoids the extra PREPARE, but looks pretty ugly.\n\nThere are a number of things I could optimize once I start changing the\ncode, such as just skipping the prepare entirely, but then I'd need to\ngo through another whole release cycle of my app and I'd prefer not to\ndo that right now. 
I was hoping there was a way to work around this by\nhaving Postgres not send that prepare to the server, but given the\n\"major protocol rewrite\" phrase on that commit log message you pointed\nme at, I'm guessing that's not possible.\n\neric\n\n", "msg_date": "Mon, 13 Jul 2009 10:51:41 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: embedded sql regression from 8.2.4 to 8.3.7" }, { "msg_contents": ">-----Original Message-----\n>From: Albe Laurenz [mailto:[email protected]] \n>Eric Haszlakiewicz wrote:\n>> const char *SQL_text = \"select * from foo\"; (not always \n>the same query)\n>> exec sql prepare s_1ab from :SQL_text; <---- [*1]\n>> exec sql declare c_1ab cursor for s_1ab;\n>> exec sql open c_1ab; <---- [*2]\n exec sql fetch c_1ab into :myvar;\n\n>Maybe it is the additional PREPARE that slows your program.\n>Are your queries complex enough that the PREPARE consumes\n>significant time?\n>\n>Maybe you could use something like this to avoid the\n>extra PREPARE:\n>\n> EXEC SQL BEGIN DECLARE SECTION;\n> const char *SQL_text = \"declare c_1ab cursor for select * \n>from foo\";\n> const char *fetch = \"fetch from c_1ab\";\n> int i;\n> EXEC SQL END DECLARE SECTION;\n>\n> ....\n> exec sql execute immediate :SQL_text;\n> exec sql prepare fetch from :fetch;\n> exec sql execute fetch into :i;\n>\n>It avoids the extra PREPARE, but looks pretty ugly.\n\nThat doesn't avoid an extra prepare entirely since the fetch statement\ngets prepared, but it actually _is_ faster: 1360 usec to run the (real)\nquery my way, 910 usec your way (710usec w/ pg8.2.4). (wall clock time,\nmeasured in the app)\nThe real queries are a little more complicated that the above example.\nOne might have a form a bit like this:\n select varchar_col1, varchar_col2 from foo where colA = '12345' and\ncolB = 99 and colC = 'xyzabc' and colD like 'BLUE%';\nThe difference in wall clock time from the app point of view seems to\nmatch up with the query stats from the db, (20 usec for the parsing the\nfetch, 268 usec for the select) so it looks like re-writing things this\nway would help somewhat.\n\noh, yuck. It looks like I can't get rid of the prepare entirely b/c I\ncan't declare a cursor using a sql string. i.e.:\n exec sql declare c_1ab cursor for :SQL_text;\nactually means something more like:\n exec sql declare c_1ab cursor for :statement_name;\nAlso, I can't use execute immediate with host variables, so I guess I'm\nstuck preparing stuff. :(\n\neric\n", "msg_date": "Mon, 13 Jul 2009 13:24:38 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: embedded sql regression from 8.2.4 to 8.3.7" }, { "msg_contents": "Eric Haszlakiewicz wrote:\n>> The log is misleading; the first statement is not really executed,\n>> it is only prepared (parsed). 
If you set the log level to DEBUG2, it\n>> will look like:\n> \n> Yes, but it's still incurring the overhead of sending the message to the\n> server, isn't it?\n\nYes.\n\n>> Maybe it is the additional PREPARE that slows your program.\n>> Are your queries complex enough that the PREPARE consumes\n>> significant time?\n> \n> No, the queries aren't complex, but we prepare and excute hundred of\n> queries, so it seems like the overhead of the extra message sent to the\n> server adds up.\n\nI see.\n\n> I was hoping there was a way to work around this by\n> having Postgres not send that prepare to the server, but given the\n> \"major protocol rewrite\" phrase on that commit log message you pointed\n> me at, I'm guessing that's not possible.\n\nIt looks like what is normally an advantage (having named prepared\nstatements that can be reused) makes things slower in your case, since\nyou do not use the prepared statement at all and only need it to\nbe able to use a cursor with dynamic SQL.\n\nYours,\nLaurenz Albe\n", "msg_date": "Tue, 14 Jul 2009 11:39:59 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: embedded sql regression from 8.2.4 to 8.3.7" } ]
[ { "msg_contents": "[ Attempting to resend, because it didn't seem to get through last time. ]\n\nWe have a query that runs very slowly on our 8.3 database. (I can't\ntell you exactly how slowly, because it has never successfully run to\ncompletion even when we left it running overnight.) On the 8.4\ndatabase on my laptop, it runs in about 90 seconds. Of course there\nare several differences between the two instances, but I wonder\nwhether query planning improvements in 8.4 could essentially account\nfor it. (In case you're wondering, I ran 'vacuum full analyze' on the\nslow one, and it made no discernible difference.) The two instances\nhave the same schema and data.\nThe query looks like this:\nselect first.feature_cvterm_id from feature_cvterm first\njoin feature_cvterm_dbxref first_fvd\non first.feature_cvterm_id = first_fvd.feature_cvterm_id\njoin dbxref first_withfrom_dbxref\non first_fvd.dbxref_id = first_withfrom_dbxref.dbxref_id\n   , cv\n   , cvterm evidence_type join cv evidence_type_cv\n        on evidence_type.cv_id = evidence_type_cv.cv_id\n   , feature_cvtermprop first_evidence\n   , feature_cvterm second\nleft join feature_cvtermprop second_evidence\n    on second_evidence.feature_cvterm_id = second.feature_cvterm_id\njoin feature_cvterm_dbxref second_fvd\non second.feature_cvterm_id = second_fvd.feature_cvterm_id\njoin dbxref second_withfrom_dbxref\non second_fvd.dbxref_id = second_withfrom_dbxref.dbxref_id\n   , cvterm second_term\n   , cvterm first_term\n   , feature\nwhere first.cvterm_id = first_term.cvterm_id\nand first_evidence.feature_cvterm_id = first.feature_cvterm_id\nand second_term.cv_id = cv.cv_id\nand first_term.cv_id = cv.cv_id\nand cv.name in (\n      'biological_process'\n    , 'molecular_function'\n    , 'cellular_component'\n)\nand second.feature_id = feature.feature_id\nand second.feature_id = first.feature_id\nand first.cvterm_id = first_term.cvterm_id\nand second.cvterm_id = second_term.cvterm_id\nand second.pub_id = first.pub_id\nand evidence_type.name = 'evidence'\nand evidence_type_cv.name = 'genedb_misc'\nand second_evidence.type_id = evidence_type.cvterm_id\nand first_evidence.type_id = evidence_type.cvterm_id\nand second.feature_cvterm_id > first.feature_cvterm_id\nand first_withfrom_dbxref.accession = second_withfrom_dbxref.accession\nand upper(first_evidence.value) = upper(second_evidence.value)\nand first_term.name = second_term.name\n;\n(There's some fairly obvious room for improvement in this query as\nwritten, but none of the changes I've tried have changed the overall\nperformance picture.)\nThe execution plan on the (slow) 8.3 server is:\n\n Nested Loop  (cost=44050.86..77140.03 rows=1 width=4)\n   Join Filter: (second_term.cv_id = cv.cv_id)\n   ->  Nested Loop  (cost=44050.86..77138.61 rows=1 width=12)\n         Join Filter: ((first_term.cv_id = second_term.cv_id) AND\n((first_term.name)::text = (second_term.name)::text))\n         ->  Nested Loop  (cost=44050.86..77130.32 rows=1 width=56)\n               ->  Nested Loop  (cost=44050.86..77122.65 rows=1 width=12)\n                     Join Filter: (upper(second_evidence.value) =\nupper(first_evidence.value))\n                     ->  Nested Loop  (cost=44050.86..77114.32 rows=1 width=50)\n                           Join Filter: ((second.feature_cvterm_id >\nfirst.feature_cvterm_id) AND (second.feature_id = first.feature_id)\nAND (second.pub_id = first.pub_id) AND\n((second_withfrom_dbxref.accession)::text =\n(first_withfrom_dbxref.accession)::text))\n                           ->  
Nested Loop  (cost=30794.26..42915.70\nrows=1 width=69)\n                                 ->  Hash Join\n(cost=30794.26..42906.88 rows=1 width=65)\n                                       Hash Cond:\n(second_evidence.type_id = evidence_type.cvterm_id)\n                                       ->  Hash Join\n(cost=30784.59..42807.07 rows=24035 width=61)\n                                             Hash Cond:\n(second_fvd.dbxref_id = second_withfrom_dbxref.dbxref_id)\n                                             ->  Hash Join\n(cost=19044.44..28262.26 rows=24035 width=50)\n                                                   Hash Cond:\n(second_evidence.feature_cvterm_id = second.feature_cvterm_id)\n                                                   ->  Seq Scan on\nfeature_cvtermprop second_evidence  (cost=0.00..4370.07 rows=223307\nwidth=34)\n                                                   ->  Hash\n(cost=18169.19..18169.19 rows=47620 width=24)\n                                                         ->  Hash Join\n (cost=1516.45..18169.19 rows=47620 width=24)\n                                                               Hash\nCond: (second.feature_cvterm_id = second_fvd.feature_cvterm_id)\n                                                               ->  Seq\nScan on feature_cvterm second  (cost=0.00..7243.27 rows=442427\nwidth=16)\n                                                               ->\nHash  (cost=734.20..734.20 rows=47620 width=8)\n\n->  Seq Scan on feature_cvterm_dbxref second_fvd  (cost=0.00..734.20\nrows=47620 width=8)\n                                             ->  Hash\n(cost=5838.29..5838.29 rows=321429 width=19)\n                                                   ->  Seq Scan on\ndbxref second_withfrom_dbxref  (cost=0.00..5838.29 rows=321429\nwidth=19)\n                                       ->  Hash  (cost=9.66..9.66\nrows=1 width=4)\n                                             ->  Nested Loop\n(cost=0.00..9.66 rows=1 width=4)\n                                                   Join Filter:\n(evidence_type.cv_id = evidence_type_cv.cv_id)\n                                                   ->  Seq Scan on cv\nevidence_type_cv  (cost=0.00..1.35 rows=1 width=4)\n                                                         Filter:\n((name)::text = 'genedb_misc'::text)\n                                                   ->  Index Scan\nusing cvterm_idx2 on cvterm evidence_type  (cost=0.00..8.29 rows=1\nwidth=8)\n                                                         Index Cond:\n((evidence_type.name)::text = 'evidence'::text)\n                                 ->  Index Scan using feature_pkey on\nfeature  (cost=0.00..8.81 rows=1 width=4)\n                                       Index Cond: (feature.feature_id\n= second.feature_id)\n                           ->  Hash Join  (cost=13256.60..33246.22\nrows=47620 width=35)\n                                 Hash Cond: (first_fvd.dbxref_id =\nfirst_withfrom_dbxref.dbxref_id)\n                                 ->  Hash Join\n(cost=1516.45..18169.19 rows=47620 width=24)\n                                       Hash Cond:\n(first.feature_cvterm_id = first_fvd.feature_cvterm_id)\n                                       ->  Seq Scan on feature_cvterm\nfirst  (cost=0.00..7243.27 rows=442427 width=16)\n                                       ->  Hash  (cost=734.20..734.20\nrows=47620 width=8)\n                                             ->  Seq Scan on\nfeature_cvterm_dbxref first_fvd  (cost=0.00..734.20 rows=47620\nwidth=8)\n         
                        ->  Hash  (cost=5838.29..5838.29\nrows=321429 width=19)\n                                       ->  Seq Scan on dbxref\nfirst_withfrom_dbxref  (cost=0.00..5838.29 rows=321429 width=19)\n                     ->  Index Scan using feature_cvtermprop_c1 on\nfeature_cvtermprop first_evidence  (cost=0.00..8.31 rows=1 width=34)\n                           Index Cond:\n((first_evidence.feature_cvterm_id = first.feature_cvterm_id) AND\n(first_evidence.type_id = second_evidence.type_id))\n               ->  Index Scan using cvterm_pkey on cvterm first_term\n(cost=0.00..7.65 rows=1 width=52)\n                     Index Cond: (first_term.cvterm_id = first.cvterm_id)\n         ->  Index Scan using cvterm_pkey on cvterm second_term\n(cost=0.00..8.28 rows=1 width=52)\n               Index Cond: (second_term.cvterm_id = second.cvterm_id)\n   ->  Seq Scan on cv  (cost=0.00..1.39 rows=3 width=4)\n         Filter: ((cv.name)::text = ANY\n('{biological_process,molecular_function,cellular_component}'::text[]))\n\nand on the (fast) 8.4 server is:\n\n Nested Loop  (cost=63949.73..77767.20 rows=1 width=4)\n   Join Filter: (second_term.cv_id = cv.cv_id)\n   ->  Nested Loop  (cost=63949.73..77765.78 rows=1 width=12)\n         Join Filter: (upper(second_evidence.value) =\nupper(first_evidence.value))\n         ->  Nested Loop  (cost=63949.73..77757.46 rows=1 width=51)\n               Join Filter: ((first_term.cv_id = second_term.cv_id)\nAND ((first_term.name)::text = (second_term.name)::text))\n               ->  Nested Loop  (cost=63949.73..77749.17 rows=1 width=95)\n                     ->  Nested Loop  (cost=63949.73..77741.55 rows=1 width=51)\n                           ->  Hash Join  (cost=63949.73..77732.49\nrows=1 width=59)\n                                 Hash Cond: ((second.feature_id =\nfirst.feature_id) AND (second.pub_id = first.pub_id) AND\n((second_withfrom_dbxref.accession)::text =\n(first_withfrom_dbxref.accession)::text))\n                                 Join Filter:\n(second.feature_cvterm_id > first.feature_cvterm_id)\n                                 ->  Hash Join\n(cost=30236.57..41303.13 rows=4607 width=66)\n                                       Hash Cond:\n(second_evidence.type_id = evidence_type.cvterm_id)\n                                       ->  Hash Join\n(cost=30226.90..41161.01 rows=23034 width=62)\n                                             Hash Cond:\n(second_fvd.dbxref_id = second_withfrom_dbxref.dbxref_id)\n                                             ->  Hash Join\n(cost=18735.19..27081.42 rows=23034 width=51)\n                                                   Hash Cond:\n(second_evidence.feature_cvterm_id = second.feature_cvterm_id)\n                                                   ->  Seq Scan on\nfeature_cvtermprop second_evidence  (cost=0.00..3965.92 rows=210392\nwidth=35)\n                                                   ->  Hash\n(cost=17861.31..17861.31 rows=47590 width=24)\n                                                         ->  Hash Join\n (cost=1490.78..17861.31 rows=47590 width=24)\n                                                               Hash\nCond: (second.feature_cvterm_id = second_fvd.feature_cvterm_id)\n                                                               ->  Seq\nScan on feature_cvterm second  (cost=0.00..7115.82 rows=434682\nwidth=16)\n                                                               ->\nHash  (cost=709.90..709.90 rows=47590 width=8)\n\n->  Seq Scan on feature_cvterm_dbxref second_fvd  
(cost=0.00..709.90\nrows=47590 width=8)\n                                             ->  Hash\n(cost=5743.87..5743.87 rows=321587 width=19)\n                                                   ->  Seq Scan on\ndbxref second_withfrom_dbxref  (cost=0.00..5743.87 rows=321587\nwidth=19)\n                                       ->  Hash  (cost=9.66..9.66\nrows=1 width=4)\n                                             ->  Nested Loop\n(cost=0.00..9.66 rows=1 width=4)\n                                                   Join Filter:\n(evidence_type.cv_id = evidence_type_cv.cv_id)\n                                                   ->  Seq Scan on cv\nevidence_type_cv  (cost=0.00..1.35 rows=1 width=4)\n                                                         Filter:\n((name)::text = 'genedb_misc'::text)\n                                                   ->  Index Scan\nusing cvterm_idx2 on cvterm evidence_type  (cost=0.00..8.29 rows=1\nwidth=8)\n                                                         Index Cond:\n((evidence_type.name)::text = 'evidence'::text)\n                                 ->  Hash  (cost=32531.33..32531.33\nrows=47590 width=35)\n                                       ->  Hash Join\n(cost=12982.48..32531.33 rows=47590 width=35)\n                                             Hash Cond:\n(first_fvd.dbxref_id = first_withfrom_dbxref.dbxref_id)\n                                             ->  Hash Join\n(cost=1490.78..17861.31 rows=47590 width=24)\n                                                   Hash Cond:\n(first.feature_cvterm_id = first_fvd.feature_cvterm_id)\n                                                   ->  Seq Scan on\nfeature_cvterm first  (cost=0.00..7115.82 rows=434682 width=16)\n                                                   ->  Hash\n(cost=709.90..709.90 rows=47590 width=8)\n                                                         ->  Seq Scan\non feature_cvterm_dbxref first_fvd  (cost=0.00..709.90 rows=47590\nwidth=8)\n                                             ->  Hash\n(cost=5743.87..5743.87 rows=321587 width=19)\n                                                   ->  Seq Scan on\ndbxref first_withfrom_dbxref  (cost=0.00..5743.87 rows=321587\nwidth=19)\n                           ->  Index Scan using feature_pkey on\nfeature  (cost=0.00..9.04 rows=1 width=4)\n                                 Index Cond: (feature.feature_id =\nsecond.feature_id)\n                     ->  Index Scan using cvterm_pkey on cvterm\nfirst_term  (cost=0.00..7.61 rows=1 width=52)\n                           Index Cond: (first_term.cvterm_id = first.cvterm_id)\n               ->  Index Scan using cvterm_pkey on cvterm second_term\n(cost=0.00..8.27 rows=1 width=52)\n                     Index Cond: (second_term.cvterm_id = second.cvterm_id)\n         ->  Index Scan using feature_cvtermprop_c1 on\nfeature_cvtermprop first_evidence  (cost=0.00..8.30 rows=1 width=35)\n               Index Cond: ((first_evidence.feature_cvterm_id =\nfirst.feature_cvterm_id) AND (first_evidence.type_id =\nsecond_evidence.type_id))\n   ->  Seq Scan on cv  (cost=0.00..1.39 rows=3 width=4)\n         Filter: ((cv.name)::text = ANY\n('{biological_process,molecular_function,cellular_component}'::text[]))\n\nAny insights would be much appreciated.\nThanks,\nRobin\n", "msg_date": "Thu, 9 Jul 2009 17:35:19 +0100", "msg_from": "Robin Houston <[email protected]>", "msg_from_op": true, "msg_subject": "Huge difference in query performance between 8.3 and 8.4 (possibly)" }, { "msg_contents": "Robin Houston 
escribi�:\n\n> We have a query that runs very slowly on our 8.3 database. (I can't\n> tell you exactly how slowly, because it has never successfully run to\n> completion even when we left it running overnight.) On the 8.4\n> database on my laptop, it runs in about 90 seconds. Of course there\n> are several differences between the two instances, but I wonder\n> whether query planning improvements in 8.4 could essentially account\n> for it.\n\nOf course. Great news. Congratulations.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 9 Jul 2009 13:02:31 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge difference in query performance between 8.3 and\n\t8.4 (possibly)" }, { "msg_contents": "Robin Houston <[email protected]> writes:\n> We have a query that runs very slowly on our 8.3 database. (I can't\n> tell you exactly how slowly, because it has never successfully run to\n> completion even when we left it running overnight.) On the 8.4\n> database on my laptop, it runs in about 90 seconds. Of course there\n> are several differences between the two instances, but I wonder\n> whether query planning improvements in 8.4 could essentially account\n> for it.\n\nWell, it's hard to be sure with only EXPLAIN and not EXPLAIN ANALYZE\noutput to look at; but I think the significant difference in these plans\nis that 8.4 has chosen a hash instead of nestloop join for a couple of\nthe intermediate join levels. Which is evidently because of a change\nin the estimated size of the next join down:\n\n -> Nested Loop (cost=44050.86..77114.32 rows=1 width=50)\n Join Filter: ((second.feature_cvterm_id > first.feature_cvterm_id) AND (second.feature_id = first.feature_id) AND (second.pub_id = first.pub_id) AND ((second_withfrom_dbxref.accession)::text = (first_withfrom_dbxref.accession)::text))\n -> Nested Loop (cost=30794.26..42915.70 rows=1 width=69)\n -> Hash Join (cost=30794.26..42906.88 rows=1 width=65)\n Hash Cond: (second_evidence.type_id = evidence_type.cvterm_id)\n\nversus\n\n -> Hash Join (cost=63949.73..77732.49 rows=1 width=59)\n Hash Cond: ((second.feature_id = first.feature_id) AND (second.pub_id = first.pub_id) AND ((second_withfrom_dbxref.accession)::text = (first_withfrom_dbxref.accession)::text))\n Join Filter: (second.feature_cvterm_id > first.feature_cvterm_id)\n -> Hash Join (cost=30236.57..41303.13 rows=4607 width=66)\n Hash Cond: (second_evidence.type_id = evidence_type.cvterm_id)\n\nIf the 8.4 rowcount estimate is accurate then it's not surprising that\nthe nestloop plan sucks --- it'd be re-executing the other arm of the\njoin 4600 or so times.\n\nThis could reflect improvements in the join size estimation code, or\nmaybe it's just a consequence of 8.4 using larger statistics targets\nby default. It's hard to be sure with so little information to go on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 09 Jul 2009 13:09:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge difference in query performance between 8.3 and 8.4\n\t(possibly)" }, { "msg_contents": "> From: [email protected]\n[mailto:pgsql-performance-\n> [email protected]] On Behalf Of Robin Houston\n> Sent: Thursday, July 09, 2009 12:35 PM\n\n> We have a query that runs very slowly on our 8.3 database. (I can't\n> tell you exactly how slowly, because it has never successfully run to\n> completion even when we left it running overnight.) 
On the 8.4\n> database on my laptop, it runs in about 90 seconds. \n> Any insights would be much appreciated.\n\nI doubt this is the insight you're looking for, but that is the worst\nquery I have ever seen. It is difficult to understand exactly what it\nreturns. There are so many cross joins, outer joins, and inner joins\nmixed up together, ugh.\n\nRather than trying to puzzle out why it is slow, rewrite it. It will be\nfaster than before on any version.\n\nMatthew Hartman\nProgrammer/Analyst\nInformation Management, ICP\nKingston General Hospital\n\n", "msg_date": "Thu, 9 Jul 2009 13:36:00 -0400", "msg_from": "\"Hartman, Matthew\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge difference in query performance between 8.3 and 8.4\n\t(possibly)" } ]
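A note on testing Tom Lane's statistics-target point above: 8.3 ships with
default_statistics_target = 10 while 8.4 raised the default to 100, so one quick
experiment on the slow 8.3 server is to raise the per-column target, re-analyze,
and see whether the intermediate join estimates move from rows=1 toward the
rows=4607 that 8.4 computes. A sketch only; the columns chosen below (taken from
the mis-estimated hash conditions in the plans) are a guess, and any of the
joined columns could be the one that matters:

ALTER TABLE feature_cvtermprop ALTER COLUMN type_id SET STATISTICS 100;
ALTER TABLE dbxref ALTER COLUMN accession SET STATISTICS 100;
ALTER TABLE feature_cvterm ALTER COLUMN pub_id SET STATISTICS 100;
ANALYZE feature_cvtermprop;
ANALYZE dbxref;
ANALYZE feature_cvterm;
-- then re-run EXPLAIN on the original query and compare the row estimates
-- of the intermediate joins against the 8.4 plan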
[ { "msg_contents": "Below is a query that takes 16 seconds on the first run. I am having\ngenerally poor performance for queries in uncached areas of the data\nand often mediocre (500ms-2s+) performance generallly, although\nsometimes it's very fast. All the queries are pretty similar and use\nthe indexes this way.\n\nI've been trying to tune this thing with little luck. There are about\n1.5M records. It's using the index properly. Settings are:\nwork_mem=20MB, shared_buffers=128MB, effective_cache_size=1024MB.\n\nI have run ANALYZE and VACUUM FULL recently.\n\nThe whole database is around 16GB. The server is an ec2 instance\nwith 5 compute units in two cores (1 unit is one 2Ghz processor) and\n1.7Gb of RAM.\n\nSwapping seems to be minimal.\n\nNote that the ANALYZE is from my slow query logger, so the numbers\ndon't match the time the uncached query took.\n\nThere are 118K rows in this select. It is possible the sort is the\nissue, but that's why I have 20M working memory. Do I really need\nmore than that?\n\nSlow query: (16.852746963501) [0] SELECT id FROM \"source_listings\"\nWHERE (post_time BETWEEN '2009-07-02 14:19:29.520886' AND '2009-07-11\n14:19:29.520930' AND ((geo_lon BETWEEN 10879358 AND 10909241 AND\ngeo_lat BETWEEN 13229080 AND 13242719)) AND city = 'boston') ORDER BY\npost_time DESC LIMIT 108 OFFSET 0\nLimit (cost=30396.63..30396.90 rows=108 width=12) (actual\ntime=1044.575..1044.764 rows=108 loops=1)\n -> Sort (cost=30396.63..30401.47 rows=1939 width=12) (actual\ntime=1044.573..1044.630 rows=108 loops=1)\n Sort Key: post_time\n Sort Method: top-N heapsort Memory: 21kB\n -> Bitmap Heap Scan on source_listings\n(cost=23080.81..30321.44 rows=1939 width=12) (actual\ntime=321.111..952.704 rows=118212 loops=1)\n Recheck Cond: ((city = 'boston'::text) AND (post_time >=\n'2009-07-02 14:19:29.520886'::timestamp without time zone) AND\n(post_time <= '2009-07-11 14:19:29.52093'::timestamp without time\nzone) AND (geo_lat >= 13229080) AND (geo_lat <= 13242719) AND (geo_lon\n>= 10879358) AND (geo_lon <= 10909241))\n -> Bitmap Index Scan on sl_city_etc\n(cost=0.00..23080.33 rows=1939 width=0) (actual time=309.007..309.007\nrows=118212 loops=1)\n Index Cond: ((city = 'boston'::text) AND\n(post_time >= '2009-07-02 14:19:29.520886'::timestamp without time\nzone) AND (post_time <= '2009-07-11 14:19:29.52093'::timestamp without\ntime zone) AND (geo_lat >= 13229080) AND (geo_lat <= 13242719) AND\n(geo_lon >= 10879358) AND (geo_lon <= 10909241))\nTotal runtime: 1045.683 ms\n\n\n\nEven without the sort performance is poor:\n\ncribq=# EXPLAIN ANALYZE SELECT count(id) FROM \"source_listings\" WHERE\n(post_time BETWEEN '2009-07-02 14:19:29.520886' AND '2009-07-11\n14:19:29.520930' AND ((geo_lon BETWEEN 10879358 AND 10909241 AND\ngeo_lat BETWEEN 13229080 AND 13242719)) AND city = 'boston');\n \nQUERY\nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=30326.29..30326.30 rows=1 width=4) (actual\ntime=847.967..847.968 rows=1 loops=1)\n -> Bitmap Heap Scan on source_listings (cost=23080.81..30321.44\nrows=1939 width=4) (actual time=219.505..769.878 rows=118212 loops=1)\n Recheck Cond: ((city = 'boston'::text) AND (post_time >=\n'2009-07-02 14:19:29.520886'::timestamp without time zone) AND\n(post_time <= '2009-07-11 
14:19:29.52093'::timestamp without time\nzone) AND (geo_lat >= 13229080) AND (geo_lat <= 13242719) AND (geo_lon\n>= 10879358) AND (geo_lon <= 10909241))\n -> Bitmap Index Scan on sl_city_etc (cost=0.00..23080.33\nrows=1939 width=0) (actual time=206.981..206.981 rows=118212 loops=1)\n Index Cond: ((city = 'boston'::text) AND (post_time >=\n'2009-07-02 14:19:29.520886'::timestamp without time zone) AND\n(post_time <= '2009-07-11 14:19:29.52093'::timestamp without time\nzone) AND (geo_lat >= 13229080) AND (geo_lat <= 13242719) AND (geo_lon\n>= 10879358) AND (geo_lon <= 10909241))\n Total runtime: 848.816 ms\n", "msg_date": "Thu, 9 Jul 2009 14:34:01 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Poor query performance" }, { "msg_contents": "Forgot to add:\n\npostgres@ec2-75-101-128-4:~$ psql --version\npsql (PostgreSQL) 8.3.5\n", "msg_date": "Thu, 9 Jul 2009 14:35:31 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "On Thu, Jul 9, 2009 at 10:35 PM, Alex<[email protected]> wrote:\n> Forgot to add:\n>\n> postgres@ec2-75-101-128-4:~$ psql --version\n> psql (PostgreSQL) 8.3.5\n\n\nHow is the index sl_city_etc defined?\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 15 Jul 2009 07:52:40 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "\n>\n> How is the index  sl_city_etc defined?\n>\n\n Index \"public.sl_city_etc\"\n Column | Type\n--------------+-----------------------------\n city | text\n listing_type | text\n post_time | timestamp without time zone\n bedrooms | integer\n region | text\n geo_lat | integer\n geo_lon | integer\nbtree, for table \"public.source_listings\"\n", "msg_date": "Wed, 15 Jul 2009 00:50:42 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "On Wed, Jul 15, 2009 at 8:51 AM, Alex<[email protected]> wrote:\n> Also posted this to the list.  Thanks for your answer - still\n> struggling.\n\nStaying on-list is always preferred.\n\n>> How is the index  sl_city_etc defined?\n>\n>         Index \"public.sl_city_etc\"\n>    Column    |            Type\n> --------------+-----------------------------\n>  city         | text\n>  listing_type | text\n>  post_time    | timestamp without time zone\n>  bedrooms     | integer\n>  region       | text\n>  geo_lat      | integer\n>  geo_lon      | integer\n> btree, for table \"public.source_listings\"\n\nSo the presence of listing_type before post_time when it's not in your\nquery means that the index scan has to look at every entry for\n'boston'. It skips over entries that don't match the post_time or geo\ncolumns but it still has to go through them in the index. Think of\nbeing asked to find every word in the dictionary starting with 'a' and\nwhose third letter is 'k' but with no restriction on the second\nletter...\n\nYou would probably be better off starting with separate indexes on\neach column and then considering how to combine them if possible than\nstarting with them all in one index like this.\n\nIf you always have city in your query and then some collection of\nother columns then you could have separate indexes on\n<city,listing_type>, <city,post_time>, <city, bedrooms>, etc.\n\nThe geometric columns are a more interesting case. 
You could have\nseparate indexes on each and hope a bitmap scan combines them, or you\ncould use a geometric GIST index on point(geo_lon,geo_lat). Doing so\nwould mean using the right operator to find points within a box rather\nthan simple < and > operators.\n\n\n\n\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 15 Jul 2009 09:22:26 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "Thanks. That's very helpful. I'll take your suggestions and see if\nthings improve.\n\n\n", "msg_date": "Wed, 15 Jul 2009 01:42:04 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "On Thu, Jul 9, 2009 at 3:34 PM, Alex<[email protected]> wrote:\n> Below is a query that takes 16 seconds on the first run.  I am having\n> generally poor performance for queries in uncached areas of the data\n> and often mediocre (500ms-2s+) performance generallly, although\n> sometimes it's very fast.  All the queries are pretty similar and use\n> the indexes this way.\n>\n> I've been trying to tune this thing with little luck.  There are about\n> 1.5M records.  It's using the index properly.  Settings are:\n> work_mem=20MB, shared_buffers=128MB, effective_cache_size=1024MB.\n>\n> I have run ANALYZE and VACUUM FULL recently.\n\nNote that vacuum full can bloat indexes, good idea to reindex after a\nvacuum full.\n", "msg_date": "Sun, 19 Jul 2009 23:39:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor query performance" }, { "msg_contents": "Dear all,\n Can we create a trigger on particular column of a table?\n\nRegards,\nRam\n\n", "msg_date": "Mon, 20 Jul 2009 14:35:31 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": false, "msg_subject": "Trigger on column" }, { "msg_contents": "In response to ramasubramanian :\n> Dear all,\n> Can we create a trigger on particular column of a table?\n\nNo, but you can compare OLD.column and NEW.column and return from the\nfunction if NEW.column = OLD.column.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 20 Jul 2009 11:23:56 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger on column" }, { "msg_contents": " Dear all,\n Performance of query or procedure going down when we are taking the \nbackup of that schema(it is obvious), But how to increase the performance.\n\nRegards,\nRam. \n\n\n", "msg_date": "Mon, 20 Jul 2009 15:45:03 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": false, "msg_subject": "Performance of quer or procedure going down when we are taking the\n\tbackup" }, { "msg_contents": "On Mon, Jul 20, 2009 at 6:15 AM,\nramasubramanian<[email protected]> wrote:\n>   Dear all,\n>       Performance of query or procedure going down when we are taking the\n> backup of that schema(it is obvious), But how to  increase the performance.\n>\n> Regards,\n> Ram.\n\nYou're going to need to provide an awful lot more details than this if\nyou want to have much chance of getting a useful answer, I think. 
At\na high level, you want to find out which part your system is the\nbottleneck and then look for ways to remove the bottleneck.\n\n...Robert\n", "msg_date": "Sun, 26 Jul 2009 17:15:13 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of quer or procedure going down when we are\n\ttaking the backup" } ]
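One way to act on Greg Stark's geometric-index suggestion in the thread above.
If memory serves, stock 8.3 has no default GiST operator class for a bare point
column, so a common workaround is an expression index on a degenerate box,
queried with the && (overlaps) operator. The table and column names are Alex's;
the index name and the literal coordinates are only illustrative:

CREATE INDEX sl_geo_gist ON source_listings
    USING gist (box(point(geo_lon, geo_lat), point(geo_lon, geo_lat)));

SELECT id
FROM source_listings
WHERE city = 'boston'
  AND box(point(geo_lon, geo_lat), point(geo_lon, geo_lat))
      && box(point(10879358, 13229080), point(10909241, 13242719))
ORDER BY post_time DESC
LIMIT 108;

The planner can then combine this index with ordinary btree indexes (for
example on (city, post_time)) through a bitmap AND, instead of relying on one
wide multi-column index.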
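A minimal sketch of the OLD/NEW comparison A. Kretschmer describes above for
firing trigger logic only when one particular column changes. The items and
price_history tables and their columns are invented purely for illustration;
IS DISTINCT FROM is used instead of = so that NULLs compare sensibly:

CREATE OR REPLACE FUNCTION log_price_change() RETURNS trigger AS $$
BEGIN
    -- do the real work only when the column of interest actually changed
    IF NEW.price IS DISTINCT FROM OLD.price THEN
        INSERT INTO price_history (item_id, old_price, new_price, changed_at)
        VALUES (OLD.id, OLD.price, NEW.price, now());
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_price_audit
    BEFORE UPDATE ON items
    FOR EACH ROW EXECUTE PROCEDURE log_price_change();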
[ { "msg_contents": "Hi all have a dought about explain of postgres\nLet's see:\n\nWho have more cost for database:\nIndex Scan using xix3_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_ (cost=0.00..73386.80 rows=5126 width=451)\n Index Cond: (mrem_ammovimento = 200906)\n Filter: ((ftgr_id = 26) AND ((mrem_icatualizarleitura IS NULL) OR (mrem_icatualizarleitura = 2)) AND (mrem_tmleitura IS NOT NULL))\n\nor\n\nAggregate (cost=26573.78..26573.79 rows=1 width=4)\n -> Index Scan using xix4_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_ (cost=0.00..26560.41 rows=5347 width=4)\n Index Cond: ((ftgr_id = 26) AND (mrem_ammovimento = 200906))\n\n\n\nIm confused about who is the most import (colour red ou blue).\n\nthanks \n\n\n ____________________________________________________________________________________\nVeja quais são os assuntos do momento no Yahoo! +Buscados\nhttp://br.maisbuscados.yahoo.com\nHi all have a dought about explain of postgresLet's see:Who have more cost for database:Index Scan using xix3_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_  (cost=0.00..73386.80 rows=5126 width=451)  Index Cond: (mrem_ammovimento = 200906)  Filter: ((ftgr_id = 26) AND ((mrem_icatualizarleitura IS NULL) OR (mrem_icatualizarleitura = 2)) AND (mrem_tmleitura IS NOT NULL))orAggregate  (cost=26573.78..26573.79 rows=1 width=4)  ->  Index Scan using xix4_movimento_roteiro_empresa on movimento_roteiro_empresa\n movimentor0_  (cost=0.00..26560.41 rows=5347 width=4)        Index Cond: ((ftgr_id = 26) AND (mrem_ammovimento = 200906))Im confused about who is the most import (colour red ou blue).thanks \nVeja quais são os assuntos do momento no Yahoo! + Buscados: Top 10 - Celebridades - Música - Esportes", "msg_date": "Fri, 10 Jul 2009 07:39:42 -0700 (PDT)", "msg_from": "paulo matadr <[email protected]>", "msg_from_op": true, "msg_subject": "Cost performace question" }, { "msg_contents": "paulo matadr <[email protected]> wrote: \n \n> Im confused about who is the most import (colour red ou blue).\n \nFor the benefit of those without an easy way to view email in http,\nthe first cost number was in red, the second in blue.\n \nThe first number is the cost to retrieve the first row, the second\nnumber is the cost to retrieve the full set of rows. Usually I'm\npaying more attention to the second number.\n \n-Kevin\n", "msg_date": "Fri, 10 Jul 2009 09:51:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cost performace question" }, { "msg_contents": "thanks i understand now, i have other dought\nIndex Scan using xix3_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_ (cost=0.00..73386.80 rows=5126width=451)\nwhat's this mean on red color above?\n\n\n\n\n\n\n________________________________\nDe: Kevin Grittner <[email protected]>\nPara: [email protected]; paulo matadr <[email protected]>\nEnviadas: Sexta-feira, 10 de Julho de 2009 11:51:27\nAssunto: Re: [PERFORM] Cost performace question\n\npaulo matadr <[email protected]> wrote: \n\n> Im confused about who is the most import (colour red ou blue).\n\nFor the benefit of those without an easy way to view email in http,\nthe first cost number was in red, the second in blue.\n\nThe first number is the cost to retrieve the first row, the second\nnumber is the cost to retrieve the full set of rows. 
Usually I'm\npaying more attention to the second number.\n\n-Kevin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n ____________________________________________________________________________________\nVeja quais são os assuntos do momento no Yahoo! +Buscados\nhttp://br.maisbuscados.yahoo.com\nthanks i understand now,  i have other doughtIndex Scan using xix3_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_  (cost=0.00..73386.80 rows=5126 width=451)what's this mean on red color above?De: Kevin Grittner <[email protected]>Para: [email protected]; paulo matadr <[email protected]>Enviadas: Sexta-feira, 10 de Julho de 2009 11:51:27Assunto: Re: [PERFORM] Cost performace questionpaulo matadr <[email protected]> wrote: > Im confused about who is the most import (colour red ou blue). For the benefit of those without an easy way to view email in http,the first cost number was in red, the second in blue. The first number is the cost to retrieve the first row, the secondnumber is the cost to retrieve the full set of rows.  Usually I'mpaying more attention to the second number. -Kevin-- Sent via pgsql-performance mailing list ([email protected])To make changes to your\n subscription:http://www.postgresql.org/mailpref/pgsql-performance\nVeja quais são os assuntos do momento no Yahoo! + Buscados: Top 10 - Celebridades - Música - Esportes", "msg_date": "Fri, 10 Jul 2009 12:33:08 -0700 (PDT)", "msg_from": "paulo matadr <[email protected]>", "msg_from_op": true, "msg_subject": "Res: Cost performace question" }, { "msg_contents": "On Fri, 10 Jul 2009, paulo matadr wrote:\n> thanks i understand now,  i have other dought\n> Index Scan using xix3_movimento_roteiro_empresa on movimento_roteiro_empresa movimentor0_  (cost=0.00..73386.80\n> rows=5126 width=451)\n> what's this mean on red color above?\n\nPlease don't assume that everyone receiving your mails uses a HTML mail \nreader. In fact, recommended netiquette is to not send HTML mails at all. \nMany people set their email programs to automatically reject (as spam) any \nemails containing HTML.\n\nManually reading the HTML, it seems that you are referring to the \n\"width=451\" part of the EXPLAIN result. Please read the manual at\nhttp://www.postgresql.org/docs/8.4/interactive/using-explain.html\n\nThe width is the:\n\n* Estimated average width (in bytes) of rows output by this plan node\n\nMatthew\n\n-- \n There once was a limerick .sig\n that really was not very big\n It was going quite fine\n Till it reached the fourth line", "msg_date": "Mon, 13 Jul 2009 13:21:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Res: Cost performace question" } ]
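Pulling the answers above together, an illustrative (made-up) plan line with
each field labelled:

EXPLAIN SELECT * FROM some_table WHERE id < 1000;
-- Index Scan using some_table_pkey on some_table
--     (cost=0.00..45.67 rows=980 width=36)
--
-- cost=0.00..45.67   estimated startup cost .. estimated total cost
-- rows=980           estimated number of rows returned by this node
-- width=36           estimated average width of those rows, in bytes
--
-- EXPLAIN ANALYZE additionally reports "actual time=... rows=... loops=..."
-- so the estimates can be checked against what really happened.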
[ { "msg_contents": "Tom Lane [[email protected]] wrote:\n> Oh, and don't forget the more-complete pg_locks state. We'll want all\n> the columns of pg_locks, not just the ones you showed before.\nauto vacuum of ts_user_sessions_map has been running for > 17 hours. \nThis table has 2,204,488 rows. I hope that I've captured enough info.\n\nThanks,\nBrian\n\n\ncemdb=# select procpid,current_query,query_start from pg_stat_activity; \n procpid | current_query \n | query_start\n---------+-----------------------------------------------------------------+-------------------------------\n 8817 | <IDLE> \n | 2009-07-09 16:48:12.656419-07\n 8818 | autovacuum: VACUUM public.ts_user_sessions_map \n | 2009-07-09 16:48:18.873677-07\n\n\ncemdb=# select \nl.pid,c.relname,l.mode,l.granted,l.virtualxid,l.virtualtransaction from \npg_locks l left outer join pg_class c on c.oid=l.relation order by l.pid;\n pid | relname | mode \n | granted | virtualxid | virtualtransaction\n-------+--------------------------------------------+--------------------------+---------+------------+-------------------- \n 8818 | ts_user_sessions_map_interimsessionidindex | RowExclusiveLock \n | t | | 2/3\n 8818 | ts_user_sessions_map_appindex | RowExclusiveLock \n | t | | 2/3\n 8818 | ts_user_sessions_map_sessionidindex | RowExclusiveLock \n | t | | 2/3\n 8818 | ts_user_sessions_map | \nShareUpdateExclusiveLock | t | | 2/3\n 8818 | | ExclusiveLock \n | t | 2/3 | 2/3\n 8818 | ts_user_sessions_map_pkey | RowExclusiveLock \n | t | | 2/3\n 13706 | | ExclusiveLock \n | t | 6/50 | 6/50\n 13706 | pg_class_oid_index | AccessShareLock \n | t | | 6/50\n 13706 | pg_class_relname_nsp_index | AccessShareLock \n | t | | 6/50\n 13706 | pg_locks | AccessShareLock \n | t | | 6/50\n 13706 | pg_class | AccessShareLock \n | t | | 6/50\n(11 rows)\n\n\n[root@rdl64xeoserv01 log]# strace -p 8818 -o /tmp/strace.log\nProcess 8818 attached - interrupt to quit\nProcess 8818 detached\n[root@rdl64xeoserv01 log]# more /tmp/strace.log\nselect(0, NULL, NULL, NULL, {0, 13000}) = 0 (Timeout)\nread(36, \"`\\0\\0\\0\\370\\354\\250u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\340\\f\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\300,\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0(L\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\\264\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0|M\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\\264\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\\\~\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\\264\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0D\\234\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\34\\255\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\4\\315\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\234\\2334x\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\354\\354\\251u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\324\\f\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\274,\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\244L\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 
\\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0008^\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0,\\233\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\370\\330\\252u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0000\\371\\252u\\1\\0\\0\\0\\34\\0\\270\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\364\\30\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\2448\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nselect(0, NULL, NULL, NULL, {0, 20000}) = 0 (Timeout)\nread(36, \"`\\0\\0\\0dX\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\\264\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0X\\216\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\10\\256\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\300\\315\\253u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\304\\f\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\354=\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\254]\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0d}\\254u\\1\\0\\0\\0\\34\\0\\270\\37\\360\\37\\4 \\0\\0\\0\\0\\270\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\24\\235\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\314\\274\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\314\\330\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0X\\354\\254u\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\nread(36, \"`\\0\\0\\0\\350\\253\\30x\\1\\0\\0\\0\\34\\0\\264\\37\\360\\37\\4 \\0\\0\\0\"..., \n8192) = 8192\n_llseek(36, 1068474368, [1068474368], SEEK_SET) = 0\nread(36, \"`\\0\\0\\0\\350\\253\\30x\\1\\0\\0\\0\\24\\1h\\21\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\n_llseek(36, 1068220416, [1068220416], SEEK_SET) = 0\nread(36, \"`\\0\\0\\0P\\356\\254u\\1\\0\\0\\0\\34\\0\\270\\37\\360\\37\\4 \\0\\0\\0\\0\"..., \n8192) = 8192\n\n\n", "msg_date": "Fri, 10 Jul 2009 11:11:55 -0700", "msg_from": "Brian Cox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: autovacuum hung?" } ]
[ { "msg_contents": "Hi,\n\nI have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB \nas well as a Greenplum DB.\n\nThe Primary key is a composite one comprising of 2 columns (so_no, \nserial_no).\n\nThe execution of the following query takes 8214.016 ms in Greenplum but \nonly 729.134 ms in Postgres.\nselect * from observation_all order by so_no, serial_no;\n\nI believe that execution time in greenplum should be less compared to \npostgres. Can anybody throw some light, it would be of great help.\n\n\nRegards,\n\nSuvankar Roy\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHi,\n\nI have some 99,000 records in a table\n(OBSERVATION_ALL) in a Postgres DB as well as a Greenplum DB.\n\nThe Primary key is a composite one\ncomprising of 2 columns (so_no, serial_no).\n\nThe execution of the following query\ntakes 8214.016 ms in Greenplum but only 729.134 ms in Postgres.\nselect * from observation_all order\nby so_no, serial_no;\n\nI believe that execution time in greenplum\nshould be less compared to postgres. Can anybody throw some light, it would\nbe of great help.\n\n\nRegards,\n\nSuvankar Roy\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Mon, 13 Jul 2009 16:53:41 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Performance comparison between Postgres and Greenplum" }, { "msg_contents": "On Mon, Jul 13, 2009 at 5:23 AM, Suvankar Roy<[email protected]> wrote:\n>\n> Hi,\n>\n> I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB as\n> well as a Greenplum DB.\n>\n> The Primary key is a composite one comprising of 2 columns (so_no,\n> serial_no).\n>\n> The execution of the following query takes 8214.016 ms in Greenplum but only\n> 729.134 ms in Postgres.\n> select * from observation_all order by so_no, serial_no;\n>\n> I believe that execution time in greenplum should be less compared to\n> postgres. 
Can anybody throw some light, it would be of great help.\n\nWhat versions are you comparing?\n", "msg_date": "Tue, 14 Jul 2009 21:40:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "Hi Scott,\n\nThis is what I have got -\n\nIn Greenplum, the following query returns:\n\ntest_db1=# select version();\n version \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 build 4) on \ni686-pc-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20080704 (Red Hat \n4.1.2-44) compiled on Jun 4 2009 16:30:49\n(1 row) \n\n\nIn Postgres, the same query returns:\n\npostgres=# select version();\n version\n-----------------------------------------------------\n PostgreSQL 8.3.7, compiled by Visual C++ build 1400\n(1 row)\n\nRegards,\n\nSuvankar Roy\nTata Consultancy Services\nPh:- +91 33 66367352\nCell:- +91 9434666898\n\n\n\n\nScott Marlowe <[email protected]> \n07/15/2009 09:10 AM\n\nTo\nSuvankar Roy <[email protected]>\ncc\[email protected]\nSubject\nRe: [PERFORM] Performance comparison between Postgres and Greenplum\n\n\n\n\n\n\nOn Mon, Jul 13, 2009 at 5:23 AM, Suvankar Roy<[email protected]> wrote:\n>\n> Hi,\n>\n> I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB \nas\n> well as a Greenplum DB.\n>\n> The Primary key is a composite one comprising of 2 columns (so_no,\n> serial_no).\n>\n> The execution of the following query takes 8214.016 ms in Greenplum but \nonly\n> 729.134 ms in Postgres.\n> select * from observation_all order by so_no, serial_no;\n>\n> I believe that execution time in greenplum should be less compared to\n> postgres. Can anybody throw some light, it would be of great help.\n\nWhat versions are you comparing?\n\nForwardSourceID:NT00004AAE \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. 
Thank you", "msg_date": "Wed, 15 Jul 2009 11:03:59 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "On Tue, Jul 14, 2009 at 11:33 PM, Suvankar Roy<[email protected]> wrote:\n>\n> Hi Scott,\n>\n> This is what I have got -\n> In Greenplum, version PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 build 4) on\n> i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n\n> In Postgres, version PostgreSQL 8.3.7, compiled by Visual C++ build 1400\n> (1 row)\n\nI wouldn't expect 8.2.x to outrun 8.3.x\n", "msg_date": "Wed, 15 Jul 2009 03:30:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "Hi Scott,\n\nThanks for your input Scott.\n\nBut, then being a Massively Parallel Processing Database, is Greenplum not \nexpected to outperform versions of Postgres higher than on which it is \nbased.\n\nMy notion was that GP 3.3 (based on PostgreSQL 8.2.13) would exceed PG \n8.3.7.\n\nIt seems that I was wrong here.\n\nRegards,\n\nSuvankar Roy\n\n\n\n\nScott Marlowe <[email protected]> \n07/15/2009 03:00 PM\n\nTo\nSuvankar Roy <[email protected]>\ncc\[email protected]\nSubject\nRe: [PERFORM] Performance comparison between Postgres and Greenplum\n\n\n\n\n\n\nOn Tue, Jul 14, 2009 at 11:33 PM, Suvankar Roy<[email protected]> \nwrote:\n>\n> Hi Scott,\n>\n> This is what I have got -\n> In Greenplum, version PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 \nbuild 4) on\n> i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n\n> In Postgres, version PostgreSQL 8.3.7, compiled by Visual C++ build 1400\n> (1 row)\n\nI wouldn't expect 8.2.x to outrun 8.3.x\n\nForwardSourceID:NT00004AD2 \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. 
Thank you", "msg_date": "Wed, 15 Jul 2009 15:09:21 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": ",--- You/Suvankar (Mon, 13 Jul 2009 16:53:41 +0530) ----*\n| I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB \n| as well as a Greenplum DB.\n| \n| The Primary key is a composite one comprising of 2 columns (so_no, \n| serial_no).\n| \n| The execution of the following query takes 8214.016 ms in Greenplum but \n| only 729.134 ms in Postgres.\n| select * from observation_all order by so_no, serial_no;\n| \n| I believe that execution time in greenplum should be less compared to \n| postgres. Can anybody throw some light, it would be of great help.\n\nWhy do you believe so?\n\nIs your data distributed and served by separate segment hosts? By how\nmany? Is the network connectivity not a factor? What happens with\nthe times if you don't sort your result set?\n\n-- Alex -- [email protected] --\n\n", "msg_date": "Wed, 15 Jul 2009 08:37:32 -0400", "msg_from": "Alex Goncharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "Hi Alex,\n\nYes, I have got 2 segments and a master host. So, in a way processing \nshould be faster in Greenplum.\n\nActually this is only a sort of Proof of Concept trial that I am carrying \nout to notice differences between greenplum and postgres, if any. 
\n\nFor other queries though, results are satisfactory or at least comparable, \nlike-\n\nselect distinct so_no, serial_no from observation_all;\nin postgres it takes - 1404.238 ms\nin gp it takes - 1217.283 ms\n\n\nRegards,\n\nSuvankar Roy\n\n\n\nAlex Goncharov <[email protected]> \n07/15/2009 06:07 PM\nPlease respond to\nAlex Goncharov <[email protected]>\n\n\nTo\nSuvankar Roy <[email protected]>\ncc\[email protected]\nSubject\nRe: [PERFORM] Performance comparison between Postgres and Greenplum\n\n\n\n\n\n\n,--- You/Suvankar (Mon, 13 Jul 2009 16:53:41 +0530) ----*\n| I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres DB \n\n| as well as a Greenplum DB.\n| \n| The Primary key is a composite one comprising of 2 columns (so_no, \n| serial_no).\n| \n| The execution of the following query takes 8214.016 ms in Greenplum but \n| only 729.134 ms in Postgres.\n| select * from observation_all order by so_no, serial_no;\n| \n| I believe that execution time in greenplum should be less compared to \n| postgres. Can anybody throw some light, it would be of great help.\n\nWhy do you believe so?\n\nIs your data distributed and served by separate segment hosts? By how\nmany? Is the network connectivity not a factor? What happens with\nthe times if you don't sort your result set?\n\n-- Alex -- [email protected] --\n\n\nForwardSourceID:NT00004AF2 \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHi Alex,\n\nYes, I have got 2 segments and a master\nhost. So, in a way processing should be faster in Greenplum.\n\nActually this is only a sort of Proof\nof Concept trial that I am carrying out to notice differences between greenplum\nand postgres, if any. \n\nFor other queries though, results are\nsatisfactory or at least comparable, like-\n\nselect distinct so_no, serial_no from\nobservation_all;\nin postgres it takes - 1404.238\nms\nin gp it takes - 1217.283\nms\n\n\nRegards,\n\nSuvankar Roy\n\n\n\n\n\nAlex Goncharov <[email protected]>\n\n07/15/2009 06:07 PM\n\n\n\nPlease respond to\nAlex Goncharov <[email protected]>\n\n\n\n\n\nTo\nSuvankar Roy <[email protected]>\n\n\ncc\[email protected]\n\n\nSubject\nRe: [PERFORM] Performance comparison\nbetween Postgres and Greenplum\n\n\n\n\n\n\n\n\n,--- You/Suvankar (Mon, 13 Jul 2009 16:53:41 +0530)\n----*\n| I have some 99,000 records in a table (OBSERVATION_ALL) in a Postgres\nDB \n| as well as a Greenplum DB.\n| \n| The Primary key is a composite one comprising of 2 columns (so_no, \n| serial_no).\n| \n| The execution of the following query takes 8214.016 ms in Greenplum but\n\n| only 729.134 ms in Postgres.\n| select * from observation_all order by so_no, serial_no;\n| \n| I believe that execution time in greenplum should be less compared to\n\n| postgres. Can anybody throw some light, it would be of great help.\n\nWhy do you believe so?\n\nIs your data distributed and served by separate segment hosts?  By\nhow\nmany?  Is the network connectivity not a factor?  
What happens\nwith\nthe times if you don't sort your result set?\n\n-- Alex -- [email protected] --\n\n\nForwardSourceID:NT00004AF2\n   \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Wed, 15 Jul 2009 18:32:12 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": ",--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) ----*\n| Yes, I have got 2 segments and a master host. So, in a way processing \n| should be faster in Greenplum.\n\nNo, it should not: it all depends on your data, SQL statements and\nsetup.\n\nIn my own experiments, with small amounts of stored data, PostgreSQL\nbeats Greenplum, which doesn't surprise me a bit.\n\nYou need to know where most of the execution time goes -- maybe to\nsorting? And sorting in Greenplum, isn't it done on one machine, the\nmaster host? Why would that be faster than in PostgreSQL?\n|\n| For other queries though, results are satisfactory or at least comparable, \n| like-\n| \n| select distinct so_no, serial_no from observation_all;\n| in postgres it takes - 1404.238 ms\n| in gp it takes - 1217.283 ms\n\nNo surprise here: the data is picked by multiple segment hosts and\nnever sorted on the master.\n\n-- Alex -- [email protected] --\n\n", "msg_date": "Wed, 15 Jul 2009 09:18:12 -0400", "msg_from": "Alex Goncharov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "On Wed, Jul 15, 2009 at 9:18 AM, Alex Goncharov\n<[email protected]>wrote:\n\n> ,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) ----*\n> | Yes, I have got 2 segments and a master host. So, in a way processing\n> | should be faster in Greenplum.\n>\n> No, it should not: it all depends on your data, SQL statements and\n> setup.\n>\n> In my own experiments, with small amounts of stored data, PostgreSQL\n> beats Greenplum, which doesn't surprise me a bit.\n\n\nAgreed. You're only operating on 99,000 rows. That isn't really\nenough rows to exercise the architecture of shared-nothing clusters.\nNow, I don't know greenplum very well, but I am familiar with another\nwarehousing product\nwith approximately the same architecture behind\nit. From all the testing I've done, you need to get into the 50\nmillion plus row range before the architecture starts to be really\neffective. 99,000 rows probably fits completely into memory on the\nmachine that you're testing PG with, so your test really isn't fair.\n On one PG box, you're just doing memory reads, and maybe some high-speed\ndisk access, on the Greenplum setup, you've got network overhead on top of\nall that. Bottom\nline: You need to do a test with a number of rows that won't fit into\nmemory, and won't be very quickly scanned from disk into memory. 
You\nneed a LOT of data.\n\n--Scott\n\nOn Wed, Jul 15, 2009 at 9:18 AM, Alex Goncharov <[email protected]> wrote:\n,--- You/Suvankar (Wed, 15 Jul 2009 18:32:12 +0530) ----*\n| Yes, I have got 2 segments and a master host. So, in a way processing\n| should be faster in Greenplum.\n\nNo, it should not: it all depends on your data, SQL statements and\nsetup.\n\nIn my own experiments, with small amounts of stored data, PostgreSQL\nbeats Greenplum, which doesn't surprise me a bit.Agreed.  You're only operating on 99,000 rows.  That isn't really enough rows to exercise the architecture of shared-nothing clusters.  Now, I don't know greenplum very well, but I am familiar with another warehousing product with approximately the same architecture behind it.  From all the testing I've done, you need to get into the 50 million plus row range before the architecture starts to be really effective.  99,000 rows probably fits completely into memory on the machine that you're testing PG with, so your test really isn't fair.  On one PG box, you're just doing memory reads, and maybe some high-speed disk access, on the Greenplum setup, you've got network overhead on top of all that.  Bottom line: You need to do a test with a number of rows that won't fit into memory, and won't be very quickly scanned from disk into memory.  You need a LOT of data.\n--Scott", "msg_date": "Wed, 15 Jul 2009 11:33:03 -0400", "msg_from": "Scott Mead <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" }, { "msg_contents": "On Mon, 13 Jul 2009, Suvankar Roy wrote:\n\n> I believe that execution time in greenplum should be less compared to postgres.\n\nWell, first off you don't even mention which PostgreSQL or Greenplum \nversion you're comparing, which leaves a lot of variables we can't account \nfor. Second, you'd need to make sure that the two servers had as close to \nidentical server parameter configurations as possible to get a fair \ncomparison (the postgresql.conf file). Next, you need to make sure the \ndata has been loaded and analyzed similarly on the two--try using \"VACUUM \nANALYZE\" on both systems before running your query, then \"EXPLAIN ANALYZE\" \non both setups to get an idea if they're using the same plan to pull data \nfrom the disk, you may discover there's a radical different there.\n\n...and even if you did all that, this still wouldn't be the right place to \nask about Greenplum's database product. You'll end up with everyone mad \nat you. Nobody likes have benchmarks that show their product in a bad \nlight published, particularly if they aren't completely fair. And this \nlist is dedicated to talking about the open-source PostgreSQL versions. \nYour question would be more appropriate to throw in Greenplum's direction. 
\nThe list I gave above is by no means even comprehensive--there are plenty \nof other ways you can end up doing an unfair comparison here (using \ndifferent partitions on the same disk which usually end up with different \nspeeds comes to mind).\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 15 Jul 2009 21:14:46 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and\n Greenplum" }, { "msg_contents": "On Wed, 15 Jul 2009, Scott Marlowe wrote:\n\n> On Tue, Jul 14, 2009 at 11:33 PM, Suvankar Roy<[email protected]> wrote:\n>>\n>> Hi Scott,\n>>\n>> This is what I have got -\n>> In Greenplum, version PostgreSQL 8.2.13 (Greenplum Database 3.3.0.1 build 4) on\n>> i686-pc-linux-gnu, compiled by GCC gcc (GCC)\n>\n>> In Postgres, version PostgreSQL 8.3.7, compiled by Visual C++ build 1400\n>> (1 row)\n>\n> I wouldn't expect 8.2.x to outrun 8.3.x\n\nAnd you can't directly compare performance of a system running Linux with \none running Windows, even if they're the same hardware. Theoretically, \nLinux should have an advantage, but only if you're accounting for a whole \nstack of other variables.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 15 Jul 2009 21:17:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and\n Greenplum" }, { "msg_contents": "On Wed, Jul 15, 2009 at 7:02 AM, Suvankar Roy<[email protected]> wrote:\n>\n> Hi Alex,\n>\n> Yes, I have got 2 segments and a master host. So, in a way processing should\n> be faster in Greenplum.\n>\n> Actually this is only a sort of Proof of Concept trial that I am carrying\n> out to notice differences between greenplum and postgres, if any.\n\nYou're definitely gonna want more data to test with. I run regular\nvanilla pgsql for stats at work, and we average 0.8M to 2M rows of\nstats every day. We keep them for up to two years. So, when we reach\nour max of two years, we're talking somewhere in the range of a\nbillion rows to mess about with.\n\nDuring a not so busy day, the 99,000th row entered into stats for\nhappens at about 3am. Once they're loaded into memory it takes 435 ms\nto access those 99k rows.\n\nStart testing in the millions, at a minimum. Hundreds of millions is\nmore likely to start showing a difference.\n", "msg_date": "Fri, 17 Jul 2009 00:24:35 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance comparison between Postgres and Greenplum" } ]
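For the "start testing in the millions" advice above, one way to build a data
set big enough to spill out of RAM is sketched here. The table name, column mix
and the 50 million row count are arbitrary; scale to taste, and on Greenplum a
DISTRIBUTED BY clause would normally be added:

CREATE TABLE observation_big AS
SELECT g                                    AS so_no,
       (g % 1000) + 1                       AS serial_no,
       md5(g::text)                         AS payload,
       now() - (g % 730) * interval '1 day' AS observed_at
FROM generate_series(1, 50000000) AS g;

ANALYZE observation_big;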
[ { "msg_contents": "Hi,\n\nI'm trying to solve big performance issues with PostgreSQL + bacula while \ninserting very big sets of records.\n\nI'm sorry, this email will be a long one, as I've already spent quite a lot of \ntime on the issue, I don't want to waste your time speculating on things I \nmay already have done, and the problem is (or seems to me) a bit complex. The \nother problem is that I don't have the explain plans to provide with the \nemail right now. I'll try to use this as a way to push 8.4 in this setup, to \ndump all these plans with autoexplain (queries are on temporary tables, so a \nbit tricky to get).\n\nLet me first explain or remind how this works. Bacula is a backup solution and \nis trying to insert its metadatas at the end of backups (file name, directory \nname, size, etc ...)\nFor what we are interested in, there are 3 tables :\n- file\n- filename\n- path\n\nfile is the one containing most records. It's the real metadata. filename and \npath just contain an id and the real file or directory name (to save some \nspace with redundant names).\n\nBefore explaining the issue, just some information about sizing here :\n\nfile is 1.1 billion records for 280GB (with indexes).\n\n Column | Type | Modifiers\n------------+---------+-------------------------------------------------------\n fileid | bigint | not null default nextval('file_fileid_seq'::regclass)\n fileindex | integer | not null default 0\n jobid | integer | not null\n pathid | integer | not null\n filenameid | integer | not null\n markid | integer | not null default 0\n lstat | text | not null\n md5 | text | not null\nIndexes:\n \"file_pkey\" UNIQUE, btree (fileid)\n \"file_fp_idx\" btree (filenameid, pathid)\n \"file_jpfid_idx\" btree (jobid, pathid, filenameid)\n\n\npath is 17 million for 6 GB\n\n Column | Type | Modifiers\n--------+---------+-------------------------------------------------------\n pathid | integer | not null default nextval('path_pathid_seq'::regclass)\n path | text | not null\nIndexes:\n \"path_pkey\" PRIMARY KEY, btree (pathid)\n \"path_name_idx\" UNIQUE, btree (path)\n\nfilename is 80 million for 13GB\n\n Column | Type | Modifiers\n------------+---------+---------------------------------------------------------------\n filenameid | integer | not null default \nnextval('filename_filenameid_seq'::regclass)\n name | text | not null\nIndexes:\n \"filename_pkey\" PRIMARY KEY, btree (filenameid)\n \"filename_name_idx\" UNIQUE, btree (name)\n\n\nThere are several queries for each job despooling :\n\nFirst we fill a temp table with the raw data (filename, pathname, metadata), \nusing COPY (no problem here)\n\nThen we insert missing filenames in file, and missing pathnames in path,\nwith this query (and the same for file) :\n\nINSERT INTO Path (Path) \n SELECT a.Path FROM (SELECT DISTINCT Path FROM batch) AS a \n WHERE NOT EXISTS (SELECT Path FROM Path WHERE Path = a.Path)\n\nThese do nested loops and work very well (after a sort on batch to get rid \nfrom duplicates). They work reasonably fast (what one would expect when \nlooping on millions of records... 
they do their job in a few minutes).\n\nThe problem occurs with the final query, which inserts data in file, joining \nthe temp table to both file and filename\n\nINSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)\n SELECT batch.FileIndex,\n batch.JobId,\n Path.PathId,\n Filename.FilenameId,\n batch.LStat,\n batch.MD5 \n FROM batch \n JOIN Path ON (batch.Path = Path.Path) \n JOIN Filename ON (batch.Name = Filename.Name)\n\nThis one has two split personalities, depending on how many records are in \nbatch.\nFor small batch tables, it does nested loops.\nFor big batch tables (more than around one million initially) it decides to \nhash join path (still ok, it's reasonably small) and then filename to batch \nbefore starting. And that's when the problems begin. The behaviour seems \nlogical to me, it should go to hash join when batch gets bigger, but it \nseems to be much too early here, considering the size of filename.\n\nFirst of all, performance remains much better on nested loops, except for \nextremely big batches (I'd say over 30 million, extrapolating from the times \nI'm seeing with 10 million records), so if I disable hash/merge joins, I get \nmy performance back on these queries (they execute in around the same time as \nthe searches in path and filename above). So I found a way to make most of my \nqueries do nested loops (I'll come back to this later).\n\nSecond, if there is more than one of these big sorts, performance degrades \ndrastically (we had 2 of them this weekend, they both took 24 hours to \ncomplete). This is probably due to our quite bad disk setup (we didn't have a \nbig budget for this). There was no swapping on the Linux side.\n\n\nSo all of this makes me think there is a cost evaluation problem in this \nsetup : with the default values, postgresql seems to underestimate the cost \nof sorting here (the row estimates were good, no problem with that).\nPostgreSQL seems to think that at around 1 million records in file it should \ngo with a hash join on filename and path, so we go on hashing the 17 million \nrecords of path, the 80 million of filename, then joining and inserting into \nfile (we're talking about sorting around 15 GB for each of these despools in \nparallel).\n\nTemporarily I moved the problem to a bit higher sizes of batch by changing \nrandom_page_cost to 0.02 and seq_page_cost to 0.01, but I feel like an \napprentice sorcerer with this, as I told postgreSQL that fetching rows from \ndisk is much cheaper than it really is. These values are, I think, completely \nabnormal. Doing this, I got the change of plan at around 8 million. And had 2 \nof them at 9 million at the same time this weekend, and both of them took 24 \nhours, while the nested loops before the join (for inserts in path and \nfilename) did their work in minutes...\n\nSo, finally, to my questions :\n- Is it normal that PostgreSQL is this off base on these queries (sorry I \ndon't have the plans, if they are required I'll do my best to get some, but \nthey really are the two obvious plans for this kind of query). What could \nmake it choose the hash join for too small batch tables ?\n- Is changing the 2 costs the way to go ?\n- Is there a way to tell postgreSQL that it's more costly to sort than it \nthinks ? 
(instead of telling it that fetching data from disk doesn't cost \nanything).\n\n\n\n\nHere are the other non-default values from my configuration :\n\nshared_buffers = 2GB\nwork_mem = 64MB\nmaintenance_work_mem = 256MB\nmax_fsm_pages = 15000000 # There are quite big deletes with bacula ...\neffective_cache_size = 800MB\ndefault_statistics_target = 1000\n\nPostgreSQL is 8.3.5 on Debian Lenny\n\n\n\nI'm sorry for this very long email, I tried to be as precise as I could, but \ndon't hesitate to ask me more.\n\nThanks for helping.\n\nMarc Cousin\n", "msg_date": "Mon, 13 Jul 2009 15:40:18 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Very big insert/join performance problem (bacula)" }, { "msg_contents": "Hi,\njust a remark, as the number of entries seems to be very high:\nDid you ever activate bacula's program dbcheck Option 16?\n\nRegards\n\nReiner\n\n\nMarc Cousin schrieb:\n> Hi,\n>\n> I'm trying to solve big performance issues with PostgreSQL + bacula while \n> inserting very big sets of records.\n>\n> I'm sorry, this email will be a long one, as I've already spent quite a lot of \n> time on the issue, I don't want to waste your time speculating on things I \n> may already have done, and the problem is (or seems to me) a bit complex. The \n> other problem is that I don't have the explain plans to provide with the \n> email right now. I'll try to use this as a way to push 8.4 in this setup, to \n> dump all these plans with autoexplain (queries are on temporary tables, so a \n> bit tricky to get).\n>\n> Let me first explain or remind how this works. Bacula is a backup solution and \n> is trying to insert its metadatas at the end of backups (file name, directory \n> name, size, etc ...)\n> For what we are interested in, there are 3 tables :\n> - file\n> - filename\n> - path\n>\n> file is the one containing most records. It's the real metadata. 
filename and \n> path just contain an id and the real file or directory name (to save some \n> space with redundant names).\n>\n> Before explaining the issue, just some information about sizing here :\n>\n> file is 1.1 billion records for 280GB (with indexes).\n>\n> Column | Type | Modifiers\n> ------------+---------+-------------------------------------------------------\n> fileid | bigint | not null default nextval('file_fileid_seq'::regclass)\n> fileindex | integer | not null default 0\n> jobid | integer | not null\n> pathid | integer | not null\n> filenameid | integer | not null\n> markid | integer | not null default 0\n> lstat | text | not null\n> md5 | text | not null\n> Indexes:\n> \"file_pkey\" UNIQUE, btree (fileid)\n> \"file_fp_idx\" btree (filenameid, pathid)\n> \"file_jpfid_idx\" btree (jobid, pathid, filenameid)\n>\n>\n> path is 17 million for 6 GB\n>\n> Column | Type | Modifiers\n> --------+---------+-------------------------------------------------------\n> pathid | integer | not null default nextval('path_pathid_seq'::regclass)\n> path | text | not null\n> Indexes:\n> \"path_pkey\" PRIMARY KEY, btree (pathid)\n> \"path_name_idx\" UNIQUE, btree (path)\n>\n> filename is 80 million for 13GB\n>\n> Column | Type | Modifiers\n> ------------+---------+---------------------------------------------------------------\n> filenameid | integer | not null default \n> nextval('filename_filenameid_seq'::regclass)\n> name | text | not null\n> Indexes:\n> \"filename_pkey\" PRIMARY KEY, btree (filenameid)\n> \"filename_name_idx\" UNIQUE, btree (name)\n>\n>\n> There are several queries for each job despooling :\n>\n> First we fill a temp table with the raw data (filename, pathname, metadata), \n> using COPY (no problem here)\n>\n> Then we insert missing filenames in file, and missing pathnames in path,\n> with this query (and the same for file) :\n>\n> INSERT INTO Path (Path) \n> SELECT a.Path FROM (SELECT DISTINCT Path FROM batch) AS a \n> WHERE NOT EXISTS (SELECT Path FROM Path WHERE Path = a.Path)\n>\n> These do nested loops and work very well (after a sort on batch to get rid \n> from duplicates). They work reasonably fast (what one would expect when \n> looping on millions of records... they do their job in a few minutes).\n>\n> The problem occurs with the final query, which inserts data in file, joining \n> the temp table to both file and filename\n>\n> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)\n> SELECT batch.FileIndex,\n> batch.JobId,\n> Path.PathId,\n> Filename.FilenameId,\n> batch.LStat,\n> batch.MD5 \n> FROM batch \n> JOIN Path ON (batch.Path = Path.Path) \n> JOIN Filename ON (batch.Name = Filename.Name)\n>\n> This one has two split personnalities, depending on how many records are in \n> batch.\n> For small batch tables, it does nested loops.\n> For big batch tables (more than around one million initially) it decides to \n> hash join path (still ok, it's reasonably small) and then filename to batch \n> before starting. 
And that's when the problems begin The behaviour seems \n> logicial to me, it should go to hash join when batch gets bigger, but it \n> seems to be much too early here, considering the size of filename.\n>\n> First of all, performance remains much better on nested loops, except for \n> extremely big batches (i'd say over 30 million, extrapolating from the times \n> I'm seeing with 10 millions records), so if I disable hash/merge joins, I get \n> my performance back on these queries (they execute in around the same time as \n> the searches in path and filename above). So I found a way to make most of my \n> queries do nested loops (I'll come back to this later)\n>\n> Second, If there is more than one of these big sorts, performance degrades \n> drastically (we had 2 of them this weekend, they both took 24 hours to \n> complete). This is probably due to our quite bad disk setup (we didn't have a \n> big budget for this). There was no swapping of linux\n>\n>\n> So all of this makes me think there is a cost evaluation problem in this \n> setup : with the default values, postgresql seems to underestimate the cost \n> of sorting here (the row estimates were good, no problem with that).\n> PostgreSQL seems to think that at around 1 million records in file it should \n> go with a hash join on filename and path, so we go on hashing the 17 million \n> records of path, the 80 millions of filename, then joining and inserting into \n> file (we're talking about sorting around 15 GB for each of these despools in \n> parallel).\n>\n> Temporarily I moved the problem at a bit higher sizes of batch by changing \n> random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel like an \n> apprentice sorcerer with this, as I told postgreSQL that fetching rows from \n> disk are much cheaper than they are. These values are, I think, completely \n> abnormal. Doing this, I got the change of plan at around 8 million. And had 2 \n> of them at 9 millions at the same time this weekend, and both of the took 24 \n> hours, while the nested loops before the join (for inserts in path and \n> filename) did their work in minutes...\n>\n> So, finally, to my questions :\n> - Is it normal that PostgreSQL is this off base on these queries (sorry I \n> don't have the plans, if they are required I'll do my best to get some, but \n> they really are the two obvious plans for this kind of query). What could \n> make it choose the hash join for too small batch tables ?\n> - Is changing the 2 costs the way to go ?\n> - Is there a way to tell postgreSQL that it's more costly to sort than it \n> thinks ? (instead of telling it that fetching data from disk doesn't cost \n> anything).\n>\n>\n>\n>\n> Here are the other non-default values from my configuration :\n>\n> shared_buffers = 2GB\n> work_mem = 64MB\n> maintenance_work_mem = 256MB\n> max_fsm_pages = 15000000 # There are quite big deletes with bacula ...\n> effective_cache_size = 800MB\n> default_statistics_target = 1000\n>\n> PostgreSQL is 8.3.5 on Debian Lenny\n>\n>\n>\n> I'm sorry for this very long email, I tried to be as precise as I could, but \n> don't hesitate to ask me more.\n>\n> Thanks for helping.\n>\n> Marc Cousin\n>\n> \n\n", "msg_date": "Mon, 13 Jul 2009 16:37:06 +0200", "msg_from": "SystemManagement <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "We regularly do all of dbcheck. 
This is our real configuration, there are \nreally lots of servers and lots of files (500 million files backed up every \nmonth).\n\nBut thanks for mentionning that.\n\nThe thing is we're trying to improve bacula with postgresql in order to make \nit able to bear with this kind of volumes. So we are looking for things to \nimprove bacula and postgresql tuning to make it cope with the queries \nmentionned (or rewrite the queries or the way to do inserts, that may not be \na problem either)\n\nOn Monday 13 July 2009 16:37:06 SystemManagement wrote:\n> Hi,\n> just a remark, as the number of entries seems to be very high:\n> Did you ever activate bacula's program dbcheck Option 16?\n>\n> Regards\n>\n> Reiner\n", "msg_date": "Mon, 13 Jul 2009 16:51:10 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Hi Marc,\n\nI don't have really extensive comments, but I found two small things...\n\nOn Monday 13 July 2009 15:40:18 Marc Cousin wrote:\n> I'm trying to solve big performance issues with PostgreSQL + bacula while\n> inserting very big sets of records.\n>\n> I'm sorry, this email will be a long one, as I've already spent quite a lot\n> of time on the issue, I don't want to waste your time speculating on things\n> I may already have done, and the problem is (or seems to me) a bit complex.\n> The other problem is that I don't have the explain plans to provide with\n> the email right now. I'll try to use this as a way to push 8.4 in this\n> setup, to dump all these plans with autoexplain (queries are on temporary\n> tables, so a bit tricky to get).\n>\n> Let me first explain or remind how this works. Bacula is a backup solution\n> and is trying to insert its metadatas at the end of backups (file name,\n> directory name, size, etc ...)\n> For what we are interested in, there are 3 tables :\n> - file\n> - filename\n> - path\n>\n> file is the one containing most records. It's the real metadata. 
filename\n> and path just contain an id and the real file or directory name (to save\n> some space with redundant names).\n>\n> Before explaining the issue, just some information about sizing here :\n>\n> file is 1.1 billion records for 280GB (with indexes).\n>\n> Column | Type | Modifiers\n> ------------+---------+----------------------------------------------------\n>--- fileid | bigint | not null default\n> nextval('file_fileid_seq'::regclass) fileindex | integer | not null\n> default 0\n> jobid | integer | not null\n> pathid | integer | not null\n> filenameid | integer | not null\n> markid | integer | not null default 0\n> lstat | text | not null\n> md5 | text | not null\n> Indexes:\n> \"file_pkey\" UNIQUE, btree (fileid)\n> \"file_fp_idx\" btree (filenameid, pathid)\n> \"file_jpfid_idx\" btree (jobid, pathid, filenameid)\n>\n>\n> path is 17 million for 6 GB\n>\n> Column | Type | Modifiers\n> --------+---------+-------------------------------------------------------\n> pathid | integer | not null default nextval('path_pathid_seq'::regclass)\n> path | text | not null\n> Indexes:\n> \"path_pkey\" PRIMARY KEY, btree (pathid)\n> \"path_name_idx\" UNIQUE, btree (path)\n>\n> filename is 80 million for 13GB\n>\n> Column | Type | Modifiers\n> ------------+---------+----------------------------------------------------\n>----------- filenameid | integer | not null default\n> nextval('filename_filenameid_seq'::regclass)\n> name | text | not null\n> Indexes:\n> \"filename_pkey\" PRIMARY KEY, btree (filenameid)\n> \"filename_name_idx\" UNIQUE, btree (name)\n>\n>\n> There are several queries for each job despooling :\n>\n> First we fill a temp table with the raw data (filename, pathname,\n> metadata), using COPY (no problem here)\n>\n> Then we insert missing filenames in file, and missing pathnames in path,\n> with this query (and the same for file) :\n>\n> INSERT INTO Path (Path)\n> SELECT a.Path FROM (SELECT DISTINCT Path FROM batch) AS a\n> WHERE NOT EXISTS (SELECT Path FROM Path WHERE Path = a.Path)\n>\n> These do nested loops and work very well (after a sort on batch to get rid\n> from duplicates). They work reasonably fast (what one would expect when\n> looping on millions of records... they do their job in a few minutes).\nWhile this is not your questions, I still noticed you seem to be on 8.3 - it \nmight be a bit faster to use GROUP BY instead of DISTINCT.\n\n> The problem occurs with the final query, which inserts data in file,\n> joining the temp table to both file and filename\n>\n> INSERT INTO File (FileIndex, JobId, PathId, FilenameId, LStat, MD5)\n> SELECT batch.FileIndex,\n> batch.JobId,\n> Path.PathId,\n> Filename.FilenameId,\n> batch.LStat,\n> batch.MD5\n> FROM batch\n> JOIN Path ON (batch.Path = Path.Path)\n> JOIN Filename ON (batch.Name = Filename.Name)\n>\n> This one has two split personnalities, depending on how many records are in\n> batch.\n> For small batch tables, it does nested loops.\n> For big batch tables (more than around one million initially) it decides to\n> hash join path (still ok, it's reasonably small) and then filename to batch\n> before starting. 
And that's when the problems begin The behaviour seems\n> logicial to me, it should go to hash join when batch gets bigger, but it\n> seems to be much too early here, considering the size of filename.\n>\n> First of all, performance remains much better on nested loops, except for\n> extremely big batches (i'd say over 30 million, extrapolating from the\n> times I'm seeing with 10 millions records), so if I disable hash/merge\n> joins, I get my performance back on these queries (they execute in around\n> the same time as the searches in path and filename above). So I found a way\n> to make most of my queries do nested loops (I'll come back to this later)\n>\n> Second, If there is more than one of these big sorts, performance degrades\n> drastically (we had 2 of them this weekend, they both took 24 hours to\n> complete). This is probably due to our quite bad disk setup (we didn't have\n> a big budget for this). There was no swapping of linux\n>\n>\n> So all of this makes me think there is a cost evaluation problem in this\n> setup : with the default values, postgresql seems to underestimate the cost\n> of sorting here (the row estimates were good, no problem with that).\n> PostgreSQL seems to think that at around 1 million records in file it\n> should go with a hash join on filename and path, so we go on hashing the 17\n> million records of path, the 80 millions of filename, then joining and\n> inserting into file (we're talking about sorting around 15 GB for each of\n> these despools in parallel).\n>\n> Temporarily I moved the problem at a bit higher sizes of batch by changing\n> random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel like an\n> apprentice sorcerer with this, as I told postgreSQL that fetching rows from\n> disk are much cheaper than they are. These values are, I think, completely\n> abnormal. Doing this, I got the change of plan at around 8 million. And had\n> 2 of them at 9 millions at the same time this weekend, and both of the took\n> 24 hours, while the nested loops before the join (for inserts in path and\n> filename) did their work in minutes...\n>\n> So, finally, to my questions :\n> - Is it normal that PostgreSQL is this off base on these queries (sorry I\n> don't have the plans, if they are required I'll do my best to get some, but\n> they really are the two obvious plans for this kind of query). What could\n> make it choose the hash join for too small batch tables ?\n> - Is changing the 2 costs the way to go ?\n> - Is there a way to tell postgreSQL that it's more costly to sort than it\n> thinks ? (instead of telling it that fetching data from disk doesn't cost\n> anything).\n\n> Here are the other non-default values from my configuration :\n>\n> shared_buffers = 2GB\n> work_mem = 64MB\n> maintenance_work_mem = 256MB\n> max_fsm_pages = 15000000 # There are quite big deletes with bacula ...\n> effective_cache_size = 800MB\n> default_statistics_target = 1000\nYour effective_cache_size is really small for the system you seem to have - its \nthe size of IO caching your os is doing and uses no resources itself. And \n800MB of that on a system with that amount of data seems a bit unlikely ;-)\n\nUsing `free` you can see the amount of io caching your OS is doing atm. 
in the \n'cached' column.\n\nThat possibly might tip some plans in a direction you prefer.\n\nWhat kind of machine are you running this on?\n\nAndres\n", "msg_date": "Mon, 13 Jul 2009 17:06:07 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": ">\n> While this is not your questions, I still noticed you seem to be on 8.3 -\n> it might be a bit faster to use GROUP BY instead of DISTINCT.\nIt didn't do a big difference, I already tried that before for this query. \nAnyway, as you said, it's not the query having problems :)\n\n\n> Your effective_cache_size is really small for the system you seem to have -\n> its the size of IO caching your os is doing and uses no resources itself.\n> And 800MB of that on a system with that amount of data seems a bit unlikely\n> ;-)\n>\n> Using `free` you can see the amount of io caching your OS is doing atm. in\n> the 'cached' column.\n>\n> That possibly might tip some plans in a direction you prefer.\n>\n> What kind of machine are you running this on?\n\nI played with this parameter too, and it didn't influence the plan. Anyway, the \ndoc says it's the OS cache available for one query, and there may be a lot of \ninsert queries at the same time, so I chose to be conservative with this \nvalue. I tried it with 8GB too, the plans were the same.\n\nThe OS cache is around 8-10GB by the way.\n\nThe machine is a dell PE2900, with 6 disks dedicated to this database (raid 10 \nconfig)\n", "msg_date": "Tue, 14 Jul 2009 07:54:38 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin wrote:\n> \n>> Your effective_cache_size is really small for the system you seem to have -\n>> its the size of IO caching your os is doing and uses no resources itself.\n>> And 800MB of that on a system with that amount of data seems a bit unlikely\n>> ;-)\n>>\n>> Using `free` you can see the amount of io caching your OS is doing atm. in\n>> the 'cached' column.\n>>\n>> That possibly might tip some plans in a direction you prefer.\n>>\n>> What kind of machine are you running this on?\n> \n> I played with this parameter too, and it didn't influence the plan. Anyway, the \n> doc says it's the OS cache available for one query,\n\nNo they don't. I'm guessing you're getting mixed up with work_mem.\n\n > and there may be a lot of\n> insert queries at the same time, so I chose to be conservative with this \n> value. I tried it with 8GB too, the plans were the same.\n> \n> The OS cache is around 8-10GB by the way.\n\nThat's what you need to set effective_cache_size to then.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 14 Jul 2009 09:15:21 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin wrote:\n> \n> Temporarily I moved the problem at a bit higher sizes of batch by changing \n> random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel like an \n> apprentice sorcerer with this, as I told postgreSQL that fetching rows from \n> disk are much cheaper than they are. These values are, I think, completely \n> abnormal.\n\nThey certainly don't have anything to do with reality. 
Try putting them \nback to (say) seq_page_cost=1 and random_page_cost=2.\n\n> So, finally, to my questions :\n> - Is it normal that PostgreSQL is this off base on these queries (sorry I \n> don't have the plans, if they are required I'll do my best to get some, but \n> they really are the two obvious plans for this kind of query). What could \n> make it choose the hash join for too small batch tables ?\n\nNo point in speculating without plans.\n\n> - Is changing the 2 costs the way to go ?\n\nNot the way you have.\n\n> - Is there a way to tell postgreSQL that it's more costly to sort than it \n> thinks ? (instead of telling it that fetching data from disk doesn't cost \n> anything).\n\nThat's what the configuration settings do. But if you put a couple way \noff from reality it'll be pure chance if it gets any estimates right.\n\n> Here are the other non-default values from my configuration :\n> \n> shared_buffers = 2GB\n> work_mem = 64MB\n\nSet this *much* higher when you are running your bulk imports. You can \ndo it per-connection. Try 256MB, 512MB, 1GB (but keep an eye on total \nmemory used).\n\n> maintenance_work_mem = 256MB\n> max_fsm_pages = 15000000 # There are quite big deletes with bacula ...\n> effective_cache_size = 800MB\n\nSee other emails on this one.\n\n> default_statistics_target = 1000\n\nProbably don't need this for all columns, but it won't cause problems \nwith these queries.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 14 Jul 2009 09:23:25 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Le Tuesday 14 July 2009 10:15:21, vous avez écrit :\n> Marc Cousin wrote:\n> >> Your effective_cache_size is really small for the system you seem to\n> >> have - its the size of IO caching your os is doing and uses no resources\n> >> itself. And 800MB of that on a system with that amount of data seems a\n> >> bit unlikely ;-)\n> >>\n> >> Using `free` you can see the amount of io caching your OS is doing atm.\n> >> in the 'cached' column.\n> >>\n> >> That possibly might tip some plans in a direction you prefer.\n> >>\n> >> What kind of machine are you running this on?\n> >\n> > I played with this parameter too, and it didn't influence the plan.\n> > Anyway, the doc says it's the OS cache available for one query,\n>\n> No they don't. I'm guessing you're getting mixed up with work_mem.\n\nI'm not (from the docs) :\neffective_cache_size (integer)\n Sets the planner's assumption about the effective size of the disk cache that \nis available to a single query\n\nI trust you, of course, but then I think maybe this should be rephrased in the \ndoc then, because I understand it like I said ... I always had a doubt about \nthis sentence, and that's why I tried both 800MB and 8GB for this parameter.\n\n>\n> > and there may be a lot of\n> >\n> > insert queries at the same time, so I chose to be conservative with this\n> > value. 
I tried it with 8GB too, the plans were the same.\n> >\n> > The OS cache is around 8-10GB by the way.\n>\n> That's what you need to set effective_cache_size to then.\nOk but that doesn't change a thing for this query (I had a doubt on this \nparameter and tried with both 800MB and 8GB)\n", "msg_date": "Tue, 14 Jul 2009 11:16:01 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Le Tuesday 14 July 2009 10:23:25, Richard Huxton a écrit :\n> Marc Cousin wrote:\n> > Temporarily I moved the problem at a bit higher sizes of batch by\n> > changing random_page_cost to 0.02 and seq_page_cost to 0.01, but I feel\n> > like an apprentice sorcerer with this, as I told postgreSQL that fetching\n> > rows from disk are much cheaper than they are. These values are, I think,\n> > completely abnormal.\n>\n> They certainly don't have anything to do with reality. Try putting them\n> back to (say) seq_page_cost=1 and random_page_cost=2.\n\nThat's the first thing I tried (it seemed more sensible), and it didn't work. I \ncan't put them back to these values for more than one test query, the server \nreally died before I changed the settings.\n\n>\n> > So, finally, to my questions :\n> > - Is it normal that PostgreSQL is this off base on these queries (sorry I\n> > don't have the plans, if they are required I'll do my best to get some,\n> > but they really are the two obvious plans for this kind of query). What\n> > could make it choose the hash join for too small batch tables ?\n>\n> No point in speculating without plans.\n\nOk, I'll try to have them tomorrow.\n\n>\n> > - Is changing the 2 costs the way to go ?\n>\n> Not the way you have.\nThat's what I thought, and the reason I posted :)\n\n>\n> > - Is there a way to tell postgreSQL that it's more costly to sort than it\n> > thinks ? (instead of telling it that fetching data from disk doesn't cost\n> > anything).\n>\n> That's what the configuration settings do. But if you put a couple way\n> off from reality it'll be pure chance if it gets any estimates right.\n>\n> > Here are the other non-default values from my configuration :\n> >\n> > shared_buffers = 2GB\n> > work_mem = 64MB\n>\n> Set this *much* higher when you are running your bulk imports. You can\n> do it per-connection. Try 256MB, 512MB, 1GB (but keep an eye on total\n> memory used).\n\nI'll try that. But anyhow, I've got much better performance when not doing the \nhash join. I'll get back with the plans as soon as possible.\n\n>\n> > maintenance_work_mem = 256MB\n> > max_fsm_pages = 15000000 # There are quite big deletes with bacula ...\n> > effective_cache_size = 800MB\n>\n> See other emails on this one.\n>\n> > default_statistics_target = 1000\n>\n> Probably don't need this for all columns, but it won't cause problems\n> with these queries.\n", "msg_date": "Tue, 14 Jul 2009 11:22:16 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "This mail contains the asked plans :\n\nI've done them with the different configurations, as I had done the effort of setting up the whole thing :)\nStats were updated between all runs. 
Each time is the first run of the query (that's what we have in production with bacula)\nAnd I added the executor stats, in case ...\n\nBy the way, I think I must mention it, the whole thing runs over DRBD, but with 2 gigabyte links between the master and the slave.\nAnd I tried deactivating replication when things got really slow (despooling in 24 hours), it changed nothing (sorts were a bit faster, \naround 20%). Server is 12 GB ram, 1 quad core xeon E5335.\n\nPostgreSQL starts to hash filename a bit later than what I said in the first mail, because it's become bigger (it was around 30-40 million last time I did the tests).\n\nThis is the query (temp_mc is the table I've created to do my tests ...):\n\nexplain ANALYZE SELECT batch.FileIndex,\n batch.JobId,\n Path.PathId,\n Filename.FilenameId,\n batch.LStat,\n batch.MD5\n FROM temp_mc AS batch\n JOIN Path ON (batch.Path = Path.Path)\n JOIN Filename ON (batch.Name = Filename.Name);\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++\nPlan 1\naround 1 million records to insert, seq_page_cost 1, random_page_cost 4\n\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 380.143452 elapsed 79.000938 user 44.386774 system sec\n! [415.785985 user 155.733732 sys total]\n! 15848728/12934936 [24352752/50913184] filesystem blocks in/out\n! 0/44188 [86/987512] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 93812/40706 [405069/184511] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 877336 read, 0 written, buffer hit rate = 6.75%\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n! Direct blocks: 0 read, 0 written\n\n Hash Join (cost=3923929.71..5131377.91 rows=1286440 width=91) (actual time=234021.194..380018.709 rows=1286440 loops=1)\n Hash Cond: (batch.name = filename.name)\n -> Hash Join (cost=880140.87..1286265.62 rows=1286440 width=102) (actual time=23184.959..102400.782 rows=1286440 loops=1)\n Hash Cond: (batch.path = path.path)\n -> Seq Scan on temp_mc batch (cost=0.00..49550.40 rows=1286440 width=189) (actual time=0.007..342.396 rows=1286440 loops=1)\n -> Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual time=23184.196..23184.196 rows=16732049 loops=1)\n -> Seq Scan on path (cost=0.00..425486.72 rows=16746972 width=92) (actual time=0.004..7318.850 rows=16732049 loops=1)\n -> Hash (cost=1436976.15..1436976.15 rows=79104615 width=35) (actual time=210831.840..210831.840 rows=79094418 loops=1)\n -> Seq Scan on filename (cost=0.00..1436976.15 rows=79104615 width=35) (actual time=46.324..148887.662 rows=79094418 loops=1)\n Total runtime: 380136.601 ms\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++\nPlan 2\nthe same insert, with seq_page_cost to 0.01 and random_page_cost to 0.02\n\nDETAIL: ! system usage stats:\n! 42.378039 elapsed 28.277767 user 12.192762 system sec\n! [471.865489 user 180.499280 sys total]\n! 0/4072368 [24792848/59059032] filesystem blocks in/out\n! 0/0 [86/989858] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 1061/9131 [429738/200320] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 251574 read, 0 written, buffer hit rate = 96.27%\n! Local blocks: 0 read, 0 written, buffer hit rate = 0.00%\n! 
Direct blocks: 0 read, 0 written\nLOG: duration: 42378.373 ms statement: \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=381840.21..1012047.92 rows=1286440 width=91) (actual time=20284.387..42242.955 rows=1286440 loops=1)\n Hash Cond: (batch.path = path.path)\n -> Nested Loop (cost=0.00..583683.91 rows=1286440 width=178) (actual time=0.026..10333.636 rows=1286440 loops=1)\n -> Seq Scan on temp_mc batch (cost=0.00..13231.26 rows=1286440 width=189) (actual time=0.008..380.361 rows=1286440 loops=1)\n -> Index Scan using filename_name_idx on filename (cost=0.00..0.43 rows=1 width=35) (actual time=0.006..0.007 rows=1 loops=1286440)\n Index Cond: (filename.name = batch.name)\n -> Hash (cost=170049.89..170049.89 rows=16746972 width=92) (actual time=20280.729..20280.729 rows=16732049 loops=1)\n -> Seq Scan on path (cost=0.00..170049.89 rows=16746972 width=92) (actual time=0.005..4560.872 rows=16732049 loops=1)\n Total runtime: 42371.362 ms\n\n\nThe thing is that this query is ten times faster, but it's not the main point : this query stays reasonably fast even when there are\n20 of it running simultaneously. Of course, as it's faster, it also has less tendancy to pile up than the other one does.\n\nWhen I get 10-20 of the first one running at the same time, the queries get extremely slow (I guess they are fighting\nfor accessing the sort disk, because I see a lot of smaller IOs instead of the big and nice IOs I see when only one of\nthese queries runs). The IO subsystem seems to degrade very much when there is a lot of concurrent activity on this server.\nFor instance, last weekend, we had to 8 million simultaneous backups, with the hash join plan. It took 24 hours for them to complete.\nIf they had been alone on the server, it would have taken around 1 hour for each of them.\n\n\nOf course, with these smaller cost values, there is still a batch size when the plans goes back to the first one.\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++\nPlan 3\nseq_page_cost to 1, random_page_cost to 2. Plan is the same as Plan 1.\n\n-------------------------------------------------------------------------------------\n Hash Join (cost=3923961.69..5131416.88 rows=1286440 width=91)\n Hash Cond: (batch.name = filename.name)\n -> Hash Join (cost=880144.31..1286270.06 rows=1286440 width=102)\n Hash Cond: (batch.path = path.path)\n -> Seq Scan on temp_mc batch (cost=0.00..49550.40 rows=1286440 width=189)\n -> Hash (cost=425488.36..425488.36 rows=16747036 width=92)\n -> Seq Scan on path (cost=0.00..425488.36 rows=16747036 width=92)\n -> Hash (cost=1436989.50..1436989.50 rows=79105350 width=35)\n -> Seq Scan on filename (cost=0.00..1436989.50 rows=79105350 width=35)\n(9 rows)\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++\nPlan 4:\nseq_page_cost to 1, random_page_cost back to 4, raise work_mem to 512MB. Same as Plan 1\nEstimated cost hasn't changed. 
Is this normal ?\n\n-------------------------------------------------------------------------------------\n Hash Join (cost=3923961.69..5131416.88 rows=1286440 width=91)\n Hash Cond: (batch.name = filename.name)\n -> Hash Join (cost=880144.31..1286270.06 rows=1286440 width=102)\n Hash Cond: (batch.path = path.path)\n -> Seq Scan on temp_mc batch (cost=0.00..49550.40 rows=1286440 width=189)\n -> Hash (cost=425488.36..425488.36 rows=16747036 width=92)\n -> Seq Scan on path (cost=0.00..425488.36 rows=16747036 width=92)\n -> Hash (cost=1436989.50..1436989.50 rows=79105350 width=35)\n -> Seq Scan on filename (cost=0.00..1436989.50 rows=79105350 width=35)\n(9 rows)\n\nMaybe this one would scale a bit better, as there would be fewer sort files ? I couldn't execute it and get reliable times (sorry, the production period has started).\nIf necessary, I can run it again tomorrow. I had to cancel the query after more than 15 minutes, to let the server do its regular work.\n\n\n\nThere are other things I am thinking of : maybe it would be better to have sort space on another (and not DRBD'ed) raid set ? we have a quite\ncheap setup right now for the database, and I think maybe this would help scale better. I can get a filesystem in another volume group, which is not used that much for now.\n\nAnyway, thanks for all the ideas you could have on this.\n\nMarc.\n", "msg_date": "Wed, 15 Jul 2009 13:37:01 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin escribió:\n\n> There are other things I am thinking of : maybe it would be better to have sort space on another (and not DRBD'ed) raid set ? we have a quite\n> cheap setup right now for the database, and I think maybe this would help scale better. I can get a filesystem in another volume group, which is not used that much for now.\n\nYou know, that's the first thing that came to me when I read you're using\nDRBD. Have you tried setting temp_tablespace to a non-replicated disk?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 15 Jul 2009 09:45:01 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Le Wednesday 15 July 2009 15:45:01, Alvaro Herrera a écrit :\n> Marc Cousin escribió:\n> > There are other things I am thinking of : maybe it would be better to\n> > have sort space on another (and not DRBD'ed) raid set ? we have a quite\n> > cheap setup right now for the database, and I think maybe this would help\n> > scale better. I can get a filesystem in another volume group, which is\n> > not used that much for now.\n>\n> You know, that's the first thing that came to me when I read you're using\n> DRBD. Have you tried setting temp_tablespace to a non-replicated disk?\n\nI wish I could easily. I'm not entitled to tune the database, only to give \ndirectives. I've given this one, but I don't know when it will be done. I'll \nkeep you informed on this one, but I don't have my hopes too high.\n\nAs mentioned before, I tried to deactivate DRBD (still using the DRBD device, \nbut not connected to the other node, so it has almost no effect). It didn't \nchange much (performance was a bit better, around 20%).\n\nAnyway, the thing is that :\n- big sorts kill my machine when there are more than 5 of them. 
I think it is \na system problem (raid, filesystem, linux tuning, don't really know, I'll have \nto dig into this, but it will be complicated, for human reasons :) )\n- the plan through nested loops is faster anyway, and I think it's because \nthere is only a small fraction of filename and path that is used (most files \nbacked up have the same name or path, as we save 600 machines with mostly 2 \nOSes, linux and windows), so the hot parts of these 2 tables are extremely \nlikely to be in the database or linux cache (buffer hit rate was 97% in the \nexample provided). Moreover, the first two queries of the insert procedure fill \nthe cache for us...\n\n\n\n", "msg_date": "Wed, 15 Jul 2009 15:53:36 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin wrote:\n> This mail contains the asked plans :\n> Plan 1\n> around 1 million records to insert, seq_page_cost 1, random_page_cost 4\n\n> -> Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual time=23184.196..23184.196 rows=16732049 loops=1)\n> -> Seq Scan on path (cost=0.00..425486.72 rows=16746972 width=92) (actual time=0.004..7318.850 rows=16732049 loops=1)\n\n> -> Hash (cost=1436976.15..1436976.15 rows=79104615 width=35) (actual time=210831.840..210831.840 rows=79094418 loops=1)\n> -> Seq Scan on filename (cost=0.00..1436976.15 rows=79104615 width=35) (actual time=46.324..148887.662 rows=79094418 loops=1)\n\nThis doesn't address the cost driving plan question, but I think it's a \nbit puzzling that a seq scan of 17M 92-byte rows completes in 7 secs, \nwhile a seqscan of 79M 35-byte rows takes 149secs. It's about 4:1 row \nratio, less than 2:1 byte ratio, but a 20:1 time ratio. Perhaps there's \nsome terrible bloat on filename that's not present on path? If that seq \nscan time on filename were proportionate to path this plan would \ncomplete about two minutes faster (making it only 6 times slower instead \nof 9 :).\n\n-- \n-Devin\n", "msg_date": "Wed, 15 Jul 2009 16:56:37 -0700", "msg_from": "Devin Ben-Hur <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "\n\nOn 7/15/09 4:56 PM, \"Devin Ben-Hur\" <[email protected]> wrote:\n\n> Marc Cousin wrote:\n>> This mail contains the asked plans :\n>> Plan 1\n>> around 1 million records to insert, seq_page_cost 1, random_page_cost 4\n> \n>> -> Hash (cost=425486.72..425486.72 rows=16746972 width=92) (actual\n>> time=23184.196..23184.196 rows=16732049 loops=1)\n>> -> Seq Scan on path (cost=0.00..425486.72 rows=16746972\n>> width=92) (actual time=0.004..7318.850 rows=16732049 loops=1)\n> \n>> -> Hash (cost=1436976.15..1436976.15 rows=79104615 width=35) (actual\n>> time=210831.840..210831.840 rows=79094418 loops=1)\n>> -> Seq Scan on filename (cost=0.00..1436976.15 rows=79104615\n>> width=35) (actual time=46.324..148887.662 rows=79094418 loops=1)\n> \n> This doesn't address the cost driving plan question, but I think it's a\n> bit puzzling that a seq scan of 17M 92-byte rows completes in 7 secs,\n> while a seqscan of 79M 35-byte rows takes 149secs. It's about 4:1 row\n> ratio, less than 2:1 byte ratio, but a 20:1 time ratio. Perhaps there's\n> some terrible bloat on filename that's not present on path? 
If that seq\n> scan time on filename were proportionate to path this plan would\n> complete about two minutes faster (making it only 6 times slower instead\n> of 9 :).\n\n\nBloat is possible. This can be checked with VACUUM VERBOSE on the table.\nPostgres has a habit of getting its table files fragmented too under certain\nuse cases.\nAdditionally, some of the table pages may have been cached in one use case\nand not in another.\n> \n> --\n> -Devin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 15 Jul 2009 17:43:19 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Le Thursday 16 July 2009 01:56:37, Devin Ben-Hur a écrit :\n> Marc Cousin wrote:\n> > This mail contains the asked plans :\n> > Plan 1\n> > around 1 million records to insert, seq_page_cost 1, random_page_cost 4\n> >\n> > -> Hash (cost=425486.72..425486.72 rows=16746972 width=92)\n> > (actual time=23184.196..23184.196 rows=16732049 loops=1) -> Seq Scan on\n> > path (cost=0.00..425486.72 rows=16746972 width=92) (actual\n> > time=0.004..7318.850 rows=16732049 loops=1)\n> >\n> > -> Hash (cost=1436976.15..1436976.15 rows=79104615 width=35) (actual\n> > time=210831.840..210831.840 rows=79094418 loops=1) -> Seq Scan on\n> > filename (cost=0.00..1436976.15 rows=79104615 width=35) (actual\n> > time=46.324..148887.662 rows=79094418 loops=1)\n>\n> This doesn't address the cost driving plan question, but I think it's a\n> bit puzzling that a seq scan of 17M 92-byte rows completes in 7 secs,\n> while a seqscan of 79M 35-byte rows takes 149secs. It's about 4:1 row\n> ratio, less than 2:1 byte ratio, but a 20:1 time ratio. Perhaps there's\n> some terrible bloat on filename that's not present on path? If that seq\n> scan time on filename were proportionate to path this plan would\n> complete about two minutes faster (making it only 6 times slower instead\n> of 9 :).\nMuch simpler than that I think : there is a bigger percentage of path that is \nused all the time than of filename. 
The database used is the production \ndatabase, so there were other insert queries running a few minutes before I \ngot this plan.\n\nBut I'll give it a look today and come back with bloat and cache information \non these 2 tables.\n", "msg_date": "Thu, 16 Jul 2009 07:20:18 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "On Thursday 16 July 2009 07:20:18 Marc Cousin wrote:\n> Le Thursday 16 July 2009 01:56:37, Devin Ben-Hur a écrit :\n> > Marc Cousin wrote:\n> > > This mail contains the asked plans :\n> > > Plan 1\n> > > around 1 million records to insert, seq_page_cost 1, random_page_cost 4\n> > >\n> > > -> Hash (cost=425486.72..425486.72 rows=16746972 width=92)\n> > > (actual time=23184.196..23184.196 rows=16732049 loops=1) -> Seq Scan\n> > > on path (cost=0.00..425486.72 rows=16746972 width=92) (actual\n> > > time=0.004..7318.850 rows=16732049 loops=1)\n> > >\n> > > -> Hash (cost=1436976.15..1436976.15 rows=79104615 width=35)\n> > > (actual time=210831.840..210831.840 rows=79094418 loops=1) -> Seq Scan\n> > > on filename (cost=0.00..1436976.15 rows=79104615 width=35) (actual\n> > > time=46.324..148887.662 rows=79094418 loops=1)\n> >\n> > This doesn't address the cost driving plan question, but I think it's a\n> > bit puzzling that a seq scan of 17M 92-byte rows completes in 7 secs,\n> > while a seqscan of 79M 35-byte rows takes 149secs. It's about 4:1 row\n> > ratio, less than 2:1 byte ratio, but a 20:1 time ratio. Perhaps there's\n> > some terrible bloat on filename that's not present on path? If that seq\n> > scan time on filename were proportionate to path this plan would\n> > complete about two minutes faster (making it only 6 times slower instead\n> > of 9 :).\n>\n> Much simpler than that I think : there is a bigger percentage of path that\n> is used all the time than of filename. The database used is the production\n> database, so there were other insert queries running a few minutes before I\n> got this plan.\n>\n> But I'll give it a look today and come back with bloat and cache\n> information on these 2 tables.\n\nHere are the stats for filename :\n\nSELECT * from pgstattuple('public.filename');\n table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent\n------------+-------------+------------+---------------+------------------+----------------+--------------------+------------+--------------\n 5308719104 | 79338344 | 4717466438 | 88.86 | 0 | 0 | 0 | 11883396 | 0.22\n\nSo I guess it's not bloated.\n\nI checked in the cache, the times displayed before were with path in the cache. filename couldn't stay in the cache, as it's too big.\n", "msg_date": "Thu, 16 Jul 2009 09:41:05 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin <[email protected]> wrote: \n \n> the hot parts of these 2 tables are extremely likely to be in the\n> database or linux cache (buffer hit rate was 97% in the example\n> provided). 
Moreover, the first two queries of the insert procedure \n> fill the cache for us...\n \nThis would be why the optimizer does the best job estimating the\nrelative costs of various plans when you set the random_page_cost and\nseq_page_cost very low.\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 15:07:25 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem\n\t (bacula)" }, { "msg_contents": "Le Thursday 16 July 2009 22:07:25, Kevin Grittner a écrit :\n> Marc Cousin <[email protected]> wrote:\n> > the hot parts of these 2 tables are extremely likely to be in the\n> > database or linux cache (buffer hit rate was 97% in the example\n> > provided). Moreover, the first two queries of the insert procedure\n> > fill the cache for us...\n>\n> This would be why the optimizer does the best job estimating the\n> relative costs of various plans when you set the random_page_cost and\n> seq_page_cost very low.\n>\n> -Kevin\n\n\nOk, so to sum it up, should I keep these values (I hate doing this :) ) ? \nWould there be a way to approximately evaluate them regarding to the expected \nbuffer hit ratio of the query ?\n\n", "msg_date": "Thu, 16 Jul 2009 22:50:10 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin <[email protected]> wrote:\n \n> to sum it up, should I keep these values (I hate doing this :) ) ?\n \nMany people need to set the random_page_cost and/or seq_page_cost to\nreflect the overall affect of caching on the active portion of the\ndata. We set our fully-cached databases to 0.1 for both. Databases\nwith less caching usually wind up at 2 and 1. We have one database\nwhich does best at 0.5 and 0.3. My advice is to experiment and try to\nfind a pair of settings which works well for most or all of your\nqueries. If you have a few which need a different setting, you can\nset a special value right before running the query, but I've always\nbeen able to avoid that (thankfully).\n \n> Would there be a way to approximately evaluate them regarding to\n> the expected buffer hit ratio of the query ?\n \nNothing query-specific except setting them on the connection right\nbefore the query (and setting them back or discarding the connection\nafterward). Well, that and making sure that effective_cache_size\nreflects reality.\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 16:54:54 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem\n\t (bacula)" }, { "msg_contents": "Le Thursday 16 July 2009 23:54:54, Kevin Grittner a écrit :\n> Marc Cousin <[email protected]> wrote:\n> > to sum it up, should I keep these values (I hate doing this :) ) ?\n>\n> Many people need to set the random_page_cost and/or seq_page_cost to\n> reflect the overall affect of caching on the active portion of the\n> data. We set our fully-cached databases to 0.1 for both. Databases\n> with less caching usually wind up at 2 and 1. We have one database\n> which does best at 0.5 and 0.3. My advice is to experiment and try to\n> find a pair of settings which works well for most or all of your\n> queries. 
If you have a few which need a different setting, you can\n> set a special value right before running the query, but I've always\n> been able to avoid that (thankfully).\n>\n> > Would there be a way to approximately evaluate them regarding to\n> > the expected buffer hit ratio of the query ?\n>\n> Nothing query-specific except setting them on the connection right\n> before the query (and setting them back or discarding the connection\n> afterward). Well, that and making sure that effective_cache_size\n> reflects reality.\n>\n> -Kevin\n\n\nOK, thanks a lot.\n\nA last thing :\n\nAs mentionned in another mail from the thread (from Richard Huxton), I felt \nthis message in the documentation a bit misleading :\n\neffective_cache_size (integer)\n Sets the planner's assumption about the effective size of the disk cache that \nis available to a single query\n\nI don't really know what the 'a single query' means. I interpreted that as \n'divide it by the amount of queries typically running in parallel on the \ndatabase'. Maybe it should be rephrased ? (I may not be the one \nmisunderstanding it).\n", "msg_date": "Fri, 17 Jul 2009 00:03:24 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin wrote:\n> Le Thursday 16 July 2009 22:07:25, Kevin Grittner a �crit :\n>> Marc Cousin <[email protected]> wrote:\n>>> the hot parts of these 2 tables are extremely likely to be in the\n>>> database or linux cache (buffer hit rate was 97% in the example\n>>> provided). Moreover, the first two queries of the insert procedure\n>>> fill the cache for us...\n> \n> Ok, so to sum it up, should I keep these values (I hate doing this :) ) ? \n> Would there be a way to approximately evaluate them regarding to the expected \n> buffer hit ratio of the query ?\n\ncached_buffer_cost = 0.01\neffective_page_cost =\n ((1 - expected_cache_hit_ratio) * standard_page_cost)\n+ (expected_cache_hit_ratio * cached_buffer_cost)\n\nIf your assumption is only about these queries in particular, rather \nthan applicable across the board, you should set the page_costs just for \nthis query and reset them or close the connection after.\n\n-- \n-Devin\n", "msg_date": "Thu, 16 Jul 2009 15:29:48 -0700", "msg_from": "Devin Ben-Hur <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "Marc Cousin <[email protected]> wrote:\n \n> As mentionned in another mail from the thread (from Richard Huxton),\n> I felt this message in the documentation a bit misleading :\n> \n> effective_cache_size (integer)\n> Sets the planner's assumption about the effective size of the disk\n> cache that is available to a single query\n> \n> I don't really know what the 'a single query' means. I interpreted\n> that as 'divide it by the amount of queries typically running in\n> parallel on the database'. Maybe it should be rephrased ? (I may not\n> be the one misunderstanding it).\n \nI'm afraid I'll have to let someone else speak to that; I only have a\nvague sense of its impact. I've generally gotten good results setting\nthat to the available cache space on the machine. 
If I'm running\nmultiple database clusters on one machine, I tend to hedge a little\nand set it lower to allow for some competition.\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 17:30:17 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem\n\t (bacula)" }, { "msg_contents": "On Thu, Jul 16, 2009 at 6:30 PM, Kevin\nGrittner<[email protected]> wrote:\n> Marc Cousin <[email protected]> wrote:\n>\n>> As mentionned in another mail from the thread (from Richard Huxton),\n>> I felt this message in the documentation a bit misleading :\n>>\n>> effective_cache_size (integer)\n>>  Sets the planner's assumption about the effective size of the disk\n>>  cache that is available to a single query\n>>\n>> I don't really know what the 'a single query' means. I interpreted\n>> that as 'divide it by the amount of queries typically running in\n>> parallel on the database'. Maybe it should be rephrased ? (I may not\n>> be the one misunderstanding it).\n>\n> I'm afraid I'll have to let someone else speak to that; I only have a\n> vague sense of its impact.  I've generally gotten good results setting\n> that to the available cache space on the machine.  If I'm running\n> multiple database clusters on one machine, I tend to hedge a little\n> and set it lower to allow for some competition.\n\nIt really has very little impact. It only affects index scans, and\neven then only if effective_cache_size is less than the size of the\ntable.\n\nEssentially, when this kicks in, it models the effect that if you are\nindex scanning a table much larger than the size of your cache, you\nmight have to reread some blocks that you previously read in during\n*that same index scan*.\n\n...Robert\n", "msg_date": "Thu, 23 Jul 2009 23:48:27 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "> It really has very little impact. It only affects index scans, and\n> even then only if effective_cache_size is less than the size of the\n> table.\n>\n> Essentially, when this kicks in, it models the effect that if you are\n> index scanning a table much larger than the size of your cache, you\n> might have to reread some blocks that you previously read in during\n> *that same index scan*.\n\n\nOk, thanks for clearing that up for me. Still, I think the doc could be \nimproved on this point (sorry to be a bit obsessed with that, but I'm one of \nthe french translators, so I like the doc to be perfect :) )\n", "msg_date": "Fri, 24 Jul 2009 07:13:06 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": "On Fri, Jul 24, 2009 at 1:13 AM, Marc Cousin<[email protected]> wrote:\n>> It really has very little impact.  It only affects index scans, and\n>> even then only if effective_cache_size is less than the size of the\n>> table.\n>>\n>> Essentially, when this kicks in, it models the effect that if you are\n>> index scanning a table much larger than the size of your cache, you\n>> might have to reread some blocks that you previously read in during\n>> *that same index scan*.\n>\n> Ok, thanks for clearing that up for me. Still, I think the doc could be\n> improved on this point (sorry to be a bit obsessed with that, but I'm one of\n> the french translators, so I like the doc to be perfect :) )\n\nYes, I agree. 
I was confused for quite a long time, too, until I read\nthe code. I think many people think this value is much more important\nthan it really is.\n\n(That having been said, I have no current plans to write such a doc\npatch myself.)\n\n...Robert\n", "msg_date": "Sat, 25 Jul 2009 23:31:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" }, { "msg_contents": ">>> It really has very little impact. It only affects index scans, and\n>>> even then only if effective_cache_size is less than the size of the\n>> table.\n>>>\n>>> Essentially, when this kicks in, it models the effect that if you are\n>>> index scanning a table much larger than the size of your cache, you\n>>> might have to reread some blocks that you previously read in during\n>>> *that same index scan*.\n>>\n>> Ok, thanks for clearing that up for me. Still, I think the doc could be\n>> improved on this point (sorry to be a bit obsessed with that, but I'm one \n>> of\n>> the french translators, so I like the doc to be perfect :) )\n>\n>Yes, I agree. I was confused for quite a long time, too, until I read\n>the code. I think many people think this value is much more important\n>than it really is.\n>\n>(That having been said, I have no current plans to write such a doc\n>patch myself.)\n>\n>...Robert\n\nHow about adding a comment to the wiki performance page....\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n\n", "msg_date": "Mon, 27 Jul 2009 06:43:07 -0400", "msg_from": "\"Eric Comeau\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very big insert/join performance problem (bacula)" } ]
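A minimal sketch of the per-connection override described in this thread, in plain psql; the 0.1 values are only placeholders to experiment with, not recommendations, and the SELECT stands in for whatever statement is being tuned:

    -- lower the planner's page-cost estimates for this session only
    SET seq_page_cost = 0.1;
    SET random_page_cost = 0.1;
    EXPLAIN ANALYZE SELECT ...;   -- the statement under test
    -- put the server defaults back for the rest of the session
    RESET seq_page_cost;
    RESET random_page_cost;

Because plain SET is session-local, other connections keep whatever postgresql.conf specifies.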
[ { "msg_contents": "Howdy. Some months back, when advised on one of these lists that it\nshould not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit\nthis nightly \"maintenance\" practice. We've been very happy to not\nhave to do that, since it locked the database all night. Since then,\nhowever, our database performance has decreased. The decrease took a\nfew weeks to become noticable; perhaps six weeks to become awful.\n\nI have no objective measurement of the decrease in performance. I\nhave just created a benchmark that exercises our system and used it to\nmeasure the current degraded performance. I hope it will show me,\nobjectively, how much any attempted fix improves system performance.\n\nOne thing I noticed is that when we stopped doing the VACUUM\nFULL/REINDEX is that the size of the weekly backups (a compressed\ntarball of main + WAL files) jumped in size. A steady 53GB before we\nstopped doing the vacuum, the next backup after stopping the VACUUM\nFULL was 97GB. The backup sizes have grown in the three months since\nthen and are now hovering at around 130GB. We believe, but have no\nhard numbers to prove, that this growth in physical backup size is out\nof proportion with the growth of the logical database that we expect\ndue to the slow growth of the business. We are pretty sure we would\nhave noticed the business growing at more than 50% per quarter.\n\nI did a VACUUM VERBOSE and looked at the statistics at the end; they\nseem to indicated that my max_fsm_pages is large enough to keep track\nof all of the dead rows that are being created (we do a fair amount of\ndeleting as well as inserting). Postgres prints no complaint saying\nwe need more slots, and we have more than the number of slots needed\n(if I recall, about twice as many).\n\nWhat options do I have for restoring performance other than VACUUM\nFULL/REINDEX DATABASE?\n\nBefore trying any fix, what data do I want to collect that might\nindicate where the performance problem is?\n\nBest Regards,\n Wayne Conrad\n", "msg_date": "Mon, 13 Jul 2009 12:31:52 -0700 (MST)", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "On Mon, Jul 13, 2009 at 1:31 PM, Wayne Conrad<[email protected]> wrote:\n> Howdy.  Some months back, when advised on one of these lists that it\n> should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit\n> this nightly \"maintenance\" practice.  We've been very happy to not\n> have to do that, since it locked the database all night.  Since then,\n> however, our database performance has decreased.  The decrease took a\n> few weeks to become noticable; perhaps six weeks to become awful.\nSNIP\n> What options do I have for restoring performance other than VACUUM\n> FULL/REINDEX DATABASE?\n\nJust wondering, which pgsql version, and also, do you have autovacuum turned on?\n", "msg_date": "Tue, 14 Jul 2009 22:52:27 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "On Mon, Jul 13, 2009 at 3:31 PM, Wayne Conrad<[email protected]> wrote:\n> Howdy.  Some months back, when advised on one of these lists that it\n> should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit\n> this nightly \"maintenance\" practice.  We've been very happy to not\n> have to do that, since it locked the database all night.  
Since then,\n> however, our database performance has decreased.  The decrease took a\n> few weeks to become noticable; perhaps six weeks to become awful.\n\n <snip>\n\n> I did a VACUUM VERBOSE and looked at the statistics at the end; they\n> seem to indicated that my max_fsm_pages is large enough to keep track\n> of all of the dead rows that are being created (we do a fair amount of\n> deleting as well as inserting).  Postgres prints no complaint saying\n> we need more slots, and we have more than the number of slots needed\n> (if I recall, about twice as many).\n>\n> What options do I have for restoring performance other than VACUUM\n> FULL/REINDEX DATABASE?\n>\n\nDo you have autovacuum on, or otherwise replaced your VACUUM FULL with\nregular VACUUM? The symptoms are pretty classically those of table\nbloat. Since it's gotten so out of hand now, a VACUUM FULL/REINDEX is\nprobably what you'll need to fix it.\n\nGoing forward, you need *some* vacuuming strategy. Autovacuum is\nprobably best, especially if you're on 8.3. If not autovacuum for some\nreason, you *must* at least do regular vacuums.\n\nVacuum full/reindex is for fixing the situation you're in now, but a\nregular vacuum strategy should prevent you from getting back into it.\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Wed, 15 Jul 2009 00:53:56 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "On Tue, 14 Jul 2009, Scott Marlowe wrote:\n> Just wondering, which pgsql version, and also, do you have\n> autovacuum turned on?\n\nDang, I should have said in my initial message. 8.3.6, and autovacuum\nis turned on and has plenty of log activity.\n", "msg_date": "Wed, 15 Jul 2009 05:51:17 -0700 (MST)", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "On Wed, Jul 15, 2009 at 6:51 AM, Wayne Conrad<[email protected]> wrote:\n> On Tue, 14 Jul 2009, Scott Marlowe wrote:\n>>\n>> Just wondering, which pgsql version, and also, do you have\n>> autovacuum turned on?\n>\n> Dang, I should have said in my initial message.  8.3.6, and autovacuum\n> is turned on and has plenty of log activity.\n\nAre you guys doing anything that could be deemed pathological, like\nfull table updates on big tables over and over? Had an issue last\nyear where a dev left a where clause off an update to a field in one\nof our biggest tables and in a few weeks the database was so bloated\nwe had to take it offline to fix the problem. After fixing the query.\n", "msg_date": "Wed, 15 Jul 2009 08:30:40 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": ">> On Tue, 14 Jul 2009, Scott Marlowe wrote:\n> Are you guys doing anything that could be deemed pathological, like\n> full table updates on big tables over and over? Had an issue last\n> year where a dev left a where clause off an update to a field in one\n> of our biggest tables and in a few weeks the database was so bloated\n> we had to take it offline to fix the problem. After fixing the\n> query.\n\nI've just audited the source, looking for any updates without where\nclauses. 
None jumped out to bite me.\n\nAlmost everything we do happens in transactions which can occasionally\ntake 10-20 minutes to complete and span thousands or tens of thousands\nof rows across multiple tables. Are long-running transactions a\nculprit in table bloat?\n\nI've also used contrib/pgstattuple to try to identify which of our\nlarge tables and indices are experiencing bloat. Here are the\npgstattuple results for our largest tables:\n\ntable_len: 56639488\ntuple_count: 655501\ntuple_len: 53573112\ntuple_percent: 94.59\ndead_tuple_count: 0\ndead_tuple_len: 0\ndead_tuple_percent: 0\nfree_space: 251928\nfree_percent: 0.44\ntable_name: status\n\ntable_len: 94363648\ntuple_count: 342363\ntuple_len: 61084340\ntuple_percent: 64.73\ndead_tuple_count: 10514\ndead_tuple_len: 1888364\ndead_tuple_percent: 2\nfree_space: 28332256\nfree_percent: 30.02\ntable_name: uploads\n\ntable_len: 135675904\ntuple_count: 1094803\ntuple_len: 129821312\ntuple_percent: 95.68\ndead_tuple_count: 133\ndead_tuple_len: 16048\ndead_tuple_percent: 0.01\nfree_space: 991460\nfree_percent: 0.73\ntable_name: invoice_details\n\ntable_len: 148914176\ntuple_count: 1858812\ntuple_len: 139661736\ntuple_percent: 93.79\ndead_tuple_count: 1118\ndead_tuple_len: 80704\ndead_tuple_percent: 0.05\nfree_space: 1218040\nfree_percent: 0.82\ntable_name: job_status_log\n\ntable_len: 173416448\ntuple_count: 132974\ntuple_len: 117788200\ntuple_percent: 67.92\ndead_tuple_count: 10670\ndead_tuple_len: 7792692\ndead_tuple_percent: 4.49\nfree_space: 46081516\nfree_percent: 26.57\ntable_name: mail\n\ntable_len: 191299584\ntuple_count: 433378\ntuple_len: 145551144\ntuple_percent: 76.09\ndead_tuple_count: 1042\ndead_tuple_len: 862952\ndead_tuple_percent: 0.45\nfree_space: 42068276\nfree_percent: 21.99\ntable_name: sessions\n\ntable_len: 548552704\ntuple_count: 5446169\ntuple_len: 429602136\ntuple_percent: 78.32\ndead_tuple_count: 24992\ndead_tuple_len: 1929560\ndead_tuple_percent: 0.35\nfree_space: 93157980\nfree_percent: 16.98\ntable_name: job_state_log\n\ntable_len: 639262720\ntuple_count: 556415\ntuple_len: 221505548\ntuple_percent: 34.65\ndead_tuple_count: 66688\ndead_tuple_len: 27239728\ndead_tuple_percent: 4.26\nfree_space: 380168112\nfree_percent: 59.47\ntable_name: jobs\n\ntable_len: 791240704\ntuple_count: 8311799\ntuple_len: 700000052\ntuple_percent: 88.47\ndead_tuple_count: 39\ndead_tuple_len: 3752\ndead_tuple_percent: 0\nfree_space: 11397548\nfree_percent: 1.44\ntable_name: cron_logs\n\ntable_len: 1612947456\ntuple_count: 10854417\ntuple_len: 1513084075\ntuple_percent: 93.81\ndead_tuple_count: 0\ndead_tuple_len: 0\ndead_tuple_percent: 0\nfree_space: 13014040\nfree_percent: 0.81\ntable_name: documents_old_addresses\n\ntable_len: 1832091648\ntuple_count: 13729360\ntuple_len: 1600763725\ntuple_percent: 87.37\ndead_tuple_count: 598525\ndead_tuple_len: 80535904\ndead_tuple_percent: 4.4\nfree_space: 38817616\nfree_percent: 2.12\ntable_name: statements\n\ntable_len: 3544350720\ntuple_count: 64289703\ntuple_len: 2828746932\ntuple_percent: 79.81\ndead_tuple_count: 648849\ndead_tuple_len: 28549356\ndead_tuple_percent: 0.81\nfree_space: 143528236\nfree_percent: 4.05\ntable_name: ps_page\n\ntable_len: 4233355264\ntuple_count: 22866609\ntuple_len: 3285722981\ntuple_percent: 77.62\ndead_tuple_count: 231624\ndead_tuple_len: 31142594\ndead_tuple_percent: 0.74\nfree_space: 706351636\nfree_percent: 16.69\ntable_name: injectd_log\n\ntable_len: 4927676416\ntuple_count: 55919895\ntuple_len: 4176606972\ntuple_percent: 84.76\ndead_tuple_count: 
795011\ndead_tuple_len: 58409884\ndead_tuple_percent: 1.19\nfree_space: 279870944\nfree_percent: 5.68\ntable_name: documents_ps_page\n\ntable_len: 4953735168\ntuple_count: 44846317\ntuple_len: 3346823052\ntuple_percent: 67.56\ndead_tuple_count: 2485971\ndead_tuple_len: 183639396\ndead_tuple_percent: 3.71\nfree_space: 1038200484\nfree_percent: 20.96\ntable_name: latest_document_address_links\n\ntable_len: 23458062336\ntuple_count: 89533157\ntuple_len: 19772992448\ntuple_percent: 84.29\ndead_tuple_count: 2311467\ndead_tuple_len: 502940946\ndead_tuple_percent: 2.14\nfree_space: 2332408612\nfree_percent: 9.94\ntable_name: document_address\n\ntable_len: 28510109696\ntuple_count: 44844664\ntuple_len: 21711695949\ntuple_percent: 76.15\ndead_tuple_count: 1134932\ndead_tuple_len: 300674467\ndead_tuple_percent: 1.05\nfree_space: 5988985892\nfree_percent: 21.01\ntable_name: documents\n\nHere are the pgstatindex results for our largest indices. I assumed\nthat negative index sizes are a reslt of integer overflow and ordered\nthe results accordingly.\n\nindex_size: 1317961728\nversion: 2\ntree_level: 3\nroot_block_no: 12439\ninternal_pages: 13366\nleaf_pages: 1182318\nempty_pages: 0\ndeleted_pages: 13775\navg_leaf_density: -157.76\nleaf_fragmentation: 37.87\nindex_name: documents_pkey\n\nindex_size: 1346609152\nversion: 2\ntree_level: 3\nroot_block_no: 10447\ninternal_pages: 1937\nleaf_pages: 162431\nempty_pages: 0\ndeleted_pages: 12\navg_leaf_density: 66.56\nleaf_fragmentation: 26.48\nindex_name: statements_pkey\n\nindex_size: 1592713216\nversion: 2\ntree_level: 3\nroot_block_no: 81517\ninternal_pages: 723\nleaf_pages: 177299\nempty_pages: 0\ndeleted_pages: 16400\navg_leaf_density: 74.15\nleaf_fragmentation: 5.58\nindex_name: latest_document_address2_precedence_key\n\nindex_size: 1617821696\nversion: 2\ntree_level: 3\nroot_block_no: 81517\ninternal_pages: 720\nleaf_pages: 185846\nempty_pages: 0\ndeleted_pages: 10921\navg_leaf_density: 78.8\nleaf_fragmentation: 10.96\nindex_name: documents_ps_page_ps_page_id_idx\n\nindex_size: 1629798400\nversion: 2\ntree_level: 3\nroot_block_no: 81517\ninternal_pages: 728\nleaf_pages: 188325\nempty_pages: 0\ndeleted_pages: 9896\navg_leaf_density: 88.23\nleaf_fragmentation: 0.66\nindex_name: ps_page_pkey\n\nindex_size: 1658560512\nversion: 2\ntree_level: 3\nroot_block_no: 81517\ninternal_pages: 740\nleaf_pages: 191672\nempty_pages: 0\ndeleted_pages: 10048\navg_leaf_density: 86.7\nleaf_fragmentation: 1.03\nindex_name: ps_page_ps_id_key\n\nindex_size: -31956992\nversion: 2\ntree_level: 3\nroot_block_no: 12439\ninternal_pages: 5510\nleaf_pages: 475474\nempty_pages: 0\ndeleted_pages: 39402\navg_leaf_density: 72.19\nleaf_fragmentation: 3.02\nindex_name: latest_document_address2_pkey\n\nindex_size: -321863680\nversion: 2\ntree_level: 3\nroot_block_no: 81517\ninternal_pages: 1809\nleaf_pages: 479805\nempty_pages: 0\ndeleted_pages: 3383\navg_leaf_density: 25.63\nleaf_fragmentation: 40.05\nindex_name: documents_id_idx\n\nindex_size: -461504512\nversion: 2\ntree_level: 3\nroot_block_no: 49813\ninternal_pages: 3023\nleaf_pages: 456246\nempty_pages: 0\ndeleted_pages: 8682\navg_leaf_density: 34.37\nleaf_fragmentation: 66.83\nindex_name: documents_city\n\nindex_size: -11818844162\nversion: 3\ntree_level: \nroot_block_no: 11036\ninternal_pages: 10003\nleaf_pages: 822178\nempty_pages: 0\ndeleted_pages: 72121\navg_leaf_density: 54.52\nleaf_fragmentation: 3.37\nindex_name: document_address_pkey\n\nindex_size: -12678348802\nversion: 3\ntree_level: \nroot_block_no: 32210\ninternal_pages: 
2410\nleaf_pages: 359867\nempty_pages: 0\ndeleted_pages: 7245\navg_leaf_density: 53.31\nleaf_fragmentation: 52.7\nindex_name: documents_recipient\n\nindex_size: -13276282882\nversion: 3\ntree_level: \nroot_block_no: 27346\ninternal_pages: 2183\nleaf_pages: 360040\nempty_pages: 0\ndeleted_pages: 0\navg_leaf_density: 58.39\nleaf_fragmentation: 50\nindex_name: documents_magic_id_key\n\nindex_size: -14476328962\nversion: 3\ntree_level: \nroot_block_no: 44129\ninternal_pages: 1998\nleaf_pages: 339111\nempty_pages: 0\ndeleted_pages: 6465\navg_leaf_density: 50.12\nleaf_fragmentation: 52.85\nindex_name: documents_zip10\n\nindex_size: -14723809282\nversion: 3\ntree_level: \nroot_block_no: 81515\ninternal_pages: 2470\nleaf_pages: 326170\nempty_pages: 0\ndeleted_pages: 15913\navg_leaf_density: 38.21\nleaf_fragmentation: 77.19\nindex_name: documents_state\n\nindex_size: -14831697922\nversion: 3\ntree_level: \nroot_block_no: 47536\ninternal_pages: 1607\nleaf_pages: 341421\nempty_pages: 0\ndeleted_pages: 208\navg_leaf_density: 45.28\nleaf_fragmentation: 46.48\nindex_name: documents_account_number\n\nindex_size: -17118412802\nversion: 3\ntree_level: \nroot_block_no: 81517\ninternal_pages: 1149\nleaf_pages: 296146\nempty_pages: 0\ndeleted_pages: 18027\navg_leaf_density: 80.86\nleaf_fragmentation: 7.14\nindex_name: document_address_precedence_key\n", "msg_date": "Wed, 15 Jul 2009 15:03:50 -0700 (MST)", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "\nOn 7/14/09 9:53 PM, \"David Wilson\" <[email protected]> wrote:\n\n> On Mon, Jul 13, 2009 at 3:31 PM, Wayne Conrad<[email protected]> wrote:\n>> Howdy.  Some months back, when advised on one of these lists that it\n>> should not be necessary to issue VACUUM FULL/REINDEX DATABASE, we quit\n>> this nightly \"maintenance\" practice.  We've been very happy to not\n>> have to do that, since it locked the database all night.  Since then,\n>> however, our database performance has decreased.  The decrease took a\n>> few weeks to become noticable; perhaps six weeks to become awful.\n> \n> <snip>\n> \n>> I did a VACUUM VERBOSE and looked at the statistics at the end; they\n>> seem to indicated that my max_fsm_pages is large enough to keep track\n>> of all of the dead rows that are being created (we do a fair amount of\n>> deleting as well as inserting).  Postgres prints no complaint saying\n>> we need more slots, and we have more than the number of slots needed\n>> (if I recall, about twice as many).\n>> \n>> What options do I have for restoring performance other than VACUUM\n>> FULL/REINDEX DATABASE?\n>> \n> \n> Do you have autovacuum on, or otherwise replaced your VACUUM FULL with\n> regular VACUUM? The symptoms are pretty classically those of table\n> bloat. Since it's gotten so out of hand now, a VACUUM FULL/REINDEX is\n> probably what you'll need to fix it.\n\nIf you go that route, do a REINDEX first. 
You probably want to know whether\nit is mostly index or table bloat that is the majority of the problem.\n\nAdjusting each table and index FILLFACTOR may also help.\n\nHowever, if it has bloated this much, you may have some long living\ntransactions that make it hard for postgres to recycle free space.\n\nAnd as others have said, certain things can cause a lot of bloat that only\nCLUSTER or VACUUM FULL will reclaim well -- especially updating all or most\nrows in a table, or otherwise doing very large bulk delete or update.\n\n> \n> Going forward, you need *some* vacuuming strategy. Autovacuum is\n> probably best, especially if you're on 8.3. If not autovacuum for some\n> reason, you *must* at least do regular vacuums.\n> \n> Vacuum full/reindex is for fixing the situation you're in now, but a\n> regular vacuum strategy should prevent you from getting back into it.\n> \n> --\n> - David T. Wilson\n> [email protected]\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 15 Jul 2009 17:30:58 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "On Wed, Jul 15, 2009 at 4:03 PM, Wayne Conrad<[email protected]> wrote:\n>>> On Tue, 14 Jul 2009, Scott Marlowe wrote:\n>>\n>> Are you guys doing anything that could be deemed pathological, like\n>> full table updates on big tables over and over?  Had an issue last\n>> year where a dev left a where clause off an update to a field in one\n>> of our biggest tables and in a few weeks the database was so bloated\n>> we had to take it offline to fix the problem.  After fixing the\n>> query.\n>\n> I've just audited the source, looking for any updates without where\n> clauses.  None jumped out to bite me.\n>\n> Almost everything we do happens in transactions which can occasionally\n> take 10-20 minutes to complete and span thousands or tens of thousands\n> of rows across multiple tables.  Are long-running transactions a\n> culprit in table bloat?\n>\n> I've also used contrib/pgstattuple to try to identify which of our\n> large tables and indices are experiencing bloat.  Here are the\n> pgstattuple results for our largest tables:\n\nOuch hurts my eyes :) Can you see something like table_len,\ndead_tuple_percent, free_percent order by dead_tuple_percent desc\nlimit 10 or something like that maybe?\n\n>\n> table_len:          56639488\n> tuple_count:        655501\n> tuple_len:          53573112\n> tuple_percent:      94.59\n> dead_tuple_count:   0\n> dead_tuple_len:     0\n> dead_tuple_percent: 0\n> free_space:         251928\n> free_percent:       0.44\n> table_name:         status\nLots more rows deleted.\n", "msg_date": "Wed, 15 Jul 2009 18:40:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" }, { "msg_contents": "> Ouch hurts my eyes :) Can you see something like table_len,\n> dead_tuple_percent, free_percent order by dead_tuple_percent desc\n> limit 10 or something like that maybe?\n\nSorry about the pain. 
Didn't know what you needed to see.\n\nOrdering by dead_tuple_percent:\n\ndb.production=> select table_name, table_len, dead_tuple_percent,\nfree_percent from temp_tuplestats order by dead_tuple_percent desc\nlimit 10;\n table_name | table_len | dead_tuple_percent | free_percent\n-------------------------------------+-----------+--------------------+--------------\n scheduler_info | 8192 | 43.95 | 46\n inserter_maintenance_logs | 16384 | 25.13 | 9\n merchants | 8192 | 24.19 | 64\n scheduler_in_progress | 32768 | 16.47 | 75\n guilds_hosts | 8192 | 13.28 | 67\n work_types | 8192 | 12.18 | 78\n production_printer_maintenance_logs | 16384 | 11.18 | 11\n guilds_work_types | 8192 | 10.94 | 71\n config | 8192 | 10.47 | 83\n work_in_progress | 131072 | 8.47 | 85\n(10 rows)\n\nThese are our smallest, and in terms of performance, least significant\ntables. Except for work_in_progress, they play little part in overall\nsystem performace. work_in_progress gets dozens of insertions and\ndeletions per second, and as many queries.\n\nOrdering by table size, because I had the questions of where the bloat\nis, in terms of disk space used (since I brought up before that the\nphysical size of the database is growing at about 50% per quarter):\n\ndb.production=> select table_name, table_len, dead_tuple_percent, free_percent from temp_tuplestats order by table_len desc limit 10;\n table_name | table_len | dead_tuple_percent | free_percent\n--------------------------------------------+-------------+--------------------+--------------\n documents | 28510109696 | 1.05 | 21\n document_address | 23458062336 | 2.14 | 10\n latest_document_address_links | 4953735168 | 3.71 | 21\n documents_ps_page | 4927676416 | 1.19 | 6\n injectd_log | 4233355264 | 0.74 | 17\n ps_page | 3544350720 | 0.81 | 4\n temp_bak_documents_invoice_amount_for_near | 3358351360 | 0 | 0\n statements | 1832091648 | 4.4 | 2\n documents_old_addresses | 1612947456 | 0 | 1\n cron_logs | 791240704 | 0 | 1\n(10 rows)\n\nAm I seeing in the above queries evidence that my bloat is mostly in\nfree space, and not in dead tuples?\n", "msg_date": "Thu, 16 Jul 2009 08:00:43 -0700 (MST)", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Poor overall performance unless regular VACUUM FULL" } ]
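For a one-off check and cleanup of a single badly bloated relation, the tools discussed in this thread fit together roughly as below. This is a sketch only: 'documents' and 'documents_pkey' are names taken from the listings above, and the rewrite commands at the bottom lock the table against normal use while they run, so they belong in a maintenance window.

    -- per-relation bloat figures; needs contrib/pgstattuple installed
    SELECT * FROM pgstattuple('public.documents');
    SELECT * FROM pgstatindex('public.documents_pkey');

    -- one-off space recovery: CLUSTER rewrites the heap in index order
    -- and rebuilds the table's indexes
    CLUSTER documents USING documents_pkey;
    -- or the older route:
    -- VACUUM FULL documents;
    -- REINDEX TABLE documents;

Going forward, regular autovacuum (or scheduled plain VACUUM) is what keeps the freed space reusable so the one-off rewrite is not needed again.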
[ { "msg_contents": "I can't figure what is going on below; first of all, this count which\nreturns 1.5 million from a ~2 million row table:\n\nwoome=# explain analyze SELECT COUNT(*) FROM \"webapp_person\" WHERE\n\"webapp_person\".\"permissionflags\" =\nB'0000000000001111111111111111111111111111'::\"bit\";\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=125774.83..125774.84 rows=1 width=0) (actual\ntime=2976.405..2976.405 rows=1 loops=1)\n -> Seq Scan on webapp_person (cost=0.00..122041.10 rows=1493490\nwidth=0) (actual time=0.019..2781.735 rows=1518635 loops=1)\n Filter: (permissionflags =\nB'0000000000001111111111111111111111111111'::\"bit\")\n Total runtime: 2976.475 ms\n(4 rows)\n\nis slower than\n\nwoome=# explain analyze SELECT COUNT(*) FROM \"webapp_person\" WHERE\n\"webapp_person\".\"permissionflags\" &\nb'0000000000000000000000000000000000000100' =\nb'0000000000000000000000000000000000000100';\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=49341.55..49341.56 rows=1 width=0) (actual\ntime=1035.226..1035.226 rows=1 loops=1)\n -> Bitmap Heap Scan on webapp_person (cost=26280.49..49316.11\nrows=10174 width=0) (actual time=221.672..872.037 rows=1518630\nloops=1)\n Recheck Cond: ((permissionflags &\nB'0000000000000000000000000000000000000100'::\"bit\") =\nB'0000000000000000000000000000000000000100'::\"bit\")\n -> Bitmap Index Scan on\nwebapp_person_permissionflags_bitmasked0100_idx (cost=0.00..26277.95\nrows=10174 width=0) (actual time=186.596..186.596 rows=1574558\nloops=1)\n Total runtime: 1035.328 ms\n(5 rows)\n\nwith both a straight btree index on the permissionflags column, and a\nconditional index that matches the where clause in the 2nd example.\nBoth queries return the same result because with current data, the\nonly one value in the table which matches the bitmask. How come the\nmore complicated one is 3x faster? Maybe the size of the conditional\nindex? 
But why does the first form not use the conditional index?\n\nNow if I add a straight join with another ~300k row table:\n\nwoome=# explain analyze SELECT COUNT(*) FROM webapp_person join\nwebapp_report on webapp_person.id = webapp_report.reported_id WHERE\nwebapp_report.crime='IP_SPAM' and webapp_person.permissionflags =\nb'0000000000001111111111111111111111111111';\n QUERY\nPLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=150870.81..150870.82 rows=1 width=0) (actual\ntime=4024.475..4024.475 rows=1 loops=1)\n -> Hash Join (cost=140740.92..150531.01 rows=135922 width=0)\n(actual time=3601.576..4013.332 rows=91126 loops=1)\n Hash Cond: (webapp_report.reported_id = webapp_person.id)\n -> Seq Scan on webapp_report (cost=0.00..6579.06\nrows=185180 width=4) (actual time=0.024..88.038 rows=183558 loops=1)\n Filter: ((crime)::text = 'IP_SPAM'::text)\n -> Hash (cost=122059.09..122059.09 rows=1494547 width=4)\n(actual time=3600.877..3600.877 rows=1518724 loops=1)\n -> Seq Scan on webapp_person (cost=0.00..122059.09\nrows=1494547 width=4) (actual time=0.011..2984.683 rows=1518724\nloops=1)\n Filter: (permissionflags =\nB'0000000000001111111111111111111111111111'::\"bit\")\n Total runtime: 4043.415 ms\n(9 rows)\n\nit adds a couple of seconds but the execution time stays within the\nsame order of magnitude. However if I now replace the straight\nequality comparison there with the bitmasked comparison, I get\n\nwoome=# explain analyze SELECT COUNT(*) FROM webapp_person join\nwebapp_report on webapp_person.id = webapp_report.reported_id WHERE\nwebapp_report.crime='IP_SPAM' and webapp_person.permissionflags &\nb'0000000000000000000000000000000000000100' =\nb'0000000000000000000000000000000000000100';\n\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=58927.28..58927.29 rows=1 width=0) (actual\ntime=58004.762..58004.762 rows=1 loops=1)\n -> Hash Join (cost=49558.94..58924.96 rows=926 width=0) (actual\ntime=1505.944..57978.469 rows=91132 loops=1)\n Hash Cond: (webapp_report.reported_id = webapp_person.id)\n -> Seq Scan on webapp_report (cost=0.00..6579.06\nrows=185180 width=4) (actual time=0.030..201.187 rows=183564 loops=1)\n Filter: ((crime)::text = 'IP_SPAM'::text)\n -> Hash (cost=49431.68..49431.68 rows=10181 width=4)\n(actual time=1505.462..1505.462 rows=1518756 loops=1)\n -> Bitmap Heap Scan on webapp_person\n(cost=26383.65..49431.68 rows=10181 width=4) (actual\ntime=225.114..1058.692 rows=1518756 loops=1)\n Recheck Cond: ((permissionflags &\nB'0000000000000000000000000000000000000100'::\"bit\") =\nB'0000000000000000000000000000000000000100'::\"bit\")\n -> Bitmap Index Scan on\nwebapp_person_permissionflags_bitmasked0100_idx (cost=0.00..26381.10\nrows=10181 width=0) (actual time=188.089..188.089 rows=1579945\nloops=1)\n Total runtime: 58004.897 ms\n(10 rows)\n\nTime: 58024.124 ms\n\nIt takes almost a minute to run. My first question is, why is the\nactual execution time for the hash join in the last example 15x higher\nwith the bitmasked condition than with the straight equality version\nof the query above? This being even more puzzling since the\nperformance relationship between bitmask and equality is reversed if I\nomit the join altogether, presumably because the conditional index\ngets used ... 
however if I define a conditional index on the column\nfor the value that matches the bitmask, it still doesn't get used with\nequality ...\n\nI'm generally interested in the behaviour/performance of such bitmask\nfields with and without indexes as we've started using it a lot. The\nabove seems to suggest there a things to keep in mind and to observe.\n\nRegards,\n\nFrank\n", "msg_date": "Mon, 13 Jul 2009 22:46:08 +0200", "msg_from": "Frank Joerdens <[email protected]>", "msg_from_op": true, "msg_subject": "Odd performance / query plan with bitmasked field as opposed to\n\tequality" }, { "msg_contents": "On Mon, Jul 13, 2009 at 4:46 PM, Frank Joerdens<[email protected]> wrote:\n> I can't figure what is going on below; first of all, this count  which\n> returns 1.5 million from a ~2 million row table:\n>\n> woome=# explain analyze SELECT COUNT(*) FROM \"webapp_person\" WHERE\n> \"webapp_person\".\"permissionflags\" =\n> B'0000000000001111111111111111111111111111'::\"bit\";\n>                                                           QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=125774.83..125774.84 rows=1 width=0) (actual\n> time=2976.405..2976.405 rows=1 loops=1)\n>   ->  Seq Scan on webapp_person  (cost=0.00..122041.10 rows=1493490\n> width=0) (actual time=0.019..2781.735 rows=1518635 loops=1)\n>         Filter: (permissionflags =\n> B'0000000000001111111111111111111111111111'::\"bit\")\n>  Total runtime: 2976.475 ms\n> (4 rows)\n\nThere are two possibilities here: the planner thinks it CAN'T use the\nrelevant index for this query, or it thinks that the index will be\nslower than just seq-scaning the whole table. To figure out which it\nis, try EXPLAIN ANALYZE again with enable_seqscan set to false (note:\nthis is a bad idea in general, but useful for debugging). If you\nstill get a seqscan anyway, then there's some reason why it thinks\nthat it can't use the index (which we can investigate). If that makes\nit switch to an index scan, then you can try adjusting your cost\nparameters. But the first thing is to figure out which kind of\nproblem you have. In any case, send the output to the list.\n\nSolving this problem will probably shed some light on the other things\nin your original email, so I'm not going to specifically address each\none at this point.\n\n...Robert\n", "msg_date": "Wed, 22 Jul 2009 14:17:49 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd performance / query plan with bitmasked field as\n\topposed to equality" } ]
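The debugging step suggested at the end of this thread can be kept from leaking into the rest of the session by wrapping it in a transaction; a sketch, reusing the first query from the thread:

    BEGIN;
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE
    SELECT COUNT(*) FROM webapp_person
    WHERE permissionflags = B'0000000000001111111111111111111111111111';
    ROLLBACK;

If the plan switches to the index and turns out cheaper, the cost settings are the next thing to look at; if it still refuses to touch the index at all, the index definition itself is the suspect.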
[ { "msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 4919\nLogged by: Lauris Ulmanis\nEmail address: [email protected]\nPostgreSQL version: 8.3.7, 8.4.0\nOperating system: Any\nDescription: CREATE USER command slows down system performance\nDetails: \n\nWhen user count in Postgres database reaches up to 500 000 - database\ncommand of creating users 'CREATE USER' slows down to 5-10 seconds per user.\n\n\nWhat could be a reason of this problem and is there any solution how to\navoid it?\n\nFor each of user can be associated up to 10 roles with grants to system\nobjects.\n", "msg_date": "Tue, 14 Jul 2009 12:39:10 GMT", "msg_from": "\"Lauris Ulmanis\" <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #4919: CREATE USER command slows down system performance" }, { "msg_contents": "Lauris Ulmanis wrote:\n> The following bug has been logged online:\n> \n> Bug reference: 4919\n> Logged by: Lauris Ulmanis\n> Email address: [email protected]\n> PostgreSQL version: 8.3.7, 8.4.0\n> Operating system: Any\n> Description: CREATE USER command slows down system performance\n> Details: \n> \n> When user count in Postgres database reaches up to 500 000 - database\n> command of creating users 'CREATE USER' slows down to 5-10 seconds per user.\n\nI don't see such slowdown here, by just issuing 500000 CREATE USER\ncommands in a loop.\n\n> For each of user can be associated up to 10 roles with grants to system\n> objects.\n\nThat may be related..\n\nCan you produce a repeatable test case?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 14 Jul 2009 18:09:15 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system performance" }, { "msg_contents": "Hello again!\n\nI did test on my local test server\n\nI created up 500 000 users in function loop very quickly - within 48\nseconds. I did again this script reaching up to 1 billion users - results\nwas the same - 48 seconds. It is very quickly.\n\nBut problem seems is with transaction preparation because if in database is\n1 billion users and I want to create 1 new - it will take 4 seconds! \n\nAfter that I generated up to 2 billion users in this server (generation\nprocess took just 1.44 minutes of times - again quickly).\n\nAnd did 1 user creation again - now it took 9 seconds of time!\n\nWhat is a reason of this slowness? Is there a workaround or solution how to\navoid it? 
\n \n\n-----Original Message-----\nFrom: Heikki Linnakangas [mailto:[email protected]] \nSent: Tuesday, July 14, 2009 6:09 PM\nTo: Lauris Ulmanis\nCc: [email protected]\nSubject: Re: [BUGS] BUG #4919: CREATE USER command slows down system\nperformance\n\nLauris Ulmanis wrote:\n> The following bug has been logged online:\n> \n> Bug reference: 4919\n> Logged by: Lauris Ulmanis\n> Email address: [email protected]\n> PostgreSQL version: 8.3.7, 8.4.0\n> Operating system: Any\n> Description: CREATE USER command slows down system performance\n> Details: \n> \n> When user count in Postgres database reaches up to 500 000 - database\n> command of creating users 'CREATE USER' slows down to 5-10 seconds per\nuser.\n\nI don't see such slowdown here, by just issuing 500000 CREATE USER\ncommands in a loop.\n\n> For each of user can be associated up to 10 roles with grants to system\n> objects.\n\nThat may be related..\n\nCan you produce a repeatable test case?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n", "msg_date": "Wed, 15 Jul 2009 13:31:46 +0300", "msg_from": "\"Lauris Ulmanis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system performance" }, { "msg_contents": "Lauris Ulmanis wrote:\n> Hello again!\n> \n> I did test on my local test server\n> \n> I created up 500 000 users in function loop very quickly - within 48\n> seconds. I did again this script reaching up to 1 billion users - results\n> was the same - 48 seconds. It is very quickly.\n> \n> But problem seems is with transaction preparation because if in database is\n> 1 billion users and I want to create 1 new - it will take 4 seconds! \n> \n> After that I generated up to 2 billion users in this server (generation\n> process took just 1.44 minutes of times - again quickly).\n> \n> And did 1 user creation again - now it took 9 seconds of time!\n> \n> What is a reason of this slowness? Is there a workaround or solution how to\n> avoid it? \n\nMy bet is on the pg_auth flat file. I doubt we have ever tested the\nbehavior of that code with 1 billion users ...\n\nDo you really need 1 billion users? Are you planning on giving accounts\nto every human being in the planet or what? I mean, what's the point of\nthis test?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 15 Jul 2009 10:02:09 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "On Wed, 15 Jul 2009 16:02:09 +0200, Alvaro Herrera \n<[email protected]> wrote:\n> My bet is on the pg_auth flat file. I doubt we have ever tested the\n> behavior of that code with 1 billion users ...\nI've noticed this behaviour some time ago, on a cluster with 50k+ roles \n(not sure about the number now). 
Restoring the backup took a lot of time, \nespecially when the users file grew significantly and each additional user \ncaused PG to rewrite the whole file.\nI never bothered to report this, as it's not like the users are \n(re)created every day, it was just a one-time run (== some extra time for \nanother coffee during restore ;-)).\nI was always wondering, though, why PostgreSQL uses this approach and not \nits catalogs.\n\nRegards,\n-- \nru\n", "msg_date": "Wed, 15 Jul 2009 16:21:16 +0200", "msg_from": "toruvinn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system\n performance" }, { "msg_contents": "toruvinn wrote:\n> On Wed, 15 Jul 2009 16:02:09 +0200, Alvaro Herrera \n> <[email protected]> wrote:\n>> My bet is on the pg_auth flat file. I doubt we have ever tested the\n>> behavior of that code with 1 billion users ...\n\n> I was always wondering, though, why PostgreSQL uses this approach and not \n> its catalogs.\n\nIt does use the catalog for most things. THe flatfile is used for the\nsituations where the catalogs are not yet ready to be read.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 15 Jul 2009 10:47:35 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> toruvinn wrote:\n>> I was always wondering, though, why PostgreSQL uses this approach and not \n>> its catalogs.\n\n> It does use the catalog for most things. THe flatfile is used for the\n> situations where the catalogs are not yet ready to be read.\n\nNow that we have SQL-level CONNECT privilege, I wonder just how much\nfunctionality would be lost if we got rid of the flat files and told\npeople they had to use CONNECT to do any per-user or per-database\naccess control.\n\nThe main point I can see offhand is that password checking would have\nto be done a lot later in the startup sequence, with correspondingly\nmore cycles wasted to reject bad passwords.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Jul 2009 10:59:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system performance " }, { "msg_contents": "On 7/15/09, Tom Lane <[email protected]> wrote:\n> Alvaro Herrera <[email protected]> writes:\n>\n> > toruvinn wrote:\n> >> I was always wondering, though, why PostgreSQL uses this approach and not\n> >> its catalogs.\n>\n> > It does use the catalog for most things. 
THe flatfile is used for the\n> > situations where the catalogs are not yet ready to be read.\n>\n>\n> Now that we have SQL-level CONNECT privilege, I wonder just how much\n> functionality would be lost if we got rid of the flat files and told\n> people they had to use CONNECT to do any per-user or per-database\n> access control.\n>\n> The main point I can see offhand is that password checking would have\n> to be done a lot later in the startup sequence, with correspondingly\n> more cycles wasted to reject bad passwords.\n\n From security standpoint, wasting more cycles on bad passwords is good,\nas it decreases the rate bruteforce password scanning can happen.\n\nAnd I cannot imagine a scenario where performance on invalid logins\ncan be relevant..\n\n-- \nmarko\n", "msg_date": "Wed, 15 Jul 2009 18:10:30 +0300", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen<[email protected]> wrote:\n\n> From security standpoint, wasting more cycles on bad passwords is good,\n> as it decreases the rate bruteforce password scanning can happen.\n>\n> And I cannot imagine a scenario where performance on invalid logins\n> can be relevant..\n\nDoS attacks. The longer it takes to reject an invalid login, the fewer\ninvalid login attempts it takes to DoS the server.\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Wed, 15 Jul 2009 11:16:23 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "David Wilson <[email protected]> writes:\n> On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen<[email protected]> wrote:\n>> From security standpoint, wasting more cycles on bad passwords is good,\n>> as it decreases the rate bruteforce password scanning can happen.\n>> \n>> And I cannot imagine a scenario where performance on invalid logins\n>> can be relevant..\n\n> DoS attacks. The longer it takes to reject an invalid login, the fewer\n> invalid login attempts it takes to DoS the server.\n\nYeah, but even with the current setup, an attacker who can fire\nconnection request packets at your postmaster port is not going to have\nany trouble DoS'ing the service. We expend quite a lot of cycles before\ngetting to the password challenge already.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Jul 2009 11:30:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "On 7/15/09, David Wilson <[email protected]> wrote:\n> On Wed, Jul 15, 2009 at 11:10 AM, Marko Kreen<[email protected]> wrote:\n> > From security standpoint, wasting more cycles on bad passwords is good,\n> > as it decreases the rate bruteforce password scanning can happen.\n> >\n> > And I cannot imagine a scenario where performance on invalid logins\n> > can be relevant..\n>\n>\n> DoS attacks. The longer it takes to reject an invalid login, the fewer\n> invalid login attempts it takes to DoS the server.\n\nNo, this is not a good argument against it. 
Especially if you consider\nthat DoS via hanging-connect or SSL is still there.\n\nCompared to minor DoS, the password-leakage is much worse danger.\n\n-- \nmarko\n", "msg_date": "Wed, 15 Jul 2009 18:31:25 +0300", "msg_from": "Marko Kreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Yes, it seems problem in pg_auth flat file.\n\nWe are using db users to manage access rights to db tables and data, that\nway we have two layer security - application and DB. Each system user has\nit's own group role and groups have different access levels.\n\nSo we cannot use one login role for all users. \n\nBecause of using group roles we need to use pgbouncer connection pooling for\nseperate users sessions to pool server and use this connection till user\ncloses session. We cannot user pgbouncer pooling and role managment if in\npostgres is one login role for all users.\n\nI hope there is some solution how to use login roles for each users and role\ngroups with grants to system objects up to ~500 000 users.\n\nFor example, Oracle database allows to create users more then 500 000\nwithout performance problems in creation process. I suppose it is because\noracle don't use flat file to store all users.\n\n \n-----Original Message-----\nFrom: Alvaro Herrera [mailto:[email protected]] \nSent: Wednesday, July 15, 2009 5:02 PM\nTo: Lauris Ulmanis\nCc: 'Heikki Linnakangas'; [email protected];\[email protected]\nSubject: Re: [BUGS] BUG #4919: CREATE USER command slows down\nsystemperformance\n\nLauris Ulmanis wrote:\n> Hello again!\n> \n> I did test on my local test server\n> \n> I created up 500 000 users in function loop very quickly - within 48\n> seconds. I did again this script reaching up to 1 billion users - results\n> was the same - 48 seconds. It is very quickly.\n> \n> But problem seems is with transaction preparation because if in database\nis\n> 1 billion users and I want to create 1 new - it will take 4 seconds! \n> \n> After that I generated up to 2 billion users in this server (generation\n> process took just 1.44 minutes of times - again quickly).\n> \n> And did 1 user creation again - now it took 9 seconds of time!\n> \n> What is a reason of this slowness? Is there a workaround or solution how\nto\n> avoid it? \n\nMy bet is on the pg_auth flat file. I doubt we have ever tested the\nbehavior of that code with 1 billion users ...\n\nDo you really need 1 billion users? Are you planning on giving accounts\nto every human being in the planet or what? I mean, what's the point of\nthis test?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n\n", "msg_date": "Thu, 16 Jul 2009 14:49:44 +0300", "msg_from": "\"Lauris Ulmanis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: BUG #4919: CREATE USER command slows down systemperformance" }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > toruvinn wrote:\n> >> I was always wondering, though, why PostgreSQL uses this approach and not \n> >> its catalogs.\n> \n> > It does use the catalog for most things. 
THe flatfile is used for the\n> > situations where the catalogs are not yet ready to be read.\n> \n> Now that we have SQL-level CONNECT privilege, I wonder just how much\n> functionality would be lost if we got rid of the flat files and told\n> people they had to use CONNECT to do any per-user or per-database\n> access control.\n> \n> The main point I can see offhand is that password checking would have\n> to be done a lot later in the startup sequence, with correspondingly\n> more cycles wasted to reject bad passwords.\n\nIs this a TODO?\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 8 Aug 2009 00:13:49 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows\n\tdown system performance" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom Lane wrote:\n>> Now that we have SQL-level CONNECT privilege, I wonder just how much\n>> functionality would be lost if we got rid of the flat files and told\n>> people they had to use CONNECT to do any per-user or per-database\n>> access control.\n>> \n>> The main point I can see offhand is that password checking would have\n>> to be done a lot later in the startup sequence, with correspondingly\n>> more cycles wasted to reject bad passwords.\n\n> Is this a TODO?\n\nWell, it's a TO-THINK-ABOUT anyway. I think the appropriate next step\nwould not be to write code, but to do a detailed investigation of what\nwould be gained or lost. I don't remember exactly what we do with the\nflat-file contents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Aug 2009 12:02:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Tom Lane wrote:\n\n> Well, it's a TO-THINK-ABOUT anyway. I think the appropriate next step\n> would not be to write code, but to do a detailed investigation of what\n> would be gained or lost. I don't remember exactly what we do with the\n> flat-file contents.\n\nMaybe what we need is not to get rid of the flat files, but to speed\nthem up. If we're worried about speed in the pg_authid flatfile, and\ncome up with a solution to that problem, what will we do with the\npg_database flatfile when it grows too large? We can't just get rid of\nit, because autovacuum needs to access it.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sat, 8 Aug 2009 13:57:05 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down\n\tsystem performance" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> ... I don't remember exactly what we do with the\n>> flat-file contents.\n\n> Maybe what we need is not to get rid of the flat files, but to speed\n> them up. If we're worried about speed in the pg_authid flatfile, and\n> come up with a solution to that problem, what will we do with the\n> pg_database flatfile when it grows too large? 
We can't just get rid of\n> it, because autovacuum needs to access it.\n\nWell, one of the components of the TODO would have to be to figure out a\nway to fix autovacuum to avoid that.\n\nOr we could consider getting rid of the pg_auth flatfile while keeping\nthe pg_database one, which would fix the issue for role names anyway...\nbut it's certainly an ugly compromise.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Aug 2009 14:15:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <[email protected]> writes:\n>> Tom Lane wrote:\n>>> ... I don't remember exactly what we do with the\n>>> flat-file contents.\n\n>> Maybe what we need is not to get rid of the flat files, but to speed\n>> them up. If we're worried about speed in the pg_authid flatfile, and\n>> come up with a solution to that problem, what will we do with the\n>> pg_database flatfile when it grows too large? We can't just get rid of\n>> it, because autovacuum needs to access it.\n\n> Well, one of the components of the TODO would have to be to figure out a\n> way to fix autovacuum to avoid that.\n\nActually, I had forgotten that we were using the pg_database flatfile\nfor purposes other than authentication checks. In particular, we need\nit during backend startup to map from database name to database OID,\nwithout which it's impossible to locate the system catalogs for the\ntarget database. It's pretty hard to see a way around that one.\nWe could grovel through pg_database itself, as indeed is done to rebuild\nthe flatfile during system start. But that's certainly not going to be\nfast in cases where there are enough DBs to make the flatfile slow.\n\nSo on third thought, Alvaro's right: the only real solution here is to\nadopt a more efficient representation of the flat files. Maybe some\nsort of simple hashtable arrangement would work. (Rendering them not so\nflat anymore...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Aug 2009 14:49:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Tom Lane wrote:\n\n> Actually, I had forgotten that we were using the pg_database flatfile\n> for purposes other than authentication checks. In particular, we need\n> it during backend startup to map from database name to database OID,\n> without which it's impossible to locate the system catalogs for the\n> target database. It's pretty hard to see a way around that one.\n> We could grovel through pg_database itself, as indeed is done to rebuild\n> the flatfile during system start. But that's certainly not going to be\n> fast in cases where there are enough DBs to make the flatfile slow.\n\nAlso, IIRC flatfiles were introduced precisely to avoid having to read\nthe catalogs manually.\n\n> So on third thought, Alvaro's right: the only real solution here is to\n> adopt a more efficient representation of the flat files. Maybe some\n> sort of simple hashtable arrangement would work. (Rendering them not so\n> flat anymore...)\n\nAs long as there's a simple API, there should be no problem.\n\n(Except that it would be nice to be able to build the file incrementally\n... 
If we have to write out a million lines each time a millionth user\nis created, there will still be a bottleneck at CREATE USER time.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sat, 8 Aug 2009 14:59:29 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down\n\tsystem performance" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> So on third thought, Alvaro's right: the only real solution here is to\n>> adopt a more efficient representation of the flat files. Maybe some\n>> sort of simple hashtable arrangement would work. (Rendering them not so\n>> flat anymore...)\n\n> As long as there's a simple API, there should be no problem.\n\n> (Except that it would be nice to be able to build the file incrementally\n> ... If we have to write out a million lines each time a millionth user\n> is created, there will still be a bottleneck at CREATE USER time.)\n\nIf we allow the requirements to creep on this, we'll soon find ourselves\neither using or reinventing BDB for the flatfiles. Ick.\n\n[ thinks for awhile... ]\n\nIn some sense this is a bootstrap problem: what does it take to get to\nthe point of being able to read pg_database and its indexes? That is\nnecessarily not dependent on the particular database we want to join.\nMaybe we could solve it by having the relcache write a \"global\" cache\nfile containing only entries for the global tables, and load that before\nwe have identified the database we want to join (after which, we'll load\nanother cache file for the local entries). It would doubtless take some\nrearrangement of the backend startup sequence, but it doesn't seem\nobviously impossible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Aug 2009 15:15:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Tom Lane wrote:\n\n> In some sense this is a bootstrap problem: what does it take to get to\n> the point of being able to read pg_database and its indexes? That is\n> necessarily not dependent on the particular database we want to join.\n> Maybe we could solve it by having the relcache write a \"global\" cache\n> file containing only entries for the global tables, and load that before\n> we have identified the database we want to join (after which, we'll load\n> another cache file for the local entries).\n\nThis sounds good, because autovacuum could probably use this too.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sat, 8 Aug 2009 15:20:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down\n\tsystem performance" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> In some sense this is a bootstrap problem: what does it take to get to\n>> the point of being able to read pg_database and its indexes? 
That is\n>> necessarily not dependent on the particular database we want to join.\n>> Maybe we could solve it by having the relcache write a \"global\" cache\n>> file containing only entries for the global tables, and load that before\n>> we have identified the database we want to join (after which, we'll load\n>> another cache file for the local entries).\n\n> This sounds good, because autovacuum could probably use this too.\n\nMaybe I'll look at this after commitfest is over. I haven't messed\nwith the bootstrap sequence in awhile, but I used to remember how\nit worked ...\n\nAs far as AV is concerned, taking this approach would likely mean\nturning the launcher into a full-fledged backend just like the workers.\nDo you see any problem with that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Aug 2009 15:25:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows down system\n\tperformance" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <[email protected]> writes:\n> > Tom Lane wrote:\n> >> Now that we have SQL-level CONNECT privilege, I wonder just how much\n> >> functionality would be lost if we got rid of the flat files and told\n> >> people they had to use CONNECT to do any per-user or per-database\n> >> access control.\n> >> \n> >> The main point I can see offhand is that password checking would have\n> >> to be done a lot later in the startup sequence, with correspondingly\n> >> more cycles wasted to reject bad passwords.\n> \n> > Is this a TODO?\n> \n> Well, it's a TO-THINK-ABOUT anyway. I think the appropriate next step\n> would not be to write code, but to do a detailed investigation of what\n> would be gained or lost. I don't remember exactly what we do with the\n> flat-file contents.\n\nThe flat file is the username/password sorted list. We load that info\ninto the postmaster in an array that we can binary sort.\n\nI wonder how big the postmaster process address space was when handling\n2 billion users:\n\n\thttp://archives.postgresql.org/pgsql-bugs/2009-07/msg00176.php\n\nIt seems just storing many users in the postmaster could be burdensome,\nbut fixing that would be creating something like a database, which we\nalready have. ;-)\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Sat, 8 Aug 2009 16:31:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] BUG #4919: CREATE USER command slows\n\tdown system performance" }, { "msg_contents": "Lauris Ulmanis wrote:\n\n> When user count in Postgres database reaches up to 500 000 - database\n> command of creating users 'CREATE USER' slows down to 5-10 seconds per user.\n\nA bunch of commits along the direction of eliminating what we thought\nwas the main cause of the slowdown have just concluded. If you could\ngrab the current CVS version (or alternative a snapshot tarball from the\nFTP, but make sure it contains my commit as of 5 minutes ago) and try\nyour test case with it, it would be great.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Mon, 31 Aug 2009 22:59:41 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #4919: CREATE USER command slows down system\n performance" } ]
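For readers following the thread above: the SQL-level CONNECT privilege Tom Lane refers to is ordinary GRANT/REVOKE syntax on the database object. A minimal sketch of doing per-database access control that way (the database and role names here are placeholders, not taken from the thread):

-----
-- Stop everyone from connecting by default.
REVOKE CONNECT ON DATABASE appdb FROM PUBLIC;

-- Allow only an explicit role (and its members) to connect.
GRANT CONNECT ON DATABASE appdb TO app_users;

-- Per-user control works the same way.
GRANT CONNECT ON DATABASE appdb TO alice;
-----

As the discussion notes, this kind of check can only run once the backend can read the catalogs, i.e. later in the startup sequence than the flat files are consulted, which is the trade-off being weighed above.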
[ { "msg_contents": "When users count in Postgres database reaches up to 500 000 - database\ncommand of creating users 'CREATE USER' slows down to 5-10 seconds per user.\n\nWhat could be a reason of this problem and is there any solution how to\navoid it?\n\nFor each of user can be associated up to 10 roles with grants to system\nobjects.\n\nWhen users count in Postgres database reaches up to 500 000 - database\ncommand of creating users 'CREATE USER' slows down to 5-10 seconds per user.\n\nWhat could be a reason of this problem and is there any solution how to\navoid it?\n\nFor each of user can be associated up to 10 roles with grants to system\nobjects.", "msg_date": "Tue, 14 Jul 2009 16:14:27 +0300", "msg_from": "Lauris Ulmanis <[email protected]>", "msg_from_op": true, "msg_subject": "CREATE USER command slows down when user count per server reaches up\n\tto 500 000" }, { "msg_contents": ">-----Original Message-----\n>From: [email protected] \n>\n>When users count in Postgres database reaches up to 500 000 - database\n>command of creating users 'CREATE USER' slows down to 5-10 \n>seconds per user.\n>\n>What could be a reason of this problem and is there any solution how to\n>avoid it?\n>\n>For each of user can be associated up to 10 roles with grants to system\n>objects.\n\nI have no idea about the performance issues, but I'm curious: how/why do\nyou have so many users accessing your database? I'm drawing a blank on\ncoming up with a use case where that many users are needed.\n\neric\n", "msg_date": "Wed, 15 Jul 2009 16:38:15 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CREATE USER command slows down when user count per server reaches\n\tup to 500 000" } ]
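One pattern that keeps the per-user DDL small in setups like the one reported above (many users, each needing the same handful of privileges) is to attach the object grants to a few group roles once and then only grant membership to each new user. A rough sketch, with illustrative names that are not from the report:

-----
-- Done once: privileges live on a group role.
CREATE ROLE app_readonly NOLOGIN;
GRANT SELECT ON some_table TO app_readonly;

-- Done per user: one role creation plus membership grants.
CREATE USER u123456 PASSWORD 'secret';
GRANT app_readonly TO u123456;
-----

This keeps per-user catalog churn down, but whether it changes the 5-10 second CREATE USER timing itself is not established in the thread; the flat-file rewrite discussed in the previous thread happens per role either way.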
[ { "msg_contents": "Hi,\n\nI am transplanting an application to use PostgreSQL8.2.4 instead of DB2 9.1.\nCLI was used to connect to DB2, and ODBC is used to connect to PostgreSQL.\nThe query statement is as follows:\n\nSELECT void,nameId,tag FROM (SELECT void,nameId,tag,.... FROM Attr\nWHERE attributeof IN (SELECT oid_ FROM ItemView WHERE\nItemView.ItemId=?)) x RIGHT OUTER JOIN (SELECT oid_ FROM ItemView\nWHERE ItemView.ItemId=? and ItemView.assignedTo_=?) y ON attributeof =\noid_ FOR READ ONLY\n\nI tested the performance on PostgreSQL against DB2, and found that\n\nFirst execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\nSecond execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n\nPostgreSQL cost nearly the same time but DB2 ran 30 times faster in\nsecond execution.\n\nI tried to adjust shared_buffers parameter in postgresql.conf, no speed up.\nActually my DB holds only several records so I don't think memory is the reason.\n\nCould anybody give some advice to speed up in repeated execution in\nPostgreSQL or\ngive an explanation why DB2 is so mush faster in repeated execution?\n\nThank you.\nning\n", "msg_date": "Wed, 15 Jul 2009 12:10:44 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n> Hi,\n> \n> I am transplanting an application to use PostgreSQL8.2.4 instead of DB2 9.1.\n> CLI was used to connect to DB2, and ODBC is used to connect to PostgreSQL.\n> The query statement is as follows:\n\n> PostgreSQL cost nearly the same time but DB2 ran 30 times faster in\n> second execution.\n\nCan you run your query with EXPLAIN ANALYZE and post the results, both\nof the first and later executions?\n\n-- \nCraig Ringer\n\n", "msg_date": "Wed, 15 Jul 2009 16:26:14 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4\n than DB2 9.1" }, { "msg_contents": "On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n\n> First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n> Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n\nActually, on second thoughts that looks a lot like DB2 is caching the\nquery results and is just returning the cached results when you repeat\nthe query.\n\nI'm not sure to what extent PostgreSQL is capable of result caching, but\nI'd be surprised if it could do as much as DB2.\n\n-- \nCraig Ringer\n\n", "msg_date": "Wed, 15 Jul 2009 16:27:50 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4\n than DB2 9.1" }, { "msg_contents": "On Wed, Jul 15, 2009 at 9:27 AM, Craig\nRinger<[email protected]> wrote:\n> On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n>\n>> First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n>> Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n>\n> Actually, on second thoughts that looks a lot like DB2 is caching the\n> query results and is just returning the cached results when you repeat\n> the query.\n\n\nYeah, is 6ms really a problematic response time for your system?\n\nIf so you might consider whether executing millions of small queries\nis really the best approach instead of absorbing them all into queries\nwhich operate on more records at a time. 
For example, it's a lot\nfaster to join two large tables than look up matches for every record\none by one in separate queries.\n\nThere's no question if you match up results from DB2 and Postgres one\nto one there will be cases where DB2 is faster and hopefully cases\nwhere Postgres is faster. It's only interesting if the differences\ncould cause problems, otherwise you'll be running around in circles\nhunting down every difference between two fundamentally different\nproducts.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 15 Jul 2009 09:37:00 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "Hi Craig,\n\nThe log is really long, but I compared the result of \"explain analyze\"\nfor first and later executions, except for 3 \"time=XXX\" numbers, they\nare identical.\nI agree with you that PostgreSQL is doing different level of caching,\nI just wonder if there is any way to speed up PostgreSQL in this\nscenario, which is a wrong way perhaps.\n\nThank you.\nNing\n\n\nOn Wed, Jul 15, 2009 at 5:27 PM, Craig\nRinger<[email protected]> wrote:\n> On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n>\n>> First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n>> Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n>\n> Actually, on second thoughts that looks a lot like DB2 is caching the\n> query results and is just returning the cached results when you repeat\n> the query.\n>\n> I'm not sure to what extent PostgreSQL is capable of result caching, but\n> I'd be surprised if it could do as much as DB2.\n>\n> --\n> Craig Ringer\n>\n>\n", "msg_date": "Wed, 15 Jul 2009 17:51:50 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n\tDB2 9.1" }, { "msg_contents": "Hi Greg,\n\nI am doing performance test by running unit test program to compare\ntime used on PostgreSQL and DB2. As you pointed out, there are cases\nthat PostgreSQL is faster. Actually in real world for my application,\nrepeatedly executing same query statement will hardly happen. I am\ninvestigating this because on the performance test report\nautomatically generated by running unit test program, DB2 is 20-30\ntimes faster than PostgreSQL in some test cases because of repeatedly\nexecuted query.\n\nI am thinking that ignoring these test cases for performance measure\nis safe and acceptable, since PostgreSQL is quicker than DB2 for the\nfirst execution.\n\nThank you.\nNing\n\n\nOn Wed, Jul 15, 2009 at 5:37 PM, Greg Stark<[email protected]> wrote:\n> On Wed, Jul 15, 2009 at 9:27 AM, Craig\n> Ringer<[email protected]> wrote:\n>> On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n>>\n>>> First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n>>> Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n>>\n>> Actually, on second thoughts that looks a lot like DB2 is caching the\n>> query results and is just returning the cached results when you repeat\n>> the query.\n>\n>\n> Yeah, is 6ms really a problematic response time for your system?\n>\n> If so you might consider whether executing millions of small queries\n> is really the best approach instead of absorbing them all into queries\n> which operate on more records at a time. 
For example, it's a lot\n> faster to join two large tables than look up matches for every record\n> one by one in separate queries.\n>\n> There's no question if you match up results from DB2 and Postgres one\n> to one there will be cases where DB2 is faster and hopefully cases\n> where Postgres is faster. It's only interesting if the differences\n> could cause problems, otherwise you'll be running around in circles\n> hunting down every difference between two fundamentally different\n> products.\n>\n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf\n>\n", "msg_date": "Wed, 15 Jul 2009 18:05:44 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "ning wrote:\n> The log is really long, \n\nWhich usually signals a problem with the query.\n\n> but I compared the result of \"explain analyze\"\n> for first and later executions, except for 3 \"time=XXX\" numbers, they\n> are identical.\n> \n\nThey are supposed to be identical unless something is really badly broken.\n\n> I agree with you that PostgreSQL is doing different level of caching,\n> I just wonder if there is any way to speed up PostgreSQL in this\n> scenario, \n> \n\nThis is what EXPLAIN ANALYZE for. Could you post the results please?\n\nCheers,\nMike\n\n", "msg_date": "Wed, 15 Jul 2009 15:37:56 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n \tDB2 9.1" }, { "msg_contents": "On Wednesday 15 July 2009 10:27:50 Craig Ringer wrote:\n> On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n> > First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n> > Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n>\n> Actually, on second thoughts that looks a lot like DB2 is caching the\n> query results and is just returning the cached results when you repeat\n> the query.\nAre you sure getting the query *result* is causing the delay? If my faint \nmemory serves right DB2 does plan caching - PG does not.\nTo test this theory you could prepare it and execute it twice.\n\nPrepare it:\nPREPARE test_query AS SELECT void,nameId,tag FROM (SELECT void,nameId,tag,.... \nFROM Attr\nWHERE attributeof IN (SELECT oid_ FROM ItemView WHERE\nItemView.ItemId=?)) x RIGHT OUTER JOIN (SELECT oid_ FROM ItemView\nWHERE ItemView.ItemId=? and ItemView.assignedTo_=?) y ON attributeof =\noid_ FOR READ ONLY;\n\n\nExecute it:\nEXECUTE test_query;\nEXECUTE test_query;\n\nGreetings, \n\nAndres\n", "msg_date": "Thu, 16 Jul 2009 00:51:54 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "Hi Mike,\n\nThank you for your explanation.\nThe \"explain analyze\" command used is as follows, several integers are\nbound to '?'.\n-----\nSELECT oid_,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\nFROM (SELECT attributeOf,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\nFROM DenormAttributePerf WHERE attributeof IN (SELECT oid_ FROM\nJobView WHERE JobView.JobId=? and JobView.assignedTo_=?) AND nameId in\n(?)) x RIGHT OUTER JOIN (SELECT oid_ FROM JobView WHERE\nJobView.JobId=? and JobView.assignedTo_=?) 
y ON attributeof = oid_ FOR\nREAD ONLY\n-----\n\nThe result of the command is\n-----\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=575.60..1273.15 rows=81 width=568)\n(actual time=0.018..0.018 rows=0 loops=1)\n Join Filter: (x.attributeof = j1.oid_)\n -> Index Scan using job_tc1 on job j1 (cost=0.00..8.27 rows=1\nwidth=4) (actual time=0.016..0.016 rows=0 loops=1)\n Index Cond: ((assignedto_ = 888) AND (jobid = 0))\n -> Merge Left Join (cost=575.60..899.41 rows=16243 width=564)\n(never executed)\n Merge Cond: (v.void = b.void)\n -> Merge Left Join (cost=470.77..504.87 rows=2152\nwidth=556) (never executed)\n Merge Cond: (v.void = res.void)\n -> Sort (cost=373.61..374.39 rows=310 width=544)\n(never executed)\n Sort Key: v.void\n -> Hash Left Join (cost=112.07..360.78 rows=310\nwidth=544) (never executed)\n Hash Cond: (v.void = i.void)\n -> Hash Left Join (cost=65.40..303.17\nrows=38 width=540) (never executed)\n Hash Cond: (v.void = r.void)\n -> Hash Left Join\n(cost=21.42..257.86 rows=5 width=532) (never executed)\n Hash Cond: (v.void = s.void)\n -> Nested Loop Left Join\n(cost=8.27..244.65 rows=5 width=16) (never executed)\n Join Filter: (v.containedin = a.id)\n -> Nested Loop\n(cost=8.27..16.57 rows=1 width=12) (never executed)\n -> HashAggregate\n(cost=8.27..8.28 rows=1 width=4) (never executed)\n -> Index\nScan using job_tc1 on job j1 (cost=0.00..8.27 rows=1 width=4) (never\nexecuted)\n Index\nCond: ((assignedto_ = 888) AND (jobid = 0))\n -> Index Scan\nusing attribute_tc1 on attribute a (cost=0.00..8.27 rows=1 width=12)\n(never executed)\n Index Cond:\n((a.attributeof = j1.oid_) AND (a.nameid = 6))\n -> Append\n(cost=0.00..137.60 rows=7239 width=12) (never executed)\n -> Index Scan\nusing attribute_value_i on attribute_value v (cost=0.00..5.30 rows=9\nwidth=12) (never executed)\n Index Cond:\n(v.containedin = a.id)\n -> Seq Scan on\nstring_value v (cost=0.00..11.40 rows=140 width=12) (never executed)\n -> Seq Scan on\ninteger_value v (cost=0.00..26.30 rows=1630 width=12) (never\nexecuted)\n -> Seq Scan on\nbigint_value v (cost=0.00..25.10 rows=1510 width=12) (never executed)\n -> Seq Scan on\nrangeofint_value v (cost=0.00..25.10 rows=1510 width=12) (never\nexecuted)\n -> Seq Scan on\nresolution_value v (cost=0.00..24.00 rows=1400 width=12) (never\nexecuted)\n -> Seq Scan on\nopaque_value v (cost=0.00..20.40 rows=1040 width=12) (never executed)\n -> Hash (cost=11.40..11.40\nrows=140 width=520) (never executed)\n -> Seq Scan on\nstring_value s (cost=0.00..11.40 rows=140 width=520) (never executed)\n -> Hash (cost=25.10..25.10\nrows=1510 width=12) (never executed)\n -> Seq Scan on\nrangeofint_value r (cost=0.00..25.10 rows=1510 width=12) (never\nexecuted)\n -> Hash (cost=26.30..26.30 rows=1630\nwidth=8) (never executed)\n -> Seq Scan on integer_value i\n(cost=0.00..26.30 rows=1630 width=8) (never executed)\n -> Sort (cost=97.16..100.66 rows=1400 width=16)\n(never executed)\n Sort Key: res.void\n -> Seq Scan on resolution_value res\n(cost=0.00..24.00 rows=1400 width=16) (never executed)\n -> Sort (cost=104.83..108.61 rows=1510 width=12) (never executed)\n Sort Key: b.void\n -> Seq Scan on bigint_value b (cost=0.00..25.10\nrows=1510 width=12) (never executed)\n Total runtime: 0.479 ms\n(46 rows)\n-----\n\nBest regards,\nNing\n\nOn Thu, Jul 16, 2009 at 7:37 AM, Mike Ivanov<[email protected]> wrote:\n> ning wrote:\n>>\n>> The log 
is really long,\n>\n> Which usually signals a problem with the query.\n>\n>> but I compared the result of \"explain analyze\"\n>> for first and later executions, except for 3 \"time=XXX\" numbers, they\n>> are identical.\n>>\n>\n> They are supposed to be identical unless something is really badly broken.\n>\n>> I agree with you that PostgreSQL is doing different level of caching,\n>> I just wonder if there is any way to speed up PostgreSQL in this\n>> scenario,\n>\n> This is what EXPLAIN ANALYZE for. Could you post the results please?\n>\n> Cheers,\n> Mike\n>\n>\n", "msg_date": "Thu, 16 Jul 2009 09:53:59 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n\tDB2 9.1" }, { "msg_contents": "Hi Andres,\n\nThe log for the test you suggested is as follows in PostgreSQL8.2.4,\nbut I cannot find a clue to prove or prove not PostgreSQL is doing\nplan caching.\n\nBest regards,\nNing\n\n-----\njob=# prepare test_query as SELECT\noid_,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\nFROM (SELECT attributeOf,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\nFROM DenormAttributePerf WHERE attributeof IN (SELECT oid_ FROM\nJobView WHERE JobView.JobId=100 and JobView.assignedTo_=1) AND nameId\nin (6)) x RIGHT OUTER JOIN (SELECT oid_ FROM JobView WHERE\nJobView.JobId=100 and JobView.assignedTo_=1) y ON attributeof = oid_\nFOR READ ONLY\n;\nPREPARE\njob=# execute test_query;\n oid_ | void | nameid | tag | intval | lowerbound | upperbound |\ncrossfeeddir | feeddir | units | opqval | bigval | strval\n------+------+--------+-----+--------+------------+------------+--------------+---------+-------+--------+--------+--------\n 101 | | | | | | |\n | | | | |\n(1 row)\n\njob=# execute test_query;\n oid_ | void | nameid | tag | intval | lowerbound | upperbound |\ncrossfeeddir | feeddir | units | opqval | bigval | strval\n------+------+--------+-----+--------+------------+------------+--------------+---------+-------+--------+--------+--------\n 101 | | | | | | |\n | | | | |\n(1 row)\n-----\n\nOn Thu, Jul 16, 2009 at 7:51 AM, Andres Freund<[email protected]> wrote:\n> On Wednesday 15 July 2009 10:27:50 Craig Ringer wrote:\n>> On Wed, 2009-07-15 at 12:10 +0900, ning wrote:\n>> > First execution: PostgreSQL 0.006277 seconds / DB2 0.009028 seconds\n>> > Second execution: PostgreSQL 0.005932 seconds / DB2 0.000332 seconds\n>>\n>> Actually, on second thoughts that looks a lot like DB2 is caching the\n>> query results and is just returning the cached results when you repeat\n>> the query.\n> Are you sure getting the query *result* is causing the delay? If my faint\n> memory serves right DB2 does plan caching - PG does not.\n> To test this theory you could prepare it and execute it twice.\n>\n> Prepare it:\n> PREPARE test_query AS SELECT void,nameId,tag FROM (SELECT void,nameId,tag,....\n> FROM Attr\n> WHERE attributeof IN (SELECT oid_ FROM ItemView WHERE\n> ItemView.ItemId=?)) x RIGHT OUTER JOIN (SELECT oid_ FROM ItemView\n> WHERE ItemView.ItemId=? and ItemView.assignedTo_=?) 
y ON attributeof =\n> oid_ FOR READ ONLY;\n>\n>\n> Execute it:\n> EXECUTE test_query;\n> EXECUTE test_query;\n>\n> Greetings,\n>\n> Andres\n>\n", "msg_date": "Thu, 16 Jul 2009 10:11:29 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n\tDB2 9.1" }, { "msg_contents": "On Thursday 16 July 2009 03:11:29 ning wrote:\n> Hi Andres,\n>\n> The log for the test you suggested is as follows in PostgreSQL8.2.4,\n> but I cannot find a clue to prove or prove not PostgreSQL is doing\n> plan caching.\nWell. How long is the PREPARE and the EXECUTEs taking?\n\n\nAndres\n", "msg_date": "Thu, 16 Jul 2009 09:45:45 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "Hi Andres,\n\nBy executing\n#explain analyze execute test_query;\n\nthe first execution cost 0.389 seconds\nthe second cost 0.285 seconds\n\nGreetings,\nNing\n\n\nOn Thu, Jul 16, 2009 at 4:45 PM, Andres Freund<[email protected]> wrote:\n> On Thursday 16 July 2009 03:11:29 ning wrote:\n>> Hi Andres,\n>>\n>> The log for the test you suggested is as follows in PostgreSQL8.2.4,\n>> but I cannot find a clue to prove or prove not PostgreSQL is doing\n>> plan caching.\n> Well. How long is the PREPARE and the EXECUTEs taking?\n>\n>\n> Andres\n>\n", "msg_date": "Thu, 16 Jul 2009 18:30:00 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n\tDB2 9.1" }, { "msg_contents": "On Thursday 16 July 2009 11:30:00 ning wrote:\n> Hi Andres,\n>\n> By executing\n> #explain analyze execute test_query;\n>\n> the first execution cost 0.389 seconds\n> the second cost 0.285 seconds\nSeconds or milliseconds?\n\nIf seconds that would be by far slower than the plain SELECT, right?\n\nAndres\n", "msg_date": "Thu, 16 Jul 2009 11:33:41 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "I'm sorry, they are in milliseconds, not seconds.\nThe time used is quite same to the result of \"explain analyze select\n....\" I pasted above,\nwhich was \" Total runtime: 0.479 ms\".\n\nGreetings,\nNing\n\n\nOn Thu, Jul 16, 2009 at 6:33 PM, Andres Freund<[email protected]> wrote:\n> On Thursday 16 July 2009 11:30:00 ning wrote:\n>> Hi Andres,\n>>\n>> By executing\n>> #explain analyze execute test_query;\n>>\n>> the first execution cost 0.389 seconds\n>> the second cost 0.285 seconds\n> Seconds or milliseconds?\n>\n> If seconds that would be by far slower than the plain SELECT, right?\n>\n> Andres\n>\n", "msg_date": "Thu, 16 Jul 2009 18:46:18 +0900", "msg_from": "ning <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n\tDB2 9.1" }, { "msg_contents": "On Thursday 16 July 2009 11:46:18 ning wrote:\n> I'm sorry, they are in milliseconds, not seconds.\n> The time used is quite same to the result of \"explain analyze select\n> ....\" I pasted above,\n> which was \" Total runtime: 0.479 ms\".\nYea. Unfortunately that time does not including planning time. If you work \nlocally on the server using psql you can use '\\timing' to make psql output \ntiming information.\n\nIf I interpret those findings correcty the execution is approx. 
as fast as DB2, \nonly DB2 is doing automated plan caching while pg is not.\n\nIf it _really_ is necessary that this is that fast, you can prepare the query \nlike I showed.\n\nAndres\n", "msg_date": "Thu, 16 Jul 2009 11:52:55 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "Hi,\n\nLe 16 juil. 09 � 11:52, Andres Freund a �crit :\n> If I interpret those findings correcty the execution is approx. as \n> fast as DB2,\n> only DB2 is doing automated plan caching while pg is not.\n>\n> If it _really_ is necessary that this is that fast, you can prepare \n> the query\n> like I showed.\n\nA form of automatic plan caching by PostgreSQL is possible and \navailable as an extension called \"preprepare\", which aims to have your \nstatements already prepared by the time you're able to send queries \nin, at which point you simply EXECUTE and never care about the PREPARE \nin your code.\n http://preprepare.projects.postgresql.org/README.html\n\nThe module is only available for PostgreSQL 8.3 or 8.4, and works in a \nfully automatic manner only in 8.4 (using local_preload_librairies), \nand in both cases requires you to:\n - edit postgresql.conf and restart if preloading, else reload\n - put your full PREPARE statements into a table\n - psql -c \"EXECUTE myquery;\" mydb\n - enjoy\n\nRegards,\n-- \ndim\n\nPS: soon to be in debian sid for both 8.3 and 8.4, currently in NEW \nqueue", "msg_date": "Thu, 16 Jul 2009 22:43:29 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than DB2 9.1" }, { "msg_contents": "Interesting. It's quite a hairy plan even though all the branches are \ncut off by conditions (\"never executed\") so the query yields 0 rows. \n\n0.018 is not a bad timing for that.\n\nHowever, if you run this query with different parameters, the result \ncould be quite sad.\n\nThere are some deeply nested loops with joins filtered by inner seq \nscans; this can be extremely expensive. Also, note that Left Merge Join \nwith 16243 rows being reduced into just 1.\n \nWith a database like DB2, the results you had are quite predictable: \nslow first time execution (because of the ineffective query) and then \nfast consequent responses because the tiny resultset produced by the \nquery can be stored in the memory.\n\nNow, with Postgres the picture is different: all this complex stuff has \nto be executed each time the query is sent.\n\nI would rather rewrite the query without inner selects, using straight \njoins instead. Also, try to filter things before joining, not after. \nCorrect me if I'm wrong, but in this particular case this seems pretty \nmuch possible.\n\nCheers,\nMike\n\n\nning wrote:\n> Hi Mike,\n>\n> Thank you for your explanation.\n> The \"explain analyze\" command used is as follows, several integers are\n> bound to '?'.\n> -----\n> SELECT oid_,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\n> FROM (SELECT attributeOf,void,nameId,tag,intval,lowerbound,upperbound,crossfeeddir,feeddir,units,opqval,bigval,strval\n> FROM DenormAttributePerf WHERE attributeof IN (SELECT oid_ FROM\n> JobView WHERE JobView.JobId=? and JobView.assignedTo_=?) AND nameId in\n> (?)) x RIGHT OUTER JOIN (SELECT oid_ FROM JobView WHERE\n> JobView.JobId=? and JobView.assignedTo_=?) 
y ON attributeof = oid_ FOR\n> READ ONLY\n> -----\n>\n> The result of the command is\n> -----\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=575.60..1273.15 rows=81 width=568)\n> (actual time=0.018..0.018 rows=0 loops=1)\n> Join Filter: (x.attributeof = j1.oid_)\n> -> Index Scan using job_tc1 on job j1 (cost=0.00..8.27 rows=1\n> width=4) (actual time=0.016..0.016 rows=0 loops=1)\n> Index Cond: ((assignedto_ = 888) AND (jobid = 0))\n> -> Merge Left Join (cost=575.60..899.41 rows=16243 width=564)\n> (never executed)\n> Merge Cond: (v.void = b.void)\n> -> Merge Left Join (cost=470.77..504.87 rows=2152\n> width=556) (never executed)\n> Merge Cond: (v.void = res.void)\n> -> Sort (cost=373.61..374.39 rows=310 width=544)\n> (never executed)\n> Sort Key: v.void\n> -> Hash Left Join (cost=112.07..360.78 rows=310\n> width=544) (never executed)\n> Hash Cond: (v.void = i.void)\n> -> Hash Left Join (cost=65.40..303.17\n> rows=38 width=540) (never executed)\n> Hash Cond: (v.void = r.void)\n> -> Hash Left Join\n> (cost=21.42..257.86 rows=5 width=532) (never executed)\n> Hash Cond: (v.void = s.void)\n> -> Nested Loop Left Join\n> (cost=8.27..244.65 rows=5 width=16) (never executed)\n> Join Filter: (v.containedin = a.id)\n> -> Nested Loop\n> (cost=8.27..16.57 rows=1 width=12) (never executed)\n> -> HashAggregate\n> (cost=8.27..8.28 rows=1 width=4) (never executed)\n> -> Index\n> Scan using job_tc1 on job j1 (cost=0.00..8.27 rows=1 width=4) (never\n> executed)\n> Index\n> Cond: ((assignedto_ = 888) AND (jobid = 0))\n> -> Index Scan\n> using attribute_tc1 on attribute a (cost=0.00..8.27 rows=1 width=12)\n> (never executed)\n> Index Cond:\n> ((a.attributeof = j1.oid_) AND (a.nameid = 6))\n> -> Append\n> (cost=0.00..137.60 rows=7239 width=12) (never executed)\n> -> Index Scan\n> using attribute_value_i on attribute_value v (cost=0.00..5.30 rows=9\n> width=12) (never executed)\n> Index Cond:\n> (v.containedin = a.id)\n> -> Seq Scan on\n> string_value v (cost=0.00..11.40 rows=140 width=12) (never executed)\n> -> Seq Scan on\n> integer_value v (cost=0.00..26.30 rows=1630 width=12) (never\n> executed)\n> -> Seq Scan on\n> bigint_value v (cost=0.00..25.10 rows=1510 width=12) (never executed)\n> -> Seq Scan on\n> rangeofint_value v (cost=0.00..25.10 rows=1510 width=12) (never\n> executed)\n> -> Seq Scan on\n> resolution_value v (cost=0.00..24.00 rows=1400 width=12) (never\n> executed)\n> -> Seq Scan on\n> opaque_value v (cost=0.00..20.40 rows=1040 width=12) (never executed)\n> -> Hash (cost=11.40..11.40\n> rows=140 width=520) (never executed)\n> -> Seq Scan on\n> string_value s (cost=0.00..11.40 rows=140 width=520) (never executed)\n> -> Hash (cost=25.10..25.10\n> rows=1510 width=12) (never executed)\n> -> Seq Scan on\n> rangeofint_value r (cost=0.00..25.10 rows=1510 width=12) (never\n> executed)\n> -> Hash (cost=26.30..26.30 rows=1630\n> width=8) (never executed)\n> -> Seq Scan on integer_value i\n> (cost=0.00..26.30 rows=1630 width=8) (never executed)\n> -> Sort (cost=97.16..100.66 rows=1400 width=16)\n> (never executed)\n> Sort Key: res.void\n> -> Seq Scan on resolution_value res\n> (cost=0.00..24.00 rows=1400 width=16) (never executed)\n> -> Sort (cost=104.83..108.61 rows=1510 width=12) (never executed)\n> Sort Key: b.void\n> -> Seq Scan on bigint_value b (cost=0.00..25.10\n> rows=1510 width=12) (never executed)\n> Total runtime: 0.479 ms\n> (46 
rows)\n> -----\n>\n> Best regards,\n> Ning\n>\n> On Thu, Jul 16, 2009 at 7:37 AM, Mike Ivanov<[email protected]> wrote:\n> \n>> ning wrote:\n>> \n>>> The log is really long,\n>>> \n>> Which usually signals a problem with the query.\n>>\n>> \n>>> but I compared the result of \"explain analyze\"\n>>> for first and later executions, except for 3 \"time=XXX\" numbers, they\n>>> are identical.\n>>>\n>>> \n>> They are supposed to be identical unless something is really badly broken.\n>>\n>> \n>>> I agree with you that PostgreSQL is doing different level of caching,\n>>> I just wonder if there is any way to speed up PostgreSQL in this\n>>> scenario,\n>>> \n>> This is what EXPLAIN ANALYZE for. Could you post the results please?\n>>\n>> Cheers,\n>> Mike\n>>\n>>\n>> \n\n", "msg_date": "Thu, 16 Jul 2009 18:25:45 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Repeated Query is much slower in PostgreSQL8.2.4 than\n \tDB2 9.1" } ]
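Putting the PREPARE/EXECUTE suggestion from the thread into a form closer to the original statement: the ? parameter markers become $1, $2, ... in the PREPARE, the plan is built once per session, and every EXECUTE reuses it. A trimmed-down sketch, simplified from the query quoted above:

-----
-- Plan once per session:
PREPARE job_lookup(int, int) AS
  SELECT oid_
  FROM JobView
  WHERE JobView.JobId = $1
    AND JobView.assignedTo_ = $2;

-- Reuse the cached plan on every call:
EXECUTE job_lookup(100, 1);
EXECUTE job_lookup(100, 1);
-----

With \timing enabled in psql, the gap between the PREPARE (which still pays parse and plan cost) and the later EXECUTEs is roughly the difference being attributed to DB2's automatic plan caching in this comparison.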
[ { "msg_contents": "I am using Postgres with Rails. Each rails application \"thread\" is\nactually a separate process (mongrel) with it's own connection.\n\nNormally, the db connection processes (?) look something like this in\ntop:\n\n15772 postgres 15 0 229m 13m 12m S 0 0.8 0:00.09 postgres:\ndb db [local]\nidle\n\nThese quickly grow as the application is used to 50+ Mb per instance.\n\nWhen I restart mongrel (the Rails application processes) these go back\ndown to their normal small size. That makes me suspect this is not\nnormal caching and there is some sort of unhealthy leak going on.\n\nIs there something Rails could be doing to cause these to grow? Maybe\nthe connection is not being cleaned up properly? Is there some sort\nof connection cache going on?\n\n\n", "msg_date": "Wed, 15 Jul 2009 01:44:06 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Strange memory behavior with rails - caching in connection?" }, { "msg_contents": "On Wed, Jul 15, 2009 at 2:44 AM, Alex<[email protected]> wrote:\n> I am using Postgres with Rails.  Each rails application \"thread\" is\n> actually a separate process (mongrel) with it's own connection.\n>\n> Normally, the db connection processes (?) look something like this in\n> top:\n>\n> 15772 postgres  15   0  229m  13m  12m S    0  0.8   0:00.09 postgres:\n> db db [local]\n> idle\n>\n> These quickly grow as the application is used to 50+ Mb per instance.\n>\n> When I restart mongrel (the Rails application processes) these go back\n> down to their normal small size.  That makes me suspect this is not\n> normal caching and there is some sort of unhealthy leak going on.\n>\n> Is there something Rails could be doing to cause these to grow?  Maybe\n> the connection is not being cleaned up properly?  Is there some sort\n> of connection cache going on?\n\n\nno, most likely the issue is that top is showing you how much\nshared_buffers the process has touched over time and it's nothing.\nShow us what you see when you think things are going bad.\n", "msg_date": "Fri, 17 Jul 2009 03:06:37 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange memory behavior with rails - caching in\n\tconnection?" } ]
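A quick way to sanity-check the top output against the explanation given above is to compare the per-backend figure with the configured shared cache, since the reported size grows toward that bound as a backend touches more of shared_buffers. A small sketch; the settings shown are the ones worth comparing, no particular values are assumed:

-----
-- The shared cache size each backend will gradually appear to grow into:
SHOW shared_buffers;

-- Per-connection work memory, which is what actually adds up per mongrel process:
SHOW work_mem;
-----

If the per-process size in top levels off around shared_buffers plus a modest amount of private memory, it is the accounting effect described above rather than a leak; genuinely unbounded growth well past that would be worth reporting with the top lines and settings included.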
[ { "msg_contents": "Hi,\n \nWe use a typical counter within a transaction to generate order sequence number and update the next sequence number. This is a simple next counter - nothing fancy about it. When multiple clients are concurrently accessing this table and updating it, under extermely heavy loads in the system (stress testing), we find that the same order number is being generated for multiple clients. Could this be a bug? Is there a workaround? Please let me know.\n \nThanks\nRaji\n\n\n\n\n\nHi,\n \nWe use a typical counter within a transaction to generate order sequence number and update the next sequence number. This is a simple next counter - nothing fancy about it.  When multiple clients are concurrently accessing this table and updating it, under extermely heavy loads in the system (stress testing), we find that the same order number is being generated for multiple clients. Could this be a bug? Is there a workaround? Please let me know.\n \nThanks\nRaji", "msg_date": "Wed, 15 Jul 2009 21:59:41 -0700", "msg_from": "\"Raji Sridar (raji)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Concurrency issue under very heay loads" }, { "msg_contents": "Hi,\nAre you using automatic sequence increment in table?\n ----- Original Message ----- \n From: Raji Sridar (raji) \n To: [email protected] ; [email protected] \n Sent: Thursday, July 16, 2009 10:29 AM\n Subject: [PERFORM] Concurrency issue under very heay loads\n\n\n Hi,\n\n We use a typical counter within a transaction to generate order sequence number and update the next sequence number. This is a simple next counter - nothing fancy about it. When multiple clients are concurrently accessing this table and updating it, under extermely heavy loads in the system (stress testing), we find that the same order number is being generated for multiple clients. Could this be a bug? Is there a workaround? Please let me know.\n\n Thanks\n Raji\n\n\n\n\n\n\nHi,\nAre you using automatic sequence increment in \ntable?\n\n----- Original Message ----- \nFrom:\nRaji Sridar (raji)\n\nTo: [email protected] ; \n [email protected]\n\nSent: Thursday, July 16, 2009 10:29 \n AM\nSubject: [PERFORM] Concurrency issue \n under very heay loads\n\nHi,\n \nWe use a typical counter within a transaction to \n generate order sequence number and update the next sequence number. This is a \n simple next counter - nothing fancy about it.  When multiple clients are \n concurrently accessing this table and updating it, under extermely heavy loads \n in the system (stress testing), we find that the same order number is being \n generated for multiple clients. Could this be a bug? Is there a workaround? \n Please let me know.\n \nThanks\nRaji", "msg_date": "Thu, 16 Jul 2009 10:49:17 +0530", "msg_from": "\"ramasubramanian\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "2009/7/16 Raji Sridar (raji) <[email protected]>:\n> Hi,\n>\n> We use a typical counter within a transaction to generate order sequence\n> number and update the next sequence number. This is a simple next counter -\n> nothing fancy about it.  When multiple clients are concurrently accessing\n> this table and updating it, under extermely heavy loads in the system\n> (stress testing), we find that the same order number is being generated for\n> multiple clients. Could this be a bug? Is there a workaround? 
Please let me\n> know.\n>\n> Thanks\n> Raji\n\nYou'll not have this problem if you use serial type.\n", "msg_date": "Thu, 16 Jul 2009 15:30:53 +1000", "msg_from": "Greenhorn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": "Raji Sridar (raji) wrote:\n> Hi,\n> \n> We use a typical counter within a transaction to generate order \n> sequence number and update the next sequence number. This is a simple \n> next counter - nothing fancy about it. When multiple clients are \n> concurrently accessing this table and updating it, under extermely \n> heavy loads in the system (stress testing), we find that the same \n> order number is being generated for multiple clients. Could this be a \n> bug? Is there a workaround? Please let me know.\n\nwithout seeing your code, its hard to say where this bug is, in your \ncounter implementation, or in postgres. you also don't say what version \nof postgres you're using, or even what OS you're running under...\n\nsounds like you should be using a SERIAL (which is implemented as an \nINTEGER or BIGINT field with an associated SEQUENCE), as these DO work \njust fine under heavy concurrency without any gotchas.\n\n\n", "msg_date": "Wed, 15 Jul 2009 22:34:58 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": "On Wed, Jul 15, 2009 at 10:59 PM, Raji Sridar (raji)<[email protected]> wrote:\n> Hi,\n>\n> We use a typical counter within a transaction to generate order sequence\n> number and update the next sequence number. This is a simple next counter -\n> nothing fancy about it.  When multiple clients are concurrently accessing\n> this table and updating it, under extermely heavy loads in the system\n> (stress testing), we find that the same order number is being generated for\n> multiple clients. Could this be a bug? Is there a workaround? Please let me\n> know.\n\nAs others have said, a serial is a good idea, HOWEVER, if you can't\nhave gaps in sequences, or each customer needs their own sequence,\nthen you get to lock the rows / table / etc that you're mucking with\nto make sure you don't issue the same id number twice. It's easy to\ntest your code by hand by running pieces of it in two different psql\nsessions in such a way as to induce the race condition (or not if\nyou've got it right).\n", "msg_date": "Thu, 16 Jul 2009 00:11:20 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "Raji Sridar wrote:\n> We use a typical counter within a transaction to generate \n> order sequence number and update the next sequence number. \n> This is a simple next counter - nothing fancy about it. When \n> multiple clients are concurrently accessing this table and \n> updating it, under extermely heavy loads in the system \n> (stress testing), we find that the same order number is being \n> generated for multiple clients. Could this be a bug? Is there \n> a workaround? Please let me know.\n\nPlease show us your code!\n\nYours,\nLaurenz Albe\n", "msg_date": "Thu, 16 Jul 2009 11:15:24 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "May be a simple way would be to use a \"SEQUENCE\" database object. 
And\ncall nextval('your_sequence') to obtain the next unique value (of type\nbigint).\nAccording to PG docs\n\"http://www.postgresql.org/docs/8.4/interactive/functions-sequence.html\",\nthe sequence object has functions that \"provide simple, multiuser-safe\nmethods for obtaining successive sequence values from sequence\nobjects. \"\n\nYou may either provide this function as a default to the field in\nwhich you'd like the unique values to go to.\nOR\nIf you'd like to make use of this value before data is inserted to the\ntable simply call SELECT nextval('your_sequence') to obtain the next\nunique bigint value which you may insert into the appropriate field in\nyour table and still the the value for later use maybe to populate a\nchild table.\n\nAllan.\n\nOn Thu, Jul 16, 2009 at 11:15 AM, Albe Laurenz<[email protected]> wrote:\n> Raji Sridar wrote:\n>> We use a typical counter within a transaction to generate\n>> order sequence number and update the next sequence number.\n>> This is a simple next counter - nothing fancy about it.  When\n>> multiple clients are concurrently accessing this table and\n>> updating it, under extermely heavy loads in the system\n>> (stress testing), we find that the same order number is being\n>> generated for multiple clients. Could this be a bug? Is there\n>> a workaround? Please let me know.\n>\n> Please show us your code!\n>\n> Yours,\n> Laurenz Albe\n>\n> --\n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n>\n", "msg_date": "Thu, 16 Jul 2009 11:44:11 +0200", "msg_from": "Allan Kamau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "\"Raji Sridar (raji)\" <[email protected]> wrote:\n> \n> We use a typical counter within a transaction to generate order sequence number and update the next sequence number. This is a simple next counter - nothing fancy about it. When multiple clients are concurrently accessing this table and updating it, under extermely heavy loads in the system (stress testing), we find that the same order number is being generated for multiple clients. Could this be a bug? Is there a workaround? Please let me know.\n\nAs others have said: using a sequence/serial is best, as long as you can\ndeal with gaps in the generated numbers. (note that in actual practice,\nthe number of gaps is usually very small.)\n\nWithout seeing the code, here's my guess as to what's wrong:\nYou take out a write lock on the table, then acquire the next number, then\nrelease the lock, _then_ insert the new row. Doing this allows a race\ncondition between number generation and insertion which could allow\nduplicates.\n\nAm I right? 
Did I guess it?\n\nIf so, you need to take out the lock on the table and hold that lock until\nyou've inserted the new row.\n\nIf none of these answers help, you're going to have to show us your code,\nor at least a pared down version that exhibits the problem.\n\n[I'm stripping off the performance list, as this doesn't seem like a\nperformance question.]\n\n-- \nBill Moran\nhttp://www.potentialtech.com\n", "msg_date": "Thu, 16 Jul 2009 07:11:49 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": "On Wed, 15 Jul 2009, Raji Sridar (raji) wrote:\n\n> When multiple clients are concurrently accessing this table and updating \n> it, under extermely heavy loads in the system (stress testing), we find \n> that the same order number is being generated for multiple clients.\n\nThe only clean way to generate sequence numbers without needing to worry \nabout duplicates is using nextval: \nhttp://www.postgresql.org/docs/current/static/functions-sequence.html\n\nIf you're trying to duplicate that logic in your own code, there's \nprobably a subtle race condition in your implementation that is causing \nthe bug.\n\nIf you had two calls to nextval from different clients get the same value \nreturned, that might be a PostgreSQL bug. Given how much that code gets \ntested, the more likely case is that there's something to tweak in your \napplication instead. I would advise starting with the presumption it's an \nissue in your app rather than on the server side of things.\n\nP.S. Posting the same question to two lists here is frowned upon; \npgsql-general is the right one for a question like this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 16 Jul 2009 17:03:21 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": "On Wed, 2009-07-15 at 22:34 -0700, John R Pierce wrote:\n\n> sounds like you should be using a SERIAL (which is implemented as an \n> INTEGER or BIGINT field with an associated SEQUENCE), as these DO work \n> just fine under heavy concurrency without any gotchas.\n\nThere is one gotcha, though we're all so used to it (and/or never\nwould've thought to care about it) as to forget it:\n\n With a SEQUENCE, as produced by the SERIAL pseudo-type, values\n may be skipped if a transaction rolls back. That includes automatic\n rollback on error or disconnect, not just explicit ROLLBACK of course.\n\nIf you're using sequences to generate synthetic keys that's exactly what\nyou want; you don't care about gaps and you want it fast and\nconcurrency-friendly.\n\nIf your application can't cope with gaps in the sequence then either (a)\nfix it so it can, or (b) search this mailing list for gapless sequence\nimplementations and use one of them. 
Beware the nasty performance\nimplications.\n\n-- \nCraig Ringer\n\n", "msg_date": "Fri, 17 Jul 2009 22:29:25 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": "On Thu, 2009-07-16 at 00:11 -0600, Scott Marlowe wrote:\n\n> As others have said, a serial is a good idea, HOWEVER, if you can't\n> have gaps in sequences, or each customer needs their own sequence,\n> then you get to lock the rows / table / etc that you're mucking with\n> to make sure you don't issue the same id number twice.\n\nThese days can't you just UPDATE ... RETURNING the sequence source\ntable? Or is there some concurrency issue there I'm not seeing? Other\nthan the awful impact on concurrent insert performance of course, but\nyou're stuck with that using any gapless sequence.\n\n-- \nCraig Ringer\n\n", "msg_date": "Fri, 17 Jul 2009 22:32:06 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "Thanks for everyone's inputs and here is an update on the issue:\nThe problem source is that autocommit is not getting unset.\nThe code does the following ( and source code or copyright does not\nbelong to Cisco):\n. unsets autocommit\n. starts transaction\n. SQL for select for update.... \n. SQL for update next sequence number\n. Commits transaction\nThe problem is in unsetting auto commit. Since this runs inside an Jboss\napp server/EJB environment, this becomes a no-op and hence the ACIDity\nacross the select for update and update. We are using postgres 8.2.12 on\nWindows with JDBC driver 8.2-506. \nThanks\nRaji\n-----Original Message-----\nFrom: Greg Smith [mailto:[email protected]] \nSent: Thursday, July 16, 2009 2:03 PM\nTo: Raji Sridar (raji)\nCc: [email protected]\nSubject: Re: [GENERAL] Concurrency issue under very heay loads\n\nOn Wed, 15 Jul 2009, Raji Sridar (raji) wrote:\n\n> When multiple clients are concurrently accessing this table and \n> updating it, under extermely heavy loads in the system (stress \n> testing), we find that the same order number is being generated for\nmultiple clients.\n\nThe only clean way to generate sequence numbers without needing to worry\nabout duplicates is using nextval: \nhttp://www.postgresql.org/docs/current/static/functions-sequence.html\n\nIf you're trying to duplicate that logic in your own code, there's\nprobably a subtle race condition in your implementation that is causing\nthe bug.\n\nIf you had two calls to nextval from different clients get the same\nvalue returned, that might be a PostgreSQL bug. Given how much that\ncode gets tested, the more likely case is that there's something to\ntweak in your application instead. I would advise starting with the\npresumption it's an issue in your app rather than on the server side of\nthings.\n\nP.S. Posting the same question to two lists here is frowned upon;\npgsql-general is the right one for a question like this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 17 Jul 2009 09:48:23 -0700", "msg_from": "\"Raji Sridar (raji)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Concurrency issue under very heay loads" }, { "msg_contents": ">-----Original Message-----\n>From: [email protected] \n> \n>We use a typical counter within a transaction to generate \n>order sequence number and update the next sequence number. 
\n>This is a simple next counter - nothing fancy about it. When \n>multiple clients are concurrently accessing this table and \n>updating it, under extermely heavy loads in the system (stress \n>testing), we find that the same order number is being \n>generated for multiple clients. Could this be a bug? Is there \n>a workaround? Please let me know.\n\nAre you using \"for update\" in your select statements? Are you setting\nan appropriate transaction isolation level?\n\nA better way to do this is with a sequence instead. This is guaranteed\nto give you a unique value:\nselect nextval('address_serial_num_seq');\n\neric\n", "msg_date": "Fri, 17 Jul 2009 14:13:26 -0500", "msg_from": "\"Haszlakiewicz, Eric\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Concurrency issue under very heay loads" }, { "msg_contents": "I would like some help in reading the postgres logs.\nHere is a snippet of the log.\nAuto commit seems to be set to false.\nBut still the logs shows \"CommitTransactionCommand\" in debug mode.\nThe same order number is given for multiple clients.\nPlease see \"CommitTransactionCommand\" below for both \"select ...for\nupdate\" and \"update...\" SQLs and let me know if I am reading correctly\nthat auto commit is actually effective.\nThanks\nRaji\n-----\n2009-07-17 14:10:59 4736 970134 DEBUG: parse <unnamed>: SELECT\nnextEntityID FROM tableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 DEBUG: StartTransactionCommand\n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 DEBUG: parse tree:\n2009-07-17 14:10:59 4736 970134 DETAIL: {QUERY \n :commandType 1 \n :querySource 0 \n :canSetTag true \n :utilityStmt <> \n :resultRelation 0 \n :into <> \n :intoOptions <> \n :intoOnCommit 0 \n :intoTableSpaceName <> \n :hasAggs false \n :hasSubLinks false \n :rtable (\n {RTE \n :alias <> \n :eref \n {ALIAS \n :aliasname tableentityid \n :colnames (\"entitytype\" \"nextentityid\")\n }\n :rtekind 0 \n :relid 16420 \n :inh true \n :inFromCl true \n :requiredPerms 6 \n :checkAsUser 0\n }\n )\n :jointree \n {FROMEXPR \n :fromlist (\n {RANGETBLREF \n :rtindex 1\n }\n )\n :quals \n {OPEXPR \n :opno 98 \n :opfuncid 0 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n {RELABELTYPE \n :arg \n {PARAM \n :paramkind 0 \n :paramid 1 \n :paramtype 1043\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n )\n }\n }\n :targetList (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 2 \n :vartype 1043 \n :vartypmod 132 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n :resno 1 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 16420 \n :resorigcol 2 \n :resjunk false\n }\n )\n :returningList <> \n :groupClause <> \n :havingQual <> \n :distinctClause <> \n :sortClause <> \n :limitOffset <> \n :limitCount <> \n :rowMarks (\n {ROWMARKCLAUSE \n :rti 1 \n :forUpdate true \n :noWait false\n }\n )\n :setOperations <> \n :resultRelations <> \n :returningLists <>\n }\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 DEBUG: 
rewritten parse tree:\n2009-07-17 14:10:59 4736 970134 DETAIL: (\n {QUERY \n :commandType 1 \n :querySource 0 \n :canSetTag true \n :utilityStmt <> \n :resultRelation 0 \n :into <> \n :intoOptions <> \n :intoOnCommit 0 \n :intoTableSpaceName <> \n :hasAggs false \n :hasSubLinks false \n :rtable (\n {RTE \n :alias <> \n :eref \n {ALIAS \n :aliasname tableentityid \n :colnames (\"entitytype\" \"nextentityid\")\n }\n :rtekind 0 \n :relid 16420 \n :inh true \n :inFromCl true \n :requiredPerms 6 \n :checkAsUser 0\n }\n )\n :jointree \n {FROMEXPR \n :fromlist (\n {RANGETBLREF \n :rtindex 1\n }\n )\n :quals \n {OPEXPR \n :opno 98 \n :opfuncid 0 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n {RELABELTYPE \n :arg \n {PARAM \n :paramkind 0 \n :paramid 1 \n :paramtype 1043\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n )\n }\n }\n :targetList (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 2 \n :vartype 1043 \n :vartypmod 132 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n :resno 1 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 16420 \n :resorigcol 2 \n :resjunk false\n }\n )\n :returningList <> \n :groupClause <> \n :havingQual <> \n :distinctClause <> \n :sortClause <> \n :limitOffset <> \n :limitCount <> \n :rowMarks (\n {ROWMARKCLAUSE \n :rti 1 \n :forUpdate true \n :noWait false\n }\n )\n :setOperations <> \n :resultRelations <> \n :returningLists <>\n }\n )\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 DEBUG: bind <unnamed> to <unnamed>\n2009-07-17 14:10:59 4736 970134 DEBUG: plan:\n2009-07-17 14:10:59 4736 970134 DETAIL: {SEQSCAN \n :startup_cost 0.00 \n :total_cost 4.05 \n :plan_rows 1 \n :plan_width 12 \n :targetlist (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 2 \n :vartype 1043 \n :vartypmod 132 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 2\n }\n :resno 1 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 16420 \n :resorigcol 2 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno -1 \n :vartype 27 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno -1\n }\n :resno 2 \n :resname ctid1 \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk true\n }\n )\n :qual (\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 10 [ 10 0 0 0 119 105 122 97 114 100 ]\n }\n )\n }\n )\n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1\n }\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE 
entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: SELECT nextEntityID FROM\ntableEntityID WHERE entityType = $1 FOR UPDATE\n2009-07-17 14:10:59 4736 970134 DEBUG: CommitTransactionCommand\n2009-07-17 14:10:59 4736 970134 DEBUG: parse <unnamed>: UPDATE\ntableEntityID SET nextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: StartTransactionCommand\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: parse tree:\n2009-07-17 14:10:59 4736 970134 DETAIL: {QUERY \n :commandType 2 \n :querySource 0 \n :canSetTag true \n :utilityStmt <> \n :resultRelation 1 \n :into <> \n :intoOptions <> \n :intoOnCommit 0 \n :intoTableSpaceName <> \n :hasAggs false \n :hasSubLinks false \n :rtable (\n {RTE \n :alias <> \n :eref \n {ALIAS \n :aliasname tableentityid \n :colnames (\"entitytype\" \"nextentityid\")\n }\n :rtekind 0 \n :relid 16420 \n :inh true \n :inFromCl false \n :requiredPerms 6 \n :checkAsUser 0\n }\n )\n :jointree \n {FROMEXPR \n :fromlist (\n {RANGETBLREF \n :rtindex 1\n }\n )\n :quals \n {OPEXPR \n :opno 98 \n :opfuncid 0 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n {RELABELTYPE \n :arg \n {PARAM \n :paramkind 0 \n :paramid 2 \n :paramtype 1043\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n )\n }\n }\n :targetList (\n {TARGETENTRY \n :expr \n {FUNCEXPR \n :funcid 669 \n :funcresulttype 1043 \n :funcretset false \n :funcformat 2 \n :args (\n {FUNCEXPR \n :funcid 112 \n :funcresulttype 1043 \n :funcretset false \n :funcformat 2 \n :args (\n {PARAM \n :paramkind 0 \n :paramid 1 \n :paramtype 23\n }\n )\n }\n {CONST \n :consttype 23 \n :constlen 4 \n :constbyval true \n :constisnull false \n :constvalue 4 [ -124 0 0 0 ]\n }\n {CONST \n :consttype 16 \n :constlen 1 \n :constbyval true \n :constisnull false \n :constvalue 1 [ 0 0 0 0 ]\n }\n )\n }\n :resno 2 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk false\n }\n )\n :returningList <> \n :groupClause <> \n :havingQual <> \n :distinctClause <> \n :sortClause <> \n :limitOffset <> \n :limitCount <> \n :rowMarks <> \n :setOperations <> \n :resultRelations <> \n :returningLists <>\n }\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: rewritten parse tree:\n2009-07-17 14:10:59 4736 970134 DETAIL: (\n {QUERY \n :commandType 2 \n :querySource 0 \n :canSetTag true \n :utilityStmt <> \n :resultRelation 1 \n :into <> \n :intoOptions <> \n :intoOnCommit 0 \n :intoTableSpaceName <> \n :hasAggs false \n :hasSubLinks false \n :rtable (\n {RTE \n :alias <> \n :eref \n {ALIAS \n :aliasname tableentityid \n :colnames (\"entitytype\" \"nextentityid\")\n }\n :rtekind 0 \n :relid 16420 \n :inh true \n :inFromCl false \n :requiredPerms 6 \n :checkAsUser 0\n }\n )\n :jointree \n {FROMEXPR \n :fromlist (\n {RANGETBLREF \n :rtindex 1\n }\n )\n :quals \n {OPEXPR \n :opno 98 \n :opfuncid 0 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n 
:varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n {RELABELTYPE \n :arg \n {PARAM \n :paramkind 0 \n :paramid 2 \n :paramtype 1043\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 2\n }\n )\n }\n }\n :targetList (\n {TARGETENTRY \n :expr \n {FUNCEXPR \n :funcid 669 \n :funcresulttype 1043 \n :funcretset false \n :funcformat 2 \n :args (\n {FUNCEXPR \n :funcid 112 \n :funcresulttype 1043 \n :funcretset false \n :funcformat 2 \n :args (\n {PARAM \n :paramkind 0 \n :paramid 1 \n :paramtype 23\n }\n )\n }\n {CONST \n :consttype 23 \n :constlen 4 \n :constbyval true \n :constisnull false \n :constvalue 4 [ -124 0 0 0 ]\n }\n {CONST \n :consttype 16 \n :constlen 1 \n :constbyval true \n :constisnull false \n :constvalue 1 [ 0 0 0 0 ]\n }\n )\n }\n :resno 2 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk false\n }\n )\n :returningList <> \n :groupClause <> \n :havingQual <> \n :distinctClause <> \n :sortClause <> \n :limitOffset <> \n :limitCount <> \n :rowMarks <> \n :setOperations <> \n :resultRelations <> \n :returningLists <>\n }\n )\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: bind <unnamed> to <unnamed>\n2009-07-17 14:10:59 4736 970134 DEBUG: plan:\n2009-07-17 14:10:59 4736 970134 DETAIL: {SEQSCAN \n :startup_cost 0.00 \n :total_cost 4.05 \n :plan_rows 1 \n :plan_width 17 \n :targetlist (\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resno 1 \n :resname entitytype \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {CONST \n :consttype 1043 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 7 [ 7 0 0 0 51 48 53 ]\n }\n :resno 2 \n :resname nextentityid \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk false\n }\n {TARGETENTRY \n :expr \n {VAR \n :varno 1 \n :varattno -1 \n :vartype 27 \n :vartypmod -1 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno -1\n }\n :resno 3 \n :resname ctid \n :ressortgroupref 0 \n :resorigtbl 0 \n :resorigcol 0 \n :resjunk true\n }\n )\n :qual (\n {OPEXPR \n :opno 98 \n :opfuncid 67 \n :opresulttype 16 \n :opretset false \n :args (\n {RELABELTYPE \n :arg \n {VAR \n :varno 1 \n :varattno 1 \n :vartype 1043 \n :vartypmod 68 \n :varlevelsup 0 \n :varnoold 1 \n :varoattno 1\n }\n :resulttype 25 \n :resulttypmod -1 \n :relabelformat 0\n }\n {CONST \n :consttype 25 \n :constlen -1 \n :constbyval false \n :constisnull false \n :constvalue 10 [ 10 0 0 0 119 105 122 97 114 100 ]\n }\n )\n }\n )\n :lefttree <> \n :righttree <> \n :initPlan <> \n :extParam (b)\n :allParam (b)\n :nParamExec 0 \n :scanrelid 1\n }\n \n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: ProcessQuery\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = 
$2\n2009-07-17 14:10:59 4736 970134 LOG: duration: 0.000 ms\n2009-07-17 14:10:59 4736 970134 STATEMENT: UPDATE tableEntityID SET\nnextEntityID = $1 WHERE entityType = $2\n2009-07-17 14:10:59 4736 970134 DEBUG: CommitTransactionCommand \n\n-----Original Message-----\nFrom: Raji Sridar (raji) \nSent: Friday, July 17, 2009 9:48 AM\nTo: 'Greg Smith'\nCc: [email protected]\nSubject: RE: [GENERAL] Concurrency issue under very heay loads\n\nThanks for everyone's inputs and here is an update on the issue:\nThe problem source is that autocommit is not getting unset.\nThe code does the following ( and source code or copyright does not\nbelong to Cisco):\n. unsets autocommit\n. starts transaction\n. SQL for select for update.... \n. SQL for update next sequence number\n. Commits transaction\nThe problem is in unsetting auto commit. Since this runs inside an Jboss\napp server/EJB environment, this becomes a no-op and hence the ACIDity\nacross the select for update and update. We are using postgres 8.2.12 on\nWindows with JDBC driver 8.2-506. \nThanks\nRaji\n-----Original Message-----\nFrom: Greg Smith [mailto:[email protected]]\nSent: Thursday, July 16, 2009 2:03 PM\nTo: Raji Sridar (raji)\nCc: [email protected]\nSubject: Re: [GENERAL] Concurrency issue under very heay loads\n\nOn Wed, 15 Jul 2009, Raji Sridar (raji) wrote:\n\n> When multiple clients are concurrently accessing this table and \n> updating it, under extermely heavy loads in the system (stress \n> testing), we find that the same order number is being generated for\nmultiple clients.\n\nThe only clean way to generate sequence numbers without needing to worry\nabout duplicates is using nextval: \nhttp://www.postgresql.org/docs/current/static/functions-sequence.html\n\nIf you're trying to duplicate that logic in your own code, there's\nprobably a subtle race condition in your implementation that is causing\nthe bug.\n\nIf you had two calls to nextval from different clients get the same\nvalue returned, that might be a PostgreSQL bug. Given how much that\ncode gets tested, the more likely case is that there's something to\ntweak in your application instead. I would advise starting with the\npresumption it's an issue in your app rather than on the server side of\nthings.\n\nP.S. Posting the same question to two lists here is frowned upon;\npgsql-general is the right one for a question like this.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 20 Jul 2009 15:33:04 -0700", "msg_from": "\"Raji Sridar (raji)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help needed for reading postgres log : RE: Concurrency issue under\n\tvery heay loads" }, { "msg_contents": "Raji Sridar (raji) wrote:\n> I would like some help in reading the postgres logs.\n> Here is a snippet of the log.\n> Auto commit seems to be set to false.\n> But still the logs shows \"CommitTransactionCommand\" in debug mode.\n> The same order number is given for multiple clients.\n> Please see \"CommitTransactionCommand\" below for both \"select ...for\n> update\" and \"update...\" SQLs and let me know if I am reading correctly\n> that auto commit is actually effective.\n\nCommitTransactionCommand is an internal function that has nothing to do\nwith a SQL-level COMMIT. 
If there were a true transaction commit you'd\nsee a debug entry saying \"CommitTransaction\".\n\nYou seem to be barking up the wrong tree here.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 20 Jul 2009 21:00:17 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help needed for reading postgres log : RE:\n\tConcurrency issue under very heay loads" } ]
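A minimal sketch of the sequence approach suggested above, plus one further option for keeping a counter table as a single atomic statement; the sequence name, the order_counters table and its counter_name/next_value columns are illustrative placeholders, not names from the original application.

---
-- Option 1 (as suggested in the thread): let a sequence hand out numbers.
-- nextval() is atomic and never hands the same value to two callers,
-- regardless of how autocommit is configured on the client side.
-- Sequence values are not returned on rollback, so gaps can appear.
CREATE SEQUENCE order_number_seq;
SELECT nextval('order_number_seq');

-- Option 2: if a counter table has to stay, collapse the
-- SELECT ... FOR UPDATE / UPDATE pair into one statement
-- (RETURNING is available from 8.2 on).  Concurrent callers queue
-- on the row lock instead of racing, even with autocommit on.
UPDATE order_counters
   SET next_value = next_value + 1
 WHERE counter_name = 'order'
RETURNING next_value;
---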
[ { "msg_contents": "\n>   +1 for index organized tables \n\n+1\n\nI have a table:\n\nCREATE TABLE mytab\n(\n \"time\" timestamp without time zone NOT NULL,\n ne_id integer NOT NULL,\n values integer,\n CONSTRAINT mytab_pk PRIMARY KEY (ne_id, \"time\"),\n CONSTRAINT mytab_ne_id_key UNIQUE (\"time\", ne_id)\n}\n\nThe table is written every 15 minutes (that is, every 15 minutes all 20000 ne_ids get written), so the table is \"naturally\" clustered on (\"time\", ne_id).\n\nSince I need it clustered on (ne_id, \"time\"), I tried to cluster on a day by day basis, since clustering the whole table would take too much time: that is, I'd \"reorder\" the table every day (say at 1:00 AM).\n\nI've written a function (below) that re-insterts rows in the table ordered by ne_id,time; but it doesn't work! When I do a \"select * from mytab\" rows aren't ordered by (ne_id,time)....\n\nWhat am I doing wrong?\n\n\nCREATE OR REPLACE FUNCTION somefunc()\n RETURNS void AS\n$BODY$\nDECLARE\n t1 timestamp := '2006-10-01 00:00:00';\n t2 timestamp := '2006-10-01 23:59:00';\nBEGIN\nlock table stscell60_60_13_2800_512_cell_0610_leo in ACCESS EXCLUSIVE MODE;\nWHILE t1 < '2006-11-01 00:00:00' LOOP\n\tinsert into mytab select time,ne_id+100000, values from mytab where time between t1 and t2 order by ne_id,time;\n\tDELETE from mytab where time between t1 and t2 and ne_id<100000;\n\tupdate mytab set ne_id = ne_id - 100000 where time between t1 and t2;\n t1 := t1 + interval '1 days';\n t2 := t2 + interval '1 days';\nEND LOOP ;\t\nEND;\n$BODY$\n LANGUAGE 'plpgsql'\n\n\n\n\n\n \n", "msg_date": "Thu, 16 Jul 2009 13:33:28 +0000 (GMT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Scara Maccai <[email protected]> wrote:\n \n> What am I doing wrong?\n\n> [function which uses INSERT/UPDATE/DELETE statements to try to force\n> order of rows in heap]\n \nYou seem to be assuming that the rows will be in the table in the\nsequence of your inserts. You might be better off with a CLUSTER on\nsome index. (There are a few other options, like TRUNCATE / INSERT or\nSELECT INTO / DROP TABLE / ALTER TABLE RENAME -- but CLUSTER is\nprobably the safest, easiest way to go.)\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 15:14:43 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "> Scara Maccai <[email protected]> wrote:\n>\n>> What am I doing wrong?\n\nI didn't exactly follow the full sequence but it sounded like what's\nhappening is that Postgres is noticing all these empty pages from\nearlier deletes and reusing that space. That's what it's designed to\ndo. As Kevin said, there's no guarantee that tuples will be read back\nin the order you inserted them.\n\nYou might want to check your assumptions about the performance. If\nyou're deleting large batches the new tuples might not be where you\nexpect them to be in the table but they should still all end up in\nchunks mostly in order. They might be located together closely enough\nthat they might still perform as if they're clustered.\n\nIf you're really insistent that they be clustered you would have to\nmake sure there are no free space map entries for them. This means\nnever running vacuum on the table. That will cause massive problems\ndown the line but if you periodically run CLUSTER you might be able to\nengineer things to avoid them since cluster effectively does a vacuum\ntoo. 
Keep in mind this will mean your table is massively bloated which\nwill make sequential scans much slower.\n\n\n\nAlso, keep in mind that Postgres is moving in the direction of\nmaintaining the free space map more automatically. It will get harder\nand harder to ensure that space doesn't get reused. I'm not even sure\nsome features in 8.3 (HOT) and 8.4 (new FSM) don't already make it\nnearly impossible. I've certainly never heard of anyone else trying\nto.\n\n\n\nA better option you might consider is to use a separate table for the\nre-ordered tuples. If you never insert into the re-ordered table\nexcept in the final order you want (and in the same connection), and\nnever update or delete records, then it should work.\n\nYou could even do this using partitions, so you have a single table\nwith the dynamically added records in one partition and then you\nre-order the records into a new partition and swap it in to replace\nthe old partition.\n\n\nWhatever you do you'll definitely still want an ORDER BY clause on\nyour queries if you need them in a certain order. Running the queries\nyou're doing is fine for seeing what order they're in on disk but\nthere are several reasons why they might still come out out of order\neven if you never run vacuum and always insert them in order.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 16 Jul 2009 22:01:42 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" } ]
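For reference, a minimal sketch of the CLUSTER route suggested earlier in the thread, using the table and primary key names from the first message; note that CLUSTER rewrites the table under an exclusive lock, and the physical ordering is not maintained for rows added afterwards.

---
-- Rewrite mytab in (ne_id, "time") order and refresh the statistics.
CLUSTER mytab USING mytab_pk;    -- 8.3 and later
-- CLUSTER mytab_pk ON mytab;    -- equivalent spelling on older releases
ANALYZE mytab;
---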
[ { "msg_contents": "Hey all!\n\nIs there a better way to increase or decrease the value of an integer \nthan doing something like:\n\n---\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;\n---\n\nWe seem to be getting a lot of deadlocks using this method under heavy \nload. Just wondering if we should be doing something different.\n\nThanks!\n\n-William\n", "msg_date": "Thu, 16 Jul 2009 10:56:47 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "Incr/Decr Integer" }, { "msg_contents": "On Thursday 16 July 2009 19:56:47 William Scott Jordan wrote:\n> Hey all!\n>\n> Is there a better way to increase or decrease the value of an integer\n> than doing something like:\n>\n> ---\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;\n> ---\n>\n> We seem to be getting a lot of deadlocks using this method under heavy\n> load. Just wondering if we should be doing something different.\nIs this the only statement in your transaction? Or are you issuing multiple \nsuch update statements in one transactions?\nI am quite sure its not the increment of that value causing the problem.\n\nIf you issue multiple such statements you have to be carefull. Example:\n\nSession 1:\nBEGIN;\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n\nSession 2: \nBEGIN\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2;\n\nFine so far.\n\nSession 1:\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2 ;\nWaits for lock.\n\nSession 2:\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\nDeadlock.\n\n\nAndres\n\nPS: Moved to pgsql-general, seems more appropriate\n", "msg_date": "Thu, 16 Jul 2009 20:30:56 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Incr/Decr Integer" }, { "msg_contents": "William Scott Jordan wrote:\n> Hey all!\n> \n> Is there a better way to increase or decrease the value of an integer \n> than doing something like:\n\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;\n\nNo.\n\n> We seem to be getting a lot of deadlocks using this method under heavy \n> load. Just wondering if we should be doing something different.\n\nYou can't get deadlocks with that - it only references one table.\n\nWhat is the purpose of this query - how are you using it?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 16 Jul 2009 19:46:15 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incr/Decr Integer" }, { "msg_contents": "William Scott Jordan <[email protected]> wrote: \n \n> We seem to be getting a lot of deadlocks using this method under\n> heavy load.\n \nCould you post the exact message from one of these?\n(Copy and paste if possible.)\n \n-Kevin\n", "msg_date": "Thu, 16 Jul 2009 14:19:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incr/Decr Integer" }, { "msg_contents": "Hi Andrew,\n\nThat's a very good guess. We are in fact updating this table multiple \ntimes within the same triggered function, which is being called on an \nINSERT. Essentially, we're using this to keep a running total of the \nnumber of rows being held in another table. 
The function we're using \ncurrently looks something like this:\n\n---\nCREATE OR REPLACE FUNCTION the_function() RETURNS \"trigger\"\n AS $$\nBEGIN;\n\tUPDATE the_table\n\tSET first_column = first_column + 1\n\tWHERE first_id = NEW.first_id ;\n\n\tUPDATE the_table\n\tSET second_column = second_column + 1\n\tWHERE second_id = NEW.second_id ;\n\n\tUPDATE the_table\n\tSET third_column = third_column + 1\n\tWHERE third_id = NEW.third_id ;\nRETURN NULL;\nEND;\n$$\nLANGUAGE plpgsql;\n---\n\nFor something like this, would it make more sense to break out the three \ndifferent parts into three different functions, each being triggered on \nINSERT? Or would all three functions still be considered a single \ntransaction, since they're all being called from the same insert?\n\nAny suggestions would be appreciated!\n\n-William\n\n\nAndres Freund wrote:\n> On Thursday 16 July 2009 19:56:47 William Scott Jordan wrote:\n>> Hey all!\n>>\n>> Is there a better way to increase or decrease the value of an integer\n>> than doing something like:\n>>\n>> ---\n>> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;\n>> ---\n>>\n>> We seem to be getting a lot of deadlocks using this method under heavy\n>> load. Just wondering if we should be doing something different.\n> Is this the only statement in your transaction? Or are you issuing multiple \n> such update statements in one transactions?\n> I am quite sure its not the increment of that value causing the problem.\n> \n> If you issue multiple such statements you have to be carefull. Example:\n> \n> Session 1:\n> BEGIN;\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n> \n> Session 2: \n> BEGIN\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2;\n> \n> Fine so far.\n> \n> Session 1:\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2 ;\n> Waits for lock.\n> \n> Session 2:\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n> Deadlock.\n> \n> \n> Andres\n> \n> PS: Moved to pgsql-general, seems more appropriate\n", "msg_date": "Thu, 16 Jul 2009 14:11:48 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Incr/Decr Integer" }, { "msg_contents": "Hi Andrew,\n\nThat's a very good guess. We are in fact updating this table multiple \ntimes within the same triggered function, which is being called on an \nINSERT. Essentially, we're using this to keep a running total of the \nnumber of rows being held in another table. The function we're using \ncurrently looks something like this:\n\n---\nCREATE OR REPLACE FUNCTION the_function() RETURNS \"trigger\"\n AS $$\nBEGIN;\n UPDATE the_table\n SET first_column = first_column + 1\n WHERE first_id = NEW.first_id ;\n\n UPDATE the_table\n SET second_column = second_column + 1\n WHERE second_id = NEW.second_id ;\n\n UPDATE the_table\n SET third_column = third_column + 1\n WHERE third_id = NEW.third_id ;\nRETURN NULL;\nEND;\n$$\nLANGUAGE plpgsql;\n---\n\nFor something like this, would it make more sense to break out the three \ndifferent parts into three different functions, each being triggered on \nINSERT? 
Or would all three functions still be considered a single \ntransaction, since they're all being called from the same insert?\n\nAny suggestions would be appreciated!\n\n-William\n\n\nAndres Freund wrote:\n> On Thursday 16 July 2009 19:56:47 William Scott Jordan wrote:\n>> Hey all!\n>>\n>> Is there a better way to increase or decrease the value of an integer\n>> than doing something like:\n>>\n>> ---\n>> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 123 ;\n>> ---\n>>\n>> We seem to be getting a lot of deadlocks using this method under heavy\n>> load. Just wondering if we should be doing something different.\n> Is this the only statement in your transaction? Or are you issuing multiple \n> such update statements in one transactions?\n> I am quite sure its not the increment of that value causing the problem.\n> \n> If you issue multiple such statements you have to be carefull. Example:\n> \n> Session 1:\n> BEGIN;\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n> \n> Session 2: \n> BEGIN\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2;\n> \n> Fine so far.\n> \n> Session 1:\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2 ;\n> Waits for lock.\n> \n> Session 2:\n> UPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n> Deadlock.\n> \n> \n> Andres\n> \n> PS: Moved to pgsql-general, seems more appropriate\n", "msg_date": "Thu, 16 Jul 2009 14:20:34 -0700", "msg_from": "William Scott Jordan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Incr/Decr Integer" }, { "msg_contents": "On Thursday 16 July 2009 23:20:34 William Scott Jordan wrote:\n> Hi Andrew,\n>\n> That's a very good guess. We are in fact updating this table multiple\n> times within the same triggered function, which is being called on an\n> INSERT. Essentially, we're using this to keep a running total of the\n> number of rows being held in another table. The function we're using\n> currently looks something like this:\n>\n> ---\n> CREATE OR REPLACE FUNCTION the_function() RETURNS \"trigger\"\n> AS $$\n> BEGIN;\n> UPDATE the_table\n> SET first_column = first_column + 1\n> WHERE first_id = NEW.first_id ;\n>\n> UPDATE the_table\n> SET second_column = second_column + 1\n> WHERE second_id = NEW.second_id ;\n>\n> UPDATE the_table\n> SET third_column = third_column + 1\n> WHERE third_id = NEW.third_id ;\n> RETURN NULL;\n> END;\n> $$\n> LANGUAGE plpgsql;\n> ---\n>\n> For something like this, would it make more sense to break out the three\n> different parts into three different functions, each being triggered on\n> INSERT? Or would all three functions still be considered a single\n> transaction, since they're all being called from the same insert?\n>\n> Any suggestions would be appreciated!\nYou need to make sure *all* your locking access happens in the same order. \nThen you will possibly have one transaction waiting for the other, but not \ndeadlock:\n\nThe formerly described Scenario now works:\n\nSession 1:\nBEGIN;\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\n\nSession 2: \nBEGIN\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 1;\nWait.\n\nSession 1:\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2;\nFine\n\nSession 2:\nStill waiting\n\n\nSession 1:\ncommit\n\nSession 2:\nwaiting ends.\n\nUPDATE the_table SET the_int = the_int + 1 WHERE the_id = 2;\ncommit;\n\n\nSensible? 
Works?\n\nAndres\n", "msg_date": "Thu, 16 Jul 2009 23:27:14 +0200", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Incr/Decr Integer" }, { "msg_contents": "William Scott Jordan wrote:\n> Hi Andrew,\n>\n> That's a very good guess. We are in fact updating this table multiple \n> times within the same triggered function, which is being called on an \n> INSERT. Essentially, we're using this to keep a running total of the \n> number of rows being held in another table.\n\nThis is the worst way to go about keeping running totals; it would be\nfar better to have a table holding a \"last aggregated value\" and deltas\nfrom that; to figure out the current value of the counter, add the last\nvalue plus/minus the deltas (I figure you'd normally have one +1 for\neach insert and one -1 for each delete; update is an exercise to the\nreader). You have another process that runs periodically and groups the\ndeltas to generate an up-to-date \"last aggregated value\", deleting the\ndeltas.\n\nThis way you should have little deadlock problems if any, because no\ntransaction needs to wait for another one to update the counter.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sun, 19 Jul 2009 20:26:21 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Incr/Decr Integer" } ]
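A minimal sketch of the delta-table idea described above, reusing the table and column names from the example trigger; the deltas table and its layout are made-up names for illustration, and only the first of the three counters is shown (the other two would be handled the same way). Because the trigger only inserts, concurrent sessions no longer contend for the same counter rows. If the original three UPDATEs are kept instead, issuing them in one consistent order across all transactions avoids the deadlock, as explained earlier in the thread.

---
-- Deltas are only ever inserted, so sessions never block or deadlock
-- on a shared counter row.
CREATE TABLE the_table_deltas (
    first_id integer NOT NULL,
    delta    integer NOT NULL
);

-- In the trigger, instead of UPDATE ... SET first_column = first_column + 1:
INSERT INTO the_table_deltas (first_id, delta) VALUES (NEW.first_id, +1);

-- Periodic roll-up, run from a single maintenance session.  The brief
-- EXCLUSIVE lock ensures every visible delta is applied and removed
-- exactly once; new inserts simply wait until the commit.
BEGIN;
LOCK TABLE the_table_deltas IN EXCLUSIVE MODE;
UPDATE the_table t
   SET first_column = first_column + d.total
  FROM (SELECT first_id, sum(delta) AS total
          FROM the_table_deltas
         GROUP BY first_id) d
 WHERE t.first_id = d.first_id;
DELETE FROM the_table_deltas;
COMMIT;
---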
[ { "msg_contents": "\n> You might be better off\n> with a CLUSTER on\n> some index.  \n\nI can't: table is too big, can't lock it for minutes; that's why I wanted to cluster it \"one day at a time\".\n\n\n \n", "msg_date": "Fri, 17 Jul 2009 06:45:18 +0000 (GMT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" } ]
[ { "msg_contents": "\n> As Kevin said, there's no guarantee that tuples will be\n> read back\n> in the order you inserted them.\n\nOk, didn't know that\n\n> A better option you might consider is to use a separate\n> table for the\n> re-ordered tuples. \n> You could even do this using partitions\n\nProblem is I'm already using partions: I'm partitioning on a monthly basis. I want to avoid partitioning on a daily basis: I have 200 tables partitioned by month, 2 years of data. Partition them by day would mean 700*200 tables: what kind of performance impacts would it mean?\n\n\nDoes this other option make sense:\n\npartition only \"last month\" by day; older months by month.\nDay by day the tables of the current month gets clustered (say at 1.00AM next day).\nThen, every 1st of the month, create a new table as \n\n- create table mytable as select * from <parent_table> where time <in last month> (this gets all the data of last month ordered in the \"almost\" correct order, because all the single tables were clustered)\n- alter mytable add constraint \"time in last month\" \n- alter mytable inherit <parent_table> \n\nand then drop last month's tables.\n\nIs this even doable? I mean: between\n\n- alter mytable inherit <parent_table> \n- drop last month's tables.\n\nmore than one table with the same constraint would inherit from the same table: that's fine unless someone can see the \"change\" before the \"drop tables\" part, but I guess this shouldn't be a problem if I use the serializable transaction level.\n\nThis way I could cluster the tables (not perfectly, since I would cluster data day by day, but it's enough) and still have few tables, say (31 for current month + 23 for the past 23 months) * 200.\n\n\n\n\n\n\n\n\n\n\n \n", "msg_date": "Fri, 17 Jul 2009 07:22:42 +0000 (GMT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Scara Maccai <[email protected]> wrote:\n \n> - create table mytable as select * from <parent_table> where time\n> <in last month> (this gets all the data of last month ordered in the\n> \"almost\" correct order, because all the single tables were\n> clustered)\n \nBe sure to include an ORDER BY clause. Without that, there is no\nguarantee that even two successive SELECTs from the same table, with\nno modifications between, will return rows in the same order. For\nexample, if someone else starts a query which the planner determines\nis best handled with a table scan, and that is still running when you\nissue your INSERT/SELECT, your query will join the current scan at\nit's point of progress, and \"wrap around\" when it hits the end. Also,\nthere would be no guarantee of what order the child tables were read.\n \n-Kevin\n", "msg_date": "Fri, 17 Jul 2009 09:14:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" } ]
[ { "msg_contents": "Hi all,\n\n>On Wed, Jul 15, 2009 at 10:36 PM, Scott Marlowe <[email protected]> wrote:\n\nI'd love to see it.\n>\n\n> +1 for index organized tables \n>\n\n>--Scott\n\n+1 also for me...\n\nI am currently working for a large customer who is migrating his main application towards PostgreSQL, this application currently using DB2 and RFM-II (a RDBMS ued on Bull GCOS 8 mainframes). With both RDBMS, \"cluster index\" are used and data rows are stored taking into account these indexes. The benefits are :\n- a good performance level, especially for batch chains that more or less \"scan\" a lot of large tables,\n- and table reorganisations remain not too frequent (about once a month).\nTo keep a good performance level with PostgreSQL, I expect that we will need more frequent reorganisation operations, with the drawbacks this generates for the production schedules. This is one of the very few regressions we need to address (or may be the only one).\n\nDespite my currently limited knowledge of the postgres internals, I don't see why it should be difficult to simply adapt the logic used to determine the data row location at insert time, using something like :\n- read the cluster index to find the tid of the row having the key value just less than the key value of the row to insert,\n- if there is place enough in this same page (due to the use of FILLFACTOR or previous row deletion), use it,\n- else use the first available place using fsm.\nThis doesn't change anything on MVCC mechanism, doesn't change index structure and management, and doesn't require data row move.\nThis doesn't not ensure that all rows are allways in the \"right\" order but if the FILLFACTOR are correctly set, most rows are well stored, requiring less reorganisation.\nBut I probably miss something ;-)\n\nRegards. Philippe Beaudoin.\n\n\n\n\n\nHi all,\n \n\n>On Wed, Jul 15, 2009 at 10:36 PM, Scott Marlowe <[email protected]> \nwrote:\nI'd \n love to see it.\n>\n> +1 for index organized tables \n>\n>--Scott\n +1 also for me...\n \nI am currently working for a large customer who is migrating his main \napplication towards PostgreSQL, this application currently using DB2 \nand RFM-II (a RDBMS ued on Bull GCOS 8 mainframes). With both RDBMS, \n\"cluster index\" are used and data rows are stored taking into account \nthese indexes. The benefits are :\n- a good performance level, especially for batch chains that more or \nless \"scan\" a lot of large tables,\n- and table reorganisations remain not too frequent (about once a \nmonth).\nTo keep a good performance level with PostgreSQL, I expect that we will \nneed more frequent reorganisation operations, with the drawbacks this \ngenerates for the production schedules. 
This is one of the very few \nregressions we need to address (or may be the only one).\n \nDespite my currently limited knowledge of the postgres internals, I don't \nsee why it should be difficult to simply adapt the logic used to determine the \ndata row location at insert time, using something like :\n- read the cluster index to find the tid of the row having the key \nvalue just less than the key value of the row to insert,\n- if there is place enough in this same page (due to the use of FILLFACTOR \nor previous row deletion), use it,\n- else use the first available place using fsm.\nThis doesn't change anything on MVCC mechanism, doesn't change index \nstructure and management, and doesn't require data row move.\nThis doesn't not ensure that all rows are allways in the \"right\" order but \nif the FILLFACTOR are correctly set, most rows are well stored, requiring \nless reorganisation.\nBut I probably miss something ;-)\n \nRegards. Philippe Beaudoin.", "msg_date": "Fri, 17 Jul 2009 15:25:52 +0200", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" } ]
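For what it's worth, the FILLFACTOR knobs mentioned here already exist (8.2 and later), though today the reserved space is only consumed by updates of rows already on a page (HOT in 8.3), not by fresh inserts of neighbouring key values; that placement logic is precisely what the proposal above would add. A small sketch with made-up object names:

---
ALTER TABLE big_table SET (fillfactor = 70);   -- leave ~30% free per heap page
ALTER INDEX big_table_pkey SET (fillfactor = 70);
-- The settings affect newly written pages; REINDEX rebuilds the index
-- with the new index fillfactor.
---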
[ { "msg_contents": "\nI'm considering rewriting a postgres extension (GiST index bioseg) to make \nit use version 1 calling conventions rather than version 0.\n\nDoes anyone have any ideas/opinions/statistics on what the performance \ndifference is between the two calling conventions?\n\nMatthew\n\n-- \n Patron: \"I am looking for a globe of the earth.\"\n Librarian: \"We have a table-top model over here.\"\n Patron: \"No, that's not good enough. Don't you have a life-size?\"\n Librarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Fri, 17 Jul 2009 14:40:40 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Calling conventions" }, { "msg_contents": "On Friday 17 July 2009 16:40:40 Matthew Wakeling wrote:\n> I'm considering rewriting a postgres extension (GiST index bioseg) to make\n> it use version 1 calling conventions rather than version 0.\n>\n> Does anyone have any ideas/opinions/statistics on what the performance\n> difference is between the two calling conventions?\n\nVersion 1 is technically slower if you count the number of instructions, but \nconsidering that everyone else, including PostgreSQL itself, uses version 1, \nand version 0 has been deprecated for years and will break on some \narchitectures, it should be a no-brainer.\n", "msg_date": "Fri, 17 Jul 2009 18:32:12 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "On Fri, 17 Jul 2009, Peter Eisentraut wrote:\n> On Friday 17 July 2009 16:40:40 Matthew Wakeling wrote:\n>> I'm considering rewriting a postgres extension (GiST index bioseg) to make\n>> it use version 1 calling conventions rather than version 0.\n>>\n>> Does anyone have any ideas/opinions/statistics on what the performance\n>> difference is between the two calling conventions?\n>\n> Version 1 is technically slower if you count the number of instructions, but\n> considering that everyone else, including PostgreSQL itself, uses version 1,\n> and version 0 has been deprecated for years and will break on some\n> architectures, it should be a no-brainer.\n\nIs that so?\n\nWell, here's my problem. I have GiST index type called bioseg. I have \nimplemented the very same algorithm in both a Postgres GiST extension and \nas a standalone Java program. In general, the standalone Java program \nperforms about 100 times faster than Postgres when running a large \nindex-based nested loop join.\n\nI profiled Postgres a few weeks back, and found a large amount of time \nbeing spent in fmgr_oldstyle.\n\nOn Thu, 11 Jun 2009, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> Anyway, running opannotate seems to make it clear that time *is* spent in\n>> the gistnext function, but almost all of that is in children of the\n>> function. Lots of time is actually spent in fmgr_oldstyle though.\n>\n> So it'd be worth converting your functions to V1 style.\n\nAre you saying that it would spend just as much time in fmgr_newstyle (or \nwhatever the correct symbol is)?\n\nMatthew\n\n-- \n Contrary to popular belief, Unix is user friendly. It just happens to be\n very selective about who its friends are. 
-- Kyle Hearn\n", "msg_date": "Fri, 17 Jul 2009 17:09:22 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "Peter Eisentraut <[email protected]> writes:\n>> Does anyone have any ideas/opinions/statistics on what the performance\n>> difference is between the two calling conventions?\n\n> Version 1 is technically slower if you count the number of instructions,\n\nThat would be true if you compare version-0-to-version-0 calls (ie,\nplain old C function call) to version-1-to-version-1 calling. But\nwhat is actually happening, since everything in the backend assumes\nversion 1, is that you have version-1-to-version-0 via an interface\nlayer. Which is the worst of all possible worlds --- you have all\nthe overhead of a version-1 call plus the interface layer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jul 2009 12:34:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions " }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n \n> I have implemented the very same algorithm in both a Postgres GiST\n> extension and as a standalone Java program. In general, the\n> standalone Java program performs about 100 times faster than\n> Postgres when running a large index-based nested loop join.\n> \n> I profiled Postgres a few weeks back, and found a large amount of\n> time being spent in fmgr_oldstyle.\n \nI've seen the code in Java outperform the same code in optimized C,\nbecause the \"just in time\" compiler can generate native code optimized\nfor the actual code paths being taken rather than a compile-time guess\nat that, but a factor of 100? Something else has to be going on here\nbeyond an interface layer. Is this all in RAM with the Java code,\nversus having disk access in PostgreSQL?\n \n-Kevin\n", "msg_date": "Fri, 17 Jul 2009 12:35:51 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "On Fri, 17 Jul 2009, Kevin Grittner wrote:\n> I've seen the code in Java outperform the same code in optimized C,\n> because the \"just in time\" compiler can generate native code optimized\n> for the actual code paths being taken rather than a compile-time guess\n> at that, but a factor of 100? Something else has to be going on here\n> beyond an interface layer. Is this all in RAM with the Java code,\n> versus having disk access in PostgreSQL?\n\nYeah, it does seem a little excessive. The Java code runs all in RAM, \nversus Postgres running all from OS cache or Postgres shared buffer (bit \nhard to tell which of those two it is - there is no hard drive activity \nanyway). The Java code does no page locking, whereas Postgres does loads. \nThe Java code is emulating just the index, whereas Postgres is fetching \nthe whole row as well. However, I still struggle to accept the 100 times \nperformance difference.\n\nMatthew\n\n-- \n What goes up must come down. 
Ask any system administrator.\n", "msg_date": "Mon, 20 Jul 2009 11:21:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote:\n \n> On Fri, 17 Jul 2009, Kevin Grittner wrote:\n \n>> but a factor of 100?\n \n> The Java code runs all in RAM, versus Postgres running all from OS\n> cache or Postgres shared buffer (bit hard to tell which of those\n> two it is - there is no hard drive activity anyway). The Java code\n> does no page locking, whereas Postgres does loads. The Java code is\n> emulating just the index, whereas Postgres is fetching the whole row\n> as well.\n \nOh, well, if you load all the data into Java's heap and are accessing\nit through HashMap or similar, I guess a factor of 100 is about right.\nI see the big difference as the fact that the Java implementation is\ndealing with everything already set up in RAM, versus needing to deal\nwith a \"disk image\" format, even if it is cached. Try serializing\nthose Java objects to disk and storing the file name in the HashMap,\nretrieving and de-serializing the object for each reference. Even if\nit's all cached, I expect you'd be running about 100 times slower.\n \nThe Java heap isn't a very safe place to persist long-term data,\nhowever.\n \n-Kevin\n", "msg_date": "Mon, 20 Jul 2009 16:03:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Oh, well, if you load all the data into Java's heap and are accessing\n> it through HashMap or similar, I guess a factor of 100 is about right.\n> I see the big difference as the fact that the Java implementation is\n> dealing with everything already set up in RAM, versus needing to deal\n> with a \"disk image\" format, even if it is cached.\n\nEliminating interprocess communication overhead might have something\nto do with it, too ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Jul 2009 19:34:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions " }, { "msg_contents": "On Mon, 20 Jul 2009, Kevin Grittner wrote:\n> Oh, well, if you load all the data into Java's heap and are accessing\n> it through HashMap or similar, I guess a factor of 100 is about right.\n\nNo, that's not what I'm doing. Like I said, I have implemented the very \nsame algorithm as in Postgres, emulating index pages and all. A HashMap \nwould be unable to answer the query I am executing, but if it could it \nwould obviously be very much faster.\n\n> I see the big difference as the fact that the Java implementation is\n> dealing with everything already set up in RAM, versus needing to deal\n> with a \"disk image\" format, even if it is cached.\n\nThe java program uses as near an on-disc format as Postgres does - just \nheld in memory instead of in OS cache.\n\nMatthew\n\n-- \n Okay, I'm weird! 
But I'm saving up to be eccentric.\n", "msg_date": "Tue, 21 Jul 2009 14:50:01 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Calling conventions" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote: \n \n> I have implemented the very same algorithm as in Postgres, emulating\n> index pages and all.\n \n> The java program uses as near an on-disc format as Postgres does -\n> just held in memory instead of in OS cache.\n \nInteresting. Hard to explain without a lot more detail. Are they\nclose enough in code structure for a comparison of profiling output\nfor both to make any sense? Have you tried switching the calling\nconvention yet; and if so, what impact did that have?\n \n-Kevin\n", "msg_date": "Tue, 21 Jul 2009 09:02:02 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Calling conventions" } ]
[ { "msg_contents": "\n> Be sure to include an ORDER BY clause.  For\n> example, if someone else starts a query which the planner\n> determines\n> is best handled with a table scan, and that is still\n> running when you\n> issue your INSERT/SELECT, your query will join the current\n> scan at\n> it's point of progress, and \"wrap around\" when it hits the\n> end.  Also,\n> there would be no guarantee of what order the child tables\n> were read.\n\nIsn't it going to be much slower?\nI'm asking because I could get away in my case without the order by, I guess: I'm not trying to create a completely clustered table. The important thing is that most of the records are stored \"close\" enough one to the other in the right order.\n\n\n\n\n\n \n", "msg_date": "Fri, 17 Jul 2009 14:36:34 +0000 (GMT)", "msg_from": "Scara Maccai <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cluster index on a table" }, { "msg_contents": "Scara Maccai <[email protected]> wrote:\n \n>> Be sure to include an ORDER BY clause.\n \n> Isn't it going to be much slower?\n \nIt might be; you'd have to test to know for sure.\n \n> The important thing is that most of the records are stored \"close\"\n> enough one to the other in the right order.\n \nThen, yeah, it might not be worth the cost of sorting.\n \n-Kevin\n", "msg_date": "Fri, 17 Jul 2009 10:26:36 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cluster index on a table" } ]
[ { "msg_contents": "Hello,\n\nI'm having a bit of an issue with full text search (using tsvectors) on \nPostgreSQL 8.4. I have a rather large table (around 12M rows) and want \nto use full text search in it (just for one of the columns). Just doing \na plainto_tsquery works reasonably fast (I have a GIN index on the \ncolumn in question, \"comment_tsv\"), but it becomes unbearably slow when \nI want to make it order by another field (\"timestamp\").\n\nHere's an example query:\nSELECT * FROM a WHERE comment_tsv @@ plainto_tsquery('love') ORDER BY \ntimestamp DESC LIMIT 24 OFFSET 0;\n\nI asked in #postgresql and was told that there are two possible plans \nfor this query; the first scans the BTREE timestamp index, gets the \nordering and then filters out the rows using text search; the second \nfinds all rows matching the text search using the GIN index and then \nsorts them according to that field -- this much I already knew, in fact, \nI had to drop the BTREE index on \"timestamp\" to prevent the planner from \nchoosing the first, since the first plan is completely useless to me, \nconsidering the table is so huge (suggestions on how to prevent the \nplanner from picking the \"wrong\" plan are also appreciated).\n\nObviously, this gets really slow when I try to search for common words \nand full text search returns a lot of rows to be ordered.\n\nI tried to make a GIN index on (\"timestamp\", \"comment_tsv\"), (using \nbtree_gin from contrib) but it doesn't really do anything -- I was told \non IRC this is because GIN doesn't provide support for ORDER BY, only \nBTREE can do that.\n\nHere's a couple of queries:\n\narchive=> explain analyze select * from a where comment_tsv @@ \nplainto_tsquery('love') order by timestamp desc limit 24 offset 0;\n\nQUERY PLAN\n----------\n Limit (cost=453248.73..453248.79 rows=24 width=281) (actual \ntime=188441.047..188441.148 rows=24 loops=1)\n -> Sort (cost=453248.73..453882.82 rows=253635 width=281) (actual \ntime=188441.043..188441.079 rows=24 loops=1)\n Sort Key: \"timestamp\"\n Sort Method: top-N heapsort Memory: 42kB\n -> Bitmap Heap Scan on a (cost=17782.16..446166.02 \nrows=253635 width=281) (actual time=2198.930..187948.050 rows=256378 \nloops=1)\n Recheck Cond: (comment_tsv @@ plainto_tsquery('love'::text))\n -> Bitmap Index Scan on timestamp_comment_gin \n(cost=0.00..17718.75 rows=253635 width=0) (actual \ntime=2113.664..2113.664 rows=259828 loops=1)\n Index Cond: (comment_tsv @@ \nplainto_tsquery('love'::text))\n Total runtime: 188442.617 ms\n(9 rows)\n\narchive=> explain analyze select * from a where comment_tsv @@ \nplainto_tsquery('love') limit 24 offset 0;\n\nQUERY PLAN\n----------\n Limit (cost=0.00..66.34 rows=24 width=281) (actual \ntime=14.632..53.647 rows=24 loops=1)\n -> Seq Scan on a (cost=0.00..701071.49 rows=253635 width=281) \n(actual time=14.629..53.588 rows=24 loops=1)\n Filter: (comment_tsv @@ plainto_tsquery('love'::text))\n Total runtime: 53.731 ms\n(4 rows)\n\nFirst one runs painfully slow.\n\nIs there really no way to have efficient full text search results \nordered by a separate field? 
I'm really open to all possibilities, at \nthis point.\n\nThanks.\n", "msg_date": "Sat, 18 Jul 2009 23:07:56 +0100", "msg_from": "Krade <[email protected]>", "msg_from_op": true, "msg_subject": "Full text search with ORDER BY performance issue" }, { "msg_contents": "Krade,\n\nOn Sat, 18 Jul 2009, Krade wrote:\n\n> Here's a couple of queries:\n>\n> archive=> explain analyze select * from a where comment_tsv @@ \n> plainto_tsquery('love') order by timestamp desc limit 24 offset 0;\n>\n> QUERY PLAN\n> ----------\n> Limit (cost=453248.73..453248.79 rows=24 width=281) (actual \n> time=188441.047..188441.148 rows=24 loops=1)\n> -> Sort (cost=453248.73..453882.82 rows=253635 width=281) (actual \n> time=188441.043..188441.079 rows=24 loops=1)\n> Sort Key: \"timestamp\"\n> Sort Method: top-N heapsort Memory: 42kB\n> -> Bitmap Heap Scan on a (cost=17782.16..446166.02 rows=253635 \n> width=281) (actual time=2198.930..187948.050 rows=256378 loops=1)\n> Recheck Cond: (comment_tsv @@ plainto_tsquery('love'::text))\n> -> Bitmap Index Scan on timestamp_comment_gin \n> (cost=0.00..17718.75 rows=253635 width=0) (actual time=2113.664..2113.664 \n> rows=259828 loops=1)\n> Index Cond: (comment_tsv @@ \n> plainto_tsquery('love'::text))\n> Total runtime: 188442.617 ms\n> (9 rows)\n>\n> archive=> explain analyze select * from a where comment_tsv @@ \n> plainto_tsquery('love') limit 24 offset 0;\n>\n> QUERY PLAN\n> ----------\n> Limit (cost=0.00..66.34 rows=24 width=281) (actual time=14.632..53.647 \n> rows=24 loops=1)\n> -> Seq Scan on a (cost=0.00..701071.49 rows=253635 width=281) (actual \n> time=14.629..53.588 rows=24 loops=1)\n> Filter: (comment_tsv @@ plainto_tsquery('love'::text))\n> Total runtime: 53.731 ms\n> (4 rows)\n>\n> First one runs painfully slow.\n\nHmm, everything is already written in explain :) In the first query \n253635 rows should be readed from disk and sorted, while in the\nsecond query only 24 (random) rows readed from disk, so there is 4 magnitudes\ndifference and in the worst case you should expected time for the 1st query\nabout 53*10^4 ms.\n\n>\n> Is there really no way to have efficient full text search results ordered by \n> a separate field? 
I'm really open to all possibilities, at this point.\n>\n> Thanks.\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Mon, 20 Jul 2009 16:12:20 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Sun, Jul 19, 2009 at 12:07 AM, Krade<[email protected]> wrote:\n> archive=> explain analyze select * from a where  comment_tsv @@\n> plainto_tsquery('love') order by timestamp desc limit 24 offset 0;\n\nWhat happens if you make it:\n\n\nselect * from (\n select * from a where comment_tsv @@plainto_tsquery('love')\n) xx\n\norder by xx.timestamp desc\nlimit 24 offset 0;\n\n?\n", "msg_date": "Mon, 20 Jul 2009 14:22:03 +0200", "msg_from": "=?UTF-8?Q?Marcin_St=C4=99pnicki?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Hello, thanks for your replies.\n\n\nOn 7/20/2009 13:12, Oleg Bartunov wrote:\n\n> Hmm, everything is already written in explain :) In the first query \n> 253635 rows should be readed from disk and sorted, while in the\n> second query only 24 (random) rows readed from disk, so there is 4 \n> magnitudes\n> difference and in the worst case you should expected time for the 1st \n> query\n> about 53*10^4 ms.\nYes, I do realize the first query is retrieving all the rows that match \nthe full text search and sorting them, that's what I wanted to avoid. :) \nSince I only want 24 results at a time, I wanted to avoid having to get \nall the rows and sort them. I was wondering if there was any way to use, \nsay, some index combination I'm not aware of, cluster the table \naccording to an index or using a different query to get the same results.\n\nWell, to take advantage of the gin index on (timestamp, comment_tsv), I \nsuppose could do something like this:\narchive=> explain analyze select * from a where comment_tsv @@ \nplainto_tsquery('love') and timestamp > cast(floor(extract(epoch from \nCURRENT_TIMESTAMP) - 864000) as integer) order by timestamp limit 24 \noffset 0;\n\nQUERY PLAN\n------------------\nLimit (cost=17326.69..17326.75 rows=24 width=281) (actual \ntime=3249.192..3249.287 rows=24 loops=1)\n-> Sort (cost=17326.69..17337.93 rows=4499 width=281) (actual \ntime=3249.188..3249.221 rows=24 loops=1)\nSort Key: \"timestamp\"\nSort Method: top-N heapsort Memory: 39kB\n-> Bitmap Heap Scan on a (cost=408.80..17201.05 rows=4499 width=281) \n(actual time=3223.890..3240.484 rows=5525 loops=1)\nRecheck Cond: ((\"timestamp\" > (floor((date_part('epoch'::text, now()) - \n864000::double precision)))::integer) AND (comment_tsv @@ \nplainto_tsquery('love'::text)))\n-> Bitmap Index Scan on timestamp_comment_gin (cost=0.00..407.67 \nrows=4499 width=0) (actual time=3222.769..3222.769 rows=11242 loops=1)\nIndex Cond: ((\"timestamp\" > (floor((date_part('epoch'::text, now()) - \n864000::double precision)))::integer) AND (comment_tsv @@ \nplainto_tsquery('love'::text)))\nTotal runtime: 3249.957 ms\n(9 rows)\n\nWhich only looks at the last 10 days and is considerably faster. Not \nperfect, but a lot better. 
But this requires a lot of application logic, \nfor example, if I didn't get 24 results in the first query, I'd have to \nreissue the query with a larger time interval and it gets worse pretty \nfast. It strikes me as a really dumb thing to do.\n\nI'm really hitting a brick wall here, I can't seem to be able to provide \nreasonably efficient full text search that is ordered by date rather \nthan random results from the database.\n\nOn 7/20/2009 13:22, Marcin Stępnicki wrote:\n> What happens if you make it:\n>\n>\n> select * from (\n> select * from a where comment_tsv @@plainto_tsquery('love')\n> ) xx\n>\n> order by xx.timestamp desc\n> limit 24 offset 0;\n>\n> ?\nSame query plan, I'm afraid.\n", "msg_date": "Mon, 20 Jul 2009 21:48:54 +0100", "msg_from": "Krade <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Krade <[email protected]> wrote:\n> SELECT * FROM a WHERE comment_tsv @@ plainto_tsquery('love')\n> ORDER BY timestamp DESC LIMIT 24 OFFSET 0;\n \nHave you considered keeping rows \"narrow\" until you've identified your\n24 rows? Something like:\n \nSELECT * FROM a \n WHERE id in\n (\n SELECT id FROM a\n WHERE comment_tsv @@ plainto_tsquery('love')\n ORDER BY timestamp DESC\n LIMIT 24 OFFSET 0\n )\n ORDER BY timestamp DESC\n;\n \n-Kevin\n", "msg_date": "Mon, 20 Jul 2009 16:42:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Hello,\n\nOn 7/20/2009 22:42, Kevin Grittner wrote:\n> Have you considered keeping rows \"narrow\" until you've identified your\n> 24 rows? Something like:\n>\n> SELECT * FROM a\n> WHERE id in\n> (\n> SELECT id FROM a\n> WHERE comment_tsv @@ plainto_tsquery('love')\n> ORDER BY timestamp DESC\n> LIMIT 24 OFFSET 0\n> )\n> ORDER BY timestamp DESC\n> ;\n> \nGood idea, but it doesn't really seem to do much. The query times are \nroughly the same.\n\n", "msg_date": "Tue, 21 Jul 2009 01:25:02 +0100", "msg_from": "Krade <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Krade wrote:\n> SELECT * FROM a WHERE comment_tsv @@ plainto_tsquery('love') ORDER BY \n> timestamp DESC LIMIT 24 OFFSET 0;\n\nHave you tried make the full-text condition in a subselect with \"offset \n0\" to stop the plan reordering?\n\neg:\n\nselect *\nfrom (\n select * from a where comment_tsv @@ plainto_tsquery('love')\n offset 0\n) xx\norder by timestamp DESC\nlimit 24\noffset 0;\n\n\nSee http://blog.endpoint.com/2009/04/offset-0-ftw.html\n\n-- \n-Devin\n", "msg_date": "Mon, 20 Jul 2009 18:13:43 -0700", "msg_from": "Devin Ben-Hur <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On 7/21/2009 2:13, Devin Ben-Hur wrote:\n> Have you tried make the full-text condition in a subselect with \n> \"offset 0\" to stop the plan reordering?\n>\n> eg:\n>\n> select *\n> from (\n> select * from a where comment_tsv @@ plainto_tsquery('love')\n> offset 0\n> ) xx\n> order by timestamp DESC\n> limit 24\n> offset 0;\n>\n>\n> See http://blog.endpoint.com/2009/04/offset-0-ftw.html\nYes, that does force the planner to always pick the full text index \nfirst rather than the timestamp index. I managed to force that by doing \nsomething a lot more drastic, I just dropped my timestamp index \naltogether, since I never used it for anything else. 
(I mentioned this \nin my original post)\n\nThough, that comment did make me try to readd it. I was pretty \nsurprised, the planner was only doing backward searches on the timestamp \nindex for very common words (therefore turning multi-minute queries into \nvery fast ones), as opposed to trying to use the timestamp index for all \nqueries. I wonder if this is related to tweaks to the planner in 8.4 or \nif it was just my statistics that got balanced out.\n\nI'm not entirely happy, because I still easily get minute long queries \non common words, but where the planner choses to not use the timestamp \nindex. The planner can't guess right all the time.\n\nBut I think I might just do:\nselect * from a where comment_tsv @@ plainto_tsquery('query') and \ntimestamp > cast(floor(extract(epoch from CURRENT_TIMESTAMP) - 864000) \nas integer) order by timestamp desc limit 24 offset 0;\n\nAnd if I get less than 24 rows, issue the regular query:\n\nselect * from a where comment_tsv @@ plainto_tsquery('query') order by \ntimestamp desc limit 24 offset 0;\n\nI pay the price of doing two queries when I could have done just one, \nand it does make almost all queries about 200 ms slower, but it does so \nat the expense of turning the few very slow queries into quick ones.\n\nThanks for all the help.\n", "msg_date": "Tue, 21 Jul 2009 04:35:11 +0100", "msg_from": "Krade <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Mon, Jul 20, 2009 at 9:35 PM, Krade<[email protected]> wrote:\n\n> But I think I might just do:\n> select * from a where comment_tsv @@ plainto_tsquery('query') and timestamp\n>> cast(floor(extract(epoch from CURRENT_TIMESTAMP) - 864000) as integer)\n> order by timestamp desc limit 24 offset 0;\n>\n> And if I get less than 24 rows, issue the regular query:\n>\n> select * from a where comment_tsv @@ plainto_tsquery('query') order by\n> timestamp desc limit 24 offset 0;\n\nCouldn't you do tge second query as a with query then run another\nquery to limit that result to everything greater than now()-xdays ?\n", "msg_date": "Mon, 20 Jul 2009 22:06:49 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Jul 21, 6:06 am, [email protected] (Scott Marlowe) wrote:\n> On Mon, Jul 20, 2009 at 9:35 PM, Krade<[email protected]> wrote:\n> > But I think I might just do:\n> > select * from a where comment_tsv @@ plainto_tsquery('query') and timestamp\n> >> cast(floor(extract(epoch from CURRENT_TIMESTAMP) - 864000) as integer)\n> > order by timestamp desc limit 24 offset 0;\n>\n> > And if I get less than 24 rows, issue the regular query:\n>\n> > select * from a where comment_tsv @@ plainto_tsquery('query') order by\n> > timestamp desc limit 24 offset 0;\n>\n> Couldn't you do tge second query as a with query then run another\n> query to limit that result to everything greater than now()-xdays ?\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nHi,\n\nThere is a problem with GIN and GIST indexes, that they cannot be used\nby the ORDER BY. Maybe it will be a nice idea to ask Oleg to make it\npossible to use the b-tree columns in GIST or GIN to make the sort\neasier, but I have no idea how difficult it will be to implement it in\ncurrent GIN or GIST structures. 
I think Oleg or even Tom will be the\nright people to ask it :) But even if it is possible it will not be\nimplemented at least until 8.5 that will need a year to come, so until\nthen...\n\nIt is possible to strip your table in several smaller ones putting\nthem on different machines and then splitting your query with DBLINK.\nThis will distribute the burden of sorting to several machines that\nwill have to sort smaller parts as well. After you have your 25 ids\nfrom each of the machines, you can merge them, sort again and limit as\nyou wish. Doing large offsets will be still problematic but faster\nanyway in most reasonable offset ranges. (Load balancing tools like\npg_pool can automate this task, but I do not have practical experience\nusing them for that purposes)\n\nYet another very interesting technology -- sphinx search (http://\nwww.sphinxsearch.com/). It can distribute data on several machines\nautomatically, but it will be probably too expensive to start using\n(if your task is not your main one :)) as they do not have standard\nautomation scripts, it does not support live updates (so you will\nalways have some minutes delay), and this is a standalone service,\nthat needs to be maintained and configured and synchronized with our\nmain database separately (though you can use pg/python to access it\nfrom postgres).\n\nGood luck with your task :)\n\n-- Valentine Gogichashvili\n", "msg_date": "Tue, 21 Jul 2009 03:32:39 -0700 (PDT)", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Tue, 21 Jul 2009, valgog wrote:\n> There is a problem with GIN and GIST indexes, that they cannot be used\n> by the ORDER BY. Maybe it will be a nice idea to ask Oleg to make it\n> possible to use the b-tree columns in GIST or GIN to make the sort\n> easier, but I have no idea how difficult it will be to implement it in\n> current GIN or GIST structures. I think Oleg or even Tom will be the\n> right people to ask it :)\n\nI can answer that one for GiST, having had a good look at GiST recently. \nThere is simply no useful information about order in a GiST index for it \nto be used by an ORDER BY. The index structure is just too general, \nbecause it needs to cope with the situation where a particular object type \ndoes not have a well defined order, or where the \"order\" is unuseful for \nindexing.\n\nMatthew\n\n-- \n A good programmer is one who looks both ways before crossing a one-way street.\n Considering the quality and quantity of one-way streets in Cambridge, it\n should be no surprise that there are so many good programmers there.\n", "msg_date": "Tue, 21 Jul 2009 15:01:43 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On 7/21/2009 11:32, valgog wrote:\n> Hi,\n>\n> There is a problem with GIN and GIST indexes, that they cannot be used\n> by the ORDER BY. Maybe it will be a nice idea to ask Oleg to make it\n> possible to use the b-tree columns in GIST or GIN to make the sort\n> easier, but I have no idea how difficult it will be to implement it in\n> current GIN or GIST structures. 
I think Oleg or even Tom will be the\n> right people to ask it :) But even if it is possible it will not be\n> implemented at least until 8.5 that will need a year to come, so until\n> then...\n> \nUnfortunately, it's not even just the lack of ORDER BY support, \nbtree_gin indexes seem to be broken under some circumstances. So I can't \neven use my idea to limit searches to the last 10 days.\n\nSee this:\nhttp://pgsql.privatepaste.com/5219TutUMk\n\nThe first query gives bogus results. It's not using the index correctly.\n\ntimestamp_comment_gin is a GIN index on timestamp, comment_tsv. The \ntimestamp column is an integer. The queries work right if I drop the \nindex. Is this a bug in btree_gin?\n> It is possible to strip your table in several smaller ones putting\n> them on different machines and then splitting your query with DBLINK.\n> This will distribute the burden of sorting to several machines that\n> will have to sort smaller parts as well. After you have your 25 ids\n> from each of the machines, you can merge them, sort again and limit as\n> you wish. Doing large offsets will be still problematic but faster\n> anyway in most reasonable offset ranges. (Load balancing tools like\n> pg_pool can automate this task, but I do not have practical experience\n> using them for that purposes)\n>\n> Yet another very interesting technology -- sphinx search (http://\n> www.sphinxsearch.com/). It can distribute data on several machines\n> automatically, but it will be probably too expensive to start using\n> (if your task is not your main one :)) as they do not have standard\n> automation scripts, it does not support live updates (so you will\n> always have some minutes delay), and this is a standalone service,\n> that needs to be maintained and configured and synchronized with our\n> main database separately (though you can use pg/python to access it\n> from postgres).\n>\n> Good luck with your task :)\nYeah, I don't really have that sort of resources. This is a small hobby \nproject (ie: no budget) that is growing a bit too large. I might just \nhave to do text searches without time ordering.\n\nOn 7/21/2009 5:06, Scott Marlowe wrote:\n> Couldn't you do tge second query as a with query then run another\n> query to limit that result to everything greater than now()-xdays ?\n> \nI suppose I could, but I have no way to do a fast query that does both a \nfull text match and a < or > in the same WHERE due to the issue I \ndescribed above, so my original plan won't work. A separate BTREE \ntimestamp index obviously does nothing.\n\nAnd again, thank you for all the help.\n", "msg_date": "Tue, 21 Jul 2009 18:10:29 +0100", "msg_from": "Krade <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Tue, 21 Jul 2009, Krade wrote:\n\n> On 7/21/2009 11:32, valgog wrote:\n>> Hi,\n>> \n>> There is a problem with GIN and GIST indexes, that they cannot be used\n>> by the ORDER BY. Maybe it will be a nice idea to ask Oleg to make it\n>> possible to use the b-tree columns in GIST or GIN to make the sort\n>> easier, but I have no idea how difficult it will be to implement it in\n>> current GIN or GIST structures. 
I think Oleg or even Tom will be the\n>> right people to ask it :) But even if it is possible it will not be\n>> implemented at least until 8.5 that will need a year to come, so until\n>> then...\n>> \n> Unfortunately, it's not even just the lack of ORDER BY support, btree_gin \n> indexes seem to be broken under some circumstances. So I can't even use my \n> idea to limit searches to the last 10 days.\n>\n> See this:\n> http://pgsql.privatepaste.com/5219TutUMk\n>\n> The first query gives bogus results. It's not using the index correctly.\n>\n> timestamp_comment_gin is a GIN index on timestamp, comment_tsv. The timestamp \n> column is an integer. The queries work right if I drop the index. Is this a \n> bug in btree_gin?\n\nit'd be nice if you provide us data,so we can reproduce your problem\n\n>> It is possible to strip your table in several smaller ones putting\n>> them on different machines and then splitting your query with DBLINK.\n>> This will distribute the burden of sorting to several machines that\n>> will have to sort smaller parts as well. After you have your 25 ids\n>> from each of the machines, you can merge them, sort again and limit as\n>> you wish. Doing large offsets will be still problematic but faster\n>> anyway in most reasonable offset ranges. (Load balancing tools like\n>> pg_pool can automate this task, but I do not have practical experience\n>> using them for that purposes)\n>> \n>> Yet another very interesting technology -- sphinx search (http://\n>> www.sphinxsearch.com/). It can distribute data on several machines\n>> automatically, but it will be probably too expensive to start using\n>> (if your task is not your main one :)) as they do not have standard\n>> automation scripts, it does not support live updates (so you will\n>> always have some minutes delay), and this is a standalone service,\n>> that needs to be maintained and configured and synchronized with our\n>> main database separately (though you can use pg/python to access it\n>> from postgres).\n>> \n>> Good luck with your task :)\n> Yeah, I don't really have that sort of resources. This is a small hobby \n> project (ie: no budget) that is growing a bit too large. I might just have to \n> do text searches without time ordering.\n>\n> On 7/21/2009 5:06, Scott Marlowe wrote:\n>> Couldn't you do tge second query as a with query then run another\n>> query to limit that result to everything greater than now()-xdays ?\n>> \n> I suppose I could, but I have no way to do a fast query that does both a full \n> text match and a < or > in the same WHERE due to the issue I described above, \n> so my original plan won't work. 
A separate BTREE timestamp index obviously \n> does nothing.\n>\n> And again, thank you for all the help.\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Tue, 21 Jul 2009 23:27:28 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "On Mon, Jul 20, 2009 at 8:12 AM, Oleg Bartunov<[email protected]> wrote:\n>> Here's a couple of queries:\n>>\n>> archive=> explain analyze select * from a where  comment_tsv @@\n>> plainto_tsquery('love') order by timestamp desc limit 24 offset 0;\n>>\n>> QUERY PLAN\n>> ----------\n>> Limit  (cost=453248.73..453248.79 rows=24 width=281) (actual\n>> time=188441.047..188441.148 rows=24 loops=1)\n>>  ->  Sort  (cost=453248.73..453882.82 rows=253635 width=281) (actual\n>> time=188441.043..188441.079 rows=24 loops=1)\n>>        Sort Key: \"timestamp\"\n>>        Sort Method:  top-N heapsort  Memory: 42kB\n>>        ->  Bitmap Heap Scan on a  (cost=17782.16..446166.02 rows=253635\n>> width=281) (actual time=2198.930..187948.050 rows=256378 loops=1)\n>>              Recheck Cond: (comment_tsv @@ plainto_tsquery('love'::text))\n>>              ->  Bitmap Index Scan on timestamp_comment_gin\n>> (cost=0.00..17718.75 rows=253635 width=0) (actual time=2113.664..2113.664\n>> rows=259828 loops=1)\n>>                    Index Cond: (comment_tsv @@\n>> plainto_tsquery('love'::text))\n>> Total runtime: 188442.617 ms\n>> (9 rows)\n>>\n>> archive=> explain analyze select * from a where  comment_tsv @@\n>> plainto_tsquery('love') limit 24 offset 0;\n>>\n>> QUERY PLAN\n>> ----------\n>> Limit  (cost=0.00..66.34 rows=24 width=281) (actual time=14.632..53.647\n>> rows=24 loops=1)\n>>  ->  Seq Scan on a  (cost=0.00..701071.49 rows=253635 width=281) (actual\n>> time=14.629..53.588 rows=24 loops=1)\n>>        Filter: (comment_tsv @@ plainto_tsquery('love'::text))\n>> Total runtime: 53.731 ms\n>> (4 rows)\n>>\n>> First one runs painfully slow.\n>\n> Hmm, everything is already written in explain :) In the first query 253635\n> rows should be readed from disk and sorted, while in the\n> second query only 24 (random) rows readed from disk, so there is 4\n> magnitudes\n> difference and in the worst case you should expected time for the 1st query\n> about 53*10^4 ms.\n\nIf love is an uncommon word, there's no help for queries of this type\nbeing slow unless the GIN index can return the results in order. But\nif love is a common word, then it would be faster to do an index scan\nby timestamp on the baserel and then treat comment_tsv @@\nplainto_tsquery('love') as a filter condition. Is this a selectivity\nestimation bug?\n\n...Robert\n", "msg_date": "Wed, 29 Jul 2009 10:02:11 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "\n> If love is an uncommon word, there's no help for queries of this type\n> being slow unless the GIN index can return the results in order. But\n> if love is a common word, then it would be faster to do an index scan\n> by timestamp on the baserel and then treat comment_tsv @@\n> plainto_tsquery('love') as a filter condition. 
Is this a selectivity\n> estimation bug?\n\n\tIf you have really lots of documents to index (this seems the case) \nperhaps you should consider Xapian. It is very easy to use (although, of \ncourse, tsearch integrated in Postgres is much easier since you have \nnothing to install), and it is *incredibly* fast.\n\n\tIn my tests (2 years ago) with many gigabytes of stuff to search into, \ndifferences became obvious when the data set is much bigger than RAM.\n\t- Postgres' fulltext was 10-100x faster than MySQL fulltext on searches \n(lol) (and even a lot \"more faster\" on INSERTs...)\n\t- and Xapian was 10-100 times faster than Postgres' fulltext.\n\n\t(on a small table which fits in RAM, differences are small).\n\t\n\tOf course Xapian is not Postgres when you talk about update \nconcurrency..........\n\t(single writer => fulltext index updating background job is needed, a \nsimple Python script does the job)\n", "msg_date": "Wed, 29 Jul 2009 16:18:38 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> If love is an uncommon word, there's no help for queries of this type\n> being slow unless the GIN index can return the results in order. But\n> if love is a common word, then it would be faster to do an index scan\n> by timestamp on the baserel and then treat comment_tsv @@\n> plainto_tsquery('love') as a filter condition. Is this a selectivity\n> estimation bug?\n\nDoesn't look like it: estimated number of matches is 253635, actual is\n259828, which is really astonishingly close considering what we have to\nwork with. It's not clear though what fraction of the total that\nrepresents.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jul 2009 10:22:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue " }, { "msg_contents": "On Wed, Jul 29, 2009 at 10:22 AM, Tom Lane<[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> If love is an uncommon word, there's no help for queries of this type\n>> being slow unless the GIN index can return the results in order.  But\n>> if love is a common word, then it would be faster to do an index scan\n>> by timestamp on the baserel and then treat comment_tsv @@\n>> plainto_tsquery('love') as a filter condition.  Is this a selectivity\n>> estimation bug?\n>\n> Doesn't look like it: estimated number of matches is 253635, actual is\n> 259828, which is really astonishingly close considering what we have to\n> work with.  It's not clear though what fraction of the total that\n> represents.\n\nHmm, good point. It seems like it would be useful to force the\nplanner into use the other plan and get EXPLAIN ANALYZE output for\nthat for comparison purposes, but off the top of my head I don't know\nhow to do that.\n\n...Robert\n", "msg_date": "Wed, 29 Jul 2009 11:13:34 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Hmm, good point. 
It seems like it would be useful to force the\n> planner into use the other plan and get EXPLAIN ANALYZE output for\n> that for comparison purposes, but off the top of my head I don't know\n> how to do that.\n\nThe standard way is\n\n\tbegin;\n\tdrop index index_you_dont_want_used;\n\texplain problem-query;\n\trollback;\n\nAin't transactional DDL wonderful?\n\n(If this is a production system, you do have to worry about the DROP\ntransiently locking the table; but if you put the above in a script\nrather than doing it by hand, it should be fast enough to not be a big\nproblem.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jul 2009 11:29:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue " }, { "msg_contents": "On Wed, Jul 29, 2009 at 11:29 AM, Tom Lane<[email protected]> wrote:\n> Ain't transactional DDL wonderful?\n\nYes. :-)\n\n...Robert\n", "msg_date": "Wed, 29 Jul 2009 11:37:14 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" }, { "msg_contents": "I hate to be \"that guy\" but, Is this is still an issue 5 years later?? I\ncan't seem to get Gin/btree to use my ORDER BY column with a LIMIT no matter\nwhat I try.\n\nMy best idea was to cluster the database by the ORDER BY column and then\njust hope the index returns them in the order in the table...\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Full-text-search-with-ORDER-BY-performance-issue-tp2074171p5813083.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Jul 2014 12:55:13 -0700 (PDT)", "msg_from": "worthy7 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Full text search with ORDER BY performance issue" } ]
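The two-step fallback Krade settles on above is easy to sketch in SQL. This reuses the table and column names from the thread (a, comment_tsv, and an epoch-integer timestamp), uses 'query' as a placeholder search term, and leaves the fewer-than-24-rows decision to application code:

-- Step 1: restrict to the last 10 days (864000 seconds) so far fewer matching
-- rows have to be fetched and sorted before applying the LIMIT.
SELECT * FROM a
WHERE comment_tsv @@ plainto_tsquery('query')
  AND timestamp > cast(floor(extract(epoch FROM CURRENT_TIMESTAMP) - 864000) AS integer)
ORDER BY timestamp DESC LIMIT 24 OFFSET 0;

-- Step 2: issued only when step 1 returns fewer than 24 rows, falling back to
-- the unrestricted (and potentially slow) form.
SELECT * FROM a
WHERE comment_tsv @@ plainto_tsquery('query')
ORDER BY timestamp DESC LIMIT 24 OFFSET 0;

Every search pays for the possible extra round trip, but only searches that find too few recent matches fall through to the expensive second query, which is the trade-off Krade describes.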
[ { "msg_contents": "Hi. I notice that when I do a WHERE x, Postgres uses an index, and when I do\nWHERE y, it does so as well, but when I do WHERE x OR y, it doesn't. Why is\nthis so? And how can I shut this off?\nselect * from dict\nwhere\n word in (select substr('moon', 0, generate_series(3,length('moon')))) --\nthis is my X above\n OR word like 'moon%' -- this is my Y above\n\nSeq Scan on dict (cost=0.02..2775.66 rows=30422 width=24) (actual\ntime=16.635..28.580 rows=8 loops=1)\n Filter: ((hashed subplan) OR ((word)::text ~~ 'moon%'::text))\n SubPlan\n -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.014..0.019 rows=2\nloops=1)\nTotal runtime: 28.658 ms\n(Using just X or Y alone uses the index, and completes in 0.150 ms)\n\nIs this a bug?\n\nPS Running \"PostgreSQL 8.2.1 on i686-pc-mingw32, compiled by GCC gcc.exe\n(GCC) 3.4.2 (mingw-special)\"\n\nHi. I notice that when I do a WHERE x, Postgres uses an index, and when I do WHERE y, it does so as well, but when I do WHERE x OR y, it doesn't. Why is this so? And how can I shut this off?select * from dict\nwhere  word in (select substr('moon', 0, generate_series(3,length('moon')))) -- this is my X above OR word like 'moon%' -- this is my Y aboveSeq Scan on dict (cost=0.02..2775.66 rows=30422 width=24) (actual time=16.635..28.580 rows=8 loops=1)\n Filter: ((hashed subplan) OR ((word)::text ~~ 'moon%'::text)) SubPlan -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.014..0.019 rows=2 loops=1)Total runtime: 28.658 ms(Using just X or Y alone uses the index, and completes in 0.150 ms)\nIs this a bug?PS Running \"PostgreSQL 8.2.1 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)\"", "msg_date": "Sun, 19 Jul 2009 19:03:09 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Can Postgres use an INDEX over an OR?" }, { "msg_contents": "2009/7/20 Robert James <[email protected]>\n\n>\n> Hi. I notice that when I do a WHERE x, Postgres uses an index, and when I\n> do WHERE y, it does so as well, but when I do WHERE x OR y, it doesn't. Why\n> is this so?\n\n\nIt's not clever enough.\n\nAnd how can I shut this off?\n\n\nUse UNION/UNION ALL if possible in your case.\n\n2009/7/20 Robert James <[email protected]>\nHi. I notice that when I do a WHERE x, Postgres uses an index, and when I do WHERE y, it does so as well, but when I do WHERE x OR y, it doesn't. Why is this so? It's not clever enough. \nAnd how can I shut this off?Use UNION/UNION ALL if possible in your case.", "msg_date": "Mon, 20 Jul 2009 09:18:43 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "Віталій Тимчишин wrote:\n> \n> \n> 2009/7/20 Robert James <[email protected] \n> <mailto:[email protected]>>\n> \n> \n> Hi. I notice that when I do a WHERE x, Postgres uses an index, and\n> when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n> it doesn't. Why is this so? 
\n> \n> \n> It's not clever enough.\n\nOf course it is.\n\nI'm running 8.3.7.\n\ncreate table t1(id int primary key);\ninsert into t1(id) select a from generate_series(1, 500000) as s(a);\nanalyze t1;\n\nexplain analyze select * from t1 where id=5000 or id=25937;\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on t1 (cost=8.60..16.44 rows=2 width=4) (actual \ntime=0.077..0.083 rows=2 loops=1)\n Recheck Cond: ((id = 5000) OR (id = 25937))\n -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual \ntime=0.063..0.063 rows=0 loops=1)\n -> Bitmap Index Scan on t1_pkey (cost=0.00..4.30 rows=1 \nwidth=0) (actual time=0.034..0.034 rows=1 loops=1)\n Index Cond: (id = 5000)\n -> Bitmap Index Scan on t1_pkey (cost=0.00..4.30 rows=1 \nwidth=0) (actual time=0.021..0.021 rows=1 loops=1)\n Index Cond: (id = 25937)\n Total runtime: 0.153 ms\n(8 rows)\n\nWhat Robert didn't post was his query, see\n\nhttp://archives.postgresql.org/pgsql-general/2009-07/msg00767.php\n\nwhich makes it a lot harder to 'optimize' since they aren't straight \nforward conditions.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Mon, 20 Jul 2009 18:02:12 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "Query is:\nselect * from dict\nwhere\n word in (select substr('moon', 0, generate_series(3,length('moon')))) --\nthis is my X above\n OR word like 'moon%' -- this is my Y above\n\ndict is indexed on word\n2009/7/20 Chris <[email protected]>\n\n> 2009/7/20 Robert James <[email protected] <mailto:\n> [email protected]>>\n>\n>\n> Hi. I notice that when I do a WHERE x, Postgres uses an index, and\n> when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n> it doesn't. Why is this so?\n>\n> What Robert didn't post was his query, see\n>\n> http://archives.postgresql.org/pgsql-general/2009-07/msg00767.php\n>\n> which makes it a lot harder to 'optimize' since they aren't straight\n> forward conditions.\n>\n\nQuery is:select * from dictwhere  word in (select substr('moon', 0, generate_series(3,length('moon')))) -- this is my X above OR word like 'moon%' -- this is my Y above\ndict is indexed on word2009/7/20 Chris <[email protected]>\n2009/7/20 Robert James <[email protected] <mailto:[email protected]>>\n\n\n\n    Hi. I notice that when I do a WHERE x, Postgres uses an index, and\n    when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n    it doesn't. Why is this so? \n\nWhat Robert didn't post was his query, see\n\nhttp://archives.postgresql.org/pgsql-general/2009-07/msg00767.php\n\nwhich makes it a lot harder to 'optimize' since they aren't straight forward conditions.", "msg_date": "Mon, 20 Jul 2009 09:25:48 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "20 липня 2009 р. 11:02 Chris <[email protected]> написав:\n\n> Віталій Тимчишин wrote:\n>\n>>\n>>\n>> 2009/7/20 Robert James <[email protected] <mailto:\n>> [email protected]>>\n>>\n>>\n>> Hi. I notice that when I do a WHERE x, Postgres uses an index, and\n>> when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n>> it doesn't. 
Why is this so?\n>>\n>> It's not clever enough.\n>>\n>\n> Of course it is.\n\n\nFor simple cases\n\n\n>\n> I'm running 8.3.7.\n>\n> create table t1(id int primary key);\n> insert into t1(id) select a from generate_series(1, 500000) as s(a);\n> analyze t1;\n>\n\nexplain analyze select * from t1 where\nid < 10000\n\n\"Index Scan using t1_pkey on t1 (cost=0.00..322.51 rows=9612 width=4)\n(actual time=0.030..3.700 rows=9999 loops=1)\"\n\" Index Cond: (id < 10000)\"\n\"Total runtime: 4.835 ms\"\n\nexplain analyze select * from t1 where\nid in (select (random() * 500000)::int4 from generate_series(0,10))\n\n\"Nested Loop (cost=32.50..1341.49 rows=200 width=4) (actual\ntime=15.353..67.014 rows=11 loops=1)\"\n\" -> HashAggregate (cost=32.50..34.50 rows=200 width=4) (actual\ntime=0.028..0.043 rows=11 loops=1)\"\n\" -> Function Scan on generate_series (cost=0.00..20.00 rows=1000\nwidth=0) (actual time=0.014..0.020 rows=11 loops=1)\"\n\" -> Index Scan using t1_pkey on t1 (cost=0.00..6.52 rows=1 width=4)\n(actual time=6.083..6.084 rows=1 loops=11)\"\n\" Index Cond: (t1.id = (((random() * 500000::double\nprecision))::integer))\"\n\"Total runtime: 67.070 ms\"\n\nexplain analyze select * from t1 where\nid in (select (random() * 500000)::int4 from generate_series(0,10))\nor\nid < 10000\n\n\"Seq Scan on t1 (cost=22.50..9735.50 rows=254806 width=4) (actual\ntime=0.049..148.947 rows=10010 loops=1)\"\n\" Filter: ((hashed subplan) OR (id < 10000))\"\n\" SubPlan\"\n\" -> Function Scan on generate_series (cost=0.00..20.00 rows=1000\nwidth=0) (actual time=0.014..0.019 rows=11 loops=1)\"\n\"Total runtime: 150.123 ms\"\n\nexplain analyze\nselect * from t1 where\nid in (select (random() * 500000)::int4 from generate_series(0,10))\nunion\nselect * from t1 where\nid < 10000\n\n\"Unique (cost=2412.68..2461.74 rows=9812 width=4) (actual\ntime=89.190..95.014 rows=10010 loops=1)\"\n\" -> Sort (cost=2412.68..2437.21 rows=9812 width=4) (actual\ntime=89.189..91.167 rows=10010 loops=1)\"\n\" Sort Key: public.t1.id\"\n\" Sort Method: quicksort Memory: 854kB\"\n\" -> Append (cost=32.50..1762.13 rows=9812 width=4) (actual\ntime=16.641..76.338 rows=10010 loops=1)\"\n\" -> Nested Loop (cost=32.50..1341.49 rows=200 width=4)\n(actual time=16.641..70.051 rows=11 loops=1)\"\n\" -> HashAggregate (cost=32.50..34.50 rows=200 width=4)\n(actual time=0.033..0.049 rows=11 loops=1)\"\n\" -> Function Scan on generate_series\n(cost=0.00..20.00 rows=1000 width=0) (actual time=0.020..0.026 rows=11\nloops=1)\"\n\" -> Index Scan using t1_pkey on t1 (cost=0.00..6.52\nrows=1 width=4) (actual time=6.359..6.361 rows=1 loops=11)\"\n\" Index Cond: (public.t1.id = (((random() *\n500000::double precision))::integer))\"\n\" -> Index Scan using t1_pkey on t1 (cost=0.00..322.51\nrows=9612 width=4) (actual time=0.023..4.075 rows=9999 loops=1)\"\n\" Index Cond: (id < 10000)\"\n\"Total runtime: 112.694 ms\"\n\nSo, if it founds out anything complex, it sadly falls back to Sequence scan.\n\n20 липня 2009 р. 11:02 Chris <[email protected]> написав:\nВіталій Тимчишин wrote:\n\n\n\n2009/7/20 Robert James <[email protected] <mailto:[email protected]>>\n\n\n\n    Hi. I notice that when I do a WHERE x, Postgres uses an index, and\n    when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n    it doesn't. Why is this so? 
\n\nIt's not clever enough.\n\n\nOf course it is.For simple cases \n\nI'm running 8.3.7.\n\ncreate table t1(id int primary key);\ninsert into t1(id) select a from generate_series(1, 500000) as s(a);\nanalyze t1;explain analyze select * from t1 where id < 10000\"Index Scan using t1_pkey on t1  (cost=0.00..322.51 rows=9612 width=4) (actual time=0.030..3.700 rows=9999 loops=1)\"\n\"  Index Cond: (id < 10000)\"\"Total runtime: 4.835 ms\"explain analyze select * from t1 where id in (select (random() * 500000)::int4 from generate_series(0,10))\"Nested Loop  (cost=32.50..1341.49 rows=200 width=4) (actual time=15.353..67.014 rows=11 loops=1)\"\n\"  ->  HashAggregate  (cost=32.50..34.50 rows=200 width=4) (actual time=0.028..0.043 rows=11 loops=1)\"\"        ->  Function Scan on generate_series  (cost=0.00..20.00 rows=1000 width=0) (actual time=0.014..0.020 rows=11 loops=1)\"\n\"  ->  Index Scan using t1_pkey on t1  (cost=0.00..6.52 rows=1 width=4) (actual time=6.083..6.084 rows=1 loops=11)\"\"        Index Cond: (t1.id = (((random() * 500000::double precision))::integer))\"\n\"Total runtime: 67.070 ms\"explain analyze select * from t1 where id in (select (random() * 500000)::int4 from generate_series(0,10))or id < 10000\"Seq Scan on t1  (cost=22.50..9735.50 rows=254806 width=4) (actual time=0.049..148.947 rows=10010 loops=1)\"\n\"  Filter: ((hashed subplan) OR (id < 10000))\"\"  SubPlan\"\"    ->  Function Scan on generate_series  (cost=0.00..20.00 rows=1000 width=0) (actual time=0.014..0.019 rows=11 loops=1)\"\n\"Total runtime: 150.123 ms\"explain analyze select * from t1 where id in (select (random() * 500000)::int4 from generate_series(0,10))unionselect * from t1 where id < 10000\"Unique  (cost=2412.68..2461.74 rows=9812 width=4) (actual time=89.190..95.014 rows=10010 loops=1)\"\n\"  ->  Sort  (cost=2412.68..2437.21 rows=9812 width=4) (actual time=89.189..91.167 rows=10010 loops=1)\"\"        Sort Key: public.t1.id\"\"        Sort Method:  quicksort  Memory: 854kB\"\n\"        ->  Append  (cost=32.50..1762.13 rows=9812 width=4) (actual time=16.641..76.338 rows=10010 loops=1)\"\"              ->  Nested Loop  (cost=32.50..1341.49 rows=200 width=4) (actual time=16.641..70.051 rows=11 loops=1)\"\n\"                    ->  HashAggregate  (cost=32.50..34.50 rows=200 width=4) (actual time=0.033..0.049 rows=11 loops=1)\"\"                          ->  Function Scan on generate_series  (cost=0.00..20.00 rows=1000 width=0) (actual time=0.020..0.026 rows=11 loops=1)\"\n\"                    ->  Index Scan using t1_pkey on t1  (cost=0.00..6.52 rows=1 width=4) (actual time=6.359..6.361 rows=1 loops=11)\"\"                          Index Cond: (public.t1.id = (((random() * 500000::double precision))::integer))\"\n\"              ->  Index Scan using t1_pkey on t1  (cost=0.00..322.51 rows=9612 width=4) (actual time=0.023..4.075 rows=9999 loops=1)\"\"                    Index Cond: (id < 10000)\"\"Total runtime: 112.694 ms\"\nSo, if it founds out anything complex, it sadly falls back to Sequence scan.", "msg_date": "Mon, 20 Jul 2009 18:05:19 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "2009/7/20 Віталій Тимчишин <[email protected]>:\n> 20 липня 2009 р. 11:02 Chris <[email protected]> написав:\n>>\n>> Віталій Тимчишин wrote:\n>>>\n>>>\n>>> 2009/7/20 Robert James <[email protected]\n>>> <mailto:[email protected]>>\n>>>\n>>>\n>>>    Hi. 
I notice that when I do a WHERE x, Postgres uses an index, and\n>>>    when I do WHERE y, it does so as well, but when I do WHERE x OR y,\n>>>    it doesn't. Why is this so?\n>>>\n>>> It's not clever enough.\n>>\n>> Of course it is.\n>\n> For simple cases\n>\n>>\n>>\n>> I'm running 8.3.7.\n>>\n>> create table t1(id int primary key);\n>> insert into t1(id) select a from generate_series(1, 500000) as s(a);\n>> analyze t1;\n>\n> explain analyze select * from t1 where\n> id < 10000\n>\n> \"Index Scan using t1_pkey on t1  (cost=0.00..322.51 rows=9612 width=4)\n> (actual time=0.030..3.700 rows=9999 loops=1)\"\n> \"  Index Cond: (id < 10000)\"\n> \"Total runtime: 4.835 ms\"\n>\n> explain analyze select * from t1 where\n> id in (select (random() * 500000)::int4 from generate_series(0,10))\n>\n> \"Nested Loop  (cost=32.50..1341.49 rows=200 width=4) (actual\n> time=15.353..67.014 rows=11 loops=1)\"\n> \"  ->  HashAggregate  (cost=32.50..34.50 rows=200 width=4) (actual\n> time=0.028..0.043 rows=11 loops=1)\"\n> \"        ->  Function Scan on generate_series  (cost=0.00..20.00 rows=1000\n> width=0) (actual time=0.014..0.020 rows=11 loops=1)\"\n> \"  ->  Index Scan using t1_pkey on t1  (cost=0.00..6.52 rows=1 width=4)\n> (actual time=6.083..6.084 rows=1 loops=11)\"\n> \"        Index Cond: (t1.id = (((random() * 500000::double\n> precision))::integer))\"\n> \"Total runtime: 67.070 ms\"\n>\n> explain analyze select * from t1 where\n> id in (select (random() * 500000)::int4 from generate_series(0,10))\n> or\n> id < 10000\n>\n> \"Seq Scan on t1  (cost=22.50..9735.50 rows=254806 width=4) (actual\n> time=0.049..148.947 rows=10010 loops=1)\"\n> \"  Filter: ((hashed subplan) OR (id < 10000))\"\n> \"  SubPlan\"\n> \"    ->  Function Scan on generate_series  (cost=0.00..20.00 rows=1000\n> width=0) (actual time=0.014..0.019 rows=11 loops=1)\"\n> \"Total runtime: 150.123 ms\"\n>\n> explain analyze\n> select * from t1 where\n> id in (select (random() * 500000)::int4 from generate_series(0,10))\n> union\n> select * from t1 where\n> id < 10000\n>\n> \"Unique  (cost=2412.68..2461.74 rows=9812 width=4) (actual\n> time=89.190..95.014 rows=10010 loops=1)\"\n> \"  ->  Sort  (cost=2412.68..2437.21 rows=9812 width=4) (actual\n> time=89.189..91.167 rows=10010 loops=1)\"\n> \"        Sort Key: public.t1.id\"\n> \"        Sort Method:  quicksort  Memory: 854kB\"\n> \"        ->  Append  (cost=32.50..1762.13 rows=9812 width=4) (actual\n> time=16.641..76.338 rows=10010 loops=1)\"\n> \"              ->  Nested Loop  (cost=32.50..1341.49 rows=200 width=4)\n> (actual time=16.641..70.051 rows=11 loops=1)\"\n> \"                    ->  HashAggregate  (cost=32.50..34.50 rows=200 width=4)\n> (actual time=0.033..0.049 rows=11 loops=1)\"\n> \"                          ->  Function Scan on generate_series\n> (cost=0.00..20.00 rows=1000 width=0) (actual time=0.020..0.026 rows=11\n> loops=1)\"\n> \"                    ->  Index Scan using t1_pkey on t1  (cost=0.00..6.52\n> rows=1 width=4) (actual time=6.359..6.361 rows=1 loops=11)\"\n> \"                          Index Cond: (public.t1.id = (((random() *\n> 500000::double precision))::integer))\"\n> \"              ->  Index Scan using t1_pkey on t1  (cost=0.00..322.51\n> rows=9612 width=4) (actual time=0.023..4.075 rows=9999 loops=1)\"\n> \"                    Index Cond: (id < 10000)\"\n> \"Total runtime: 112.694 ms\"\n\nHmm. 
What you're suggesting here is that we could consider\nimplementing OR conditions by rescanning the inner side for each index\nqual and then unique-ifying the results on the index column. That's\nprobably possible, but it doesn't sound easy, especially since our\nselectivity-estimation code for OR conditions is not very good, so we\nmight choose to do it this way when that's not actually the best plan.\n\n...Robert\n", "msg_date": "Mon, 27 Jul 2009 06:53:06 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "27 липня 2009 р. 13:53 Robert Haas <[email protected]> написав:\n\n>\n> Hmm. What you're suggesting here is that we could consider\n> implementing OR conditions by rescanning the inner side for each index\n> qual and then unique-ifying the results on the index column. That's\n> probably possible, but it doesn't sound easy, especially since our\n> selectivity-estimation code for OR conditions is not very good, so we\n> might choose to do it this way when that's not actually the best plan.\n>\n> ...Robert\n>\n\nActually what I am talking about is to make OR with UNION (or UNION-like\nbecause it's a little different depending on input rows uniqueness) as an\noption. All of OR parts can use/not use different strategies (including\nmultiple different idexes or hash joins).\nIn cases when conditions are complex this can drastically increase\nperformance by winning over sequence scan.\n\nAs of selectivity, I'd say this is general problem - sometimes it is\nestimated OK, sometimes not, but this should not prevent from trying\ndifferent plans. (From my current work: it does wrong estimations of filter\nselectivity, introduces HASH join and kills the server with OOM).\n\nBest regards, Vitaliy Tymchyshyn.\n\n27 липня 2009 р. 13:53 Robert Haas <[email protected]> написав:\n\nHmm.  What you're suggesting here is that we could consider\nimplementing OR conditions by rescanning the inner side for each index\nqual and then unique-ifying the results on the index column.  That's\nprobably possible, but it doesn't sound easy, especially since our\nselectivity-estimation code for OR conditions is not very good, so we\nmight choose to do it this way when that's not actually the best plan.\n\n...Robert\nActually what I am talking about is to make OR with UNION (or UNION-like because it's a little different depending on input rows uniqueness) as an option. All of OR parts can use/not use different strategies (including multiple different idexes or hash joins).\nIn cases when conditions are complex this can drastically increase performance by winning over sequence scan.As of selectivity, I'd say this is general problem - sometimes it is estimated OK, sometimes not, but this should not prevent from trying different plans. (From my current work: it does wrong estimations of filter selectivity, introduces HASH join and kills the server with OOM).\nBest regards, Vitaliy Tymchyshyn.", "msg_date": "Mon, 27 Jul 2009 14:37:14 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "2009/7/27 Віталій Тимчишин <[email protected]>:\n>\n>\n> 27 липня 2009 р. 13:53 Robert Haas <[email protected]> написав:\n>>\n>> Hmm.  
What you're suggesting here is that we could consider\n>> implementing OR conditions by rescanning the inner side for each index\n>> qual and then unique-ifying the results on the index column.  That's\n>> probably possible, but it doesn't sound easy, especially since our\n>> selectivity-estimation code for OR conditions is not very good, so we\n>> might choose to do it this way when that's not actually the best plan.\n>>\n>> ...Robert\n>\n> Actually what I am talking about is to make OR with UNION (or UNION-like\n> because it's a little different depending on input rows uniqueness) as an\n> option. All of OR parts can use/not use different strategies (including\n> multiple different idexes or hash joins).\n> In cases when conditions are complex this can drastically increase\n> performance by winning over sequence scan.\n\nThat's exactly what I was talking about.\n\n> As of selectivity, I'd say this is general problem - sometimes it is\n> estimated OK, sometimes not, but this should not prevent from trying\n> different plans. (From my current work: it does wrong estimations of filter\n> selectivity, introduces HASH join and kills the server with OOM).\n\nYep. I think the two things that would help the most with this are\nmulti-column statistics and some kind of cache, but AFAIK nobody is\nactively developing code for either one ATM.\n\nThe problem, though, is that it won't ALWAYS be right to implement OR\nusing UNION, so you have to have some way of deciding which is better.\n\n...Robert\n", "msg_date": "Mon, 27 Jul 2009 08:02:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "27 липня 2009 р. 15:02 Robert Haas <[email protected]> написав:\n\n>\n> The problem, though, is that it won't ALWAYS be right to implement OR\n> using UNION, so you have to have some way of deciding which is better.\n>\n\nThat's easy - you propose both ways to planner and it's up to it to decide.\nYes, it can decide wrong way, but we are returning to statistics problem. At\nleast one can tune costs and enable_ settings. Now one have to rewrite query\nthat may be not possible/too complex.\n\n27 липня 2009 р. 15:02 Robert Haas <[email protected]> написав:\n\nThe problem, though, is that it won't ALWAYS be right to implement OR\nusing UNION, so you have to have some way of deciding which is better.\nThat's easy - you propose both ways to planner and it's up to it to decide. Yes, it can decide wrong way, but we are returning to statistics problem. At least one can tune costs and enable_ settings. Now one have to rewrite query that may be not possible/too complex.", "msg_date": "Mon, 27 Jul 2009 15:59:17 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" }, { "msg_contents": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> Actually what I am talking about is to make OR with UNION (or UNION-like\n> because it's a little different depending on input rows uniqueness) as an\n> option. All of OR parts can use/not use different strategies (including\n> multiple different idexes or hash joins).\n\nAFAICS you're proposing re-inventing the old implementation of OR'd\nindexscans. 
We took that out when we added bitmap scans because it\ndidn't have any performance advantage over BitmapOr.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Jul 2009 10:18:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR? " }, { "msg_contents": "27 липня 2009 р. 17:18 Tom Lane <[email protected]> написав:\n\n> =?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> > Actually what I am talking about is to make OR with UNION (or UNION-like\n> > because it's a little different depending on input rows uniqueness) as an\n> > option. All of OR parts can use/not use different strategies (including\n> > multiple different idexes or hash joins).\n>\n> AFAICS you're proposing re-inventing the old implementation of OR'd\n> indexscans. We took that out when we added bitmap scans because it\n> didn't have any performance advantage over BitmapOr.\n>\n\nIt's not tied to indexscans at all. Different parts can do (as in UNION)\ntotally different strategy - e.g. perform two hash joins or perform merge\njoin for one part and nested loop for another or ...\n\nAs of performance - see above in this thread. UNION now often provides much\nbetter performance when different parts of OR expression involve different\nadditional tables.\n\n27 липня 2009 р. 17:18 Tom Lane <[email protected]> написав:\n=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]> writes:\n> Actually what I am talking about is to make OR with UNION (or UNION-like\n> because it's a little different depending on input rows uniqueness) as an\n> option. All of OR parts can use/not use different strategies (including\n> multiple different idexes or hash joins).\n\nAFAICS you're proposing re-inventing the old implementation of OR'd\nindexscans.  We took that out when we added bitmap scans because it\ndidn't have any performance advantage over BitmapOr.\nIt's not tied to indexscans at all. Different parts can do (as in UNION) totally different strategy - e.g. perform two hash joins or perform merge join for one part and nested loop for another or ... \nAs of performance - see above in this thread. UNION now often provides much better performance when different parts of OR expression involve different additional tables.", "msg_date": "Mon, 27 Jul 2009 17:33:32 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can Postgres use an INDEX over an OR?" } ]
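Applied to the query that opened the thread, the UNION rewrite suggested above looks roughly like this. It is only a sketch: UNION also removes duplicate rows, so it matches the OR semantics exactly only when the qualifying dict rows are distinct.

-- Each arm is planned separately, so each can use the index on dict(word)
-- just as the standalone X and Y queries already do:
SELECT * FROM dict
WHERE word IN (SELECT substr('moon', 0, generate_series(3, length('moon'))))
UNION
SELECT * FROM dict
WHERE word LIKE 'moon%';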
[ { "msg_contents": "I'm storing a lot of words in a database. What's the fastest format for\nfinding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and WHERE\nw IN ('m', 'ma'). All characters are lowercase a-z, no punctuation, no\nother alphabets. By default I'm using varchar in utf-8 encoding, but was\nwondering if I could specificy something else (perhaps 7bit ascii, perhaps\nlowercase only) that would speed things up even further.\n\nI'm storing a lot of words in a database.  What's the fastest format for finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and WHERE w IN ('m', 'ma').  All characters are lowercase a-z, no punctuation, no other alphabets.  By default I'm using varchar in utf-8 encoding, but was wondering if I could specificy something else (perhaps 7bit ascii, perhaps lowercase only) that would speed things up even further.", "msg_date": "Sun, 19 Jul 2009 21:46:53 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Fastest char datatype" }, { "msg_contents": "On Monday 20 July 2009 04:46:53 Robert James wrote:\n> I'm storing a lot of words in a database. What's the fastest format for\n> finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and\n> WHERE w IN ('m', 'ma'). All characters are lowercase a-z, no punctuation,\n> no other alphabets. By default I'm using varchar in utf-8 encoding, but\n> was wondering if I could specificy something else (perhaps 7bit ascii,\n> perhaps lowercase only) that would speed things up even further.\n\nIf your data is only lowercase a-z, as you say, then the binary representation \nwill be the same in all server-side encodings, because they are all supersets \nof ASCII.\n\nThese concerns will likely be dominated by the question of proper indexing and \ncaching anyway.\n", "msg_date": "Mon, 20 Jul 2009 09:02:34 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest char datatype" }, { "msg_contents": "Is there a way to use a more compact encoding? I only need 4 bits per char -\nthat would certainly help caching. (I have indexes tuned very well,\nalready).\n\nOn Mon, Jul 20, 2009 at 2:02 AM, Peter Eisentraut <[email protected]> wrote:\n\n> On Monday 20 July 2009 04:46:53 Robert James wrote:\n> > I'm storing a lot of words in a database. What's the fastest format for\n> > finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and\n> > WHERE w IN ('m', 'ma'). All characters are lowercase a-z, no\n> punctuation,\n> > no other alphabets. By default I'm using varchar in utf-8 encoding, but\n> > was wondering if I could specificy something else (perhaps 7bit ascii,\n> > perhaps lowercase only) that would speed things up even further.\n>\n> If your data is only lowercase a-z, as you say, then the binary\n> representation\n> will be the same in all server-side encodings, because they are all\n> supersets\n> of ASCII.\n>\n>\n\nIs there a way to use a more compact encoding? I only need 4 bits per char - that would certainly help caching.  (I have indexes tuned very well, already).On Mon, Jul 20, 2009 at 2:02 AM, Peter Eisentraut <[email protected]> wrote:\nOn Monday 20 July 2009 04:46:53 Robert James wrote:\n> I'm storing a lot of words in a database.  What's the fastest format for\n> finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and\n> WHERE w IN ('m', 'ma').  All characters are lowercase a-z, no punctuation,\n> no other alphabets.  
By default I'm using varchar in utf-8 encoding, but\n> was wondering if I could specificy something else (perhaps 7bit ascii,\n> perhaps lowercase only) that would speed things up even further.\n\nIf your data is only lowercase a-z, as you say, then the binary representation\nwill be the same in all server-side encodings, because they are all supersets\nof ASCII.", "msg_date": "Mon, 20 Jul 2009 09:23:46 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fastest char datatype" }, { "msg_contents": "On Sun, Jul 19, 2009 at 9:46 PM, Robert James<[email protected]> wrote:\n> I'm storing a lot of words in a database.  What's the fastest format for\n> finding them? I'm going to be doing a lot of WHERE w LIKE 'marsh%' and WHERE\n> w IN ('m', 'ma').  All characters are lowercase a-z, no punctuation, no\n> other alphabets.  By default I'm using varchar in utf-8 encoding, but was\n> wondering if I could specificy something else (perhaps 7bit ascii, perhaps\n> lowercase only) that would speed things up even further.\n\nAll the charater types are basically the same except for char(n) which\npads out the string on disk. Reading downthread, [a-z] needs more\nthan 4 bits (4 bits could only represent 16 characters). 5 bits is a\nvery awkward number in computing, which may explain why this type of\nencoding is rarely done. Coming from the 'cobol' world, where there\nwere all kinds of zany bit compressed encodings, I can tell you that\nthe trend is definitely in the other direction...standard data layouts\ncoupled with well known algorithms.\n\nAny type of simple bitwise encoding that would get you any space\nbenefit would mean converting your text fields to bytea. This would\nmean that any place you needed to deal with your text field as text\nwould require running your data through a decoder function...you would\nencode going into the field and decode going out...ugh.\n\nBetter would be to use a functional index:\ncreate index foo_idx on foo(compactify(myfield));\n\nIf you don't need index ordering, then you could swap a hash function\nfor compactify and have it return type 'int'. This should give the\nbest possible performance (probably better than the built in hash\nindex). You would probably only see a useful benefit if your average\nstring length was well over 10 characters though. In the end though,\nI bet you're best off using a vanilla text field/index unless you\nexpect your table to get really huge. PostgreSQL's btree\nimplementation is really quite amazing.\n\nmerlin\n", "msg_date": "Mon, 20 Jul 2009 17:24:57 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fastest char datatype" } ]
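A minimal sketch of the "vanilla text column plus btree" conclusion above. The words table and w column are illustrative names, and the extra text_pattern_ops index is an assumption on my part: it is only needed when the database locale is not C, since in a C locale the ordinary index already serves the prefix LIKE.

-- Ordinary btree (via the primary key) covers equality and IN lists.
CREATE TABLE words (w text PRIMARY KEY);

-- Pattern opclass makes LIKE 'prefix%' indexable under non-C locales.
CREATE INDEX words_w_pattern_idx ON words (w text_pattern_ops);

SELECT * FROM words WHERE w IN ('m', 'ma');
SELECT * FROM words WHERE w LIKE 'marsh%';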
[ { "msg_contents": "Hi,\n\nI have been playing around with PostgreSQL's XML support\nlately (cf.\n<URI:news:[email protected]>)\nand stumbled upon some performance issues related to\nXMLPARSE(). In my \"application\", the XML document is supp-\nlied as a string constant via a DBI ? parameter, for testing\npurposes I have put it into a separate table:\n\n| tim=# \\timing\n| Zeitmessung ist an.\n| tim=# SELECT LENGTH(XMLasText) FROM tmpTestData;\n| length\n| --------\n| 364446\n| (1 Zeile)\n\n| Zeit: 6,295 ms\n| tim=# SELECT SUBSTRING(XMLPARSE(DOCUMENT XMLasText)::TEXT FROM 1 FOR 1) FROM tmpTestData;\n| substring\n| -----------\n| <\n| (1 Zeile)\n\n| Zeit: 40,072 ms\n| tim=#\n\n(The SUBSTRING()s above and following are for reasons of\nbrevity only; the results are comparable when the raw XML is\nqueried.)\n\n| tim=# SELECT G.A, SUBSTRING(XMLPARSE(DOCUMENT XMLasText)::TEXT FROM 1 FOR 1) FROM generate_series(1, 10) AS G(A), tmpTestData;\n| a | substring\n| ----+-----------\n| 1 | <\n| [...]\n| 10 | <\n| (10 Zeilen)\n\n| Zeit: 416,069 ms\n| tim=# SELECT G.A, SUBSTRING(XMLPARSE(DOCUMENT XMLasText)::TEXT FROM 1 FOR 1) FROM generate_series(1, 100) AS G(A), tmpTestData;\n| a | substring\n| -----+-----------\n| 1 | <\n| [...]\n| 100 | <\n| (100 Zeilen)\n\n| Zeit: 3029,196 ms\n| tim=# SELECT G.A, SUBSTRING(XMLPARSE(DOCUMENT XMLasText)::TEXT FROM 1 FOR 1) FROM generate_series(1, 1000) AS G(A), tmpTestData;\n| a | substring\n| ------+-----------\n| 1 | <\n| 1000 | <\n| (1000 Zeilen)\n\n| Zeit: 30740,626 ms\n| tim=#\n\nIt seems that XMLPARSE() is called for every row without\nPostgreSQL realizing that it is IMMUTABLE. This even seems\nto be the case if the XMLPARSE() is part of a WHERE clause:\n\n| tim=# SELECT G.A FROM generate_series(1, 10) AS G(A) WHERE G.A::TEXT = XMLPARSE(DOCUMENT (SELECT XMLasText FROM tmpTestData))::TEXT;\n| a\n| ---\n| (0 Zeilen)\n\n| Zeit: 240,626 ms\n| tim=# SELECT G.A FROM generate_series(1, 100) AS G(A) WHERE G.A::TEXT = XMLPARSE(DOCUMENT (SELECT XMLasText FROM tmpTestData))::TEXT;\n| a\n| ---\n| (0 Zeilen)\n\n| Zeit: 2441,135 ms\n| tim=# SELECT G.A FROM generate_series(1, 1000) AS G(A) WHERE G.A::TEXT = XMLPARSE(DOCUMENT (SELECT XMLasText FROM tmpTestData))::TEXT;\n| a\n| ---\n| (0 Zeilen)\n\n| Zeit: 25228,180 ms\n| tim=#\n\nObviously, the \"problem\" can be circumvented by \"caching\"\nthe results of the XMLPARSE() in a temporary table (or even\na IMMUTABLE function?), but I would assume that this should\nbe PostgreSQL's task.\n\n Any thoughts why this is not the case already? :-)\n\nTim\n\n", "msg_date": "Mon, 20 Jul 2009 13:18:46 +0000", "msg_from": "Tim Landscheidt <[email protected]>", "msg_from_op": true, "msg_subject": "XMLPARSE() evaluated multiple times?" }, { "msg_contents": "Tim Landscheidt <[email protected]> writes:\n> It seems that XMLPARSE() is called for every row without\n> PostgreSQL realizing that it is IMMUTABLE.\n\nIndeed, the system doesn't consider it immutable. None of the examples\nyou show would benefit if it did, though.\n\nI believe there are GUC-parameter dependencies that prevent us from\ntreating it as truly immutable, but if you want to ignore that\nconsideration and force constant-folding anyway, you could wrap it\nin a SQL function that's marked as IMMUTABLE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Jul 2009 15:54:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: XMLPARSE() evaluated multiple times? " } ]
[ { "msg_contents": "Apologies for a slightly off-topic question ... a friend is overseeing the demise of a company and has several computers that they need to get rid of. She's an attorney and knows little about them except that they're IBM and cost >$50K originally. Where does one go to sell equipment like this, and/or get a rough idea of its worth?\n\nThanks,\nCraig\n", "msg_date": "Mon, 20 Jul 2009 07:29:13 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Used computers?" }, { "msg_contents": "On Mon, Jul 20, 2009 at 8:29 AM, Craig James<[email protected]> wrote:\n> Apologies for a slightly off-topic question ... a friend is overseeing the\n> demise of a company and has several computers that they need to get rid of.\n>  She's an attorney and knows little about them except that they're IBM and\n> cost >$50K originally.  Where does one go to sell equipment like this,\n> and/or get a rough idea of its worth?\n\nI generally use ebay to get an idea, especially by searching the sales\nthat are over. You can sell them there, or craig's list.\n", "msg_date": "Mon, 20 Jul 2009 09:52:05 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used computers?" }, { "msg_contents": "Craig James wrote:\n> Apologies for a slightly off-topic question ... a friend is overseeing \n> the demise of a company and has several computers that they need to get \n> rid of. She's an attorney and knows little about them except that \n> they're IBM and cost >$50K originally. Where does one go to sell \n> equipment like this, and/or get a rough idea of its worth?\n> \n> Thanks,\n> Craig\n> \n\nWhen I was looking for Sun boxes I found several places that buy/sell used hardware, some took IBM too.\n\nI googled \"buy sell used ibm\" and came up with several, like:\n\nhttp://www.usedserversystems.com/used-ibm-servers.htm\n\nYou could check with them and see what they are selling for. (And maybe what they'd buy for)\n\nAlso, there is always ebay.\n\n-Andy\n", "msg_date": "Mon, 20 Jul 2009 10:54:18 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used computers?" }, { "msg_contents": "Perhaps these guys know: http://www.recurrent.com/\n\n-kg\n\n\nOn Jul 20, 2009, at 7:29 AM, Craig James wrote:\n\n> Apologies for a slightly off-topic question ... a friend is \n> overseeing the demise of a company and has several computers that \n> they need to get rid of. She's an attorney and knows little about \n> them except that they're IBM and cost >$50K originally. Where does \n> one go to sell equipment like this, and/or get a rough idea of its \n> worth?\n>\n> Thanks,\n> Craig\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 20 Jul 2009 12:21:02 -0700", "msg_from": "Kenny Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used computers?" }, { "msg_contents": "On Mon, Jul 20, 2009 at 10:29 AM, Craig James<[email protected]> wrote:\n> Apologies for a slightly off-topic question ... a friend is overseeing the\n> demise of a company and has several computers that they need to get rid of.\n>  She's an attorney and knows little about them except that they're IBM and\n> cost >$50K originally.  
Where does one go to sell equipment like this,\n> and/or get a rough idea of its worth?\n\nWe've done business with Vibrant technologies\n(http://www.vibrant.com/)...they specialize in buying/selling used IBM\ngear. It's a lot less hassle than ebay...\n\nmerlin\n", "msg_date": "Tue, 21 Jul 2009 14:34:00 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Used computers?" } ]
[ { "msg_contents": "\nHere in the UK, we have \"Waste electrical and electronic equipment\" (WEEE) companies that'll safely destroy or sell them on for a cut of the profits.\n\n--- On Mon, 20/7/09, Craig James <[email protected]> wrote:\n\n> From: Craig James <[email protected]>\n> Subject: [PERFORM] Used computers?\n> To: [email protected]\n> Date: Monday, 20 July, 2009, 3:29 PM\n> Apologies for a slightly off-topic\n> question ... a friend is overseeing the demise of a company\n> and has several computers that they need to get rid\n> of.  She's an attorney and knows little about them\n> except that they're IBM and cost >$50K originally. \n> Where does one go to sell equipment like this, and/or get a\n> rough idea of its worth?\n> \n> Thanks,\n> Craig\n> \n> -- Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n \n", "msg_date": "Mon, 20 Jul 2009 15:55:43 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Used computers?" } ]
[ { "msg_contents": "Just wondering is the issue referenced in\nhttp://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\nis still present in 8.4 or if some tunable (or other) made the use of\nhyperthreading a non-issue. We're looking to upgrade our servers soon\nfor performance reasons and am trying to determine if more cpus (no\nHT) or less cpus (with HT) are the way to go. Thx\n\n-- \nDouglas J Hunley\nhttp://douglasjhunley.com\nTwitter: @hunleyd\n", "msg_date": "Tue, 21 Jul 2009 08:42:51 -0400", "msg_from": "Doug Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, Jul 21, 2009 at 1:42 PM, Doug Hunley<[email protected]> wrote:\n> Just wondering is the issue referenced in\n> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n> is still present in 8.4 or if some tunable (or other) made the use of\n> hyperthreading a non-issue. We're looking to upgrade our servers soon\n> for performance reasons and am trying to determine if more cpus (no\n> HT) or less cpus (with HT) are the way to go. Thx\n\nI wouldn't recommend HT CPUs at all. I think your assumption, that HT\n== CPU is wrong in first place.\nPlease read more about HT on intel's website.\n\n\n\n-- \nGJ\n", "msg_date": "Tue, 21 Jul 2009 14:53:44 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley<[email protected]> wrote:\n> Just wondering is the issue referenced in\n> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n> is still present in 8.4 or if some tunable (or other) made the use of\n> hyperthreading a non-issue. We're looking to upgrade our servers soon\n> for performance reasons and am trying to determine if more cpus (no\n> HT) or less cpus (with HT) are the way to go. Thx\n\nThis isn't really an application tunable so much as a kernel level\ntunable. PostgreSQL seems to have scaled pretty well a couple years\nago in the tweakers.net benchmark of the Sun T1 CPU with 4 threads per\ncore. However, at the time 4 AMD cores were spanking 8 Sun T1 cores\nwith 4 threads each.\n\nNow, whether or not their benchmark applies to your application only\nyou can say. Can you get machines on a 30 day trial program to\nbenchmark them and decide which to go with? I'm guessing that dual\n6core Opterons with lots of memory is the current king of the hill for\nreasonably priced pg servers that are running CPU bound loads.\n\nIf you're mostly IO bound then it really doesn't matter which CPU.\n", "msg_date": "Tue, 21 Jul 2009 08:16:32 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "2009/7/21 Grzegorz Jaśkiewicz <[email protected]>:\n> On Tue, Jul 21, 2009 at 1:42 PM, Doug Hunley<[email protected]> wrote:\n>> Just wondering is the issue referenced in\n>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>> is still present in 8.4 or if some tunable (or other) made the use of\n>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>> for performance reasons and am trying to determine if more cpus (no\n>> HT) or less cpus (with HT) are the way to go. Thx\n>\n> I wouldn't recommend HT CPUs at all. 
I think your assumption, that HT\n> == CPU is wrong in first place.\n\nNot sure the OP said that...\n", "msg_date": "Tue, 21 Jul 2009 08:17:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowe<[email protected]> wrote:\n> On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley<[email protected]> wrote:\n>> Just wondering is the issue referenced in\n>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>> is still present in 8.4 or if some tunable (or other) made the use of\n>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>> for performance reasons and am trying to determine if more cpus (no\n>> HT) or less cpus (with HT) are the way to go. Thx\n>\n> This isn't really an application tunable so much as a kernel level\n> tunable.  PostgreSQL seems to have scaled pretty well a couple years\n> ago in the tweakers.net benchmark of the Sun T1 CPU with 4 threads per\n> core.  However, at the time 4 AMD cores were spanking 8 Sun T1 cores\n> with 4 threads each.\n>\n> Now, whether or not their benchmark applies to your application only\n> you can say.  Can you get machines on a 30 day trial program to\n> benchmark them and decide which to go with?  I'm guessing that dual\n> 6core Opterons with lots of memory is the current king of the hill for\n> reasonably priced pg servers that are running CPU bound loads.\n>\n> If you're mostly IO bound then it really doesn't matter which CPU.\nUnless he is doing a lot of computations, on small sets of data.\n\n\nNow I am confused, HT is not anywhere near what 'threads' are on sparcs afaik.\n\n\n\n-- \nGJ\n", "msg_date": "Tue, 21 Jul 2009 15:36:11 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On 07/21/2009 10:36 AM, Grzegorz Jaśkiewicz wrote:\n> On Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowe<[email protected]> wrote:\n> \n>> On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley<[email protected]> wrote:\n>> \n>>> Just wondering is the issue referenced in\n>>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>>> is still present in 8.4 or if some tunable (or other) made the use of\n>>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>>> for performance reasons and am trying to determine if more cpus (no\n>>> HT) or less cpus (with HT) are the way to go. Thx\n>>> \n>> This isn't really an application tunable so much as a kernel level\n>> tunable. PostgreSQL seems to have scaled pretty well a couple years\n>> ago in the tweakers.net benchmark of the Sun T1 CPU with 4 threads per\n>> core. However, at the time 4 AMD cores were spanking 8 Sun T1 cores\n>> with 4 threads each.\n>> \n> Unless he is doing a lot of computations, on small sets of data.\n>\n>\n> Now I am confused, HT is not anywhere near what 'threads' are on sparcs afaik.\n\nFun relatively off-topic chat... :-)\n\nIntel \"HT\" provides the ability to execute two threads per CPU core at \nthe same time.\n\nSun \"CoolThreads\" provide the same capability. They have just scaled it \nfurther. 
Instead of Intel's Xeon Series 5500 with dual-processor, \nquad-core, dual-thread configuration (= 16 active threads at a time), \nSun T2+ has dual-processor, eight-core, eight-thread configuration (= \n128 active threads at a time).\n\nJust, each Sun \"CoolThread\" thread is far less capable than an Intel \n\"HT\" thread, so the comparison is really about the type of load.\n\nBut, the real point is that \"thread\" (whether \"CoolThread\" or \"HT\" \nthread), is not the same as core, which is not the same as processor. X \n2 threads is usually significantly less benefit than X 2 cores. X 2 \ncores is probably less benefit than X 2 processors.\n\nI think the Intel numbers says that Intel HT provides +15% performance \non average.\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n\n\n\n\n\n\nOn 07/21/2009 10:36 AM, Grzegorz Jaśkiewicz wrote:\n\nOn Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowe<[email protected]> wrote:\n \n\nOn Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley<[email protected]> wrote:\n \n\nJust wondering is the issue referenced in\nhttp://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\nis still present in 8.4 or if some tunable (or other) made the use of\nhyperthreading a non-issue. We're looking to upgrade our servers soon\nfor performance reasons and am trying to determine if more cpus (no\nHT) or less cpus (with HT) are the way to go. Thx\n \n\nThis isn't really an application tunable so much as a kernel level\ntunable.  PostgreSQL seems to have scaled pretty well a couple years\nago in the tweakers.net benchmark of the Sun T1 CPU with 4 threads per\ncore.  However, at the time 4 AMD cores were spanking 8 Sun T1 cores\nwith 4 threads each.\n \n\nUnless he is doing a lot of computations, on small sets of data.\n\n\nNow I am confused, HT is not anywhere near what 'threads' are on sparcs afaik.\n\n\nFun relatively off-topic chat... :-)\n\nIntel \"HT\" provides the ability to execute two threads per CPU core at\nthe same time.\n\nSun \"CoolThreads\" provide the same capability. They have just scaled it\nfurther. Instead of Intel's Xeon Series 5500 with dual-processor,\nquad-core, dual-thread configuration (= 16 active threads at a time),\nSun T2+ has dual-processor, eight-core, eight-thread configuration (=\n128 active threads at a time).\n\nJust, each Sun \"CoolThread\" thread is far less capable than an Intel\n\"HT\" thread, so the comparison is really about the type of load.\n\nBut, the real point is that \"thread\" (whether \"CoolThread\" or \"HT\"\nthread), is not the same as core, which is not the same as processor. X\n2 threads is usually significantly less benefit than X 2 cores. X 2\ncores is probably less benefit than X 2 processors.\n\nI think the Intel numbers says that Intel HT provides +15% performance\non average.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Tue, 21 Jul 2009 11:07:11 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "2009/7/21 Mark Mielke <[email protected]>:\n> On 07/21/2009 10:36 AM, Grzegorz Jaśkiewicz wrote:\n>\n> On Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowe<[email protected]>\n> wrote:\n>\n>\n> On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunley<[email protected]> wrote:\n>\n>\n> Just wondering is the issue referenced in\n> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n> is still present in 8.4 or if some tunable (or other) made the use of\n> hyperthreading a non-issue. 
We're looking to upgrade our servers soon\n> for performance reasons and am trying to determine if more cpus (no\n> HT) or less cpus (with HT) are the way to go. Thx\n>\n>\n> This isn't really an application tunable so much as a kernel level\n> tunable.  PostgreSQL seems to have scaled pretty well a couple years\n> ago in the tweakers.net benchmark of the Sun T1 CPU with 4 threads per\n> core.  However, at the time 4 AMD cores were spanking 8 Sun T1 cores\n> with 4 threads each.\n>\n>\n> Unless he is doing a lot of computations, on small sets of data.\n>\n>\n> Now I am confused, HT is not anywhere near what 'threads' are on sparcs\n> afaik.\n>\n> Fun relatively off-topic chat... :-)\n>\n> Intel \"HT\" provides the ability to execute two threads per CPU core at the\n> same time.\n>\n> Sun \"CoolThreads\" provide the same capability. They have just scaled it\n> further. Instead of Intel's Xeon Series 5500 with dual-processor, quad-core,\n> dual-thread configuration (= 16 active threads at a time), Sun T2+ has\n> dual-processor, eight-core, eight-thread configuration (= 128 active threads\n> at a time).\n>\n> Just, each Sun \"CoolThread\" thread is far less capable than an Intel \"HT\"\n> thread, so the comparison is really about the type of load.\n>\n> But, the real point is that \"thread\" (whether \"CoolThread\" or \"HT\" thread),\n> is not the same as core, which is not the same as processor. X 2 threads is\n> usually significantly less benefit than X 2 cores. X 2 cores is probably\n> less benefit than X 2 processors.\n\nActually, given the faster inter-connect speed and communication, I'd\nthink a single quad core CPU would be faster than the equivalent dual\ndual core cpu.\n\n> I think the Intel numbers says that Intel HT provides +15% performance on\n> average.\n\nIt's very dependent on work load, that's for sure. I've some things\nthat are 60 to 80% improved, others that go negative. But 15 to 40%\nis more typical.\n", "msg_date": "Tue, 21 Jul 2009 10:22:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "\n\n\nOn 7/21/09 9:22 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n>> But, the real point is that \"thread\" (whether \"CoolThread\" or \"HT\" thread),\n>> is not the same as core, which is not the same as processor. X 2 threads is\n>> usually significantly less benefit than X 2 cores. X 2 cores is probably\n>> less benefit than X 2 processors.\n> \n> Actually, given the faster inter-connect speed and communication, I'd\n> think a single quad core CPU would be faster than the equivalent dual\n> dual core cpu.\n\nIts very workload dependant and system dependant. If the dual core dual cpu\nsetup has 2x the memory bandwidth of the single quad core (Nehalem,\nOpteron), it also likely has higher memory latency and a dedicated\ninterconnect for memory and cache coherency. And so some workloads will\nfavor the low latency and others will favor more bandwidth.\n\nIf its like the older Xeons, where an extra CPU doesn't buy you more memory\nbandwidth alone (but better chipsets do), then a single quad core is usually\nfaster than dual core dual cpu (if the same chipset). 
Even more so if there\nis a lot of lock contention, since that can all be handled on the same CPU\nrather than communicating across the bus.\n\nBut back on topic for HT -- HT doesn't like spin-locks much unless they use\nthe right low level instruction sequence rather than actually spinning.\nWith the right instruction, the spin will allow the other thread to do\nwork... With the wrong one, it will tie up the pipeline. I have no idea\nwhat Postgres' spin-locks and tool chain compile down to.\n\n", "msg_date": "Tue, 21 Jul 2009 10:21:35 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "Scott Carey wrote:\n> \n> But back on topic for HT -- HT doesn't like spin-locks much unless they\n> use the right low level instruction sequence rather than actually\n> spinning. With the right instruction, the spin will allow the other\n> thread to do work... With the wrong one, it will tie up the pipeline. I\n> have no idea what Postgres' spin-locks and tool chain compile down to.\n> \nI have two hyperthreaded Xeon processors, so this machine thinks it has four \nprocessors. I have not seen the effect of spin locks with postgres. But I \ncan tell that Firefox and Thunderbird use the wrong ones. When one of these \nis having trouble accessing a site, the processor in question goes up to \n100% and the other part of the hyperthreaded processor does nothing even \nthough I run four BOINC processes that would be glad to gobble up the \ncycles. Of course, since it is common to both Firefox and Thunderbird, \nperhaps it is a problem in the name server, bind. But wherever it is, it \nbugs me.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 13:55:01 up 6 days, 3:52, 3 users, load average: 4.03, 4.25, 4.45\n", "msg_date": "Tue, 21 Jul 2009 14:02:00 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, 21 Jul 2009, Doug Hunley wrote:\n\n> Just wondering is the issue referenced in\n> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n> is still present in 8.4 or if some tunable (or other) made the use of\n> hyperthreading a non-issue. We're looking to upgrade our servers soon\n> for performance reasons and am trying to determine if more cpus (no\n> HT) or less cpus (with HT) are the way to go.\n\nIf you're talking about the hyperthreading in the latest Intel Nehalem \nprocessors, I've been seeing great PostgreSQL performance from those. \nThe kind of weird behavior the old generation hyperthreading designs had \nseems gone. You can see at \nhttp://archives.postgresql.org/message-id/[email protected] \nthat I've cleared 90K TPS on a 16 core system (2 quad-core hyperthreaded \nprocessors) running a small test using lots of parallel SELECTs. That \nwould not be possible if there were HT spinlock problems still around. \nThere have been both PostgreSQL scaling improvments and hardware \nimprovements since the 2005 messages you saw there that have combined to \nclear up the issues there. 
While true cores would still be better if \neverything else were equal, it rarely is, and I wouldn't hestitate to jump \non Intel's bandwagon right now.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 26 Jul 2009 15:52:20 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>\n>> Just wondering is the issue referenced in\n>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>> is still present in 8.4 or if some tunable (or other) made the use of\n>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>> for performance reasons and am trying to determine if more cpus (no\n>> HT) or less cpus (with HT) are the way to go.\n>\n> If you're talking about the hyperthreading in the latest Intel Nehalem\n> processors, I've been seeing great PostgreSQL performance from those.\n> The kind of weird behavior the old generation hyperthreading designs\n> had seems gone. You can see at\n> http://archives.postgresql.org/message-id/[email protected]\n> that I've cleared 90K TPS on a 16 core system (2 quad-core\n> hyperthreaded processors) running a small test using lots of parallel\n> SELECTs. That would not be possible if there were HT spinlock\n> problems still around. There have been both PostgreSQL scaling\n> improvments and hardware improvements since the 2005 messages you saw\n> there that have combined to clear up the issues there. While true\n> cores would still be better if everything else were equal, it rarely\n> is, and I wouldn't hestitate to jump on Intel's bandwagon right now.\n\nGreg, those are compelling numbers for the new Nehalem processors. \nGreat news for postgresql. Do you think it's due to the new internal\ninterconnect, that bears a strong resemblance to AMD's hypertransport\n(AMD's buzzword for borrowing lots of interconnect technology from the\nDEC alpha (EV7?)), or Intel fixing a not-so-good initial implementation\nof \"hyperthreading\" (Intel's marketing buzzword) from a few years ago. \nAlso, and this is getting maybe too far off topic, beyond the buzzwords,\nwhat IS the new \"hyperthreading\" in Nehalems? -- opportunistic\nsuperpipelined cpus?, superscalar? What's shared by the cores\n(bandwidth, cache(s))? What's changed about the new hyperthreading\nthat makes it actually seem to work (or at least not causes other\nproblems)? smarter scheduling of instructions to take advantage of\nstalls, hazards another thread's instruction stream? Fixed\ninstruction-level locking/interlocks, or avoiding locking whenever\npossible? better cache coherency mechanicms (related to the\ninterconnects)? Jedi mind tricks???\n\nI'm guessing it's the better interconnect, but work interferes with\nfinding the time to research and benchmark.\n\n\n", "msg_date": "Mon, 27 Jul 2009 11:05:45 -0700", "msg_from": "Dave Youatt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" 
}, { "msg_contents": "On Mon, 27 Jul 2009, Dave Youatt wrote:\n\n> Do you think it's due to the new internal interconnect, that bears a \n> strong resemblance to AMD's hypertransport (AMD's buzzword for borrowing \n> lots of interconnect technology from the DEC alpha (EV7?)), or Intel \n> fixing a not-so-good initial implementation of \"hyperthreading\" (Intel's \n> marketing buzzword) from a few years ago.\n\nIt certainly looks like it's Intel finally getting the interconnect right, \nbecause I'm seeing huge improvements in raw memory speeds too. That's the \none area I used to see better results from Opterons on sometimes, but \nIntel pulled way ahead on this last upgrade. The experiment I haven't \ndone yet is to turn off hyperthreading and see how much the performance \ndegrades. This is hard because I'm several thousand miles from the \nservers I'm running the tests on, which makes low level config changes \nsomewhat hairy.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 27 Jul 2009 14:24:13 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "\nOn 7/27/09 11:05 AM, \"Dave Youatt\" <[email protected]> wrote:\n\n> On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n>> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>> \n> Also, and this is getting maybe too far off topic, beyond the buzzwords,\n> what IS the new \"hyperthreading\" in Nehalems? -- opportunistic\n> superpipelined cpus?, superscalar? What's shared by the cores\n> (bandwidth, cache(s))? What's changed about the new hyperthreading\n> that makes it actually seem to work (or at least not causes other\n> problems)? smarter scheduling of instructions to take advantage of\n> stalls, hazards another thread's instruction stream? Fixed\n> instruction-level locking/interlocks, or avoiding locking whenever\n> possible? better cache coherency mechanicms (related to the\n> interconnects)? Jedi mind tricks???\n> \n\nThe Nehalems are an iteration off the \"Core\" processor line, which is a\n4-way superscalar, out of order CPU. Also, it has some very sophisticated\nmemory access reordering capability.\nSo, the HyperThreading here (Symmetric Multi-Threading, SMT, is the academic\nname) will take advantage of that processor's inefficiencies -- a mix of\nstalls due to waiting for memory, and unused execution 'width' resources.\nSo, if both threads are active and not stalled on memory access or other\nexecution bubbles, there are a lot of internal processor resources to share.\nAnd if one of them is misbehaving and spinning, it won't dominate those\nresources.\n\nOn the old Pentium-4 based HyperThreading, was also SMT, but those\nprocessors were built to be high frequency and 'narrow' in terms of\nsuperscalar execution (2-way superscalar, I believe). So the main advantage\nof HT there was that one thread could schedule work while another was\nwaiting on memory access. If both were putting demands on the core\nexecution resources there was not much to gain unless one thread stalled on\nmemory access a lot, and if one of them was spinning it would eat up most of\nthe shared resources.\n\nIn both cases, the main execution resources get split up. L1 cache,\ninstruction buffers and decoders, instruction reorder buffers, etc. 
But in\nthis release, Intel increased several of these to beyond what is optimal for\none thread, to make the HT more efficient.\n\nBut the type of applications that will benefit the most from this HT is not\nalways the same as the older one, since the two CPU lines have different\nweaknesses for SMT to mask or strengths to enhance.\n\n> I'm guessing it's the better interconnect, but work interferes with\n> finding the time to research and benchmark.\n\nThe new memory and interconnect architecture has a huge impact on\nperformance, but it is separate from the other big features (Turbo being the\nother one not discussed here). For scalability to many CPUs it is probably\nthe most significant however.\n\nNote, that these CPU's have some good power saving technology that helps\nquite a bit when idle or using just one core or thread, but when all threads\nare ramped up and all the memory banks are filled the systems draw a LOT of\npower. \n\nAMD still does quite well if you're on a power budget with their latest\nCPUs.\n\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Mon, 27 Jul 2009 12:05:47 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Mon, 27 Jul 2009, Dave Youatt wrote:\n> Greg, those are compelling numbers for the new Nehalem processors.\n> Great news for postgresql. Do you think it's due to the new internal\n> interconnect...\n\nUnlikely. Different threads on the same CPU core share their resources, so \nthey don't need an explicit communication channel at all (I'm simplifying \nmassively here). A real interconnect is only needed between CPUs and \nbetween different cores on a CPU, and of course to the outside world.\n\nScott's explanation of why SMT works better now is much more likely to be \nthe real reason.\n\nMatthew\n\n-- \n Ozzy: Life is full of disappointments.\n Millie: No it isn't - I can always fit more in.\n", "msg_date": "Tue, 28 Jul 2009 10:45:01 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, 28 Jul 2009, Matthew Wakeling wrote:\n\n> Unlikely. Different threads on the same CPU core share their resources, so \n> they don't need an explicit communication channel at all (I'm simplifying \n> massively here). A real interconnect is only needed between CPUs and between \n> different cores on a CPU, and of course to the outside world.\n\nThe question was \"why are the new CPUs benchmarking so much faster than \nthe old ones\", and I believe that's mainly because the interconnection \nboth between CPUs and between CPUs and memory are dramatically faster. \nThe SMT improvements stack on top of that, but are in my opinion \nsecondary. I base that on also seeing a dramatic improvement in memory \ntransfer speeds on the new platform, which alone might even be sufficient \nto explain the performance boost. 
I'll break the two factors apart later \nto be sure though--all the regulars on this list know where I stand on \nmeasuring performance compared with theorizing about it.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 28 Jul 2009 16:28:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Mon, Jul 27, 2009 at 2:05 PM, Dave Youatt<[email protected]> wrote:\n> On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n>> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>>\n>>> Just wondering is the issue referenced in\n>>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>>> is still present in 8.4 or if some tunable (or other) made the use of\n>>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>>> for performance reasons and am trying to determine if more cpus (no\n>>> HT) or less cpus (with HT) are the way to go.\n>>\n>> If you're talking about the hyperthreading in the latest Intel Nehalem\n>> processors, I've been seeing great PostgreSQL performance from those.\n>> The kind of weird behavior the old generation hyperthreading designs\n>> had seems gone.  You can see at\n>> http://archives.postgresql.org/message-id/[email protected]\n>> that I've cleared 90K TPS on a 16 core system (2 quad-core\n>> hyperthreaded processors) running a small test using lots of parallel\n>> SELECTs.  That would not be possible if there were HT spinlock\n>> problems still around. There have been both PostgreSQL scaling\n>> improvments and hardware improvements since the 2005 messages you saw\n>> there that have combined to clear up the issues there.  While true\n>> cores would still be better if everything else were equal, it rarely\n>> is, and I wouldn't hestitate to jump on Intel's bandwagon right now.\n>\n> Greg, those are compelling numbers for the new Nehalem processors.\n> Great news for postgresql.  Do you think it's due to the new internal\n> interconnect, that bears a strong resemblance to AMD's hypertransport\n[snip]\n\nas a point of reference, here are some numbers on a quad core system\n(2xintel 5160) using the old pgbench, scaling factor 10:\n\npgbench -S -c 16 -t 10000\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 10\nquery mode: simple\nnumber of clients: 16\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 160000/160000\ntps = 24088.807000 (including connections establishing)\ntps = 24201.820189 (excluding connections establishing)\n\nThis shows actually my system (pre-Nehalem) is pretty close clock for\nclock, albeit thats not completely fair..I'm using only 4 cores on\ndual core procs. Still, these are almost two years old now.\n\nEDIT: I see now that Greg has only 8 core system not counting\nhyperthreading...so I'm getting absolutely spanked! Go Intel!\n\nAlso, I'm absolutely dying to see some numbers on the high end\nW5580...if anybody has one, please post!\n\nmerlin\n", "msg_date": "Tue, 28 Jul 2009 16:58:51 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" 
}, { "msg_contents": "On Tue, Jul 28, 2009 at 2:58 PM, Merlin Moncure<[email protected]> wrote:\n> On Mon, Jul 27, 2009 at 2:05 PM, Dave Youatt<[email protected]> wrote:\n>> On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n>>> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>>>\n>>>> Just wondering is the issue referenced in\n>>>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>>>> is still present in 8.4 or if some tunable (or other) made the use of\n>>>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>>>> for performance reasons and am trying to determine if more cpus (no\n>>>> HT) or less cpus (with HT) are the way to go.\n>>>\n>>> If you're talking about the hyperthreading in the latest Intel Nehalem\n>>> processors, I've been seeing great PostgreSQL performance from those.\n>>> The kind of weird behavior the old generation hyperthreading designs\n>>> had seems gone.  You can see at\n>>> http://archives.postgresql.org/message-id/[email protected]\n>>> that I've cleared 90K TPS on a 16 core system (2 quad-core\n>>> hyperthreaded processors) running a small test using lots of parallel\n>>> SELECTs.  That would not be possible if there were HT spinlock\n>>> problems still around. There have been both PostgreSQL scaling\n>>> improvments and hardware improvements since the 2005 messages you saw\n>>> there that have combined to clear up the issues there.  While true\n>>> cores would still be better if everything else were equal, it rarely\n>>> is, and I wouldn't hestitate to jump on Intel's bandwagon right now.\n>>\n>> Greg, those are compelling numbers for the new Nehalem processors.\n>> Great news for postgresql.  Do you think it's due to the new internal\n>> interconnect, that bears a strong resemblance to AMD's hypertransport\n> [snip]\n>\n> as a point of reference, here are some numbers on a quad core system\n> (2xintel 5160) using the old pgbench, scaling factor 10:\n>\n> pgbench -S -c 16 -t 10000\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 10\n> query mode: simple\n> number of clients: 16\n> number of transactions per client: 10000\n> number of transactions actually processed: 160000/160000\n> tps = 24088.807000 (including connections establishing)\n> tps = 24201.820189 (excluding connections establishing)\n>\n> This shows actually my system (pre-Nehalem) is pretty close clock for\n> clock, albeit thats not completely fair..I'm using only 4 cores on\n> dual core procs.  Still, these are almost two years old now.\n>\n> EDIT: I see now that Greg has only 8 core system not counting\n> hyperthreading...so I'm getting absolutely spanked!  
Go Intel!\n>\n> Also, I'm absolutely dying to see some numbers on the high end\n> W5580...if anybody has one, please post!\n\nJust FYI, I ran the same basic test but with -c 10 since -c shouldn't\nreally be greater than -s, and got this:\n\npgbench -S -c 10 -t 10000\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 32855.677494 (including connections establishing)\ntps = 33344.826183 (excluding connections establishing)\n\nWith -s at 16 and -c at 16 I got this:\n\npgbench -S -c 16 -t 10000\nstarting vacuum...end.\ntransaction type: SELECT only\nscaling factor: 16\nnumber of clients: 16\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 160000/160000\ntps = 32822.559602 (including connections establishing)\ntps = 33266.308652 (excluding connections establishing)\n\nThat's on dual Quad-Core AMD Opteron(tm) Processor 2352 CPUs (2.2GHz)\nand 16 G ram.\n", "msg_date": "Tue, 28 Jul 2009 15:11:09 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, Jul 28, 2009 at 4:11 PM, Scott Marlowe<[email protected]> wrote:\n> On Tue, Jul 28, 2009 at 2:58 PM, Merlin Moncure<[email protected]> wrote:\n>> On Mon, Jul 27, 2009 at 2:05 PM, Dave Youatt<[email protected]> wrote:\n>>> On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n>>>> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>>>>\n>>>>> Just wondering is the issue referenced in\n>>>>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>>>>> is still present in 8.4 or if some tunable (or other) made the use of\n>>>>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>>>>> for performance reasons and am trying to determine if more cpus (no\n>>>>> HT) or less cpus (with HT) are the way to go.\n>>>>\n>>>> If you're talking about the hyperthreading in the latest Intel Nehalem\n>>>> processors, I've been seeing great PostgreSQL performance from those.\n>>>> The kind of weird behavior the old generation hyperthreading designs\n>>>> had seems gone.  You can see at\n>>>> http://archives.postgresql.org/message-id/[email protected]\n>>>> that I've cleared 90K TPS on a 16 core system (2 quad-core\n>>>> hyperthreaded processors) running a small test using lots of parallel\n>>>> SELECTs.  That would not be possible if there were HT spinlock\n>>>> problems still around. There have been both PostgreSQL scaling\n>>>> improvments and hardware improvements since the 2005 messages you saw\n>>>> there that have combined to clear up the issues there.  While true\n>>>> cores would still be better if everything else were equal, it rarely\n>>>> is, and I wouldn't hestitate to jump on Intel's bandwagon right now.\n>>>\n>>> Greg, those are compelling numbers for the new Nehalem processors.\n>>> Great news for postgresql.  Do you think it's due to the new internal\n>>> interconnect, that bears a strong resemblance to AMD's hypertransport\n\nI'd love to see some comparisons on the exact same hardware, same\nkernel and everything but with HT enabled and disabled. Don't forget\nthat newer (Linux) kernels have vastly improved SMP performance.\n\n-- \nJon\n", "msg_date": "Tue, 28 Jul 2009 16:45:51 -0500", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" 
}, { "msg_contents": "\nOn 7/28/09 1:28 PM, \"Greg Smith\" <[email protected]> wrote:\n\n> On Tue, 28 Jul 2009, Matthew Wakeling wrote:\n> \n>> Unlikely. Different threads on the same CPU core share their resources, so\n>> they don't need an explicit communication channel at all (I'm simplifying\n>> massively here). A real interconnect is only needed between CPUs and between\n>> different cores on a CPU, and of course to the outside world.\n> \n> The question was \"why are the new CPUs benchmarking so much faster than\n> the old ones\", and I believe that's mainly because the interconnection\n> both between CPUs and between CPUs and memory are dramatically faster.\n\nI believe he was answering the question \"What makes SMT work well with\nPostgres for these CPUs when it had problems on old Xeons?\" -- and that\ndoesn't have a lot to do with the interconnect or bandwidth. It may also be\na more advanced compiler / OS toolchain. Postgres 8.0 compiled on an older\nsystem and OS might not work so well with the new HT.\n\nAs for the question as to what is so good about the Nehalems -- the on-die\nmemory controller and point-to-point interprocessor interconnect is the\nbiggest performance change. Turbo and SMT are pretty good icing on the cake\nthough.\n\n\n\n\n", "msg_date": "Tue, 28 Jul 2009 15:46:44 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "\nOn 7/28/09 1:58 PM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Mon, Jul 27, 2009 at 2:05 PM, Dave Youatt<[email protected]> wrote:\n>> On 01/-10/-28163 11:59 AM, Greg Smith wrote:\n>>> On Tue, 21 Jul 2009, Doug Hunley wrote:\n>>> \n>>>> Just wondering is the issue referenced in\n>>>> http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php\n>>>> is still present in 8.4 or if some tunable (or other) made the use of\n>>>> hyperthreading a non-issue. We're looking to upgrade our servers soon\n>>>> for performance reasons and am trying to determine if more cpus (no\n>>>> HT) or less cpus (with HT) are the way to go.\n>>> \n>>> If you're talking about the hyperthreading in the latest Intel Nehalem\n>>> processors, I've been seeing great PostgreSQL performance from those.\n>>> The kind of weird behavior the old generation hyperthreading designs\n>>> had seems gone.  You can see at\n>>> http://archives.postgresql.org/message-id/alpine.GSO.2.01.0907222158050.1671\n>>> [email protected]\n>>> that I've cleared 90K TPS on a 16 core system (2 quad-core\n>>> hyperthreaded processors) running a small test using lots of parallel\n>>> SELECTs.  That would not be possible if there were HT spinlock\n>>> problems still around. There have been both PostgreSQL scaling\n>>> improvments and hardware improvements since the 2005 messages you saw\n>>> there that have combined to clear up the issues there.  While true\n>>> cores would still be better if everything else were equal, it rarely\n>>> is, and I wouldn't hestitate to jump on Intel's bandwagon right now.\n>> \n>> Greg, those are compelling numbers for the new Nehalem processors.\n>> Great news for postgresql.  
Do you think it's due to the new internal\n>> interconnect, that bears a strong resemblance to AMD's hypertransport\n> [snip]\n> \n> as a point of reference, here are some numbers on a quad core system\n> (2xintel 5160) using the old pgbench, scaling factor 10:\n> \n> pgbench -S -c 16 -t 10000\n> starting vacuum...end.\n> transaction type: SELECT only\n> scaling factor: 10\n> query mode: simple\n> number of clients: 16\n> number of transactions per client: 10000\n> number of transactions actually processed: 160000/160000\n> tps = 24088.807000 (including connections establishing)\n> tps = 24201.820189 (excluding connections establishing)\n> \n> This shows actually my system (pre-Nehalem) is pretty close clock for\n> clock, albeit thats not completely fair..I'm using only 4 cores on\n> dual core procs. Still, these are almost two years old now.\n> \n> EDIT: I see now that Greg has only 8 core system not counting\n> hyperthreading...so I'm getting absolutely spanked! Go Intel!\n> \n> Also, I'm absolutely dying to see some numbers on the high end\n> W5580...if anybody has one, please post!\n> \n> merlin\n\nNote, that a 5160 is a bit behind. The 52xx and 54xx series were a decent\nperf boost on their own, with more cache, and usually more total system\nbandwidth too (50% more than 51xx and 53xx is typical).\n\nBut the leap to 55xx is far bigger!\n\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 28 Jul 2009 15:52:47 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, 28 Jul 2009, Scott Marlowe wrote:\n\n> Just FYI, I ran the same basic test but with -c 10 since -c shouldn't\n> really be greater than -s\n\nThat's only true if you're running the TPC-B-like or other write tests, \nwhere access to the small branches table becomes a serious hotspot for \ncontention. The select-only test has no such specific restriction as it \nonly operations on the big accounts table. Often peak throughput is \ncloser to a very small multiple on the number of cores though, and \npossibly even clients=cores, presumably because it's more efficient to \napproximately peg one backend per core rather than switch among more than \none on each--reduced L1 cache contention etc. That's the behavior you \nmeasured when your test showed better results with c=10 than c=16 on a 8 \ncore system, rather than suffering less from the \"c must be < s\" \ncontention limitation.\n\nSadly I don't have or expect to have a W5580 in the near future though, \nthe X5550 @ 2.67GHz is the bang for the buck sweet spot right now and \naccordingly that's what I have in the lab at Truviso. As Merlin points \nout, that's still plenty to spank any select-only pgbench results I've \never seen. The multi-threaded pgbench batch submitted by Itagaki Takahiro \nrecently is here just in time to really exercise these new processors \nproperly.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 28 Jul 2009 19:21:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" 
}, { "msg_contents": "On Tue, Jul 28, 2009 at 5:21 PM, Greg Smith<[email protected]> wrote:\n> On Tue, 28 Jul 2009, Scott Marlowe wrote:\n>\n>> Just FYI, I ran the same basic test but with -c 10 since -c shouldn't\n>> really be greater than -s\n>\n> That's only true if you're running the TPC-B-like or other write tests,\n> where access to the small branches table becomes a serious hotspot for\n> contention.  The select-only test has no such specific restriction as it\n\nI thought so too, but my pgbench -S -c 16 was WAY faster on a -s 16 db\nthan on a -s10...\n", "msg_date": "Tue, 28 Jul 2009 17:23:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "Greg Smith wrote:\n> On Tue, 28 Jul 2009, Scott Marlowe wrote:\n> \n>> Just FYI, I ran the same basic test but with -c 10 since -c shouldn't\n>> really be greater than -s\n> \n> That's only true if you're running the TPC-B-like or other write tests, \n> where access to the small branches table becomes a serious hotspot for \n> contention. The select-only test has no such specific restriction as it \n> only operations on the big accounts table. Often peak throughput is \n> closer to a very small multiple on the number of cores though, and \n> possibly even clients=cores, presumably because it's more efficient to \n> approximately peg one backend per core rather than switch among more \n> than one on each--reduced L1 cache contention etc. That's the behavior \n> you measured when your test showed better results with c=10 than c=16 on \n> a 8 core system, rather than suffering less from the \"c must be < s\" \n> contention limitation.\n\nWell the real problem is that pgbench itself does not scale too well to \nlots of concurrent connections and/or to high transaction rates so it \nseriously skews the result. If you look \nhttp://www.kaltenbrunner.cc/blog/index.php?/archives/26-Benchmarking-8.4-Chapter-1Read-Only-workloads.html.\nIt is pretty clear that 90k(or the 83k I got due to the slower E5530) \ntps is actually a pgench limit and that the backend really can do almost \ntwice as fast (I only demonstrated ~140k tps using sysbench there but I \nlater managed to do ~160k tps with queries that are closer to what \npgbench does in the lab)\n\n\nStefan\n", "msg_date": "Wed, 29 Jul 2009 06:31:09 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:\n\n> Well the real problem is that pgbench itself does not scale too well to lots \n> of concurrent connections and/or to high transaction rates so it seriously \n> skews the result.\n\nSure, but that's what the multi-threaded pgbench code aims to fix, which \ndidn't show up until after you ran your tests. I got the 90K select TPS \nwith a completely unoptimized postgresql.conf, so that's by no means the \nbest it's possible to get out of the new pgbench code on this hardware. \nI've seen as much as a 40% improvement over the standard pgbench code in \nmy limited testing so far, and the patch author has seen a 450% one. 
You \nmight be able to see at least the same results you got from sysbench out \nof it.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 29 Jul 2009 02:04:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:\n> \n>> Well the real problem is that pgbench itself does not scale too well \n>> to lots of concurrent connections and/or to high transaction rates so \n>> it seriously skews the result.\n> \n> Sure, but that's what the multi-threaded pgbench code aims to fix, which \n> didn't show up until after you ran your tests. I got the 90K select TPS \n> with a completely unoptimized postgresql.conf, so that's by no means the \n> best it's possible to get out of the new pgbench code on this hardware. \n> I've seen as much as a 40% improvement over the standard pgbench code in \n> my limited testing so far, and the patch author has seen a 450% one. \n> You might be able to see at least the same results you got from sysbench \n> out of it.\n\noh - the 90k tps are with the new multithreaded pgbench? missed that \nfact. As you can see from my results I managed to get 83k with the 8.4 \npgbench on a slightly slower Nehalem which does not sound too impressive \nfor the new code...\n\n\nStefan\n", "msg_date": "Wed, 29 Jul 2009 08:22:07 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Wed, 29 Jul 2009, Stefan Kaltenbrunner wrote:\n\n> oh - the 90k tps are with the new multithreaded pgbench? missed that fact. As \n> you can see from my results I managed to get 83k with the 8.4 pgbench on a \n> slightly slower Nehalem which does not sound too impressive for the new \n> code...\n\nI got 96K with the default postgresql.conf - 32MB shared_buffers etc. - \nand I didn't even try to find the sweet spot yet for things like number of \nthreads, that's just the first useful number that popped out. I saw as \nmuch as 87K with the regular one too. I already planned to run the test \nset you did for comparison sake at some point.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 29 Jul 2009 02:35:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, 28 Jul 2009, Scott Carey wrote:\n> On 7/28/09 1:28 PM, \"Greg Smith\" <[email protected]> wrote:\n>> On Tue, 28 Jul 2009, Matthew Wakeling wrote:\n>>\n>>> Unlikely. Different threads on the same CPU core share their resources, so\n>>> they don't need an explicit communication channel at all (I'm simplifying\n>>> massively here). A real interconnect is only needed between CPUs and between\n>>> different cores on a CPU, and of course to the outside world.\n>>\n>> The question was \"why are the new CPUs benchmarking so much faster than\n>> the old ones\"...\n>\n> I believe he was answering the question \"What makes SMT work well with\n> Postgres for these CPUs when it had problems on old Xeons?\"\n\nExactly. Interconnects and bandwidth are going to make the CPU faster in \ngeneral, but won't have any (much?) 
effect on the relative speed with and \nwithout SMT.\n\nIf the new CPUs are four-way dispatch and the old ones were two-way \ndispatch, that easily explains why SMT is a bonus on the new CPUs. With a \ntwo-way dispatch, a single thread is likely to be able to keep both \npipelines busy most of the time. Switching on SMT will try to keep the \npipelines busy a bit more, giving a small improvement, however that \nimprovement is cancelled out by the cache being half the size for each \nthread. One of our applications ran 30% slower with SMT enabled on an old \nXeon.\n\nOn the new CPUs, it would be very hard for a single thread to keep four \nexecution pipelines busy, so switching on SMT increases the throughput in \na big way. Also, the bigger caches mean that splitting the cache in half \ndoesn't have nearly as much impact. That's why SMT is a good thing on the \nnew CPUs.\n\nHowever, SMT is always likely to slow down any process that is \nsingle-threaded, if that is the only thread doing significant work on the \nmachine. It only really shows its benefit when you have more CPU-intensive \nprocesses than real CPU cores.\n\nMatthew\n\n-- \n In the beginning was the word, and the word was unsigned,\n and the main() {} was without form and void...\n", "msg_date": "Wed, 29 Jul 2009 12:09:43 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, Jul 28, 2009 at 7:21 PM, Greg Smith<[email protected]> wrote:\n> On Tue, 28 Jul 2009, Scott Marlowe wrote:\n>\n>> Just FYI, I ran the same basic test but with -c 10 since -c shouldn't\n>> really be greater than -s\n>\n> That's only true if you're running the TPC-B-like or other write tests,\n> where access to the small branches table becomes a serious hotspot for\n> contention.  The select-only test has no such specific restriction as it\n> only operations on the big accounts table.  Often peak throughput is closer\n> to a very small multiple on the number of cores though, and possibly even\n> clients=cores, presumably because it's more efficient to approximately peg\n> one backend per core rather than switch among more than one on each--reduced\n> L1 cache contention etc.  That's the behavior you measured when your test\n> showed better results with c=10 than c=16 on a 8 core system, rather than\n> suffering less from the \"c must be < s\" contention limitation.\n>\n> Sadly I don't have or expect to have a W5580 in the near future though, the\n> X5550 @ 2.67GHz is the bang for the buck sweet spot right now and\n> accordingly that's what I have in the lab at Truviso.  As Merlin points out,\n> that's still plenty to spank any select-only pgbench results I've ever seen.\n>  The multi-threaded pgbench batch submitted by Itagaki Takahiro recently is\n> here just in time to really exercise these new processors properly.\n\nCan I trouble you for a single client run, say:\n\npgbench -S -c 1 -t 250000\n\nI'd like to see how much of your improvement comes from SMT and how\nmuch comes from general improvements to the cpu...\n\nmerlin\n", "msg_date": "Wed, 29 Jul 2009 09:39:17 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" } ]
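Note on the pgbench modes discussed in the thread above: the difference between the select-only and TPC-B-like tests comes down to the statements each transaction issues. A minimal sketch, not quoted from the thread, assuming the 8.4-era pgbench schema (pgbench_accounts / pgbench_branches; older releases used unprefixed table names):

-- select-only mode (-S): each transaction is one indexed primary-key read against
-- the large accounts table, so throughput is dominated by per-statement CPU and
-- locking overhead rather than I/O.
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

-- the TPC-B-like default additionally updates the tiny branches table, which has
-- only one row per unit of scale (-s); with more clients (-c) than scale, clients
-- serialize on the same rows, which is the "c should not exceed s" caveat above.
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;

(:aid, :bid and :delta are pgbench script variables chosen at random for each transaction.)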
[ { "msg_contents": "Hi\n\nI'm storing historical meteorological gridded data from GFS (\nhttp://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\ntable like this:\n\nCREATE TABLE grid_f_data_i2 (\n //Specifies the variable and other features of data\n id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n //A new grid is available each 3 hours since 5 years ago\n dh_date timestamp,\n //Values are scaled to be stored as signed integers of 2 bytes\n vl_grid smallint[361][720],\nCONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n (co_inventory, dh_date)\n);\n\nDimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\ncells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\nvl_grid[y][x] stores the value at dh_date of a meteorological variable\nspecified by id_inventory in the geodesic point\n\nlatitude = -90 + y*0.5\nlongitude = x*0.5\n\nThe reverse formula for the closest point in the grid of an arbitary\ngeodesic point will be\n\ny = Round((latitude+90) * 2\nx = Round(longitude*2)\n\nField vl_grid is stored in the TOAST table and has a good compression level.\nPostgreSql is the only one database that is able to store this huge amount\nof data in only 34 GB of disk. It's really great system. Queries returning\nbig rectangular areas are very fast, but the target of almost all queries is\nto get historical series for a geodesic point\n\nSELECT dh_date, vl_grid[123][152]\nFROM grid_f_data_i2\nWHERE id_inventory = 6\nORDER BY dh_date\n\nIn this case, atomic access to just a cell of each one of a only few\nthousands of rows becomes too slow.\n\nPlease, could somebody answer some of these questions?\n\n - It's posible to tune some TOAST parameters to get faster atomic access\n to large arrays?\n\n\n - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n the problem?\n\n\n - Atomic access will be faster if table stores vectors for data in the\n same parallel instead of matrices of global data?\n CREATE TABLE grid_f_data_i2 (\n //Specifies the variable and other features of data\n id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n //A new grid is available each 3 hours since 5 years ago\n dh_date timestamp,\n // nu_parallel = y = Round((latitude+90) * 2\n smallint nu_parallel,\n //Values are scaled to be stored as signed integers of 2 bytes\n vl_parallel smallint[],\n CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n (co_inventory, nu_parallel, dh_date)\n );\n\n - There is another faster solution?\n\nThanks in advance and best regards\n\n-- \nVíctor de Buen Remiro\nTol Development Team member\nwww.tol-project.org\n\nHiI'm storing historical meteorological gridded data from GFS (http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a table like this:\nCREATE TABLE grid_f_data_i2 (  //Specifies the variable and other features of data\n  id_inventory integer REFERENCES grid_d_inventory(id_inventory),   //A new grid is available each 3 hours since 5 years ago  dh_date timestamp,    //Values are scaled to be stored as signed integers of 2 bytes\n  vl_grid smallint[361][720],   CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY   (co_inventory, dh_date));Dimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920 cells} for a grid of 0.5 degrees (about each 55 Km) around the world. 
So, vl_grid[y][x] stores the value at dh_date of a meteorological variable specified by id_inventory in the geodesic point \nlatitude  = -90 + y*0.5longitude = x*0.5\nThe reverse formula for the closest point in the grid of an arbitary geodesic point will bey = Round((latitude+90) * 2\nx = Round(longitude*2)\nField vl_grid is stored in the TOAST table and has a good compression level. PostgreSql is the only one database that is able to store this huge amount of data in only 34 GB of disk. It's really great system. Queries returning big rectangular areas are very fast, but the target of almost all queries is to get historical series for a geodesic point\nSELECT  dh_date, vl_grid[123][152]FROM  grid_f_data_i2WHERE  id_inventory = 6ORDER BY dh_dateIn this case, atomic access to just a cell of each one of a only few thousands of rows becomes too slow.\nPlease, could somebody answer some of these questions?It's posible to tune some TOAST parameters to get faster atomic access to large arrays?Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve the problem? \n\nAtomic access will be faster if table stores vectors for data in the same parallel instead of matrices of global data?\nCREATE TABLE grid_f_data_i2 (\n  //Specifies the variable and other features of data\n  id_inventory integer REFERENCES grid_d_inventory(id_inventory),   //A new grid is available each 3 hours since 5 years ago\n\n\n  dh_date timestamp,  // nu_parallel = y = Round((latitude+90) * 2\n\n  smallint nu_parallel,\n\n  //Values are scaled to be stored as signed integers of 2 bytes\n\n  vl_parallel smallint[],\nCONSTRAINT meteo_f_gfs_tmp PRIMARY KEY \n\n  (co_inventory, nu_parallel, dh_date)\n\n);\n\nThere is another faster solution?Thanks in advance and best regards-- Víctor de Buen RemiroTol Development Team memberwww.tol-project.org", "msg_date": "Wed, 22 Jul 2009 01:43:35 +0200", "msg_from": "\"Victor de Buen (Bayes)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Atomic access to large arrays" }, { "msg_contents": "Victor,\n\nJust wondering why do you use array ?\n\nOleg\nOn Wed, 22 Jul 2009, Victor de Buen (Bayes) wrote:\n\n> Hi\n>\n> I'm storing historical meteorological gridded data from GFS (\n> http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n>\n> CREATE TABLE grid_f_data_i2 (\n> //Specifies the variable and other features of data\n> id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n> //A new grid is available each 3 hours since 5 years ago\n> dh_date timestamp,\n> //Values are scaled to be stored as signed integers of 2 bytes\n> vl_grid smallint[361][720],\n> CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n> (co_inventory, dh_date)\n> );\n>\n> Dimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\n> cells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\n> vl_grid[y][x] stores the value at dh_date of a meteorological variable\n> specified by id_inventory in the geodesic point\n>\n> latitude = -90 + y*0.5\n> longitude = x*0.5\n>\n> The reverse formula for the closest point in the grid of an arbitary\n> geodesic point will be\n>\n> y = Round((latitude+90) * 2\n> x = Round(longitude*2)\n>\n> Field vl_grid is stored in the TOAST table and has a good compression level.\n> PostgreSql is the only one database that is able to store this huge amount\n> of data in only 34 GB of disk. It's really great system. 
Queries returning\n> big rectangular areas are very fast, but the target of almost all queries is\n> to get historical series for a geodesic point\n>\n> SELECT dh_date, vl_grid[123][152]\n> FROM grid_f_data_i2\n> WHERE id_inventory = 6\n> ORDER BY dh_date\n>\n> In this case, atomic access to just a cell of each one of a only few\n> thousands of rows becomes too slow.\n>\n> Please, could somebody answer some of these questions?\n>\n> - It's posible to tune some TOAST parameters to get faster atomic access\n> to large arrays?\n>\n>\n> - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n> the problem?\n>\n>\n> - Atomic access will be faster if table stores vectors for data in the\n> same parallel instead of matrices of global data?\n> CREATE TABLE grid_f_data_i2 (\n> //Specifies the variable and other features of data\n> id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n> //A new grid is available each 3 hours since 5 years ago\n> dh_date timestamp,\n> // nu_parallel = y = Round((latitude+90) * 2\n> smallint nu_parallel,\n> //Values are scaled to be stored as signed integers of 2 bytes\n> vl_parallel smallint[],\n> CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n> (co_inventory, nu_parallel, dh_date)\n> );\n>\n> - There is another faster solution?\n>\n> Thanks in advance and best regards\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Wed, 22 Jul 2009 09:11:15 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays" }, { "msg_contents": "Hi\n\nI'm storing historical meteorological gridded data from GFS (\nhttp://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\ntable like this:\n\nCREATE TABLE grid_f_data_i2 (\n //Specifies the variable and other features of data\n id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n //A new grid is available each 3 hours since 5 years ago\n dh_date timestamp,\n //Values are scaled to be stored as signed integers of 2 bytes\n vl_grid smallint[361][720],\nCONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n (co_inventory, dh_date)\n);\n\nDimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\ncells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\nvl_grid[y][x] stores the value at dh_date of a meteorological variable\nspecified by id_inventory in the geodesic point\n\nlatitude = -90 + y*0.5\nlongitude = x*0.5\n\nThe reverse formula for the closest point in the grid of an arbitary\ngeodesic point will be\n\ny = Round((latitude+90) * 2\nx = Round(longitude*2)\n\nField vl_grid is stored in the TOAST table and has a good compression level.\nPostgreSql is the only one database that is able to store this huge amount\nof data in only 34 GB of disk. It's really great system. 
Queries returning\nbig rectangular areas are very fast, but the target of almost all queries is\nto get historical series for a geodesic point\n\nSELECT dh_date, vl_grid[123][152]\nFROM grid_f_data_i2\nWHERE id_inventory = 6\nORDER BY dh_date\n\nIn this case, atomic access to just a cell of each one of a only few\nthousands of rows becomes too slow.\n\nUsing standar way, size increase very much\nCREATE TABLE grid_f_data_i2 (\n id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n dh_date timestamp,\n smallint lat,\n smallint lon,\n smallint value\n};\n\nThis table have (4+8+2+2+2=24) bytes by register and (lat:361 x lon:720 =\n259920) registers by grid, so, 6238080 bytes by grid.\nUncompressed array design uses 4+8+2*259920=519852 bytes by register but\njust one register by grid.\nTOAST table compress these arrays with an average factor 1/2, so, the total\nsize with arrays is 24 times lesser than standard way.\n\nNow, I have more than 60000 stored grids in 30 GB, instead of 720 GB, but\nprobably I will store 1 million of grids or more in 0.5 TB instead of 12 TB.\nI have no enougth money to buy nor maintain 12 TB disks.\n\nPlease, could somebody answer some of these questions?\n\n - It's posible to tune some TOAST parameters to get faster atomic access\n to large arrays?\n\n\n - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n the problem?\n\n\n - Atomic access will be faster if table stores vectors for data in the\n same parallel instead of matrices of global data?\n CREATE TABLE grid_f_data_i2 (\n //Specifies the variable and other features of data\n id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n //A new grid is available each 3 hours since 5 years ago\n dh_date timestamp,\n // nu_parallel = y = Round((latitude+90) * 2\n smallint nu_parallel,\n //Values are scaled to be stored as signed integers of 2 bytes\n vl_parallel smallint[],\n CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n (co_inventory, nu_parallel, dh_date)\n );\n\n - There is another faster solution?\n\nThanks in advance and best regards\n\n-- \nVíctor de Buen Remiro\nTol Development Team member\nwww.tol-project.org\n\nHiI'm storing historical meteorological gridded data from GFS (http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a table like this:\nCREATE TABLE grid_f_data_i2 (  //Specifies the variable and other features of data\n\n  id_inventory integer REFERENCES grid_d_inventory(id_inventory),   //A new grid is available each 3 hours since 5 years ago  dh_date timestamp,    //Values are scaled to be stored as signed integers of 2 bytes\n\n  vl_grid smallint[361][720],   CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY   (co_inventory, dh_date));Dimensions\nof each value of field vl_grid are (lat:361 x lon:720 = 259920 cells}\nfor a grid of 0.5 degrees (about each 55 Km) around the world. So,\nvl_grid[y][x] stores the value at dh_date of a meteorological variable\nspecified by id_inventory in the geodesic point \nlatitude  = -90 + y*0.5longitude = x*0.5\nThe reverse formula for the closest point in the grid of an arbitary geodesic point will bey = Round((latitude+90) * 2\n\nx = Round(longitude*2)\nField vl_grid is stored in the TOAST table and has a good\ncompression level. PostgreSql is the only one database that is able to\nstore this huge amount of data in only 34 GB of disk. It's really great\nsystem. 
Queries returning big rectangular areas are very fast, but the\ntarget of almost all queries is to get historical series for a geodesic\npoint\nSELECT  dh_date, vl_grid[123][152]FROM  grid_f_data_i2WHERE  id_inventory = 6ORDER BY dh_dateIn this case, atomic access to just a cell of each one of a only few thousands of rows becomes too slow.\nUsing standar way, size increase very muchCREATE TABLE grid_f_data_i2 (\n   id_inventory integer REFERENCES grid_d_inventory(id_inventory),   dh_date timestamp,   smallint lat,\n   smallint lon,   smallint value};\n   This table have (4+8+2+2+2=24) bytes by register and (lat:361 x lon:720 = 259920) registers by grid, so, 6238080 bytes by grid.Uncompressed array design uses 4+8+2*259920=519852 bytes by register but just one register by grid.\n\nTOAST table compress these arrays with an average factor 1/2, so, the\ntotal size with arrays is 24 times lesser than standard way.Now,\nI have more than 60000 stored grids in 30 GB, instead of 720 GB, but\nprobably I will store 1 million of grids or more in 0.5 TB instead of\n12 TB.\nI have no enougth money to buy nor maintain 12 TB disks.Please, could somebody answer some of these questions?It's posible to tune some TOAST parameters to get faster atomic access to large arrays?\nUsing \"EXTERNAL\" strategy for storing TOAST-able columns could solve the problem? \nAtomic access will be faster if table stores vectors for data in the same parallel instead of matrices of global data?\nCREATE TABLE grid_f_data_i2 (\n  //Specifies the variable and other features of data\n  id_inventory integer REFERENCES grid_d_inventory(id_inventory),   //A new grid is available each 3 hours since 5 years ago\n\n\n  dh_date timestamp,  // nu_parallel = y = Round((latitude+90) * 2\n\n  smallint nu_parallel,\n\n  //Values are scaled to be stored as signed integers of 2 bytes\n\n  vl_parallel smallint[],\nCONSTRAINT meteo_f_gfs_tmp PRIMARY KEY \n\n  (co_inventory, nu_parallel, dh_date)\n\n);\n\nThere is another faster solution?Thanks in advance and best regards-- Víctor de Buen RemiroTol Development Team memberwww.tol-project.org", "msg_date": "Wed, 22 Jul 2009 10:12:42 +0200", "msg_from": "Victor de Buen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays" }, { "msg_contents": "\"Victor de Buen (Bayes)\" <[email protected]> writes:\n> I'm storing historical meteorological gridded data from GFS (\n> http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n\n> vl_grid smallint[361][720],\n\n> - It's posible to tune some TOAST parameters to get faster atomic access\n> to large arrays?\n\nIt might save a little bit to make the toast chunk size larger, but I'm\nnot sure you could gain much from that.\n\n> - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n> the problem?\n\nNope, wouldn't help --- AFAIR array access is not optimized for slice\naccess. In any case, doing that would give up the compression savings\nthat you were so happy about.\n\nIf your normal access patterns involve \"vertical\" rather than\n\"horizontal\" scans of the data, maybe you should rethink the choice\nof table layout. 
Or maybe the compression is enough to allow you\nto consider storing the data twice, once in the current layout and\nonce in a \"vertical\" format.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jul 2009 09:54:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays " }, { "msg_contents": "Thank you very much, Tom\n\nI will try vector 'parallel' and 'vertical' strategies.\n\nRegards\n\n2009/7/22 Tom Lane <[email protected]>\n\n> \"Victor de Buen (Bayes)\" <[email protected]> writes:\n> > I'm storing historical meteorological gridded data from GFS (\n> > http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> > table like this:\n>\n> > vl_grid smallint[361][720],\n>\n> > - It's posible to tune some TOAST parameters to get faster atomic\n> access\n> > to large arrays?\n>\n> It might save a little bit to make the toast chunk size larger, but I'm\n> not sure you could gain much from that.\n>\n> > - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n> > the problem?\n>\n> Nope, wouldn't help --- AFAIR array access is not optimized for slice\n> access. In any case, doing that would give up the compression savings\n> that you were so happy about.\n>\n> If your normal access patterns involve \"vertical\" rather than\n> \"horizontal\" scans of the data, maybe you should rethink the choice\n> of table layout. Or maybe the compression is enough to allow you\n> to consider storing the data twice, once in the current layout and\n> once in a \"vertical\" format.\n>\n> regards, tom lane\n>\n\n\n\n-- \nVíctor de Buen Remiro\nConsultor estadístico\nBayes Forecast\nwww.bayesforecast.com\nTol Development Team member\nwww.tol-project.org\n\nThank you very much, TomI will try vector 'parallel' and 'vertical' strategies.Regards2009/7/22 Tom Lane <[email protected]>\n\"Victor de Buen (Bayes)\" <[email protected]> writes:\n\n> I'm storing historical meteorological gridded data from GFS (\n> http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n\n>   vl_grid smallint[361][720],\n\n>    - It's posible to tune some TOAST parameters to get faster atomic access\n>    to large arrays?\n\nIt might save a little bit to make the toast chunk size larger, but I'm\nnot sure you could gain much from that.\n\n>    - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n>    the problem?\n\nNope, wouldn't help --- AFAIR array access is not optimized for slice\naccess.  In any case, doing that would give up the compression savings\nthat you were so happy about.\n\nIf your normal access patterns involve \"vertical\" rather than\n\"horizontal\" scans of the data, maybe you should rethink the choice\nof table layout.  
Or maybe the compression is enough to allow you\nto consider storing the data twice, once in the current layout and\nonce in a \"vertical\" format.\n\n                        regards, tom lane\n-- Víctor de Buen RemiroConsultor estadísticoBayes Forecastwww.bayesforecast.comTol Development Team memberwww.tol-project.org", "msg_date": "Wed, 22 Jul 2009 16:27:34 +0200", "msg_from": "\"Victor de Buen (Bayes)\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Atomic access to large arrays" }, { "msg_contents": "Thank you very much, Tom\n\nI will try vector 'parallel' and 'vertical' strategies.\n\nRegards\n\n2009/7/22 Tom Lane <[email protected]>\n\n> \"Victor de Buen (Bayes)\" <[email protected]> writes:\n> > I'm storing historical meteorological gridded data from GFS (\n> > http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> > table like this:\n>\n> > vl_grid smallint[361][720],\n>\n> > - It's posible to tune some TOAST parameters to get faster atomic\n> access\n> > to large arrays?\n>\n> It might save a little bit to make the toast chunk size larger, but I'm\n> not sure you could gain much from that.\n>\n> > - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n> > the problem?\n>\n> Nope, wouldn't help --- AFAIR array access is not optimized for slice\n> access. In any case, doing that would give up the compression savings\n> that you were so happy about.\n>\n> If your normal access patterns involve \"vertical\" rather than\n> \"horizontal\" scans of the data, maybe you should rethink the choice\n> of table layout. Or maybe the compression is enough to allow you\n> to consider storing the data twice, once in the current layout and\n> once in a \"vertical\" format.\n>\n> regards, tom lane\n>\n\n\n\n-- \nVíctor de Buen Remiro\nConsultor estadístico\nBayes Forecast\nwww.bayesforecast.com\nTol Development Team member\nwww.tol-project.org\n\nThank you very much, TomI will try vector 'parallel' and 'vertical' strategies.Regards2009/7/22 Tom Lane <[email protected]>\n\"Victor de Buen (Bayes)\" <[email protected]> writes:\n\n> I'm storing historical meteorological gridded data from GFS (\n> http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n\n>   vl_grid smallint[361][720],\n\n>    - It's posible to tune some TOAST parameters to get faster atomic access\n>    to large arrays?\n\nIt might save a little bit to make the toast chunk size larger, but I'm\nnot sure you could gain much from that.\n\n>    - Using \"EXTERNAL\" strategy for storing TOAST-able columns could solve\n>    the problem?\n\nNope, wouldn't help --- AFAIR array access is not optimized for slice\naccess.  In any case, doing that would give up the compression savings\nthat you were so happy about.\n\nIf your normal access patterns involve \"vertical\" rather than\n\"horizontal\" scans of the data, maybe you should rethink the choice\nof table layout.  
Or maybe the compression is enough to allow you\nto consider storing the data twice, once in the current layout and\nonce in a \"vertical\" format.\n\n                        regards, tom lane\n-- Víctor de Buen RemiroConsultor estadísticoBayes Forecastwww.bayesforecast.comTol Development Team memberwww.tol-project.org", "msg_date": "Wed, 22 Jul 2009 16:30:43 +0200", "msg_from": "Victor de Buen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays" }, { "msg_contents": "On Tue, Jul 21, 2009 at 7:43 PM, Victor de Buen\n(Bayes)<[email protected]> wrote:\n> Hi\n>\n> I'm storing historical meteorological gridded data from GFS\n> (http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n>\n> CREATE TABLE grid_f_data_i2 (\n>   //Specifies the variable and other features of data\n>   id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n>   //A new grid is available each 3 hours since 5 years ago\n>   dh_date timestamp,\n>   //Values are scaled to be stored as signed integers of 2 bytes\n>   vl_grid smallint[361][720],\n> CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n>   (co_inventory, dh_date)\n> );\n>\n> Dimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\n> cells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\n> vl_grid[y][x] stores the value at dh_date of a meteorological variable\n> specified by id_inventory in the geodesic point\n>\n> latitude  = -90 + y*0.5\n> longitude = x*0.5\n>\n> The reverse formula for the closest point in the grid of an arbitary\n> geodesic point will be\n>\n> y = Round((latitude+90) * 2\n> x = Round(longitude*2)\n>\n> Field vl_grid is stored in the TOAST table and has a good compression level.\n> PostgreSql is the only one database that is able to store this huge amount\n> of data in only 34 GB of disk. It's really great system. Queries returning\n> big rectangular areas are very fast, but the target of almost all queries is\n> to get historical series for a geodesic point\n>\n> SELECT  dh_date, vl_grid[123][152]\n> FROM  grid_f_data_i2\n> WHERE  id_inventory = 6\n> ORDER BY dh_date\n>\n> In this case, atomic access to just a cell of each one of a only few\n> thousands of rows becomes too slow.\n\nThat's a side effect of your use of arrays. Arrays are very compact,\nand ideal if you always want the whole block of data at once, but\nasking for particular point is the down side of your trade off. 
I\nwould suggest maybe experimenting with smaller grid sizes...maybe\ndivide your big grid into approximately 16 (4x4) separate subgrids.\nThis should still 'toast', and give decent compression, but mitigate\nthe impact of single point lookup somewhat.\n\nmerlin\n", "msg_date": "Wed, 22 Jul 2009 13:59:44 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays" }, { "msg_contents": "Thank a lot, Merlin.\n\nI will try to fill a sample of grids in a new table with different sizes of\nsubgrids in order to get the better relation between space and speed.\n\nRegards\n\n2009/7/22 Merlin Moncure <[email protected]>\n\n> On Tue, Jul 21, 2009 at 7:43 PM, Victor de Buen\n> (Bayes)<[email protected]> wrote:\n> > Hi\n> >\n> > I'm storing historical meteorological gridded data from GFS\n> > (http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in\n> a\n> > table like this:\n> >\n> > CREATE TABLE grid_f_data_i2 (\n> > //Specifies the variable and other features of data\n> > id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n> > //A new grid is available each 3 hours since 5 years ago\n> > dh_date timestamp,\n> > //Values are scaled to be stored as signed integers of 2 bytes\n> > vl_grid smallint[361][720],\n> > CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n> > (co_inventory, dh_date)\n> > );\n> >\n> > Dimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\n> > cells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\n> > vl_grid[y][x] stores the value at dh_date of a meteorological variable\n> > specified by id_inventory in the geodesic point\n> >\n> > latitude = -90 + y*0.5\n> > longitude = x*0.5\n> >\n> > The reverse formula for the closest point in the grid of an arbitary\n> > geodesic point will be\n> >\n> > y = Round((latitude+90) * 2\n> > x = Round(longitude*2)\n> >\n> > Field vl_grid is stored in the TOAST table and has a good compression\n> level.\n> > PostgreSql is the only one database that is able to store this huge\n> amount\n> > of data in only 34 GB of disk. It's really great system. Queries\n> returning\n> > big rectangular areas are very fast, but the target of almost all queries\n> is\n> > to get historical series for a geodesic point\n> >\n> > SELECT dh_date, vl_grid[123][152]\n> > FROM grid_f_data_i2\n> > WHERE id_inventory = 6\n> > ORDER BY dh_date\n> >\n> > In this case, atomic access to just a cell of each one of a only few\n> > thousands of rows becomes too slow.\n>\n> That's a side effect of your use of arrays. Arrays are very compact,\n> and ideal if you always want the whole block of data at once, but\n> asking for particular point is the down side of your trade off. 
I\n> would suggest maybe experimenting with smaller grid sizes...maybe\n> divide your big grid into approximately 16 (4x4) separate subgrids.\n> This should still 'toast', and give decent compression, but mitigate\n> the impact of single point lookup somewhat.\n>\n> merlin\n>\n\n\n\n-- \nVíctor de Buen Remiro\nConsultor estadístico\nBayes Forecast\nwww.bayesforecast.com\nTol Development Team member\nwww.tol-project.org\n\nThank a lot, Merlin.I will try to fill a sample of grids in a new table with different sizes of subgrids in order to get the better relation between space and speed.Regards2009/7/22 Merlin Moncure <[email protected]>\nOn Tue, Jul 21, 2009 at 7:43 PM, Victor de Buen\n(Bayes)<[email protected]> wrote:\n> Hi\n>\n> I'm storing historical meteorological gridded data from GFS\n> (http://www.nco.ncep.noaa.gov/pmb/products/gfs/) into an array field in a\n> table like this:\n>\n> CREATE TABLE grid_f_data_i2 (\n>   //Specifies the variable and other features of data\n>   id_inventory integer REFERENCES grid_d_inventory(id_inventory),\n>   //A new grid is available each 3 hours since 5 years ago\n>   dh_date timestamp,\n>   //Values are scaled to be stored as signed integers of 2 bytes\n>   vl_grid smallint[361][720],\n> CONSTRAINT meteo_f_gfs_tmp PRIMARY KEY\n>   (co_inventory, dh_date)\n> );\n>\n> Dimensions of each value of field vl_grid are (lat:361 x lon:720 = 259920\n> cells} for a grid of 0.5 degrees (about each 55 Km) around the world. So,\n> vl_grid[y][x] stores the value at dh_date of a meteorological variable\n> specified by id_inventory in the geodesic point\n>\n> latitude  = -90 + y*0.5\n> longitude = x*0.5\n>\n> The reverse formula for the closest point in the grid of an arbitary\n> geodesic point will be\n>\n> y = Round((latitude+90) * 2\n> x = Round(longitude*2)\n>\n> Field vl_grid is stored in the TOAST table and has a good compression level.\n> PostgreSql is the only one database that is able to store this huge amount\n> of data in only 34 GB of disk. It's really great system. Queries returning\n> big rectangular areas are very fast, but the target of almost all queries is\n> to get historical series for a geodesic point\n>\n> SELECT  dh_date, vl_grid[123][152]\n> FROM  grid_f_data_i2\n> WHERE  id_inventory = 6\n> ORDER BY dh_date\n>\n> In this case, atomic access to just a cell of each one of a only few\n> thousands of rows becomes too slow.\n\nThat's a side effect of your use of arrays.  Arrays are very compact,\nand ideal if you always want the whole block of data at once, but\nasking for particular point is the down side of your trade off.  I\nwould suggest maybe experimenting with smaller grid sizes...maybe\ndivide your big grid into approximately 16 (4x4) separate subgrids.\nThis should still 'toast', and give decent compression, but mitigate\nthe impact of single point lookup somewhat.\n\nmerlin\n-- Víctor de Buen RemiroConsultor estadísticoBayes Forecastwww.bayesforecast.comTol Development Team member\nwww.tol-project.org", "msg_date": "Wed, 22 Jul 2009 20:28:23 +0200", "msg_from": "Victor de Buen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Atomic access to large arrays" } ]
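To make the trade-off in the thread above concrete, here is a rough sketch of the single-cell history query against the per-parallel layout Victor proposed; the table name grid_f_data_i2_parallel is invented here to distinguish it from the original matrix table, and the column names follow his draft. Only a single-latitude vector has to be detoasted per row instead of the whole 361x720 grid; the 4x4 subgrid variant Merlin suggests trades chunk size against row count in the same way.

-- historical series for the grid cell y = 123, x = 152:
-- each row's TOASTed value is now one 720-element vector rather than the full grid.
SELECT dh_date, vl_parallel[152]
FROM grid_f_data_i2_parallel
WHERE id_inventory = 6
  AND nu_parallel = 123
ORDER BY dh_date;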
[ { "msg_contents": "Hi Performance Wizards!\n\nI need advice on this.\n\nI have a db which is being constantly updated and queried by a few\ncomputers. We are doing datamining. The machine is running on a\nmoderately powered machine and processors constantly hit 90%.\n\nAt the same time, we need to present these data on a web interface.\nThe performance for the web interface is now very sluggish as most of\nthe power is occupied by the mining process.\n\nI have thought of a few ways out of this -\n\n1) Buy a mega powered machine (temporal solution, quick fix)\n2) Do a master-slave configuration\n3) Separate the DB into 2 - One for pure mining purposes, the other\npurely for web serving\n\nFor (2), I do not know if it will be very effective since the master\nwill probably have many changes at any moment. I do not understand how\nthe changes will be propagated from the master to the slave without\nimpacting the slave's performance. Anyone with more experience here?\n\n(3) seems ideal but is a very very painful solution!\n\nWe can possibly use a message queue system but again I am not familiar\nwith MQ. Will need to do more research.\n\nIf you were me, how would you solve this problem?\n\nThanks!\n\n\nKelvin Quee\n+65 9177 3635\n", "msg_date": "Wed, 22 Jul 2009 11:47:43 +0800", "msg_from": "Kelvin Quee <[email protected]>", "msg_from_op": true, "msg_subject": "Master/Slave, DB separation or just spend $$$?" }, { "msg_contents": "On Tue, Jul 21, 2009 at 9:47 PM, Kelvin Quee<[email protected]> wrote:\n> Hi Performance Wizards!\n>\n> I need advice on this.\n>\n> I have a db which is being constantly updated and queried by a few\n> computers. We are doing datamining. The machine is running on a\n> moderately powered machine and processors constantly hit 90%.\n\nWhen your CPUs say 90%, is that regular user / sys %, or is it wait %?\nThe difference is very important.\nWhat kind of hardware are you running on btw? # cpus, memory, # of\ndrives,type, RAID controller if any?\n\n> At the same time, we need to present these data on a web interface.\n> The performance for the web interface is now very sluggish as most of\n> the power is occupied by the mining process.\n>\n> I have thought of a few ways out of this -\n>\n> 1) Buy a mega powered machine (temporal solution, quick fix)\n\nDepends very much on what your bound by, CPU or IO. If adding a\ncouple of 15K SAS drives would double your performance then u don't\nneed a super powerful machine.\n\n> 2) Do a master-slave configuration\n\nOften a good choice.\n\n> 3) Separate the DB into 2 - One for pure mining purposes, the other\n> purely for web serving\n>\n> For (2), I do not know if it will be very effective since the master\n> will probably have many changes at any moment. I do not understand how\n> the changes will be propagated from the master to the slave without\n> impacting the slave's performance. Anyone with more experience here?\n>\n> (3) seems ideal but is a very very painful solution!\n>\n> We can possibly use a message queue system but again I am not familiar\n> with MQ. Will need to do more research.\n\nThat could be a very complex solution.\n\n> If you were me, how would you solve this problem?\n\nSlony, most likely.\n", "msg_date": "Tue, 21 Jul 2009 23:42:20 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" 
}, { "msg_contents": "Hi Scott,\n\nThanks for the quick reply.\n\nI have been staring at *top* for a while and it's mostly been 40% in\nuserspace and 30% in system. Wait is rather low and never ventures\nbeyond 1%.\n\nMy hardware is a duo core AMD Athlon64 X2 5000+, 1GB RAM and a single\n160 GB SATA II hard disk drive.\n\nI will go look at Slony now.\n\nScott, one question though - If my master is constantly changing,\nwouldn't the updates from the master to the slave also slow down the\nslave?\n\n\nKelvin Quee\n+65 9177 3635\n\n\n\nOn Wed, Jul 22, 2009 at 1:42 PM, Scott Marlowe<[email protected]> wrote:\n> On Tue, Jul 21, 2009 at 9:47 PM, Kelvin Quee<[email protected]> wrote:\n>> Hi Performance Wizards!\n>>\n>> I need advice on this.\n>>\n>> I have a db which is being constantly updated and queried by a few\n>> computers. We are doing datamining. The machine is running on a\n>> moderately powered machine and processors constantly hit 90%.\n>\n> When your CPUs say 90%, is that regular user / sys %, or is it wait %?\n> The difference is very important.\n> What kind of hardware are you running on btw? # cpus, memory, # of\n> drives,type, RAID controller if any?\n>\n>> At the same time, we need to present these data on a web interface.\n>> The performance for the web interface is now very sluggish as most of\n>> the power is occupied by the mining process.\n>>\n>> I have thought of a few ways out of this -\n>>\n>> 1) Buy a mega powered machine (temporal solution, quick fix)\n>\n> Depends very much on what your bound by, CPU or IO.  If adding a\n> couple of 15K SAS drives would double your performance then u don't\n> need a super powerful machine.\n>\n>> 2) Do a master-slave configuration\n>\n> Often a good choice.\n>\n>> 3) Separate the DB into 2 - One for pure mining purposes, the other\n>> purely for web serving\n>>\n>> For (2), I do not know if it will be very effective since the master\n>> will probably have many changes at any moment. I do not understand how\n>> the changes will be propagated from the master to the slave without\n>> impacting the slave's performance. Anyone with more experience here?\n>>\n>> (3) seems ideal but is a very very painful solution!\n>>\n>> We can possibly use a message queue system but again I am not familiar\n>> with MQ. Will need to do more research.\n>\n> That could be a very complex solution.\n>\n>> If you were me, how would you solve this problem?\n>\n> Slony, most likely.\n>\n", "msg_date": "Wed, 22 Jul 2009 15:52:34 +0800", "msg_from": "Kelvin Quee <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" }, { "msg_contents": "On Wed, Jul 22, 2009 at 1:52 AM, Kelvin Quee<[email protected]> wrote:\n> Hi Scott,\n>\n> Thanks for the quick reply.\n>\n> I have been staring at *top* for a while and it's mostly been 40% in\n> userspace and 30% in system. Wait is rather low and never ventures\n> beyond 1%.\n>\n> My hardware is a duo core AMD Athlon64 X2 5000+, 1GB RAM and a single\n> 160 GB SATA II hard disk drive.\n\nSo I take it you're on a tight budget then? I'm guessing you could\nput a single quad core cpu and 8 Gigs of ram in place for a reasonable\nprice. 
I'd highly recommend setting up at least software RAID-1 for\nincreased reliability.\n\n> I will go look at Slony now.\n\nMight be overkill if you can get by on a single reasonably powerful machine.\n\n> Scott, one question though - If my master is constantly changing,\n> wouldn't the updates from the master to the slave also slow down the\n> slave?\n\nYes it will, but the overhead for the slave is much less than the master.\n", "msg_date": "Wed, 22 Jul 2009 05:54:16 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" }, { "msg_contents": "[email protected] (Kelvin Quee) writes:\n> I will go look at Slony now.\n\nIt's worth looking at, but it is not always to be assumed that\nreplication will necessarily improve scalability of applications; it's\nnot a \"magic wand\" to wave such that \"presto, it's all faster!\"\n\nReplication is helpful from a performance standpoint if there is a lot\nof query load where it is permissible to look at *somewhat* out of\ndate information.\n\nFor instance, replication can be quite helpful for pushing load off\nfor processing accounting data where you tend to be doing analysis on\ndata from {yesterday, last week, last month, last year}, and where the\ndata tends to be inherently temporal (e.g. - you're looking at\ntransactions with dates on them).\n\nOn the other hand, any process that anticipates *writing* to the\nmaster database will be more or less risky to try to shift over to a\npossibly-somewhat-behind 'slave' system, as will be anything that\nneeds to be consistent with the \"master state.\"\n-- \n(reverse (concatenate 'string \"ofni.secnanifxunil\" \"@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/spiritual.html\n\"Nondeterminism means never having to say you're wrong.\" -- Unknown\n", "msg_date": "Wed, 22 Jul 2009 12:25:20 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE----- \nHash: RIPEMD160 \n\n\n> I have a db which is being constantly updated and queried by a few\n> computers. We are doing datamining. The machine is running on a \n> moderately powered machine and processors constantly hit 90%. \n... \n> 2) Do a master-slave configuration \n> 3) Separate the DB into 2 - One for pure mining purposes, the other\n> purely for web serving \n\nWhy not combine the two (if I'm understanding correctly)? Use Bucardo\nor Slony to make two slaves, one for the web servers to hit (assuming\nthey are read-only queries), and one to act as a data warehouse.\nYour main box gets all the updates but has no selects or complex\nqueries to weigh it down. If the we server does read and write, have\nyour app maintain two database handles.\n\n> For (2), I do not know if it will be very effective since the master\n> will probably have many changes at any moment. I do not understand how\n> the changes will be propagated from the master to the slave without\n> impacting the slave's performance. Anyone with more experience here?\n\nThe slave will get the updates as well, but in a more efficient manner\nas there will be no WHERE clauses or other logic associated with the\noriginal update. Bucardo or Slony will simply COPY over the rows as\nneeded. 
Keep in mind that both are asynchronous, so changes won't appear\non the slaves at the same time as the master, but the delay is typically\nmeasured in seconds.\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200907221229\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAkpnPpsACgkQvJuQZxSWSsggKgCfT0EbxWQdym30n7IV1J1X6dC6\nHRkAoND4nCMVeffE2VW34VVmPcRtLclI\n=tTjn\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Wed, 22 Jul 2009 16:30:49 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" }, { "msg_contents": "On Wed, Jul 22, 2009 at 12:52 AM, Kelvin Quee<[email protected]> wrote:\n> I have been staring at *top* for a while and it's mostly been 40% in\n> userspace and 30% in system. Wait is rather low and never ventures\n> beyond 1%.\n\nCertainly seems like you are CPU bound.\n\n> My hardware is a duo core AMD Athlon64 X2 5000+, 1GB RAM and a single\n> 160 GB SATA II hard disk drive.\n\nLooks like you are on a budget as Scott also suggested - I would also\nmirror his recommendation to upgrade to a quad core processor and more\nmemory. Hopefully your motherboard supports quad-cores so you don't\nhave to replace that bit, and you should be able to get at least 4GB\nof RAM in there.\n\nIf IO load becomes an issue, Velociraptors are fast and don't cost too\nmuch. Getting a basic RAID1 will help prevent data-loss due to disk\nfailure - make sure you are making offline backups as well!\n\n> I will go look at Slony now.\n>\n> Scott, one question though - If my master is constantly changing,\n> wouldn't the updates from the master to the slave also slow down the\n> slave?\n\nYes - Slony will increase the load on your source node as it does take\nwork to do the replication, so unless you are able to offload your CPU\nheavy read only queries to the slave machine, it will only bog down\nthe source node more.\n\n-Dave\n", "msg_date": "Wed, 22 Jul 2009 12:29:25 -0700", "msg_from": "David Rees <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Master/Slave, DB separation or just spend $$$?" } ]
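One cheap check that complements the top-based diagnosis in the thread above, before committing to Slony or new hardware, is to see which statements the mining processes are actually running. A small sketch, not from the thread, using the 8.4-era pg_stat_activity column names (procpid and current_query; later releases renamed them):

-- currently active statements, longest-running first
SELECT procpid, usename, now() - query_start AS runtime, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY query_start;

If a handful of mining queries dominate the CPU, tuning or rescheduling them may buy more than replication does.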
[ { "msg_contents": "Hello!\n\n \n\nI posted you a message about slowness of creation users more than 500 000\n(#4919). It seems there is no workaround of this problem because of using\npg_auth flat file.\n\n \n\nTo override this problem is it possible to use LDAP authentification metod\nto identify each user and speed up system? How it will affect group roles\nfor each user because we use groups roles to give Access to users to system\nobjects? Because group roles will work only with postgres users not LDAP.\n\n \n\npgBouncer or pgPool uses Postgres users for connection pooling. Is there\nsome more variants to use connection pooling without using postgres users?\n\n \n\n \n\n______________________________________\n\nLauris Ulmanis\n\nTel. +371 29471020\n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHello!\n \nI posted you a message about slowness of creation users more\nthan 500 000 (#4919). It seems there is no workaround of this problem\nbecause of using pg_auth flat file.\n \nTo override this problem is it possible to use LDAP\nauthentification metod to identify each user and speed up system? How it will\naffect group roles for each user because we use groups roles to give Access to users\nto system objects? Because group roles will work only with postgres users not\nLDAP.\n \npgBouncer or pgPool uses Postgres users for connection\npooling. Is there some more variants to use connection pooling without using\npostgres users?\n \n \n______________________________________\nLauris Ulmanis\nTel. +371 29471020", "msg_date": "Thu, 23 Jul 2009 13:47:47 +0300", "msg_from": "\"Lauris Ulmanis\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres user authentification or LDAP authentification" }, { "msg_contents": "On Thu, Jul 23, 2009 at 12:47, Lauris Ulmanis<[email protected]> wrote:\n> Hello!\n>\n>\n>\n> I posted you a message about slowness of creation users more than 500 000\n> (#4919). It seems there is no workaround of this problem because of using\n> pg_auth flat file.\n>\n>\n>\n> To override this problem is it possible to use LDAP authentification metod\n> to identify each user and speed up system?\n\nNo. LDAP authentication still requires all the users to be created in\nthe database before they can log in. This is required so that they get\nan oid in the system, that is used for all permissions checks and\nownership and such things.\n\nThe only thing you could do here is to map multiple users to the\n*same* database user using pg_ident.conf, for example with a regular\nexpression. However, you then loose the ability to distinguish between\nthese users once they are logged in.\n\n\n> How it will affect group roles\n> for each user because we use groups roles to give Access to users to system\n> objects? Because group roles will work only with postgres users not LDAP.\n\nThe PostgreSQL LDAP code currently has no support for groups.\n\n\n> pgBouncer or pgPool uses Postgres users for connection pooling. Is there\n> some more variants to use connection pooling without using postgres users?\n\nNot that I know of.\n\n\n-- \n Magnus Hagander\n Self: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Fri, 24 Jul 2009 09:17:38 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres user authentification or LDAP authentification" }, { "msg_contents": "Lauris Ulmanis wrote:\n> Hello!\n> \n> \n> \n> I posted you a message about slowness of creation users more than 500 000\n> (#4919). 
It seems there is no workaround of this problem because of using\n> pg_auth flat file.\n> \n> \n> \n> To override this problem is it possible to use LDAP authentification metod\n> to identify each user and speed up system?\n\nNo. The users still need to exist in the PG auth system.\n\nI'm sure this is just some missing optimization. Feel free to work on\nthe code to improve performance for these cases.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sun, 26 Jul 2009 22:00:19 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres user authentification or LDAP authentification" } ]
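Since, as noted in the thread above, the roles still have to exist in the catalog even with LDAP authentication, the usual way to keep per-user setup cheap is the pattern the poster already describes: put the object privileges on a handful of group roles and give each login role nothing but a membership. A minimal generic sketch; the role, table and password names here are invented for illustration:

-- the group role carries the actual privileges once...
CREATE ROLE app_readers NOLOGIN;
GRANT SELECT ON some_table TO app_readers;

-- ...so each of the many login roles is just a lightweight member
CREATE ROLE user_000001 LOGIN PASSWORD 'secret' IN ROLE app_readers;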
[ { "msg_contents": "Hi,\n\n \n\nI posted this question at stackoverflow. Please follow there to see the\nquestion in a nice format as I also posted the code that I used for\nbenchmarking.\n\n \n\nhttp://stackoverflow.com/questions/1174848/postgresql-inserting-blob-at-\na-high-rate\n\n \n\nThe main question is: how do I configure Postgresql such that it's most\nefficient for storing large BLOB at a high-rate?\n\n \n\nThanks,\n\n \n\nS.\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI posted this question at stackoverflow. Please follow there\nto see the question in a nice format as I also posted the code that I used for\nbenchmarking.\n \nhttp://stackoverflow.com/questions/1174848/postgresql-inserting-blob-at-a-high-rate\n \nThe main question is: how do I configure Postgresql such\nthat it’s most efficient for storing large BLOB at a high-rate?\n \nThanks,\n \nS.", "msg_date": "Thu, 23 Jul 2009 16:47:56 -0700", "msg_from": "\"WANGRUNGVICHAISRI, SHIVESH\" <[email protected]>", "msg_from_op": true, "msg_subject": "Configuring Postgresql for writing BLOB at a high-rate" }, { "msg_contents": "SHIVESH WANGRUNGVICHAISRI wrote:\n> The main question is: how do I configure Postgresql such that \n> it's most efficient for storing large BLOB at a high-rate?\n\nRefering to what you wrote on the web site you quoted,\nI would guess that neither tuning WAL nor tuning logging\nwill have much effect.\n\nMy guess would be that the system will be I/O bound from\nwriting the large objects to disk, so maybe tuning on the\noperating system or hardware level might be most effective.\n\nNotice the use of subjunctive mode in the above.\nWhat you should do is: run your test and find out where\nthe bottleneck is. Are the disks very busy and do you see\nsignificant I/O-wait? Then you're I/O bound.\n\nIn that case you could try tuning the I/O part of the kernel\n(you didn't say which operating system) and - easiest of all -\nget rid of that RAID-5 and get a RAID-1 of fast disks.\n\nYours,\nLaurenz Albe\n", "msg_date": "Fri, 24 Jul 2009 08:14:23 +0200", "msg_from": "\"Albe Laurenz\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuring Postgresql for writing BLOB at a high-rate" } ]
[ { "msg_contents": "Hello,\n\nIt seems to me that the following query should be a lot faster. This runs\nin 17 seconds (regardless how many times I run it)\n\nselect ac.* from application_controls_view ac, refs r where\nac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\n\nif I do not use the view the query runs in under 100 ms\n\nselect ac.* from application_controls ac, refs r where\nac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\n\n\nThe view is\n\n SELECT t.action_form_type_ref_id, r1.display AS action_form_type_display,\nt.action_order_type_ref_id, r2.display AS action_order_type_display,\nt.action_view_ref_id, r3.display AS action_view_display, t.active_ind,\nt.application_control_id, t.application_control_name, t.application_view_id,\nt.background_color, t.background_processing_ind, t.base_model_type_ref_id,\nr4.display AS base_model_type_display, t.base_value_script,\nt.class_instantiate_script, t.class_name, t.combo_add_to_list_ind,\nt.combo_category_ref_id, r5.display AS combo_category_display,\nt.combo_dynamic_search_ind, t.combo_filter_ref_id, r6.display AS\ncombo_filter_display, t.combo_group_ref_id, r7.display AS\ncombo_group_display, t.combo_short_display_ind, t.combo_values_term_id,\nt.comparison_operator_ref_id, r8.display AS comparison_operator_display,\nt.context_ref_id, r9.display AS context_display, t.control_description,\nt.control_format, t.control_format_ref_id, r10.display AS\n <snip for brevity>\nt.parameter_ref_id = r30.ref_id AND t.parameter_source_ref_id = r31.ref_id\nAND t.record_item_ref_id = r32.ref_id AND t.repeating_section_view_ref_id =\nr33.ref_id AND t.report_print_ref_id = r34.ref_id AND\nt.right_arrow_action_ref_id = r35.ref_id AND t.right_click_action_ref_id =\nr36.ref_id AND t.section_view_ref_id = r37.ref_id AND t.select_action_ref_id\n= r38.ref_id AND t.source_ref_id = r39.ref_id AND t.state_field_type_ref_id\n= r40.ref_id AND t.table_access_ref_id = r41.ref_id AND t.update_user_ref_id\n= r42.ref_id AND t.value_data_type_ref_id = r43.ref_id;\n\nso basically it joins 43 times to the refs table on the primary key\n\nthe explain confirms the nested loops\n\n\n\" {NESTLOOP \"\n\" :startup_cost 2660771.70 \"\n\" :total_cost 3317979.85 \"\n\" :plan_rows 27 \"\n\" :plan_width 4708 \"\n\" :targetlist (\"\n\" {TARGETENTRY \"\n\" :expr \"\n\" {VAR \"\n\" :varno 65001 \"\n\" :varattno 29 \"\n\" :vartype 20 \"\n\" :vartypmod -1 \"\n\" :varlevelsup 0 \"\n\" :varnoold 5 \"\n <snip for brevity>\n\" -> Index Scan using refs_pk on refs r17 (cost=0.00..5.45\nrows=1 width=50)\"\n\" Index Cond: (r17.ref_id = t.detail_record_item_ref_id)\"\n\" -> Index Scan using refs_pk on refs r1 (cost=0.00..5.45 rows=1\nwidth=50)\"\n\" Index Cond: (r1.ref_id = t.action_form_type_ref_id)\"\n\" -> Index Scan using refs_pk on refs r (cost=0.00..5.45 rows=1 width=8)\"\n\" Index Cond: (r.ref_id = t.custom_controller_ref_id)\"\n\" Filter: ((r.ref_key)::text ~~ '%ERNEST%'::text)\"\n\n\nI did a vacuum analyze and so the primary key (indexes of course) is being\nused. But the above query is still 17s. If I dont return so many columns\nit comes down to around 10 seconds.\n\nselect ac.application_control_id from application_controls_view ac, refs r\nwhere ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%ERNEST%';\n\nBut in either case this is only 37 rows. So 1554 lookups on a unique index\non a table of 34000 rows means 6ms per internal join - note that many of\nthose values are the same.\n\nDoes this seem right to you? 
Anything I can tune ?\n\n\n\n-- \nGregory Caulton\nPrincipal at PatientOS Inc.\npersonal email: [email protected]\nhttp://www.patientos.com\ncorporate: (888)-NBR-1EMR || fax 857.241.3022\n\nHello,It seems to me that the following query should be a lot faster.  This runs in 17 seconds (regardless how many times I run it)select ac.* from application_controls_view ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\nif I do not use the view the query runs in under 100 msselect ac.* from application_controls ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\nThe view is SELECT t.action_form_type_ref_id, r1.display AS action_form_type_display, t.action_order_type_ref_id, r2.display AS action_order_type_display, t.action_view_ref_id, r3.display AS action_view_display, t.active_ind, t.application_control_id, t.application_control_name, t.application_view_id, t.background_color, t.background_processing_ind, t.base_model_type_ref_id, r4.display AS base_model_type_display, t.base_value_script, t.class_instantiate_script, t.class_name, t.combo_add_to_list_ind, t.combo_category_ref_id, r5.display AS combo_category_display, t.combo_dynamic_search_ind, t.combo_filter_ref_id, r6.display AS combo_filter_display, t.combo_group_ref_id, r7.display AS combo_group_display, t.combo_short_display_ind, t.combo_values_term_id, t.comparison_operator_ref_id, r8.display AS comparison_operator_display, t.context_ref_id, r9.display AS context_display, t.control_description, t.control_format, t.control_format_ref_id, r10.display AS \n         <snip for brevity>t.parameter_ref_id = r30.ref_id AND t.parameter_source_ref_id = r31.ref_id AND t.record_item_ref_id = r32.ref_id AND t.repeating_section_view_ref_id = r33.ref_id AND t.report_print_ref_id = r34.ref_id AND t.right_arrow_action_ref_id = r35.ref_id AND t.right_click_action_ref_id = r36.ref_id AND t.section_view_ref_id = r37.ref_id AND t.select_action_ref_id = r38.ref_id AND t.source_ref_id = r39.ref_id AND t.state_field_type_ref_id = r40.ref_id AND t.table_access_ref_id = r41.ref_id AND t.update_user_ref_id = r42.ref_id AND t.value_data_type_ref_id = r43.ref_id;\nso basically it joins 43 times to the refs table on the primary keythe explain confirms the nested loops\"   {NESTLOOP \"\"   :startup_cost 2660771.70 \"\"   :total_cost 3317979.85 \"\n\"   :plan_rows 27 \"\"   :plan_width 4708 \"\"   :targetlist (\"\"      {TARGETENTRY \"\"      :expr \"\"         {VAR \"\"         :varno 65001 \"\n\"         :varattno 29 \"\"         :vartype 20 \"\"         :vartypmod -1 \"\"         :varlevelsup 0 \"\"         :varnoold 5 \"            <snip for brevity>\n\n\"              ->  Index Scan using refs_pk on refs r17  (cost=0.00..5.45 rows=1 width=50)\"\"                    Index Cond: (r17.ref_id = t.detail_record_item_ref_id)\"\"        ->  Index Scan using refs_pk on refs r1  (cost=0.00..5.45 rows=1 width=50)\"\n\"              Index Cond: (r1.ref_id = t.action_form_type_ref_id)\"\"  ->  Index Scan using refs_pk on refs r  (cost=0.00..5.45 rows=1 width=8)\"\"        Index Cond: (r.ref_id = t.custom_controller_ref_id)\"\n\"        Filter: ((r.ref_key)::text ~~ '%ERNEST%'::text)\"I did a vacuum analyze and so the primary key (indexes of course) is being used.  But the above query is still 17s.  
If I dont return so many columns it comes down to around 10 seconds.\nselect ac.application_control_id from application_controls_view ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%ERNEST%';But in either case this is only 37 rows.  So 1554 lookups on a unique index on a table of 34000 rows means 6ms per internal join - note that many of those values are the same.\nDoes this seem right to you?  Anything I can tune ?  -- Gregory CaultonPrincipal at PatientOS Inc.personal email: [email protected]://www.patientos.com\ncorporate: (888)-NBR-1EMR || fax  857.241.3022", "msg_date": "Sun, 26 Jul 2009 01:02:42 -0400", "msg_from": "Greg Caulton <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loop Query performance on PK" }, { "msg_contents": "On Sun, Jul 26, 2009 at 1:02 AM, Greg Caulton <[email protected]> wrote:\n\n> Hello,\n>\n> It seems to me that the following query should be a lot faster. This runs\n> in 17 seconds (regardless how many times I run it)\n>\n> select ac.* from application_controls_view ac, refs r where\n> ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\n>\n> if I do not use the view the query runs in under 100 ms\n>\n> select ac.* from application_controls ac, refs r where\n> ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\n>\n>\n> The view is\n>\n> SELECT t.action_form_type_ref_id, r1.display AS action_form_type_display,\n> t.action_order_type_ref_id, r2.display AS action_order_type_display,\n> t.action_view_ref_id, r3.display AS action_view_display, t.active_ind,\n> t.application_control_id, t.application_control_name, t.application_view_id,\n> t.background_color, t.background_processing_ind, t.base_model_type_ref_id,\n> r4.display AS base_model_type_display, t.base_value_script,\n> t.class_instantiate_script, t.class_name, t.combo_add_to_list_ind,\n> t.combo_category_ref_id, r5.display AS combo_category_display,\n> t.combo_dynamic_search_ind, t.combo_filter_ref_id, r6.display AS\n> combo_filter_display, t.combo_group_ref_id, r7.display AS\n> combo_group_display, t.combo_short_display_ind, t.combo_values_term_id,\n> t.comparison_operator_ref_id, r8.display AS comparison_operator_display,\n> t.context_ref_id, r9.display AS context_display, t.control_description,\n> t.control_format, t.control_format_ref_id, r10.display AS\n> <snip for brevity>\n> t.parameter_ref_id = r30.ref_id AND t.parameter_source_ref_id = r31.ref_id\n> AND t.record_item_ref_id = r32.ref_id AND t.repeating_section_view_ref_id =\n> r33.ref_id AND t.report_print_ref_id = r34.ref_id AND\n> t.right_arrow_action_ref_id = r35.ref_id AND t.right_click_action_ref_id =\n> r36.ref_id AND t.section_view_ref_id = r37.ref_id AND t.select_action_ref_id\n> = r38.ref_id AND t.source_ref_id = r39.ref_id AND t.state_field_type_ref_id\n> = r40.ref_id AND t.table_access_ref_id = r41.ref_id AND t.update_user_ref_id\n> = r42.ref_id AND t.value_data_type_ref_id = r43.ref_id;\n>\n> so basically it joins 43 times to the refs table on the primary key\n>\n> the explain confirms the nested loops\n>\n>\n> \" {NESTLOOP \"\n> \" :startup_cost 2660771.70 \"\n> \" :total_cost 3317979.85 \"\n> \" :plan_rows 27 \"\n> \" :plan_width 4708 \"\n> \" :targetlist (\"\n> \" {TARGETENTRY \"\n> \" :expr \"\n> \" {VAR \"\n> \" :varno 65001 \"\n> \" :varattno 29 \"\n> \" :vartype 20 \"\n> \" :vartypmod -1 \"\n> \" :varlevelsup 0 \"\n> \" :varnoold 5 \"\n> <snip for brevity>\n> \" -> Index Scan using refs_pk on refs r17 (cost=0.00..5.45\n> rows=1 width=50)\"\n> \" Index 
Cond: (r17.ref_id =\n> t.detail_record_item_ref_id)\"\n> \" -> Index Scan using refs_pk on refs r1 (cost=0.00..5.45 rows=1\n> width=50)\"\n> \" Index Cond: (r1.ref_id = t.action_form_type_ref_id)\"\n> \" -> Index Scan using refs_pk on refs r (cost=0.00..5.45 rows=1\n> width=8)\"\n> \" Index Cond: (r.ref_id = t.custom_controller_ref_id)\"\n> \" Filter: ((r.ref_key)::text ~~ '%ERNEST%'::text)\"\n>\n>\n> I did a vacuum analyze and so the primary key (indexes of course) is being\n> used. But the above query is still 17s. If I dont return so many columns\n> it comes down to around 10 seconds.\n>\n> select ac.application_control_id from application_controls_view ac, refs r\n> where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%ERNEST%';\n>\n> But in either case this is only 37 rows. So 1554 lookups on a unique index\n> on a table of 34000 rows means 6ms per internal join - note that many of\n> those values are the same.\n>\n> Does this seem right to you? Anything I can tune ?\n>\n>\n>\n> --\n> Gregory Caulton\n> Principal at PatientOS Inc.\n> personal email: [email protected]\n> http://www.patientos.com\n> corporate: (888)-NBR-1EMR || fax 857.241.3022\n>\n\nOh it seems to be the join that is throwing it off, because this runs in 600\nms\n\nselect ac.* from application_controls_view ac\nwhere ac.application_control_id in (\n50000745,\n50000760,\n50000759,\n50000758,\n50000757,\n50000756,\n50000753,\n50000751,\n50000750,\n50000749,\n50000748,\n50000746,\n50000744,\n50001328,\n50000752,\n50000754,\n50000755,\n50002757,\n50002756,\n50002755,\n50002754,\n50001168,\n50020825,\n50021077,\n50020821,\n50020822,\n50020824,\n50020823,\n50020820,\n50020819,\n50020809,\n50020810,\n50020806,\n50020807,\n50020817,\n50021066,\n50020808\n)\n\n\n\nnever mind, makes sense now - its fixed\n\n\n-- \nGregory Caulton\nPrincipal at PatientOS Inc.\npersonal email: [email protected]\nhttp://www.patientos.com\ncorporate: (888)-NBR-1EMR || fax 857.241.3022\n\nOn Sun, Jul 26, 2009 at 1:02 AM, Greg Caulton <[email protected]> wrote:\nHello,It seems to me that the following query should be a lot faster.  
This runs in 17 seconds (regardless how many times I run it)select ac.* from application_controls_view ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\nif I do not use the view the query runs in under 100 msselect ac.* from application_controls ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\nThe view is SELECT t.action_form_type_ref_id, r1.display AS action_form_type_display, t.action_order_type_ref_id, r2.display AS action_order_type_display, t.action_view_ref_id, r3.display AS action_view_display, t.active_ind, t.application_control_id, t.application_control_name, t.application_view_id, t.background_color, t.background_processing_ind, t.base_model_type_ref_id, r4.display AS base_model_type_display, t.base_value_script, t.class_instantiate_script, t.class_name, t.combo_add_to_list_ind, t.combo_category_ref_id, r5.display AS combo_category_display, t.combo_dynamic_search_ind, t.combo_filter_ref_id, r6.display AS combo_filter_display, t.combo_group_ref_id, r7.display AS combo_group_display, t.combo_short_display_ind, t.combo_values_term_id, t.comparison_operator_ref_id, r8.display AS comparison_operator_display, t.context_ref_id, r9.display AS context_display, t.control_description, t.control_format, t.control_format_ref_id, r10.display AS \n\n         <snip for brevity>t.parameter_ref_id = r30.ref_id AND t.parameter_source_ref_id = r31.ref_id AND t.record_item_ref_id = r32.ref_id AND t.repeating_section_view_ref_id = r33.ref_id AND t.report_print_ref_id = r34.ref_id AND t.right_arrow_action_ref_id = r35.ref_id AND t.right_click_action_ref_id = r36.ref_id AND t.section_view_ref_id = r37.ref_id AND t.select_action_ref_id = r38.ref_id AND t.source_ref_id = r39.ref_id AND t.state_field_type_ref_id = r40.ref_id AND t.table_access_ref_id = r41.ref_id AND t.update_user_ref_id = r42.ref_id AND t.value_data_type_ref_id = r43.ref_id;\nso basically it joins 43 times to the refs table on the primary keythe explain confirms the nested loops\"   {NESTLOOP \"\"   :startup_cost 2660771.70 \"\"   :total_cost 3317979.85 \"\n\n\"   :plan_rows 27 \"\"   :plan_width 4708 \"\"   :targetlist (\"\"      {TARGETENTRY \"\"      :expr \"\"         {VAR \"\"         :varno 65001 \"\n\n\"         :varattno 29 \"\"         :vartype 20 \"\"         :vartypmod -1 \"\"         :varlevelsup 0 \"\"         :varnoold 5 \"            <snip for brevity>\n\n\n\"              ->  Index Scan using refs_pk on refs r17  (cost=0.00..5.45 rows=1 width=50)\"\"                    Index Cond: (r17.ref_id = t.detail_record_item_ref_id)\"\"        ->  Index Scan using refs_pk on refs r1  (cost=0.00..5.45 rows=1 width=50)\"\n\n\"              Index Cond: (r1.ref_id = t.action_form_type_ref_id)\"\"  ->  Index Scan using refs_pk on refs r  (cost=0.00..5.45 rows=1 width=8)\"\"        Index Cond: (r.ref_id = t.custom_controller_ref_id)\"\n\n\"        Filter: ((r.ref_key)::text ~~ '%ERNEST%'::text)\"I did a vacuum analyze and so the primary key (indexes of course) is being used.  But the above query is still 17s.  If I dont return so many columns it comes down to around 10 seconds.\nselect ac.application_control_id from application_controls_view ac, refs r where ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%ERNEST%';But in either case this is only 37 rows.  So 1554 lookups on a unique index on a table of 34000 rows means 6ms per internal join - note that many of those values are the same.\nDoes this seem right to you?  Anything I can tune ?  
-- Gregory CaultonPrincipal at PatientOS Inc.personal email: [email protected]\nhttp://www.patientos.com\ncorporate: (888)-NBR-1EMR || fax  857.241.3022\nOh it seems to be the join that is throwing it off, because this runs in 600 msselect ac.* from application_controls_view acwhere ac.application_control_id in (50000745,50000760,\n50000759,50000758,50000757,50000756,50000753,50000751,50000750,50000749,50000748,50000746,50000744,50001328,50000752,50000754,50000755,50002757,50002756,\n50002755,50002754,50001168,50020825,50021077,50020821,50020822,50020824,50020823,50020820,50020819,50020809,50020810,50020806,50020807,50020817,50021066,\n50020808)never mind, makes sense now  - its fixed-- Gregory CaultonPrincipal at PatientOS Inc.personal email: [email protected]\nhttp://www.patientos.comcorporate: (888)-NBR-1EMR || fax  857.241.3022", "msg_date": "Sun, 26 Jul 2009 01:09:32 -0400", "msg_from": "Greg Caulton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested loop Query performance on PK" }, { "msg_contents": "Hello,\n\nLe 26/07/09 7:09, Greg Caulton a �crit :\n> On Sun, Jul 26, 2009 at 1:02 AM, Greg Caulton <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Hello,\n> \n> It seems to me that the following query should be a lot faster. \n> This runs in 17 seconds (regardless how many times I run it)\n> \n> select ac.* from application_controls_view ac, refs r where\n> ac.custom_controller_ref_id = r.ref_id and r.ref_key like '%XYZ%';\n> [...]\n> Does this seem right to you? Anything I can tune ? \n> [...]\n> \n> Oh it seems to be the join that is throwing it off, because this runs in\n> 600 ms\n> \n> select ac.* from application_controls_view ac\n> where ac.application_control_id in (\n> 50000745,\n> 50000760, \n> [...]\n> 50021066,\n> 50020808\n> )\n> \n> never mind, makes sense now - its fixed\n> [...]\n\nThe following rewritten query may be satisfiable for the generic case of\nusing arbitrary LIKE pattern for refs.ref_key and performing in a short\nacceptable time as well:\n\nSELECT ac.*\nFROM application_controls_view AS ac\nINNER JOIN (\n SELECT ref_id\n FROM refs\n WHERE ref_key LIKE '%XYZ%'\n) AS r\nON ac.custom_controller_ref_id = r.ref_id;\n\nThe hint is to build a subquery, from refs table, and to move in the\nWHERE clause that only refers to refs column (ref_key here). This\nsubquery results in a shorter table than the original (refs here),\nthence reducing the number of joins to perform with ac (no matter\nworking with view or original table).\n\nRegards.\n\n--\nnha / Lyon / France.\n", "msg_date": "Sun, 26 Jul 2009 18:28:36 +0200", "msg_from": "nha <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested loop Query performance on PK" } ]
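A brief restatement of the fix from the thread above, using the thread's own names (application_controls_view, refs, ref_key). The IN form below is an equivalent reformulation of nha's derived-table join (since ref_id is the primary key of refs the two are interchangeable), and it mirrors the explicit ID list the original poster found fast. Untested against the actual schema:

SELECT ac.*
FROM application_controls_view ac
WHERE ac.custom_controller_ref_id IN (
    -- shrink refs to the few matching rows first, then probe the view by key,
    -- instead of letting the view expand its 43 joins for every candidate row
    SELECT ref_id
    FROM refs
    WHERE ref_key LIKE '%XYZ%'
);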
[ { "msg_contents": "Hello,\n\nI am trying to optimize the count of files when I am using filters\n(select by some row/s parameter/s)\n\nIn this case I think that postgresql really count all files.\nResulting in unacceptable times of 4 seconds in http server response.\nTriggers+store in this case do not see very acceptable, because I need\nstore 1.5 millions of counting possibilities.\n\nMy question is:\nAny method for indirect count like ordered indexes + quadratic count?\nAny module?\nAny suggestion?\n\n-- \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n--\n--\nPublicidad y Servicios http://www.pas-world.com\nDirectorio http://www.precioventa.com\nTienda http://informatica.precioventa.com/es/\nAutoridad certificadora http://ca.precioventa.com/es/\n--\n--\n\n", "msg_date": "Mon, 27 Jul 2009 11:06:41 +0200", "msg_from": "Developer <[email protected]>", "msg_from_op": true, "msg_subject": "More speed counting rows" }, { "msg_contents": "On Mon, Jul 27, 2009 at 3:06 AM, Developer<[email protected]> wrote:\n> Hello,\n>\n> I am trying to optimize the count of files when I am using filters\n> (select by some row/s parameter/s)\n>\n> In this case I think that postgresql really count all files.\n> Resulting in unacceptable times of 4 seconds in http server response.\n> Triggers+store in this case do not see very acceptable, because I need\n> store 1.5 millions of counting possibilities.\n>\n> My question is:\n> Any method for indirect count like ordered indexes + quadratic count?\n> Any module?\n> Any suggestion?\n\nPostgres cannot just use indexes, it has tot hit the tables. Rather\nthan suspecting what pgsql is doing, use explain analyze select ... to\nsee what your query is actually doing. If it is having to scan the\ntable each time, then faster IO or a different table layout may be in\norder.\n", "msg_date": "Mon, 27 Jul 2009 07:39:32 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More speed counting rows" }, { "msg_contents": "Developer wrote:\n> Hello,\n> \n> I am trying to optimize the count of files when I am using filters\n> (select by some row/s parameter/s)\n> \n> In this case I think that postgresql really count all files.\n> Resulting in unacceptable times of 4 seconds in http server response.\n> Triggers+store in this case do not see very acceptable, because I need\n> store 1.5 millions of counting possibilities.\n> \n> My question is:\n> Any method for indirect count like ordered indexes + quadratic count?\n> Any module?\n> Any suggestion?\n> \n\nI had a similar problem where HTTP requests triggered a count(*) over a \ntable that was growing rapidly. The bigger the table got, the longer \nthe count took. In my case, however, the counts only have to be a \nreasonable estimate of the current state, so I solved this problem with \na count_sums table that gets updated every 30 minutes using a simple \nperl script in a cron job. The HTTP requests now trigger a very fast \nselect from a tiny, 9 row, 2 column table.\n\nHow \"up to date\" do the counts need to be? If the count takes 4 \nseconds, can you run it every minute and store the counts in a table for \nretrieval by the HTTP requests? Or does it absolutely have to be the \nexact count at the moment of the request?\n\nIf it needs to be more real-time, you could expand on this by adding \npost insert/delete triggers that automatically update the counts table \nto keep it current. 
In my case it just wasn't necessary.\n\n\t- Chris\n", "msg_date": "Mon, 27 Jul 2009 08:59:08 -0600", "msg_from": "Chris Ernst <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More speed counting rows" }, { "msg_contents": "On Mon, Jul 27, 2009 at 5:06 AM, Developer<[email protected]> wrote:\n> Hello,\n>\n> I am trying to optimize the count of files when I am using filters\n> (select by some row/s parameter/s)\n> My question is:\n> Any method for indirect count like ordered indexes + quadratic count?\n> Any module?\n> Any suggestion?\n\nIf all you need is a good-enough estimate, you can try reporting the\ntuples count out of the stats tables. It'll only be as up-to-date as\nautovac makes it, though. I do this in one app to give me ballpark\nfigures for some constantly-growing tables.\n\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Mon, 27 Jul 2009 11:25:23 -0400", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More speed counting rows" }, { "msg_contents": "\n> How \"up to date\" do the counts need to be? If the count takes 4 \n> seconds, can you run it every minute and store the counts in a table for \n> retrieval by the HTTP requests? \nNow, I am storing integer value for filter in memory with timeout, but\nin busy server, system sure crash without system memory (>700MB for all\ncombinations, if combinations counted > deleted memory count by\ntimeout).\n\n> Or does it absolutely have to be the \n> exact count at the moment of the request?\nSome applications could fail, without exact number.\n\n\n\n-- \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n--\n--\nPublicidad y Servicios http://www.pas-world.com\nDirectorio http://www.precioventa.com\nTienda http://informatica.precioventa.com/es/\nAutoridad certificadora http://ca.precioventa.com/es/\n--\n--\n\n", "msg_date": "Mon, 27 Jul 2009 21:04:39 +0200", "msg_from": "Developer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: More speed counting rows" } ]
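A minimal sketch of the trigger-maintained counter table suggested above, plus the statistics-based estimate David mentions. All object names here (row_counts, files, bump_row_count) are illustrative rather than taken from the poster's schema, and a filtered count would need one counter row per filter combination instead of one per table:

-- one row per counted table; seed it once with the current count
CREATE TABLE row_counts (
    table_name text PRIMARY KEY,
    n          bigint NOT NULL
);

CREATE OR REPLACE FUNCTION bump_row_count() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
    ELSE  -- DELETE
        UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
    END IF;
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER files_count_trg
    AFTER INSERT OR DELETE ON files
    FOR EACH ROW EXECUTE PROCEDURE bump_row_count();

-- good-enough ballpark straight from the planner statistics, no trigger needed
SELECT reltuples::bigint AS approx_rows FROM pg_class WHERE relname = 'files';

Note that every insert or delete now updates the same counter row, so on a very busy table the trigger itself becomes a point of contention; the cron-refreshed summary table Chris describes avoids that at the cost of some staleness.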
[ { "msg_contents": "Hi,\nsubject is the following type of query needed in a function to select data:\n\nSELECT ' 13.04.2009 12:00:00 ' AS zeit,\n \n'M' AS ganglinientyp,\n \nm.zs_nr AS zs,\n \nj_ges,\n \nde_mw_abh_j_lkw(mw_abh) AS j_lkw,\n \nde_mw_abh_v_pkw(mw_abh) AS v_pkw,\n \nde_mw_abh_v_lkw(mw_abh) AS v_lkw,\n \nde_mw_abh_p_bel(mw_abh) AS p_bel\n \nFROM messungen_v_dat_2009_04_13 m\n \nINNER JOIN de_mw w ON w.nr = m.mw_nr\n \nWHERE m.ganglinientyp = 'M'\n \nAND ' 890 ' = m.minute_tag;\nexplain analyse brings up \n\nNested Loop (cost=0.00..66344.47 rows=4750 width=10) (actual \ntime=134.160..19574.228 rows=4148 loops=1)\n -> Index Scan using messungen_v_dat_2009_04_13_gtyp_minute_tag_idx \non messungen_v_dat_2009_04_13 m (cost=0.00..10749.14 rows=4750 width=8) \n(actual time=64.681..284.732 rows=4148 loops=1)\n Index Cond: ((ganglinientyp = 'M'::bpchar) AND (891::smallint = \nminute_tag))\n -> Index Scan using de_nw_nr_idx on de_mw w (cost=0.00..10.69 \nrows=1 width=10) (actual time=4.545..4.549 rows=1 loops=4148)\n Index Cond: (w.nr = m.mw_nr)\n Total runtime: 19590.078 ms\n\nSeems quite slow to me.\nIs this query plan near to optimal or are their any serious flaws?\n", "msg_date": "Mon, 27 Jul 2009 16:09:12 +0200", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "select query performance question" }, { "msg_contents": "Hello\n\nmaybe is wrong tip, but your function like de* should be slow. What is\ntime of query without calling these functions?\n\nPavel Stehule\n\n2009/7/27 Thomas Zaksek <[email protected]>:\n> Hi,\n> subject is the following type of query needed in a function to select data:\n>\n> SELECT ' 13.04.2009 12:00:00 ' AS zeit,\n>\n>\n>    'M' AS ganglinientyp,\n>\n>\n>     m.zs_nr AS zs,\n>\n>\n>  j_ges,\n>\n>\n>  de_mw_abh_j_lkw(mw_abh) AS j_lkw,\n>\n>\n>  de_mw_abh_v_pkw(mw_abh) AS v_pkw,\n>\n>\n>   de_mw_abh_v_lkw(mw_abh) AS v_lkw,\n>\n>\n>    de_mw_abh_p_bel(mw_abh) AS p_bel\n>\n>\n>  FROM messungen_v_dat_2009_04_13 m\n>\n>\n> INNER JOIN de_mw w ON w.nr = m.mw_nr\n>\n>\n>  WHERE  m.ganglinientyp = 'M'\n>\n>                                                                         AND\n> ' 890 ' = m.minute_tag;\n> explain analyse brings up\n> Nested Loop  (cost=0.00..66344.47 rows=4750 width=10) (actual\n> time=134.160..19574.228 rows=4148 loops=1)\n>  ->  Index Scan using messungen_v_dat_2009_04_13_gtyp_minute_tag_idx on\n> messungen_v_dat_2009_04_13 m  (cost=0.00..10749.14 rows=4750 width=8)\n> (actual time=64.681..284.732 rows=4148 loops=1)\n>        Index Cond: ((ganglinientyp = 'M'::bpchar) AND (891::smallint =\n> minute_tag))\n>  ->  Index Scan using de_nw_nr_idx on de_mw w  (cost=0.00..10.69 rows=1\n> width=10) (actual time=4.545..4.549 rows=1 loops=4148)\n>        Index Cond: (w.nr = m.mw_nr)\n> Total runtime: 19590.078 ms\n>\n> Seems quite slow to me.\n> Is this query plan near to optimal or are their any serious flaws?\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 27 Jul 2009 16:22:08 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query performance question" }, { "msg_contents": "On Mon, 27 Jul 2009, Thomas Zaksek wrote:\n> Nested Loop (cost=0.00..66344.47 rows=4750 width=10)\n> (actual time=134.160..19574.228 rows=4148 loops=1)\n> -> Index Scan using messungen_v_dat_2009_04_13_gtyp_minute_tag_idx on 
messungen_v_dat_2009_04_13 m\n> (cost=0.00..10749.14 rows=4750 width=8)\n> (actual time=64.681..284.732 rows=4148 loops=1)\n> Index Cond: ((ganglinientyp = 'M'::bpchar) AND (891::smallint = > minute_tag))\n> -> Index Scan using de_nw_nr_idx on de_mw w\n> (cost=0.00..10.69 rows=1 width=10)\n> (actual time=4.545..4.549 rows=1 loops=4148)\n> Index Cond: (w.nr = m.mw_nr)\n> Total runtime: 19590.078 ms\n>\n> Seems quite slow to me.\n\nNot necessarily. Consider that your query is fetching 4148 different rows \nin an index scan. That means that your index finds 4148 row locations on \ndisc, and 4148 separate disc operations need to be performed to fetch \nthem. If you divide the time taken by that number, you get:\n\n19590.078 / 4148 = 4.7 (milliseconds per seek)\n\nWhich seems quite good actually. That's as fast as hard drives work.\n\nNow if the data was in cache, it would be a completely different story - I \nwould expect the whole query to complete within a few milliseconds.\n\nMatthew\n\n-- \n And why do I do it that way? Because I wish to remain sane. Um, actually,\n maybe I should just say I don't want to be any worse than I already am.\n - Computer Science Lecturer\n", "msg_date": "Mon, 27 Jul 2009 15:43:26 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query performance question" }, { "msg_contents": "Thomas Zaksek <[email protected]> wrote: \n \n> Is this query plan near to optimal or are their any serious flaws?\n \nI didn't see any problem with the query, but with the information\nprovided, we can't really tell if you need to reconfigure something,\nor maybe add an index.\n \nThe plan generated for the query is doing an index scan and on one\ntable and randomly accessing related rows in another, with an average\ntime per result row of about 4ms. Either you've got *really* fast\ndrives or you're getting some benefit from cache. Some obvious\nquestions:\n \nWhat version of PostgreSQL is this?\n \nWhat OS is the server on?\n \nWhat does the server hardware look like? (RAM, drive array, etc.)\n \nWhat are the non-default lines in the postgresql.conf file?\n \nWhat are the definitions of these two tables? How many rows?\n \n-Kevin\n", "msg_date": "Mon, 27 Jul 2009 09:43:38 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query performance question" }, { "msg_contents": "Hi Thomas,\n\nHow is 'messungen_v_dat_2009_04_13_gtyp_minute_tag_idx' defined? 
What is \nthe row count for the table?\n\nMike\n\n> Hi,\n> subject is the following type of query needed in a function to select \n> data:\n>\n> SELECT ' 13.04.2009 12:00:00 ' AS zeit,\n> \n> 'M' AS ganglinientyp,\n> \n> m.zs_nr AS zs,\n> \n> j_ges,\n> \n> de_mw_abh_j_lkw(mw_abh) AS j_lkw,\n> \n> de_mw_abh_v_pkw(mw_abh) AS v_pkw,\n> \n> de_mw_abh_v_lkw(mw_abh) AS v_lkw,\n> \n> de_mw_abh_p_bel(mw_abh) AS p_bel\n> \n> FROM messungen_v_dat_2009_04_13 m\n> \n> INNER JOIN de_mw w ON w.nr = m.mw_nr\n> \n> WHERE m.ganglinientyp = 'M'\n> \n> AND ' 890 ' = m.minute_tag;\n> explain analyse brings up\n> Nested Loop (cost=0.00..66344.47 rows=4750 width=10) (actual \n> time=134.160..19574.228 rows=4148 loops=1)\n> -> Index Scan using messungen_v_dat_2009_04_13_gtyp_minute_tag_idx \n> on messungen_v_dat_2009_04_13 m (cost=0.00..10749.14 rows=4750 \n> width=8) (actual time=64.681..284.732 rows=4148 loops=1)\n> Index Cond: ((ganglinientyp = 'M'::bpchar) AND (891::smallint \n> = minute_tag))\n> -> Index Scan using de_nw_nr_idx on de_mw w (cost=0.00..10.69 \n> rows=1 width=10) (actual time=4.545..4.549 rows=1 loops=4148)\n> Index Cond: (w.nr = m.mw_nr)\n> Total runtime: 19590.078 ms\n>\n> Seems quite slow to me.\n> Is this query plan near to optimal or are their any serious flaws?\n>\n\n", "msg_date": "Mon, 27 Jul 2009 16:31:00 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select query performance question" }, { "msg_contents": "Kevin Grittner wrote:\n> Thomas Zaksek <[email protected]> wrote: \n> \n> \n>> Is this query plan near to optimal or are their any serious flaws?\n>> \n> \n> I didn't see any problem with the query, but with the information\n> provided, we can't really tell if you need to reconfigure something,\n> or maybe add an index.\n> \n> The plan generated for the query is doing an index scan and on one\n> table and randomly accessing related rows in another, with an average\n> time per result row of about 4ms. Either you've got *really* fast\n> drives or you're getting some benefit from cache. Some obvious\n> questions:\n> \n> What version of PostgreSQL is this?\n> \n> What OS is the server on?\n> \n> What does the server hardware look like? (RAM, drive array, etc.)\n> \n> What are the non-default lines in the postgresql.conf file?\n> \n> What are the definitions of these two tables? 
How many rows?\n> \n> -Kevin\n> \nPostgresql 8.3\n\nFreebsd 7.2\n\nA HP Server with Dual Opteron, 8GB Ram and a RAID 5 SCSI System\n\n\\d+ de_mw;\n Table \"de_mw\"\n Column | Type | Modifiers \n| Description\n---------+----------+----------------------------------------------------+-------------\n nr | integer | not null default nextval('de_mw_nr_seq'::regclass) |\n j_ges | smallint | |\n mw_abh | integer | |\n mw_test | bit(19) | |\nIndexes:\n \"de_mw_pkey\" PRIMARY KEY, btree (nr)\n \"de_mw_j_ges_key\" UNIQUE, btree (j_ges, mw_abh, mw_test)\n \"de_nw_nr_idx\" btree (nr)\nHas OIDs: no\n\n\n\\d+ messungen_v_dat_2009_04_13\n Table \"messungen_v_dat_2009_04_13\"\n Column | Type | Modifiers | Description\n---------------+--------------+-----------+-------------\n ganglinientyp | character(1) | not null |\n minute_tag | smallint | not null |\n zs_nr | integer | not null |\n mw_nr | integer | |\nIndexes:\n \"messungen_v_dat_2009_04_13_pkey\" PRIMARY KEY, btree (ganglinientyp, \nminute_tag, zs_nr)\n \"messungen_v_dat_2009_04_13_gtyp_minute_tag_idx\" btree \n(ganglinientyp, minute_tag)\n \"messungen_v_dat_2009_04_13_gtyp_minute_tag_zs_nr_idx\" btree \n(ganglinientyp, minute_tag, zs_nr)\n \"messungen_v_dat_2009_04_13_minute_tag_idx\" btree (minute_tag)\nForeign-key constraints:\n \"messungen_v_dat_2009_04_13_mw_nr_fkey\" FOREIGN KEY (mw_nr) \nREFERENCES de_mw(nr)\n \"messungen_v_dat_2009_04_13_zs_nr_fkey\" FOREIGN KEY (zs_nr) \nREFERENCES de_zs(zs)\nInherits: messungen_v_dat\nHas OIDs: no\n\nselect count(*) from messungen_v_dat_2009_04_13\ntraffic_nrw_0_4_0-# ;\n count\n---------\n 6480685\n(1 row)\n\n\ntraffic_nrw_0_4_0=# select count(*) from de_mw;\n count\n----------\n 23853134\n(1 row)\n\n\n\n", "msg_date": "Wed, 29 Jul 2009 12:37:28 +0200", "msg_from": "Thomas Zaksek <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select query performance question" } ]
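One way to test Matthew's diagnosis (roughly 4.7 ms per inner index probe, essentially one disk seek per de_mw row) is the block I/O statistics view available on the poster's 8.3; this check is a suggestion of this note, not something proposed in the thread:

SELECT relname,
       heap_blks_read, heap_blks_hit,
       idx_blks_read,  idx_blks_hit
FROM pg_statio_user_tables
WHERE relname IN ('de_mw', 'messungen_v_dat_2009_04_13');

If heap_blks_read grows roughly in step with the number of result rows while the query runs, the lookups really are going to disk, and the realistic improvements are more RAM for cache or better physical locality of the de_mw rows that are fetched together.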
[ { "msg_contents": "Hi. I'm seeing some weird behavior in Postgres. I'm running read only\nqueries (SELECT that is - no UPDATE or DELETE or INSERT is happening at\nall). I can run one rather complicated query and the results come back...\neventually. Likewise with another. But, when I run both queries at the\nsame time, Postgres seems to ground to a halt. Neither one completes. In\nfact, pgAdmin locks up - I need to cancel them using psql.\nI'd expect this from MySQL but not Postgres. Am I doing something wrong? Or\nmissing something?\n\nHi.  I'm seeing some weird behavior in Postgres.  I'm running read only queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at all).  I can run one rather complicated query and the results come back... eventually.  Likewise with another.  But, when I run both queries at the same time, Postgres seems to ground to a halt.  Neither one completes.  In fact, pgAdmin locks up - I need to cancel them using psql.\nI'd expect this from MySQL but not Postgres.  Am I doing something wrong? Or missing something?", "msg_date": "Mon, 27 Jul 2009 20:54:05 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Will Postgres ever lock with read only queries?" }, { "msg_contents": "On 07/27/2009 08:54 PM, Robert James wrote:\n> Hi. I'm seeing some weird behavior in Postgres. I'm running read \n> only queries (SELECT that is - no UPDATE or DELETE or INSERT is \n> happening at all). I can run one rather complicated query and the \n> results come back... eventually. Likewise with another. But, when I \n> run both queries at the same time, Postgres seems to ground to a halt. \n> Neither one completes. In fact, pgAdmin locks up - I need to cancel \n> them using psql.\n> I'd expect this from MySQL but not Postgres. Am I doing something \n> wrong? Or missing something?\n\nI've never had straight queries block each other. What is the query? \nWhat version of PostgreSQL? What operating system?\n\nCheers,\nmark\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Mon, 27 Jul 2009 21:02:08 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will Postgres ever lock with read only queries?" }, { "msg_contents": "Robert James wrote:\n> Hi. I'm seeing some weird behavior in Postgres. I'm running read only \n> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at \n> all). I can run one rather complicated query and the results come \n> back... eventually. Likewise with another. But, when I run both \n> queries at the same time, Postgres seems to ground to a halt. Neither \n> one completes. In fact, pgAdmin locks up - I need to cancel them using \n> psql.\n> I'd expect this from MySQL but not Postgres. Am I doing something \n> wrong? Or missing something?\n\nThey're probably not blocking each other but more likely you're \nexhausting your servers resources. If they return \"eventually\" \nindividually, then running both at the same time will take at least \n\"eventually x2\".\n\nAs Mark said, what are the queries? What postgres version? What o/s? \nWhat are your hardware specs (how much memory, disk speeds/types etc)?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Tue, 28 Jul 2009 11:21:01 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will Postgres ever lock with read only queries?" }, { "msg_contents": "Chris <[email protected]> writes:\n> Robert James wrote:\n>> Hi. 
I'm seeing some weird behavior in Postgres. I'm running read only \n>> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at \n>> all). I can run one rather complicated query and the results come \n>> back... eventually. Likewise with another. But, when I run both \n>> queries at the same time, Postgres seems to ground to a halt.\n\n> They're probably not blocking each other but more likely you're \n> exhausting your servers resources. If they return \"eventually\" \n> individually, then running both at the same time will take at least \n> \"eventually x2\".\n\nIt could be a lot more than x2. If the two queries together eat enough\nRAM to drive the machine into swapping, where it didn't swap while\ndoing one at a time, the slowdown could be orders of magnitude.\n\nWatching vmstat output might be informative --- it would at least give\nan idea if the bottleneck is CPU, I/O, or swap.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Jul 2009 21:40:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will Postgres ever lock with read only queries? " }, { "msg_contents": "Thanks for the replies. I'm running Postgres 8.2 on Windows XP, Intel Core\nDuo (though Postgres seems to use only one 1 core).\nThe queries are self joins on very large tables, with lots of nested loops.\n\nOn Mon, Jul 27, 2009 at 9:40 PM, Tom Lane <[email protected]> wrote:\n\n> Chris <[email protected]> writes:\n> > Robert James wrote:\n> >> Hi. I'm seeing some weird behavior in Postgres. I'm running read only\n> >> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at\n> >> all). I can run one rather complicated query and the results come\n> >> back... eventually. Likewise with another. But, when I run both\n> >> queries at the same time, Postgres seems to ground to a halt.\n>\n> > They're probably not blocking each other but more likely you're\n> > exhausting your servers resources. If they return \"eventually\"\n> > individually, then running both at the same time will take at least\n> > \"eventually x2\".\n>\n> It could be a lot more than x2. If the two queries together eat enough\n> RAM to drive the machine into swapping, where it didn't swap while\n> doing one at a time, the slowdown could be orders of magnitude.\n>\n> Watching vmstat output might be informative --- it would at least give\n> an idea if the bottleneck is CPU, I/O, or swap.\n>\n> regards, tom lane\n>\n\nThanks for the replies.  I'm running Postgres 8.2 on Windows XP, Intel Core Duo (though Postgres seems to use only one 1 core).The queries are self joins on very large tables, with lots of nested loops.\nOn Mon, Jul 27, 2009 at 9:40 PM, Tom Lane <[email protected]> wrote:\nChris <[email protected]> writes:\n> Robert James wrote:\n>> Hi.  I'm seeing some weird behavior in Postgres.  I'm running read only\n>> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at\n>> all).  I can run one rather complicated query and the results come\n>> back... eventually.  Likewise with another.  But, when I run both\n>> queries at the same time, Postgres seems to ground to a halt.\n\n> They're probably not blocking each other but more likely you're\n> exhausting your servers resources. If they return \"eventually\"\n> individually, then running both at the same time will take at least\n> \"eventually x2\".\n\nIt could be a lot more than x2.  
If the two queries together eat enough\nRAM to drive the machine into swapping, where it didn't swap while\ndoing one at a time, the slowdown could be orders of magnitude.\n\nWatching vmstat output might be informative --- it would at least give\nan idea if the bottleneck is CPU, I/O, or swap.\n\n                        regards, tom lane", "msg_date": "Tue, 28 Jul 2009 09:17:56 -0400", "msg_from": "Robert James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Will Postgres ever lock with read only queries?" }, { "msg_contents": "Can you run those two queries with psql?\n\nI remember having some trouble running multiple queries in the same pgadmin\nprocess. Both would get stuck until both finished I think. I went to\nrunning a pgadmin process per query.\n\nOn Tue, Jul 28, 2009 at 9:17 AM, Robert James <[email protected]>wrote:\n\n> Thanks for the replies. I'm running Postgres 8.2 on Windows XP, Intel Core\n> Duo (though Postgres seems to use only one 1 core).\n> The queries are self joins on very large tables, with lots of nested loops.\n>\n> On Mon, Jul 27, 2009 at 9:40 PM, Tom Lane <[email protected]> wrote:\n>\n>> Chris <[email protected]> writes:\n>> > Robert James wrote:\n>> >> Hi. I'm seeing some weird behavior in Postgres. I'm running read only\n>> >> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at\n>> >> all). I can run one rather complicated query and the results come\n>> >> back... eventually. Likewise with another. But, when I run both\n>> >> queries at the same time, Postgres seems to ground to a halt.\n>>\n>> > They're probably not blocking each other but more likely you're\n>> > exhausting your servers resources. If they return \"eventually\"\n>> > individually, then running both at the same time will take at least\n>> > \"eventually x2\".\n>>\n>> It could be a lot more than x2. If the two queries together eat enough\n>> RAM to drive the machine into swapping, where it didn't swap while\n>> doing one at a time, the slowdown could be orders of magnitude.\n>>\n>> Watching vmstat output might be informative --- it would at least give\n>> an idea if the bottleneck is CPU, I/O, or swap.\n>>\n>> regards, tom lane\n>>\n>\n>\n\nCan you run those two queries with psql?I remember having some trouble running multiple queries in the same pgadmin process.  Both would get stuck until both finished I think.  I went to running a pgadmin process per query.\nOn Tue, Jul 28, 2009 at 9:17 AM, Robert James <[email protected]> wrote:\nThanks for the replies.  I'm running Postgres 8.2 on Windows XP, Intel Core Duo (though Postgres seems to use only one 1 core).The queries are self joins on very large tables, with lots of nested loops.\n\nOn Mon, Jul 27, 2009 at 9:40 PM, Tom Lane <[email protected]> wrote:\nChris <[email protected]> writes:\n> Robert James wrote:\n>> Hi.  I'm seeing some weird behavior in Postgres.  I'm running read only\n>> queries (SELECT that is - no UPDATE or DELETE or INSERT is happening at\n>> all).  I can run one rather complicated query and the results come\n>> back... eventually.  Likewise with another.  But, when I run both\n>> queries at the same time, Postgres seems to ground to a halt.\n\n> They're probably not blocking each other but more likely you're\n> exhausting your servers resources. If they return \"eventually\"\n> individually, then running both at the same time will take at least\n> \"eventually x2\".\n\nIt could be a lot more than x2.  
If the two queries together eat enough\nRAM to drive the machine into swapping, where it didn't swap while\ndoing one at a time, the slowdown could be orders of magnitude.\n\nWatching vmstat output might be informative --- it would at least give\nan idea if the bottleneck is CPU, I/O, or swap.\n\n                        regards, tom lane", "msg_date": "Tue, 28 Jul 2009 10:25:45 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will Postgres ever lock with read only queries?" }, { "msg_contents": "Robert James wrote:\n> Thanks for the replies. I'm running Postgres 8.2 on Windows XP, Intel \n> Core Duo (though Postgres seems to use only one 1 core).\n\nA single query can only use one core, but it will use both if multiple \nqueries come in.\n\n> The queries are self joins on very large tables, with lots of nested loops.\n\nIf you want help optimizing them, you'll need to send through\n- explain analyze\n- table definitions\nand of course\n- the query itself\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Wed, 29 Jul 2009 08:36:12 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Will Postgres ever lock with read only queries?" } ]
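A quick way to confirm that the two SELECTs are merely competing for resources rather than actually blocked, using the catalog columns as they were named in 8.2 (the poster's version):

SELECT procpid, waiting, current_query
FROM pg_stat_activity;

SELECT locktype, relation::regclass AS relation, mode, granted, pid
FROM pg_locks
WHERE NOT granted;

If waiting is false for both backends and pg_locks shows no ungranted locks, nothing is locked; the stall is CPU, disk, or swap pressure as Tom and Chris describe, and vmstat will show which.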
[ { "msg_contents": "I understand that checkpointing is a necessary part of a pgsql\ndatabase, but I am also under the impression that you want to find a\nbalance between how frequently you checkpoint and how much a given\ncheckpoint has to do. It's all about balancing the disk I/O out to get\na consistent throughput and forstall the db 'stalling' while it writes\nout large checkpoints. However, when I check out our production\nsystem, I think we're checkpointing a little too frequently (I am\n_not_ referring to the 'checkpointing too fast' message). An example:\nJul 26 04:40:05 checkpoint starting: time\nJul 26 04:40:35 checkpoint complete: wrote 150 buffers (0.1%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=29.836 s,\nJul 26 04:40:35 sync=0.128 s, total=29.974 s\nJul 26 04:45:05 checkpoint starting: time\nJul 26 04:45:48 checkpoint complete: wrote 219 buffers (0.1%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=43.634 s,\nJul 26 04:45:48 sync=0.047 s, total=43.687 s\nJul 26 04:50:05 checkpoint starting: time\nJul 26 04:50:35 checkpoint complete: wrote 153 buffers (0.1%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=30.418 s,\nJul 26 04:50:35 sync=0.148 s, total=30.577 s\nJul 26 04:55:05 checkpoint starting: time\nJul 26 04:55:26 checkpoint complete: wrote 108 buffers (0.0%); 0\ntransaction log file(s) added, 0 removed, 0 recycled; write=21.429 s,\n\nWhile I see the number of buffers fluctuating decently, I note that\npercentage only fluctuates from 0.0% to 0.4% for the duration of an\nentire day. It seems to me that we might want to space the checkpoints\nout a bit less frequently and get maybe 1 or 2% before we write things\nout.\n\nIs my understanding of all this accurate, or am I off base here? We're\nrunning 8.3.7 (going to 8.4.x soon). Checkpoint settings currently:\n name | start_setting\n| stop_setting | source\n---------------------------------+-------------------------------------+-------------------------------------+----------------------\n checkpoint_segments | 128\n| 128 | configuration file\n checkpoint_warning | 240\n| 240 | configuration file\n\nMore than happy to provide additional info as requested. TIA!\n-- \nDouglas J Hunley, RHCT\[email protected] : http://douglasjhunley.com : Twitter: @hunleyd\n\nObsessively opposed to the typical.\n", "msg_date": "Tue, 28 Jul 2009 13:18:39 -0400", "msg_from": "Doug Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "really stupid question about checkpointing" }, { "msg_contents": "On Tue, 28 Jul 2009, Doug Hunley wrote:\n\n> Jul 26 04:45:48 checkpoint complete: wrote 219 buffers (0.1%); 0\n> transaction log file(s) added, 0 removed, 0 recycled; write=43.634 s,\n\nEach buffer is 8KB here. So this one, the largest of the checkpoints you \nshowed in your sample, is writing out 1.71MB spread out over 43 seconds. \nIt hardly seems worthwhile to further spread out I/O when the total amount \nof it is so small.\n\n> checkpoint_segments | 128\n\nThis is on the high side, and given the low buffer write statistics you're \nseeing I'd bet that it's checkpoint_timeout you'd need to increase in \norder spread out checkpoints further. 
You might get a useful improvement \nincreasing checkpoint_timeout a bit, checkpoint_segments you already have \nset to an extremely high value--one that is normally only justified if you \nhave a lot more write activity than your logs suggest.\n\nIf I were you, I'd cut checkpoint_segments in half, double \ncheckpoint_timeout, and check back again on the logs in a day. That \nshould reduce the amount of disk space wasted by the pg_log overhead while \nnetting you better performance, because right now you're probably only \nhaving timed checkpoints rather than segment based ones. If you look at \npg_stat_bgwriter you'll probably find that checkpoints_req is close to 0 \nwhile checkpoints_timed is not.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 28 Jul 2009 22:08:52 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: really stupid question about checkpointing" } ]
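A minimal check of which trigger is firing the checkpoints, following the advice above (8.3):

SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

If checkpoints_req stays near zero while checkpoints_timed keeps climbing, every checkpoint is coming from the timer, and the postgresql.conf change Greg suggests would be roughly as follows (assuming checkpoint_timeout is still at its 5min default):

checkpoint_segments = 64       # halved from 128
checkpoint_timeout = 10min     # doubled from the default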
[ { "msg_contents": "* *Message-id*:\n <[email protected]\n <http://archives.postgresql.org/pgsql-performance/2009-07/msg00293.php>>\n\n------------------------------------------------------------------------\nOn Mon, 27 Jul 2009, Dave Youatt wrote:\n\n Greg, those are compelling numbers for the new Nehalem processors.\n Great news for postgresql. Do you think it's due to the new internal\n interconnect...\n\n\nUnlikely. Different threads on the same CPU core share their resources,\nso they don't need an explicit communication channel at all (I'm\nsimplifying massively here). A real interconnect is only needed between\nCPUs and between different cores on a CPU, and of course to the outside\nworld. Scott's explanation of why SMT works better now is much more\nlikely to be the real reason.\n\n:-) there's also this interconnect thingie between sockets, cores and\nmemory. Nehalem has a new one (for Intel), integrated memory controller,\nthat is. And a new on-chip cache organization.\n\n I'm still betting on the interconnect(s), particularly for\nbandwidth-intensive, data pumping server apps. And it looks like the\nother new interconnect (\"QuickPath\") plays well w/the integrated memory\ncontroller for multi-socket systems.\n\nGreg, in your spare time... Also, curious how Nehalem compares w/AMD\nPhenom II, esp the newer ones w/multi-lane(?) HT\n\nAnd apologies to the list for straying off topic a bit.\n\n\n\n\n\n\n\n\nMessage-id: <[email protected]>\n\n\n\n\nOn Mon, 27 Jul 2009, Dave Youatt wrote:\n\nGreg,\nthose are compelling numbers for the new Nehalem processors.\nGreat news for postgresql. Do you think it's due to the new internal\ninterconnect...\n\n\nUnlikely. Different threads on the same CPU core share their resources,\nso they don't need an explicit communication channel at all\n(I'm simplifying massively here). A real interconnect is only\nneeded between CPUs and between different cores on a CPU, and\nof course to the outside world.\nScott's explanation of why SMT works better now is much more\nlikely to be the real reason.\n\n\n:-) there's also this interconnect thingie between sockets, cores and\nmemory. Nehalem has a new one (for Intel), integrated memory\ncontroller, that is.  And a new on-chip cache organization.\n\n I'm still betting on the interconnect(s), particularly for\nbandwidth-intensive, data pumping server apps.  And it looks like the\nother new interconnect (\"QuickPath\") plays well w/the integrated memory\ncontroller for multi-socket systems.\n\nGreg, in your spare time...  Also, curious how Nehalem compares w/AMD\nPhenom II, esp the newer ones w/multi-lane(?) HT\n\nAnd apologies to the list for straying off topic a bit.", "msg_date": "Tue, 28 Jul 2009 11:42:16 -0700", "msg_from": "Dave Youatt <[email protected]>", "msg_from_op": true, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" }, { "msg_contents": "On Tue, 28 Jul 2009, Dave Youatt wrote:\n> Unlikely. Different threads on the same CPU core share their resources, so they don't\n> need an explicit communication channel at all (I'm simplifying massively here). A real\n> interconnect is only needed between CPUs and between different cores on a CPU, and of\n> course to the outside world. Scott's explanation of why SMT works better now is much more\n> likely to be the real reason.\n\nActually, no, I wrote that. Please give at least some indication when \nreplying to an email which parts of it are your words and which are quotes \nfrom someone else. 
Emails can be incredibly confusing without that \ndistinction.\n\nYou actually wrote:\n\n> :-) there's also this interconnect thingie between sockets, cores and memory. Nehalem has\n> a new one (for Intel), integrated memory controller, that is.  And a new on-chip cache\n> organization.\n\nThis, (like I mention elsewhere) will make the CPU faster overall, but is \nunlikely to increase the performance gain of switching SMT on. In fact, \nhaving a lower latency memory controller is more likely to reduce some of \nthe problem that SMT is trying to address - that of a single thread \nstalling on memory access.\n\nHaving said that, memory access latency is not scaling as quickly as CPU \nspeed, so over time SMT is going to get more important.\n\nMatthew\n\n-- \n\"Take care that thou useth the proper method when thou taketh the measure of\n high-voltage circuits so that thou doth not incinerate both thee and the\n meter; for verily, though thou has no account number and can be easily\n replaced, the meter doth have one, and as a consequence, bringeth much woe\n upon the Supply Department.\" -- The Ten Commandments of Electronics", "msg_date": "Wed, 29 Jul 2009 12:16:41 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: hyperthreaded cpu still an issue in 8.4?" } ]
[ { "msg_contents": "When reviewing the vacuum logs, I notice that on any given day\nautovacuum only seems to touch four of the tables in one of our\nschemas (not counting toast tables). However, if I look at the\npgstatspack output for the same day, I see that there are plenty of\nother tables receiving a high number of inserts and deletes. How can I\ntell if autovacuum is accurately choosing the tables that need its\nattention (these four tables apparently) or if autovacuum is simply\nnever making it to the other tables cause its too busy with these\ntables (my suspicion)? This is on 8.3.7 with the following settings in\npostgresql.conf:\nautovacuum = on\nlog_autovacuum_min_duration = 0\nautovacuum_vacuum_threshold = 250\nautovacuum_analyze_threshold = 125\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_naptime = 5min\n\nAny/all other information can be provided as needed. TIA, again.\n-- \nDouglas J Hunley, RHCT\[email protected] : http://douglasjhunley.com : Twitter: @hunleyd\n\nObsessively opposed to the typical.\n", "msg_date": "Wed, 29 Jul 2009 12:47:31 -0400", "msg_from": "Doug Hunley <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum 'stuck' ?" }, { "msg_contents": "On Wed, Jul 29, 2009 at 12:47 PM, Doug Hunley<[email protected]> wrote:\n> When reviewing the vacuum logs, I notice that on any given day\n> autovacuum only seems to touch four of the tables in one of our\n> schemas (not counting toast tables). However, if I look at the\n> pgstatspack output for the same day, I see that there are plenty of\n> other tables receiving a high number of inserts and deletes. How can I\n> tell if autovacuum is accurately choosing the tables that need its\n> attention (these four tables apparently) or if autovacuum is simply\n> never making it to the other tables cause its too busy with these\n> tables (my suspicion)? This is on 8.3.7 with the following settings in\n> postgresql.conf:\n> autovacuum = on\n> log_autovacuum_min_duration = 0\n> autovacuum_vacuum_threshold = 250\n> autovacuum_analyze_threshold = 125\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_naptime = 5min\n>\n> Any/all other information can be provided as needed. TIA, again.\n\nDisclaimer: I am not an expert on autovacuum.\n\nIf most of the activity on your other tables is UPDATEs, and given\nthat you are running 8.3, it is possible that they are all HOT\nupdates, and vacuuming isn't much needed. In terms of figuring out\nwhat is going on with those tables, perhaps you could try any or all\nof the following:\n\n1. Lower your autovacuum_naptime (say, to the default value instead of\nfive times that amount) and see if it vacuums more stuff. On a\nrelated note, does autovacuum do stuff every time it wakes up? Or\njust now and then? If the latter, it's probably fine.\n\n2. Fire off a manual VACUUM VERBOSE on one of the other tables you\nthink might need attention and examine (or post) the output.\n\n3. Get Greg Sabino Mullane's check_postgres.pl script and use it to\nlook for bloat. Or, low tech way that I have used, compare:\n\nSELECT COALESCE(SUM(pg_column_size(x)), 0) AS size FROM your_table_name x\nvs.\nSELECT pg_relation_size('your_table_name'::regclass)\n\n(There's probably an easy way to do better than this; maybe someone\nwill enlighten me?)\n\nAlso, keep in mind that vacuuming is a little like dieting. No one\nparticularly likes it, and there's no value (and possibly some harm)\nin doing more of it than you need. 
If you're not getting fat (i.e.\nyour queries aren't running slowly) then it's probably not worth\nworrying about too much.\n\n...Robert\n", "msg_date": "Thu, 30 Jul 2009 17:45:26 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum 'stuck' ?" } ]
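Two quick checks for the situation above, both valid on 8.3. The first shows when autovacuum last visited each table and how much dead-row activity it sees; the second is Robert's low-tech bloat comparison, with your_table_name kept as a placeholder:

SELECT relname, last_autovacuum, last_autoanalyze,
       n_tup_upd, n_tup_del, n_dead_tup
FROM pg_stat_user_tables
ORDER BY last_autovacuum DESC NULLS LAST;

SELECT COALESCE(SUM(pg_column_size(x)), 0)           AS live_bytes,
       pg_relation_size('your_table_name'::regclass) AS on_disk_bytes
FROM your_table_name x;

Tables whose n_dead_tup never climbs past the thresholds (for instance because most changes are HOT updates, as Robert suggests) simply never qualify for a vacuum, which would explain why only four tables appear in the logs.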
[ { "msg_contents": "Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune\nbefore and everything worked fine for me.\n\nAnd now i have ~93% cpu load. Here's changed values of config:\n\ndefault_statistics_target = 50\nmaintenance_work_mem = 1GB\nconstraint_exclusion = on\ncheckpoint_completion_target = 0.9\neffective_cache_size = 22GB\nwork_mem = 192MB\nwal_buffers = 8MB\ncheckpoint_segments = 16\nshared_buffers = 7680MB\nmax_connections = 80\n\n\nMy box is Nehalem 2xQuad 2.8 with RAM 32Gb, and there's only\npostgresql working on it.\n\nFor connection pooling i'm using pgbouncer's latest version with\npool_size 20 (used 30 before, but now lowered) and 10k connections.\n\nWhat parameters i should give more attention on?\n", "msg_date": "Thu, 30 Jul 2009 18:06:40 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "> Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune\n> before and everything worked fine for me.\n>\n> And now i have ~93% cpu load. Here's changed values of config:\n>\n> default_statistics_target = 50\n> maintenance_work_mem = 1GB\n> constraint_exclusion = on\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 22GB\n> work_mem = 192MB\n> wal_buffers = 8MB\n> checkpoint_segments = 16\n> shared_buffers = 7680MB\n> max_connections = 80\n>\n>\n> My box is Nehalem 2xQuad 2.8 with RAM 32Gb, and there's only\n> postgresql working on it.\n>\n> For connection pooling i'm using pgbouncer's latest version with\n> pool_size 20 (used 30 before, but now lowered) and 10k connections.\n>\n> What parameters i should give more attention on?\n>\n\nAll the values seem quite reasonable to me. What about the _cost variables?\n\nI guess one or more queries are evaluated using a different execution\nplan, probably sequential scan instead of index scan, hash join instead of\nmerge join, or something like that.\n\nTry to log the \"slow\" statements - see \"log_min_statement_duration\". That\nmight give you slow queries (although not necessarily the ones causing\nproblems), and you can analyze them.\n\nWhat is the general I/O activity? Is there a lot of data read/written to\nthe disks, is there a lot of I/O wait?\n\nregards\nTomas\n\nPS: Was the database analyzed recently?\n\n", "msg_date": "Thu, 30 Jul 2009 15:39:42 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Unfortunately had to downgrade back to 8.3. Now having troubles with\nthat and still solving them.\n\nFor future upgrade, what is the basic steps?\n\n>Was the database analyzed recently?\nHm... there was smth like auto analyzer in serverlog when i started it\nfirst time, but i didn't mention that.\nShould I analyze whole db? How to do it?\n\nAnd how should I change _cost variables?\n\nI/O was very high. at first memory usage grew up and then began to full swap.\n\n2009/7/30 <[email protected]>:\n>> Hi, list. I've just upgraded pgsql from 8.3 to 8.4. I've used pgtune\n>> before and everything worked fine for me.\n>>\n>> And now i have ~93% cpu load. 
Here's changed values of config:\n>>\n>> default_statistics_target = 50\n>> maintenance_work_mem = 1GB\n>> constraint_exclusion = on\n>> checkpoint_completion_target = 0.9\n>> effective_cache_size = 22GB\n>> work_mem = 192MB\n>> wal_buffers = 8MB\n>> checkpoint_segments = 16\n>> shared_buffers = 7680MB\n>> max_connections = 80\n>>\n>>\n>> My box is Nehalem 2xQuad 2.8 with RAM 32Gb, and there's only\n>> postgresql working on it.\n>>\n>> For connection pooling i'm using pgbouncer's latest version with\n>> pool_size 20 (used 30 before, but now lowered) and 10k connections.\n>>\n>> What parameters i should give more attention on?\n>>\n>\n> All the values seem quite reasonable to me. What about the _cost variables?\n>\n> I guess one or more queries are evaluated using a different execution\n> plan, probably sequential scan instead of index scan, hash join instead of\n> merge join, or something like that.\n>\n> Try to log the \"slow\" statements - see \"log_min_statement_duration\". That\n> might give you slow queries (although not necessarily the ones causing\n> problems), and you can analyze them.\n>\n> What is the general I/O activity? Is there a lot of data read/written to\n> the disks, is there a lot of I/O wait?\n>\n> regards\n> Tomas\n>\n> PS: Was the database analyzed recently?\n>\n>\n", "msg_date": "Thu, 30 Jul 2009 20:07:59 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "> Unfortunately had to downgrade back to 8.3. Now having troubles with\n> that and still solving them.\n>\n> For future upgrade, what is the basic steps?\n\n1. create database\n2. dump the data from the old database\n3. load the data into the new database\n4. analyze etc. (I prefer to do this manually at the beginning)\n5. check that everything is working (that the correct execution plans are\nused, etc.)\n\nYou may even run the (2) and (3) at once - use pipe instead of a file.\n\n>\n>>Was the database analyzed recently?\n> Hm... there was smth like auto analyzer in serverlog when i started it\n> first time, but i didn't mention that.\n> Should I analyze whole db? How to do it?\n\nJust execute 'ANALYZE' and the whole database will be analyzed, but when\nthe autovacuum daemon is running this should be performed automatically (I\nguess - check the pg_stat_user_tables, there's information about last\nmanual/automatic vacuuming and/or analysis).\n\n> And how should I change _cost variables?\n\nI haven't noticed you've not modified those variables, so don't change them.\n\n> I/O was very high. at first memory usage grew up and then began to full\n> swap.\n\nOK, this seems to be the cause. What were the original values of the\nconfig variables? If you've lowered the work_mem and you need to sort a\nlot of data, this may be a problem. What amounts of data are you working\nwith? If the data were not analyzed recently, the execution plans will be\ninefficient and this may be the result.\n\nregards\nTomas\n\n", "msg_date": "Thu, 30 Jul 2009 16:24:38 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "> OK, this seems to be the cause. What were the original values of the\n> config variables? If you've lowered the work_mem and you need to sort a\n> lot of data, this may be a problem. What amounts of data are you working\n> with? 
If the data were not analyzed recently, the execution plans will be\n> inefficient and this may be the result.\n\nThe reason is that i'm using config that i used before and it worked\nperfect. work_mem is 192mb.\nI tried ANALYZE, but it didn't change anything.\n\nAmounts of data... at least, backup is over ~2.2Gb.\nI tried to use EXPLAIN ANALYZE for slow queries that i get from\nserverlog, but it also didn't change anything.\n\n\n2009/7/30 <[email protected]>:\n>> Unfortunately had to downgrade back to 8.3. Now having troubles with\n>> that and still solving them.\n>>\n>> For future upgrade, what is the basic steps?\n>\n> 1. create database\n> 2. dump the data from the old database\n> 3. load the data into the new database\n> 4. analyze etc. (I prefer to do this manually at the beginning)\n> 5. check that everything is working (that the correct execution plans are\n> used, etc.)\n>\n> You may even run the (2) and (3) at once - use pipe instead of a file.\n>\n>>\n>>>Was the database analyzed recently?\n>> Hm... there was smth like auto analyzer in serverlog when i started it\n>> first time, but i didn't mention that.\n>> Should I analyze whole db? How to do it?\n>\n> Just execute 'ANALYZE' and the whole database will be analyzed, but when\n> the autovacuum daemon is running this should be performed automatically (I\n> guess - check the pg_stat_user_tables, there's information about last\n> manual/automatic vacuuming and/or analysis).\n>\n>> And how should I change _cost variables?\n>\n> I haven't noticed you've not modified those variables, so don't change them.\n>\n>> I/O was very high. at first memory usage grew up and then began to full\n>> swap.\n>\n> OK, this seems to be the cause. What were the original values of the\n> config variables? If you've lowered the work_mem and you need to sort a\n> lot of data, this may be a problem. What amounts of data are you working\n> with? If the data were not analyzed recently, the execution plans will be\n> inefficient and this may be the result.\n>\n> regards\n> Tomas\n>\n>\n", "msg_date": "Thu, 30 Jul 2009 21:31:00 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "My additional comments:\n\[email protected] wrote:\n>> ...\n>> For future upgrade, what is the basic steps?\n>> \n>\n> \n0. Create test database - work out bugs and performance issues before \ngoing live.\n> 1. create database\n> \n...cluster. You only need to create the individual database if the \noptions you select for the dump do not create the database(s).\n> 2. dump the data from the old database\n> \n...using the dump tools from the *new* version. With several cores, you \nmight want to consider using the binary dump options in pg_dump if you \nwant to use the new parallel restore feature in pg_restore with a \npossible dramatic increase in restore speed (benchmarks I've seen \nsuggest that with 8 cores you may even see an almost 8x restore speedup \nso it's worth the effort). The manual suggests that setting --jobs to \nthe number of cores on the server is a good first approximation. See the \n-Fc options on pg_dump and the --jobs option in pg_restore for details.\n\nCheers,\nSteve\n\n\n\n\n\n\n\nMy additional comments:\n\[email protected] wrote:\n\n\n...\nFor future upgrade, what is the basic steps?\n \n\n\n \n\n0. Create test database - work out bugs and performance issues before\ngoing live.\n\n1. create database\n \n\n...cluster. 
You only need to create the individual database if the\noptions you select for the dump do not create the database(s).\n\n2. dump the data from the old database\n \n\n...using the dump tools from the *new* version. With several cores, you\nmight want to consider using the binary dump options in pg_dump if you\nwant to use the new parallel restore feature in pg_restore with a\npossible dramatic increase in restore speed (benchmarks I've seen\nsuggest that with 8 cores you may even see an almost 8x restore speedup\nso it's worth the effort). The manual suggests that setting --jobs to\nthe number of cores on the server is a good first approximation. See\nthe -Fc options on pg_dump and the --jobs option in pg_restore for\ndetails.\n\nCheers,\nSteve", "msg_date": "Thu, 30 Jul 2009 09:05:09 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Steve Crawford <[email protected]> wrote: \n \n> benchmarks I've seen suggest that with 8 cores you may even see an\n> almost 8x restore speedup\n \nI'm curious what sort of data in what environment showed that ratio.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 11:35:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Kevin Grittner wrote:\n> Steve Crawford <[email protected]> wrote: \n> \n> \n>> benchmarks I've seen suggest that with 8 cores you may even see an\n>> almost 8x restore speedup\n>> \n> \n> I'm curious what sort of data in what environment showed that ratio.\n> \n> \nWas going on memory from a presentation I watched. Reports on the web \nhave shown anything from a 3x increase using 8 cores to other \nnon-detailed reports of \"up to\" 8x improvement. If you have one big \ntable, don't expect much if any improvement. If you have lots of smaller \ntables/indexes then parallel restore will probably benefit you. This is \nall based on the not-atypical assumption that your restore will be CPU \nbound. I don't think parallel restore will be much use beyond the point \nyou hit IO limits.\n\nCheers,\nSteve\n\n\n\n\n\n\n\nKevin Grittner wrote:\n\nSteve Crawford <[email protected]> wrote: \n \n \n\nbenchmarks I've seen suggest that with 8 cores you may even see an\nalmost 8x restore speedup\n \n\n \nI'm curious what sort of data in what environment showed that ratio.\n \n \n\nWas going on memory from a presentation I watched. Reports on the web\nhave shown anything from a 3x increase using 8 cores to other\nnon-detailed reports of \"up to\" 8x improvement. If you have one big\ntable, don't expect much if any improvement. If you have lots of\nsmaller tables/indexes then parallel restore will probably benefit you.\nThis is all based on the not-atypical assumption that your restore will\nbe CPU bound. I don't think parallel restore will be much use beyond\nthe point you hit IO limits.\n\nCheers,\nSteve", "msg_date": "Thu, 30 Jul 2009 09:58:20 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Thu, 30 Jul 2009, Kevin Grittner wrote:\n> Steve Crawford <[email protected]> wrote:\n>> benchmarks I've seen suggest that with 8 cores you may even see an\n>> almost 8x restore speedup\n>\n> I'm curious what sort of data in what environment showed that ratio.\n\nIt depends on a lot of things. 
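\n\nTo be concrete, the custom-dump-plus-parallel-restore workflow being discussed looks roughly like this (database names and the job count are placeholders; one job per core is the suggested starting point):\n\n  pg_dump -Fc mydb > mydb.dump          # custom format is what pg_restore needs for parallel jobs\n  createdb newdb\n  pg_restore -j 8 -d newdb mydb.dump    # run 8 restore jobs in parallel\n\n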
However, certainly for index creation, \ntests on servers over here have indicated that running four \"CREATE INDEX\" \nstatements at the time runs four times as fast, assuming the table fits in \nmaintenance_work_mem.\n\nMatthew\n\n-- \n I have an inferiority complex. But it's not a very good one.\n", "msg_date": "Thu, 30 Jul 2009 17:59:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Matthew Wakeling <[email protected]> wrote: \n \n> tests on servers over here have indicated that running four \"CREATE\n> INDEX\" statements at the time runs four times as fast, assuming the\n> table fits in maintenance_work_mem.\n \nI'm benchmarking a patch to the parallel restore, and just out of\ncuriosity I've been comparing the multi-job approach, with various\nnumbers of jobs, to a restore within a single database transaction;\nand I'm seeing (on serious production-quality servers) the parallel\nrestore run in 55% to 75% of the time of a restore running off the\nsame dump file using the -1 switch. The 16 processor machine got the\nbest results, running with anywhere from 12 to 20 jobs. The 2\nprocessor machine got the lesser benefit, running with 2 to 4 jobs. \n(The exact number of jobs really didn't make a difference big enough\nto emerge from the noise.)\n \nI've got 431 user tables with 578 indexes in a database which, freshly\nrestored, is 70GB. (That's 91GB with the fragmentation and reasonable\ndead space we have in production.) Real production data; nothing\nsynthetic.\n \nSince the dump to custom format ran longer than the full pg_dump\npiped directly to psql would have taken, the overall time to use this\ntechnique is clearly longer for our databases on our hardware. I'm\nsure there are cases where people don't have the option to pipe things\nthrough, or that there may sometime be a big enough savings in the\nmultiple jobs to pay off, even without overlapping the dump and\nrestore, and with the necessity to write and read the data an extra\ntime; but there are clearly situations where the piped approach is\nfaster.\n \nWe may want to try to characterize the conditions under which each is\na win, so we can better target our advice....\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 12:17:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Since the dump to custom format ran longer than the full pg_dump\n> piped directly to psql would have taken, the overall time to use this\n> technique is clearly longer for our databases on our hardware.\n\nHmmm ... AFAIR there isn't a good reason for dump to custom format to\ntake longer than plain text dump, except for applying compression.\nMaybe -Z0 would be worth testing? Or is the problem that you have to\nwrite the data to a disk file rather than just piping it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jul 2009 13:28:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n \n> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n> to take longer than plain text dump, except for applying\n> compression. Maybe -Z0 would be worth testing? 
Or is the problem\n> that you have to write the data to a disk file rather than just\n> piping it?\n \nI'm not sure without benchmarking that. I was writing to the same\nRAID as the database I was dumping, so contention was probably a\nsignificant issue. But it would be interesting to compare different\npermutations to see what impact each has alone and in combination.\n \nI'm OK with setting up a benchmark run each night for a while, to\nshake out what I can, on this and the artificial cases.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 12:35:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Tom Lane wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Since the dump to custom format ran longer than the full pg_dump\n>> piped directly to psql would have taken, the overall time to use this\n>> technique is clearly longer for our databases on our hardware.\n> \n> Hmmm ... AFAIR there isn't a good reason for dump to custom format to\n> take longer than plain text dump, except for applying compression.\n> Maybe -Z0 would be worth testing? Or is the problem that you have to\n> write the data to a disk file rather than just piping it?\n\nI always dump with -Z0(and compress afterwards or even in a pipe to get \ntwo cores busy) because otherwise custom dump times are simply ridiculous.\nHowever Kevin is on something here - on the typical 4-8 core box I \ntested I managed to an around cores/2 speedup for the restore which \nmeans that for a pure upgrade or testing similiar to what kevin is doing \ncustom dumps + parallel restore might result in no win or even a loss.\n\nOn on of our datasets I did some benchmarking a while ago (for those who \nattended bruce pg_migrator talk @pgcon these are same numbers):\n\n\n* 150GB Database (on-disk - ~100GB as a plain text dump)\n\ntime to dump(-C0): \t\t\t\t120min\ntime to restore(single threaded):\t180min\ntime to restore(-j 16):\t\t\t59min\n\nhowever the problem is that this does not actually mean that parallel \nrestore shaves you ~120min in dump/restore time because you get the \nfollowing real runtimes:\n\nplain text dump + single threaded restore in a pipe: 188min\ncustom dump to file + parallel restore:\t179min\n\n\nthis is without compression, with the default custom dump + parallel \nrestore is way slower than the simple approach on reasonable hardware.\n\n\nStefan\n", "msg_date": "Thu, 30 Jul 2009 20:14:19 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> Since the dump to custom format ran longer than the full pg_dump\n>> piped directly to psql would have taken, the overall time to use\n>> this technique is clearly longer for our databases on our hardware.\n> \n> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n> to take longer than plain text dump, except for applying\n> compression. Maybe -Z0 would be worth testing? Or is the problem\n> that you have to write the data to a disk file rather than just\n> piping it?\n \nI did some checking with the DBA who normally copies these around for\ndevelopment and test environments. He confirmed that when the source\nand target are on the same machine, a pg_dump piped to psql takes\nabout two hours. 
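\n\nSchematically, the cases being compared are something like the following (database and file names are placeholders):\n\n  pg_dump sourcedb | psql targetdb             # plain dump piped straight into the target\n  pg_dump -Fc sourcedb > sourcedb.dump         # custom format, compressed at the default level\n  pg_dump -Fc -Z0 sourcedb > sourcedb.dump     # custom format with compression turned off\n\n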
If he pipes across the network, it runs more like\nthree hours.\n \nMy pg_dump to custom format ran for six hours. The single-transaction\nrestore from that dump file took two hours, with both on the same\nmachine. I can confirm with benchmarks, but this guy generally knows\nwhat he's talking about (and we do create a lot of development and\ntest databases this way).\n \nEither the compression is tripling the dump time, or there is\nsomething inefficient about how pg_dump writes to the disk.\n \nAll of this is on a RAID 5 array with 5 drives using xfs with\nnoatime,nobarrier and a 256MB BBU controller.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 13:14:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Kevin Grittner wrote:\n> Tom Lane <[email protected]> wrote: \n>> \"Kevin Grittner\" <[email protected]> writes:\n>>> Since the dump to custom format ran longer than the full pg_dump\n>>> piped directly to psql would have taken, the overall time to use\n>>> this technique is clearly longer for our databases on our hardware.\n>> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n>> to take longer than plain text dump, except for applying\n>> compression. Maybe -Z0 would be worth testing? Or is the problem\n>> that you have to write the data to a disk file rather than just\n>> piping it?\n> \n> I did some checking with the DBA who normally copies these around for\n> development and test environments. He confirmed that when the source\n> and target are on the same machine, a pg_dump piped to psql takes\n> about two hours. If he pipes across the network, it runs more like\n> three hours.\n> \n> My pg_dump to custom format ran for six hours. The single-transaction\n> restore from that dump file took two hours, with both on the same\n> machine. I can confirm with benchmarks, but this guy generally knows\n> what he's talking about (and we do create a lot of development and\n> test databases this way).\n> \n> Either the compression is tripling the dump time, or there is\n> something inefficient about how pg_dump writes to the disk.\n\nseems about right - compression in pg_dump -Fc is a serious bottleneck \nand unless can significantly speed it up or make it use of multiple \ncores (either for the dump itself - which would be awsome - or for the \ncompression) I would recommend to not use it at all.\n\n\nStefan\n", "msg_date": "Thu, 30 Jul 2009 20:24:25 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\n\nOn 7/30/09 11:14 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> Tom Lane <[email protected]> wrote:\n>> \"Kevin Grittner\" <[email protected]> writes:\n>>> Since the dump to custom format ran longer than the full pg_dump\n>>> piped directly to psql would have taken, the overall time to use\n>>> this technique is clearly longer for our databases on our hardware.\n>> \n>> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n>> to take longer than plain text dump, except for applying\n>> compression. Maybe -Z0 would be worth testing? Or is the problem\n>> that you have to write the data to a disk file rather than just\n>> piping it?\n> \n> I did some checking with the DBA who normally copies these around for\n> development and test environments. 
He confirmed that when the source\n> and target are on the same machine, a pg_dump piped to psql takes\n> about two hours. If he pipes across the network, it runs more like\n> three hours.\n> \n> My pg_dump to custom format ran for six hours. The single-transaction\n> restore from that dump file took two hours, with both on the same\n> machine. I can confirm with benchmarks, but this guy generally knows\n> what he's talking about (and we do create a lot of development and\n> test databases this way).\n> \n> Either the compression is tripling the dump time, or there is\n> something inefficient about how pg_dump writes to the disk.\n> \n> All of this is on a RAID 5 array with 5 drives using xfs with\n> noatime,nobarrier and a 256MB BBU controller.\n> \n\nOf course Compression has a HUGE effect if your I/O system is half-decent.\nMax GZIP compression speed with the newest Intel CPU's is something like\n50MB/sec (it is data dependant, obviously -- it is usually closer to\n30MB/sec). Max gzip decompression ranges from 50 to 150MB/sec (it can get\nreally high only if the ratio is extremely large, like if you compress a\nrepeating sequence of 256 bytes).\n\nThe new parallel restore is nice and all, but we're still limited by the\nweek it takes to dump the whole thing compressed. Parallel restore is a\nlot faster when restoring compressed dumps though, even without any indexes\nto make, since all that decompression is CPU hungry.\n\n> -Kevin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 30 Jul 2009 11:46:29 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n \n> Max GZIP compression speed with the newest Intel CPU's is something\n> like 50MB/sec (it is data dependant, obviously -- it is usually\n> closer to 30MB/sec).\n \nApplying 30MB/sec to the 70GB accounts for 40 minutes. If those\nnumbers are good, there's something else at play here.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 13:58:46 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On 30-7-2009 20:46 Scott Carey wrote:\n> Of course Compression has a HUGE effect if your I/O system is half-decent.\n> Max GZIP compression speed with the newest Intel CPU's is something like\n> 50MB/sec (it is data dependant, obviously -- it is usually closer to\n> 30MB/sec). Max gzip decompression ranges from 50 to 150MB/sec (it can get\n> really high only if the ratio is extremely large, like if you compress a\n> repeating sequence of 256 bytes).\n\nI just ran some quick numbers on our lightly loaded Nehalem X5570 (2.93+ \nGhz depending on turbo-mode). I compressed a 192MB text file I had at \nhand using gzip -1, -2, -3, -6 and -9 and outputted its results to \n/dev/null. The file was in the kernels file cache all the time and I did \nthe tests 3 times.\n\nGzip -1 reached 54MB/s, -2 got 47MB/s, -3 got 32MB/s, -6 got 18MB/s and \n-9 got to 12MB/s. Just running cat on the file made it do 6400MB/s (i.e. \nit took 0.030 seconds to copy the file from memory to nowhere).\nThose files where respectively 69MB, 66MB, 64MB, 59MB and 58MB.\n\nGunzip on the -1 file took 1.66 seconds, i.e. 
it read data at 41MB/s and \noutputted it to /dev/null at 115MB/s. The -9 file took 1.46s, so it read \n40MB/s and wrote 131MB/s.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 30 Jul 2009 21:56:02 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 7/30/09 11:14 AM, \"Stefan Kaltenbrunner\" <[email protected]> wrote:\n\n> Tom Lane wrote:\n>> \"Kevin Grittner\" <[email protected]> writes:\n>>> Since the dump to custom format ran longer than the full pg_dump\n>>> piped directly to psql would have taken, the overall time to use this\n>>> technique is clearly longer for our databases on our hardware.\n>> \n>> Hmmm ... AFAIR there isn't a good reason for dump to custom format to\n>> take longer than plain text dump, except for applying compression.\n>> Maybe -Z0 would be worth testing? Or is the problem that you have to\n>> write the data to a disk file rather than just piping it?\n> \n> I always dump with -Z0(and compress afterwards or even in a pipe to get\n> two cores busy) because otherwise custom dump times are simply ridiculous.\n> However Kevin is on something here - on the typical 4-8 core box I\n> tested I managed to an around cores/2 speedup for the restore which\n> means that for a pure upgrade or testing similiar to what kevin is doing\n> custom dumps + parallel restore might result in no win or even a loss.\n> \n> On on of our datasets I did some benchmarking a while ago (for those who\n> attended bruce pg_migrator talk @pgcon these are same numbers):\n> \n> \n> * 150GB Database (on-disk - ~100GB as a plain text dump)\n> \n> time to dump(-C0): 120min\n> time to restore(single threaded): 180min\n> time to restore(-j 16): 59min\n\n\nNote also that with ext3 and XFS (untuned) parallel restore = HORRIBLY\nFRAGMENTED tables, to the point of sequential scans being rather slow. At\nleast, they're mostly just interleaved with each other so there is little\nseeking backwards, but still... Beware.\n\nXFS with allocsize=64m or so interleaves them in reasonably large chunks\nthough and prevents significant fragmentation.\n\n> \n> however the problem is that this does not actually mean that parallel\n> restore shaves you ~120min in dump/restore time because you get the\n> following real runtimes:\n> \n> plain text dump + single threaded restore in a pipe: 188min\n> custom dump to file + parallel restore: 179min\n\nOn the other hand, I find that the use case where one DB is dumped to a\nbackup, and then this backup is restored on several others -- that parallel\nrestore is extremely useful there.\n\nDump needs to be parallelized or at least pipelined to use more cores. COPY\non one thread, compression on another?\n\nOne trick with a dump, that works only if you have tables or schemas that\ncan safely dump in different transactions, is to dump concurrently on\ndifferent slices of the DB manually. This makes a huge difference if that\nis possible. 
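\n\nA minimal sketch of that manual slicing, assuming the schemas can safely be dumped in separate transactions (all names made up):\n\n  pg_dump -Fc -n schema_a mydb > schema_a.dump &\n  pg_dump -Fc -n schema_b mydb > schema_b.dump &\n  pg_dump -Fc -n schema_c mydb > schema_c.dump &\n  wait\n\nEach pg_dump takes its own snapshot, so the slices are not mutually consistent; that is the trade-off mentioned above.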
\n\n> \n> \n> this is without compression, with the default custom dump + parallel\n> restore is way slower than the simple approach on reasonable hardware.\n> \n> \n> Stefan\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 30 Jul 2009 12:59:46 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\n\n\nOn 7/30/09 11:24 AM, \"Stefan Kaltenbrunner\" <[email protected]> wrote:\n\n> Kevin Grittner wrote:\n>> Tom Lane <[email protected]> wrote:\n>>> \"Kevin Grittner\" <[email protected]> writes:\n>>>> Since the dump to custom format ran longer than the full pg_dump\n>>>> piped directly to psql would have taken, the overall time to use\n>>>> this technique is clearly longer for our databases on our hardware.\n>>> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n>>> to take longer than plain text dump, except for applying\n>>> compression. Maybe -Z0 would be worth testing? Or is the problem\n>>> that you have to write the data to a disk file rather than just\n>>> piping it?\n>> \n>> I did some checking with the DBA who normally copies these around for\n>> development and test environments. He confirmed that when the source\n>> and target are on the same machine, a pg_dump piped to psql takes\n>> about two hours. If he pipes across the network, it runs more like\n>> three hours.\n>> \n>> My pg_dump to custom format ran for six hours. The single-transaction\n>> restore from that dump file took two hours, with both on the same\n>> machine. I can confirm with benchmarks, but this guy generally knows\n>> what he's talking about (and we do create a lot of development and\n>> test databases this way).\n>> \n>> Either the compression is tripling the dump time, or there is\n>> something inefficient about how pg_dump writes to the disk.\n> \n> seems about right - compression in pg_dump -Fc is a serious bottleneck\n> and unless can significantly speed it up or make it use of multiple\n> cores (either for the dump itself - which would be awsome - or for the\n> compression) I would recommend to not use it at all.\n> \n\nThat's not an option when a dump compressed is 200GB and uncompressed is\n1.3TB, for example.\n\n\n> \n> Stefan\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 30 Jul 2009 13:01:26 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> Dump needs to be parallelized or at least pipelined to use more cores. 
COPY\n> on one thread, compression on another?\n\nWe already do that (since compression happens on the pg_dump side).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jul 2009 16:15:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "\n\n\nOn 7/30/09 11:58 AM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> Scott Carey <[email protected]> wrote:\n> \n>> Max GZIP compression speed with the newest Intel CPU's is something\n>> like 50MB/sec (it is data dependant, obviously -- it is usually\n>> closer to 30MB/sec).\n> \n> Applying 30MB/sec to the 70GB accounts for 40 minutes. If those\n> numbers are good, there's something else at play here.\n\nIt is rather data dependant, try gzip on command line as a test on some\ndata. On a random tarball on my Nehalem system, I just got 23MB/sec\ncompression rate on an uncompressable file.\nDecompression with gunzip was 145MB/sec.\n\nOn a text file that I manually created with randommly placed repeating\nsegments that compresses 200x to 1, compression was 115MB/sec (bytes in per\nsec), and decompression (bytes out per sec) was 265MB/sec.\n\nThe array in this machine will do 800MB/sec reads/sec with 'dd' and\n700MB/sec writes.\n\nOne core has no chance.\n\n\nNow, what needs to be known with the pg_dump is not just how fast\ncompression can go (assuming its gzip) but also what the duty cycle time of\nthe compression is. If it is single threaded, there is all the network and\ndisk time to cut out of this, as well as all the CPU time that pg_dump does\nwithout compression.\n\n> -Kevin\n> \n\n", "msg_date": "Thu, 30 Jul 2009 13:16:12 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 7/30/09 1:15 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> Scott Carey <[email protected]> writes:\n>> Dump needs to be parallelized or at least pipelined to use more cores. COPY\n>> on one thread, compression on another?\n> \n> We already do that (since compression happens on the pg_dump side).\n> \n> regards, tom lane\n> \n\nWell, that isn't what I meant. pg_dump uses CPU outside of compression\ndoing various things, If that Cpu is 10% as much as the compression, then\nsplitting them up would yield ~10% gain when CPU bound. \n\n", "msg_date": "Thu, 30 Jul 2009 13:19:58 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n \n> Now, what needs to be known with the pg_dump is not just how fast\n> compression can go (assuming its gzip) but also what the duty cycle\n> time of the compression is. If it is single threaded, there is all\n> the network and disk time to cut out of this, as well as all the CPU\n> time that pg_dump does without compression.\n \nWell, I established a couple messages back on this thread that pg_dump\npiped to psql to a database on the same machine writes the 70GB\ndatabase to disk in two hours, while pg_dump to a custom format file\nat default compression on the same machine writes the 50GB file in six\nhours. No network involved, less disk space written. I'll try it\ntonight at -Z0.\n \nOne thing I've been wondering about is what, exactly, is compressed in\ncustom format. 
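\n\nOne low-tech way to poke at that, assuming a dump file named mydb.dump (made-up name):\n\n  pg_restore -l mydb.dump | head    # the archive's table of contents prints as plain text\n  less mydb.dump                    # eyeball which sections look like text and which look compressed\n\n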
Is it like a .tar.gz file, where the compression is a\nlayer over the top, or are individual entries compressed? If the\nlatter, what's the overhead on setting up each compression stream? Is\nthere some minimum size before that kicks in? (I know, I should go\ncheck the code myself. Maybe in a bit. Of course, if someone already\nknows, it would be quicker....)\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 15:58:27 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 7/30/09 1:58 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n> Scott Carey <[email protected]> wrote:\n> \n>> Now, what needs to be known with the pg_dump is not just how fast\n>> compression can go (assuming its gzip) but also what the duty cycle\n>> time of the compression is. If it is single threaded, there is all\n>> the network and disk time to cut out of this, as well as all the CPU\n>> time that pg_dump does without compression.\n> \n> Well, I established a couple messages back on this thread that pg_dump\n> piped to psql to a database on the same machine writes the 70GB\n> database to disk in two hours, while pg_dump to a custom format file\n> at default compression on the same machine writes the 50GB file in six\n> hours. No network involved, less disk space written. I'll try it\n> tonight at -Z0.\n\nSo, I'm not sure what the pg_dump custom format overhead is minus the\ncompression -- there is probably some non-compression overhead from that\nformat other than the compression.\n\n-Z1 might be interesting too, but obviously it takes some time. Interesting\nthat your uncompressed case is only 40% larger. For me, the compressed dump\nis in the range of 20% the size of the uncompressed one.\n\n\n> \n> One thing I've been wondering about is what, exactly, is compressed in\n> custom format. Is it like a .tar.gz file, where the compression is a\n> layer over the top, or are individual entries compressed?\n\nIt is instructive to open up a compressed custom format file in 'less' or\nanother text viewer.\n\nBasically, it is the same as the uncompressed dump with all the DDL\nuncompressed, but the binary chunks compressed. It would seem (educated\nguess, looking at the raw file, and not the code) that the table data is\ncompressed and the DDL points to an index in the file where the compressed\nblob for the copy lives.\n\n\n> If the\n> latter, what's the overhead on setting up each compression stream? Is\n> there some minimum size before that kicks in? (I know, I should go\n> check the code myself. Maybe in a bit. Of course, if someone already\n> knows, it would be quicker....)\n\nGzip does have some quirky performance behavior depending on the chunk size\nof data you stream into it.\n\n> \n> -Kevin\n> \n\n", "msg_date": "Thu, 30 Jul 2009 14:20:05 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n \n> Gzip does have some quirky performance behavior depending on the\n> chunk size of data you stream into it.\n \nYeah, I've run into that before. 
If we're sending each individual\ndatum to a gzip function rather than waiting until we've got a\ndecent-size buffer, that could explain it.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 16:43:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> One thing I've been wondering about is what, exactly, is compressed in\n> custom format. Is it like a .tar.gz file, where the compression is a\n> layer over the top, or are individual entries compressed?\n\nIndividual entries. Eyeball examination of a dump file shows that we\nonly compress table-data entries, and don't for example waste time\nfiring up the compressor to process a function body. It's possible\nthat it'd be worth trying to have some lower limit on the amount of\ndata in a table before we bother to compress it, but I bet that it\nwouldn't make any difference on your databases ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jul 2009 17:51:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> Gzip does have some quirky performance behavior depending on the chunk size\n> of data you stream into it.\n\nCan you enlarge on that comment? I'm not sure that pg_dump is aware\nthat there's anything to worry about there.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jul 2009 17:53:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n> Scott Carey <[email protected]> writes:\n>> Gzip does have some quirky performance behavior depending on the\n>> chunk size of data you stream into it.\n> \n> Can you enlarge on that comment? I'm not sure that pg_dump is aware\n> that there's anything to worry about there.\n \nIf the library used here is anything like the native library used by\nJava, it'd be worth putting a buffer layer ahead of the calls to gzip,\nso it isn't dealing with each individual value as a separate call. I\nseem to remember running into that issue in Java, where throwing a\nBufferedOutputStream in there fixed the performance issue.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 17:00:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Hey guyz, thanks for help. I solved the problems. The reason was in\nbad query, that i've accidentally committed right after upgrading.\nPostgreSQL 8.4 is perfect! Analyze works like a charm, and MUCH better\nthan in 8.3.\n\n2009/7/31 Kevin Grittner <[email protected]>:\n> Tom Lane <[email protected]> wrote:\n>> Scott Carey <[email protected]> writes:\n>>> Gzip does have some quirky performance behavior depending on the\n>>> chunk size of data you stream into it.\n>>\n>> Can you enlarge on that comment?  I'm not sure that pg_dump is aware\n>> that there's anything to worry about there.\n>\n> If the library used here is anything like the native library used by\n> Java, it'd be worth putting a buffer layer ahead of the calls to gzip,\n> so it isn't dealing with each individual value as a separate call.  
I\n> seem to remember running into that issue in Java, where throwing a\n> BufferedOutputStream in there fixed the performance issue.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 31 Jul 2009 05:02:45 +0700", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 7/30/09 2:53 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> Scott Carey <[email protected]> writes:\n>> Gzip does have some quirky performance behavior depending on the chunk size\n>> of data you stream into it.\n> \n> Can you enlarge on that comment? I'm not sure that pg_dump is aware\n> that there's anything to worry about there.\n> \n> regards, tom lane\n> \n\nFor example, one of the things that gzip does is calculate the crc of the\nitem being compressed. Calculating that incrementally is less efficient\nthan doing it in bulk.\nFor whatever reason, some other internals of gzip tend to perform much\nbetter if submitting say, 4k or 8k or 16k chunks rather than little bits at\na time. But I'm sure some of that also depends on what library you're using\nsince they all vary somewhat.\n\n", "msg_date": "Thu, 30 Jul 2009 15:07:42 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> On 7/30/09 2:53 PM, \"Tom Lane\" <[email protected]> wrote:\n>> Scott Carey <[email protected]> writes:\n>>> Gzip does have some quirky performance behavior depending on the chunk size\n>>> of data you stream into it.\n>> \n>> Can you enlarge on that comment? I'm not sure that pg_dump is aware\n>> that there's anything to worry about there.\n\n> For whatever reason, some other internals of gzip tend to perform much\n> better if submitting say, 4k or 8k or 16k chunks rather than little bits at\n> a time. But I'm sure some of that also depends on what library you're using\n> since they all vary somewhat.\n\nAFAIK there is only one widely-used implementation of zlib, and it\nhasn't changed much in a long time.\n\nI did some tracing and verified that pg_dump passes data to deflate()\none table row at a time. I'm not sure about the performance\nimplications of that, but it does seem like it might be something to\nlook into.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 30 Jul 2009 18:30:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane<[email protected]> wrote:\n> I did some tracing and verified that pg_dump passes data to deflate()\n> one table row at a time.  I'm not sure about the performance\n> implications of that, but it does seem like it might be something to\n> look into.\n\nI suspect if this was a problem the zlib people would have added\ninternal buffering ages ago. I find it hard to believe we're not the\nfirst application to use it this way.\n\nI suppose it wouldn't be the first time a problem like this went\nunfixed though. 
Is the zlib software actively maintained or was your\nearlier comment implying it's currently an orphaned codebase?\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 30 Jul 2009 23:40:10 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Greg Stark <[email protected]> wrote: \n \n> I suspect if this was a problem the zlib people would have added\n> internal buffering ages ago. I find it hard to believe we're not the\n> first application to use it this way.\n \nI think that most uses of this library are on entire files or streams.\nThey may have felt that adding another layer of buffering would just\nhurt performance for the typical use case, and anyone using it in some\nother way could always use their own buffering layer. In Java adding\nthat layer took 30 characters of code, so it didn't make a very big\nimpression on me -- it took a while to even remember I'd had to do it.\n \n-Kevin\n", "msg_date": "Thu, 30 Jul 2009 17:49:14 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Thu, 30 Jul 2009, Rauan Maemirov wrote:\n\n> maintenance_work_mem = 1GB\n> work_mem = 192MB\n> shared_buffers = 7680MB\n> max_connections = 80\n> My box is Nehalem 2xQuad 2.8 with RAM 32Gb\n\nWhile it looks like you sorted out your issue downthread, I wanted to \npoint out that your setting for work_mem could be dangerously high here \nand contribute to problems with running out memory or using swap. If each \nof your 80 clients was doing a sort at the same time, you'd be using 80 * \n192MB + 7680MB = 15360GB of RAM just for the server. The problem is that \neach client could do multiple sorts, so usage might even got higher. \nUnless you have a big data warehouse setup, more common work_mem settings \nare in the 16-64MB range rather than going this high. Just something to \nkeep an eye on if you find a lot of memory is being used by the database \nprocesses. I really need to refine the pgtune model to more carefully \naccount for this particular problem, it's a bit too aggressive here for \npeople who aren't proactively watching the server's RAM after changing the \nsettings.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 31 Jul 2009 00:10:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith<[email protected]> wrote:\n> On Thu, 30 Jul 2009, Rauan Maemirov wrote:\n>\n>> maintenance_work_mem = 1GB\n>> work_mem = 192MB\n>> shared_buffers = 7680MB\n>> max_connections = 80\n>> My box is Nehalem 2xQuad 2.8 with RAM 32Gb\n>\n> While it looks like you sorted out your issue downthread, I wanted to point\n> out that your setting for work_mem could be dangerously high here and\n> contribute to problems\n\nThe real danger here is that you can set up your pg server to fail\nONLY under heavy load, when it runs out of memory and goes into a swap\nstorm. So, without proper load testing and profiling, you may not\nknow you're headed for danger until your server goes unresponsive\nmidday at the most critical of times. 
And restarting it will just\nlead to the same failure again as the clients all reconnect and pummel\nyour server.\n\nMeanwhile, going from 192 to 16MB might result in a total slowdown\nmeasured in a fraction of a percentage overall, and prevent this kind\nof failure.\n\nIf there's one single request you can justify big work_mem for then\nset it for just that one query. It's not uncommon to have a reporting\nuser limited to a few connections and with \"alter user reportinguser\nset work_mem='512MB';\" so that it can run fast but not deplete your\nserver's resources on accident during heavy load.\n", "msg_date": "Thu, 30 Jul 2009 23:11:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey wrote:\n> \n> \n> On 7/30/09 11:24 AM, \"Stefan Kaltenbrunner\" <[email protected]> wrote:\n> \n>> Kevin Grittner wrote:\n>>> Tom Lane <[email protected]> wrote:\n>>>> \"Kevin Grittner\" <[email protected]> writes:\n>>>>> Since the dump to custom format ran longer than the full pg_dump\n>>>>> piped directly to psql would have taken, the overall time to use\n>>>>> this technique is clearly longer for our databases on our hardware.\n>>>> Hmmm ... AFAIR there isn't a good reason for dump to custom format\n>>>> to take longer than plain text dump, except for applying\n>>>> compression. Maybe -Z0 would be worth testing? Or is the problem\n>>>> that you have to write the data to a disk file rather than just\n>>>> piping it?\n>>> I did some checking with the DBA who normally copies these around for\n>>> development and test environments. He confirmed that when the source\n>>> and target are on the same machine, a pg_dump piped to psql takes\n>>> about two hours. If he pipes across the network, it runs more like\n>>> three hours.\n>>>\n>>> My pg_dump to custom format ran for six hours. The single-transaction\n>>> restore from that dump file took two hours, with both on the same\n>>> machine. I can confirm with benchmarks, but this guy generally knows\n>>> what he's talking about (and we do create a lot of development and\n>>> test databases this way).\n>>>\n>>> Either the compression is tripling the dump time, or there is\n>>> something inefficient about how pg_dump writes to the disk.\n>> seems about right - compression in pg_dump -Fc is a serious bottleneck\n>> and unless can significantly speed it up or make it use of multiple\n>> cores (either for the dump itself - which would be awsome - or for the\n>> compression) I would recommend to not use it at all.\n>>\n> \n> That's not an option when a dump compressed is 200GB and uncompressed is\n> 1.3TB, for example.\n\nyeah that was not meant as \"don't use compression at all\" but rather as \n\"use a different way to compress than what pg_dump provides internally\".\n\n\nStefan\n", "msg_date": "Fri, 31 Jul 2009 08:02:58 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "That's true. I tried to lower work_mem from 192 to 64, and it caused\ntotal slowdown.\nBy the way, is there any performance tips for tuning joins? 
I noticed,\nthat my joins on 8.4 slowed down, on 8.3 it was faster a bit.\n\n2009/7/31 Scott Marlowe <[email protected]>:\n> On Thu, Jul 30, 2009 at 10:10 PM, Greg Smith<[email protected]> wrote:\n>> On Thu, 30 Jul 2009, Rauan Maemirov wrote:\n>>\n>>> maintenance_work_mem = 1GB\n>>> work_mem = 192MB\n>>> shared_buffers = 7680MB\n>>> max_connections = 80\n>>> My box is Nehalem 2xQuad 2.8 with RAM 32Gb\n>>\n>> While it looks like you sorted out your issue downthread, I wanted to point\n>> out that your setting for work_mem could be dangerously high here and\n>> contribute to problems\n>\n> The real danger here is that you can set up your pg server to fail\n> ONLY under heavy load, when it runs out of memory and goes into a swap\n> storm.  So, without proper load testing and profiling, you may not\n> know you're headed for danger until your server goes unresponsive\n> midday at the most critical of times.  And restarting it will just\n> lead to the same failure again as the clients all reconnect and pummel\n> your server.\n>\n> Meanwhile, going from 192 to 16MB might result in a total slowdown\n> measured in a fraction of a percentage overall, and prevent this kind\n> of failure.\n>\n> If there's one single request you can justify big work_mem for then\n> set it for just that one query.  It's not uncommon to have a reporting\n> user limited to a few connections and with \"alter user reportinguser\n> set work_mem='512MB';\" so that it can run fast but not deplete your\n> server's resources on accident during heavy load.\n>\n", "msg_date": "Fri, 31 Jul 2009 12:40:36 +0600", "msg_from": "Rauan Maemirov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Thu, Jul 30, 2009 at 10:07 AM, Rauan Maemirov<[email protected]> wrote:\n> Unfortunately had to downgrade back to 8.3. Now having troubles with\n> that and still solving them.\n>\n> For future upgrade, what is the basic steps?\n>\n>>Was the database analyzed recently?\n> Hm... there was smth like auto analyzer in serverlog when i started it\n> first time, but i didn't mention that.\n> Should I analyze whole db? How to do it?\n>\n> And how should I change _cost variables?\n>\n> I/O was very high. at first memory usage grew up and then began to full swap.\n\nThere is at least one known case of memory leak 8.4.0. Possibly you\ngot hit by that or another early adopter bug. I think in your case\nit's probably to soon to have upgraded...with 10k connections even\nminor annoyances can be real nasty.\n\nI'd wait a few months while in the meantime stage your app on a\ntesting database and double check the important query plans to make\nsure there are no big performance regressions. Each version of pg has\na couple for various reasons.\n\nmerlin\n", "msg_date": "Fri, 31 Jul 2009 11:11:57 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane<[email protected]> wrote:\n>> I did some tracing and verified that pg_dump passes data to deflate()\n>> one table row at a time. �I'm not sure about the performance\n>> implications of that, but it does seem like it might be something to\n>> look into.\n\n> I suspect if this was a problem the zlib people would have added\n> internal buffering ages ago. 
I find it hard to believe we're not the\n> first application to use it this way.\n\nI dug into this a bit more. zlib *does* have internal buffering --- it\nhas to, because it needs a minimum lookahead of several hundred bytes\nto ensure that compression works properly. The per-call overhead of\ndeflate() looks a bit higher than one could wish when submitting short\nchunks, but oprofile shows that \"pg_dump -Fc\" breaks down about like\nthis:\n\nsamples % image name symbol name\n1103922 74.7760 libz.so.1.2.3 longest_match\n215433 14.5927 libz.so.1.2.3 deflate_slow\n55368 3.7504 libz.so.1.2.3 compress_block\n41715 2.8256 libz.so.1.2.3 fill_window\n17535 1.1878 libc-2.9.so memcpy\n13663 0.9255 libz.so.1.2.3 adler32\n4613 0.3125 libc-2.9.so _int_malloc\n2942 0.1993 libc-2.9.so free\n2552 0.1729 libc-2.9.so malloc\n2155 0.1460 libz.so.1.2.3 pqdownheap\n2128 0.1441 libc-2.9.so _int_free\n1702 0.1153 libz.so.1.2.3 deflate\n1648 0.1116 libc-2.9.so mempcpy\n\nlongest_match is the core lookahead routine and is not going to be\naffected by submission sizes, because it isn't called unless adequate\ndata (ie, the longest possible match length) is available in zlib's\ninternal buffer. It's possible that doing more buffering on our end\nwould reduce the deflate_slow component somewhat, but it looks like\nthe most we could hope to get that way is in the range of 10% speedup.\nSo I'm wondering if anyone can provide concrete evidence of large\nwins from buffering zlib's input.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Jul 2009 13:04:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "On Fri, 31 Jul 2009 19:04:52 +0200, Tom Lane <[email protected]> wrote:\n\n> Greg Stark <[email protected]> writes:\n>> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane<[email protected]> wrote:\n>>> I did some tracing and verified that pg_dump passes data to deflate()\n>>> one table row at a time.  I'm not sure about the performance\n>>> implications of that, but it does seem like it might be something to\n>>> look into.\n>\n>> I suspect if this was a problem the zlib people would have added\n>> internal buffering ages ago. I find it hard to believe we're not the\n>> first application to use it this way.\n>\n> I dug into this a bit more. zlib *does* have internal buffering --- it\n> has to, because it needs a minimum lookahead of several hundred bytes\n> to ensure that compression works properly. 
The per-call overhead of\n> deflate() looks a bit higher than one could wish when submitting short\n> chunks, but oprofile shows that \"pg_dump -Fc\" breaks down about like\n> this:\n\n\tDuring dump (size of dump is 2.6 GB),\n\nNo Compression :\n- postgres at 70-100% CPU and pg_dump at something like 10-20%\n- dual core is useful (a bit...)\n- dump size 2.6G\n- dump time 2m25.288s\n\nCompression Level 1 :\n- postgres at 70-100% CPU and pg_dump at 20%-100%\n- dual core is definitely useful\n- dump size 544MB\n- dump time 2m33.337s\n\nSince this box is mostly idle right now, eating CPU for compression is no \nproblem...\n\nAdding an option to use LZO instead of gzip could be useful...\n\nCompressing the uncompressed 2.6GB dump :\n\n- gzip -1 :\n\n- compressed size : 565 MB\n- compression throughput : 28.5 MB/s\n- decompression throughput : 74 MB/s\n\n- LZO -1 :\n- compressed size : 696M\n- compression throughput : 86 MB/s\n- decompression throughput : 247 MB/s\n\nConclusion : LZO could help for fast disks (RAID) or slow disks on a \nCPU-starved server...\n", "msg_date": "Sat, 01 Aug 2009 01:01:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 7/31/09 4:01 PM, \"PFC\" <[email protected]> wrote:\n\n> On Fri, 31 Jul 2009 19:04:52 +0200, Tom Lane <[email protected]> wrote:\n> \n>> Greg Stark <[email protected]> writes:\n>>> On Thu, Jul 30, 2009 at 11:30 PM, Tom Lane<[email protected]> wrote:\n>>>> I did some tracing and verified that pg_dump passes data to deflate()\n>>>> one table row at a time.  I'm not sure about the performance\n>>>> implications of that, but it does seem like it might be something to\n>>>> look into.\n>> \n>>> I suspect if this was a problem the zlib people would have added\n>>> internal buffering ages ago. I find it hard to believe we're not the\n>>> first application to use it this way.\n>> \n>> I dug into this a bit more. zlib *does* have internal buffering --- it\n>> has to, because it needs a minimum lookahead of several hundred bytes\n>> to ensure that compression works properly. The per-call overhead of\n>> deflate() looks a bit higher than one could wish when submitting short\n>> chunks, but oprofile shows that \"pg_dump -Fc\" breaks down about like\n>> this:\n> \n> During dump (size of dump is 2.6 GB),\n> \n> No Compression :\n> - postgres at 70-100% CPU and pg_dump at something like 10-20%\n> - dual core is useful (a bit...)\n> - dump size 2.6G\n> - dump time 2m25.288s\n> \n> Compression Level 1 :\n> - postgres at 70-100% CPU and pg_dump at 20%-100%\n> - dual core is definitely useful\n> - dump size 544MB\n> - dump time 2m33.337s\n> \n> Since this box is mostly idle right now, eating CPU for compression is no\n> problem...\n> \n\nI get very different (contradictory) behavior. Server with fast RAID, 32GB\nRAM, 2 x 4 core 3.16Ghz Xeon 54xx CPUs. CentOS 5.2\n8.3.6\nNo disk wait time during any test. One test beforehand was used to prime\nthe disk cache.\n100% CPU in the below means one core fully used. 800% means the system is\nfully loaded.\n\npg_dump > file (on a subset of the DB with lots of tables with small\ntuples)\n6m 27s, 4.9GB; 12.9MB/sec\n50% CPU in postgres, 50% CPU in pg_dump\n\npg_dump -Fc > file.gz\n9m6s, output is 768M (6.53x compression); 9.18MB/sec\n30% CPU in postgres, 70% CPU in pg_dump\n\npg_dump | gzip > file.2.gz\n6m22s, 13MB/sec. 
\n50% CPU in postgres, 50% Cpu in pg_dump, 50% cpu in gzip\n\nThe default (5) compression level was used.\n\nSo, when using pg_dump alone, I could not get significantly more than one\ncore of CPU (all on the same box). No matter how I tried, pg_dump plus the\npostgres process dumping data always totaled about 102% -- it would\nflulctuate in top, give or take 15% at times, but the two always were very\nclose (within 3%) of this total.\n\nPiping the whole thing to gzip gets some speedup. This indicates that\nperhaps the implementation or use of gzip is inappropriate on pg_dump's side\nor the library version is older or slower. Alternatively, the use of gzip\ninside pg_dump fails to pipeline CPU useage as well as piping it does, as\nthe above shows 50% more CPU utilization when piping.\n\nI can do the same test with a single table that is 10GB later (which does\ndump much faster than 13MB/sec and has rows that average about 500 bytes in\nsize). But overall I have found pg_dump's performace sorely lacking, and\nthis is a data risk in the big picture. Postgres is very good about not\nlosing data, but that only goes up to the limits of the hardware and OS,\nwhich is not good enough. Because of long disaster recovery times and poor\nreplication/contingency features, it is a fairly unsafe place for data once\nit gets beyond a certain size and a BC plan requires minimal downtime.\n\n> Adding an option to use LZO instead of gzip could be useful...\n> \n> Compressing the uncompressed 2.6GB dump :\n> \n> - gzip -1 :\n> \n> - compressed size : 565 MB\n> - compression throughput : 28.5 MB/s\n> - decompression throughput : 74 MB/s\n> \n> - LZO -1 :\n> - compressed size : 696M\n> - compression throughput : 86 MB/s\n> - decompression throughput : 247 MB/s\n> \n> Conclusion : LZO could help for fast disks (RAID) or slow disks on a\n> CPU-starved server...\n> \n\nLZO would be a great option, it is very fast, especially decompression.\nWith gzip, one rarely gains by going below gzip -3 or above gzip -6.\n\n\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Mon, 3 Aug 2009 10:00:26 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> I get very different (contradictory) behavior. Server with fast RAID, 32GB\n> RAM, 2 x 4 core 3.16Ghz Xeon 54xx CPUs. CentOS 5.2\n> 8.3.6\n> No disk wait time during any test. One test beforehand was used to prime\n> the disk cache.\n> 100% CPU in the below means one core fully used. 800% means the system is\n> fully loaded.\n\n> pg_dump > file (on a subset of the DB with lots of tables with small\n> tuples)\n> 6m 27s, 4.9GB; 12.9MB/sec\n> 50% CPU in postgres, 50% CPU in pg_dump\n\n> pg_dump -Fc > file.gz\n> 9m6s, output is 768M (6.53x compression); 9.18MB/sec\n> 30% CPU in postgres, 70% CPU in pg_dump\n\n> pg_dump | gzip > file.2.gz\n> 6m22s, 13MB/sec. \n> 50% CPU in postgres, 50% Cpu in pg_dump, 50% cpu in gzip\n\nI don't see anything very contradictory here. What you're demonstrating\nis that it's nice to be able to throw a third CPU at the compression\npart of the problem. That's likely to remain true if we shift to a\ndifferent compression algorithm. 
I suspect if you substituted lzo for\ngzip in the third case, the picture wouldn't change very much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Aug 2009 14:56:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "\n\n\nOn 8/3/09 11:56 AM, \"Tom Lane\" <[email protected]> wrote:\n\n> Scott Carey <[email protected]> writes:\n>> I get very different (contradictory) behavior. Server with fast RAID, 32GB\n>> RAM, 2 x 4 core 3.16Ghz Xeon 54xx CPUs. CentOS 5.2\n>> 8.3.6\n>> No disk wait time during any test. One test beforehand was used to prime\n>> the disk cache.\n>> 100% CPU in the below means one core fully used. 800% means the system is\n>> fully loaded.\n> \n>> pg_dump > file (on a subset of the DB with lots of tables with small\n>> tuples)\n>> 6m 27s, 4.9GB; 12.9MB/sec\n>> 50% CPU in postgres, 50% CPU in pg_dump\n> \n>> pg_dump -Fc > file.gz\n>> 9m6s, output is 768M (6.53x compression); 9.18MB/sec\n>> 30% CPU in postgres, 70% CPU in pg_dump\n> \n>> pg_dump | gzip > file.2.gz\n>> 6m22s, 13MB/sec.\n>> 50% CPU in postgres, 50% Cpu in pg_dump, 50% cpu in gzip\n> \n> I don't see anything very contradictory here.\n\nThe other poster got nearly 2 CPUs of work from just pg_dump + postgres.\nThat contradicts my results (but could be due to data differences or\npostgres version differences).\nIn the other use case, compression was not slower, but just used more CPU\n(also contradicting my results).\n\n\n> What you're demonstrating\n> is that it's nice to be able to throw a third CPU at the compression\n> part of the problem.\n\nNo, 1.5 CPU. A full use of a second would even be great.\n\nI'm also demonstrating that there is some artificial bottleneck somewhere\npreventing postgres and pg_dump to operate concurrently. Instead, one waits\nwhile the other does work.\n\nYour claim earlier in this thread was that there was already pipelined work\nbeing done due to pg_dump + postgresql -- which seems to be true for the\nother test case but not mine.\n\nAs a consequence, adding compression throttles the postgres process even\nthough the compression hasn't caused 100% CPU (or close) on any task\ninvolved.\n\n> That's likely to remain true if we shift to a\n> different compression algorithm. I suspect if you substituted lzo for\n> gzip in the third case, the picture wouldn't change very much.\n> \n\nThat is exactly the point. LZO would be nice (and help mitigate this\nproblem), but it doesn't solve the real problem here. Pg_dump is slow and\nartificially throttles without even getting 100% CPU from itself or\npostgres.\n\nThe problem still remains: dumping with -Fc can be significantly slower\nthan raw piped to a compression utility, even if no task is CPU or I/O\nbound. Dumping and piping to gzip is faster. But parallel restore won't\nwork without custom or raw format.\n\n\n\n> regards, tom lane\n> \n\n", "msg_date": "Mon, 3 Aug 2009 12:28:32 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "\n> I get very different (contradictory) behavior. Server with fast RAID, \n> 32GB\n> RAM, 2 x 4 core 3.16Ghz Xeon 54xx CPUs. CentOS 5.2\n> 8.3.6\n\n\tThat's a very different serup from my (much less powerful) box, so that \nwould explain it...\n\n> No disk wait time during any test. One test beforehand was used to prime\n> the disk cache.\n> 100% CPU in the below means one core fully used. 
800% means the system \n> is\n> fully loaded.\n>\n> pg_dump > file (on a subset of the DB with lots of tables with small\n> tuples)\n> 6m 27s, 4.9GB; 12.9MB/sec\n> 50% CPU in postgres, 50% CPU in pg_dump\n\n\tIf there is no disk wait time, then why do you get 50/50 and not 100/100 \nor at least 1 core maxed out ? That's interesting...\n\nCOPY annonces TO '/dev/null';\nCOPY 413526\nTemps : 13871,093 ms\n\n\\copy annonces to '/dev/null'\nTemps : 14037,946 ms\n\ntime pg_dump -Fc -t annonces -U annonces --compress=0 annonces >/dev/null\nreal 0m14.596s\nuser 0m0.700s\nsys 0m0.372s\n\n\tIn all 3 cases postgres maxes out one core (I've repeated the test until \nall data was cached, so there is no disk access at all in vmstat).\n\tSize of dump is 312MB.\n\n\n\n", "msg_date": "Mon, 03 Aug 2009 22:15:32 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Mon, Aug 3, 2009 at 2:56 PM, Tom Lane<[email protected]> wrote:\n> I don't see anything very contradictory here.  What you're demonstrating\n> is that it's nice to be able to throw a third CPU at the compression\n> part of the problem.  That's likely to remain true if we shift to a\n> different compression algorithm.  I suspect if you substituted lzo for\n> gzip in the third case, the picture wouldn't change very much.\n\nlzo is much, much, (much) faster than zlib. Note, I've tried several\ntimes to contact the author to get clarification on licensing terms\nand have been unable to get a response.\n\n[root@devdb merlin]# time lzop -c dump.sql > /dev/null\n\nreal\t0m16.683s\nuser\t0m15.573s\nsys\t0m0.939s\n[root@devdb merlin]# time gzip -c dump.sql > /dev/null\n\nreal\t3m43.090s\nuser\t3m41.471s\nsys\t0m1.036s\n\nmerlin\n", "msg_date": "Mon, 3 Aug 2009 16:50:46 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\n> lzo is much, much, (much) faster than zlib. Note, I've tried several\n\ndecompression speed is even more awesome...\n\n> times to contact the author to get clarification on licensing terms\n> and have been unable to get a response.\n\nlzop and the LZO library are distributed under the terms of the GNU \nGeneral Public License (GPL).\nsource : http://www.lzop.org/lzop_man.php\n", "msg_date": "Mon, 03 Aug 2009 23:30:48 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "On Mon, Aug 3, 2009 at 5:30 PM, PFC<[email protected]> wrote:\n>\n>> lzo is much, much, (much) faster than zlib.  Note, I've tried several\n>\n> decompression speed is even more awesome...\n>\n>> times to contact the author to get clarification on licensing terms\n>> and have been unable to get a response.\n>\n> lzop and the LZO library are distributed under the terms of the GNU General\n> Public License (GPL).\n> source : http://www.lzop.org/lzop_man.php\n\nyeah...I have another project I'm working on that is closed source,\nplus I was curious if something could be worked out for pg...lzo seems\nideal for database usage. 
The author is MIA or too busy hacking to\nanswer his email :-).\n\nmerlin\n", "msg_date": "Tue, 4 Aug 2009 09:28:55 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Merlin Moncure escribi�:\n> On Mon, Aug 3, 2009 at 5:30 PM, PFC<[email protected]> wrote:\n> >\n> >> lzo is much, much, (much) faster than zlib. �Note, I've tried several\n> >\n> > decompression speed is even more awesome...\n> >\n> >> times to contact the author to get clarification on licensing terms\n> >> and have been unable to get a response.\n> >\n> > lzop and the LZO library are distributed under the terms of the GNU General\n> > Public License (GPL).\n> > source : http://www.lzop.org/lzop_man.php\n> \n> yeah...I have another project I'm working on that is closed source,\n> plus I was curious if something could be worked out for pg...lzo seems\n> ideal for database usage.\n\nI think this was already discussed here. It turns out that a specific\nexception for PG wouldn't be acceptable because of the multiple\ncommercial derivates. LZO would have to become BSD, which presumably\nthe author just doesn't want to do.\n\nMaybe we could have a --enable-lzo switch similar to what we do with\nreadline. Of course, a non-LZO-enabled build would not be able to read\na dump from such a build. (We could also consider LZO for TOAST\ncompression).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 4 Aug 2009 11:30:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\nOn 8/4/09 8:30 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\n> Merlin Moncure escribió:\n>> On Mon, Aug 3, 2009 at 5:30 PM, PFC<[email protected]> wrote:\n>>> \n>>>> lzo is much, much, (much) faster than zlib.  Note, I've tried several\n>>> \n>>> decompression speed is even more awesome...\n>>> \n>>>> times to contact the author to get clarification on licensing terms\n>>>> and have been unable to get a response.\n>>> \n>>> lzop and the LZO library are distributed under the terms of the GNU General\n>>> Public License (GPL).\n>>> source : http://www.lzop.org/lzop_man.php\n>> \n>> yeah...I have another project I'm working on that is closed source,\n>> plus I was curious if something could be worked out for pg...lzo seems\n>> ideal for database usage.\n> \n> I think this was already discussed here. It turns out that a specific\n> exception for PG wouldn't be acceptable because of the multiple\n> commercial derivates. LZO would have to become BSD, which presumably\n> the author just doesn't want to do.\n> \n> Maybe we could have a --enable-lzo switch similar to what we do with\n> readline. Of course, a non-LZO-enabled build would not be able to read\n> a dump from such a build. 
(We could also consider LZO for TOAST\n> compression).\n> \n\nThere are a handful of other compression algorithms very similar to LZO in\nperformance / compression level under various licenses.\n\nLZO is just the best known and most widely used.\n\nhttp://www.fastlz.org/ (MIT)\nhttp://www.quicklz.com/ (GPL again)\nhttp://oldhome.schmorp.de/marc/liblzf.html (BSD -ish)\n\nZFS uses LZJB (CDDL) source code here:\nhttp://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/\ncompress.c\n(a good read for one of the most simple LZ compression algorithms in terms\nof lines of code -- about 100 lines)\n\nFastlz, with its MIT license, is probably the most obvious choice.\n\n> --\n> Alvaro Herrera http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Tue, 4 Aug 2009 10:32:00 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "Scott Carey <[email protected]> writes:\n> There are a handful of other compression algorithms very similar to LZO in\n> performance / compression level under various licenses.\n> LZO is just the best known and most widely used.\n\nAnd after we get done with the license question, we need to ask about\npatents. The compression area is just a minefield of patents. gzip is\nknown to avoid all older patents (and would be pretty solid prior art\nagainst newer ones). I'm far less confident about lesser-known systems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 04 Aug 2009 16:40:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions " }, { "msg_contents": "On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane<[email protected]> wrote:\n> Scott Carey <[email protected]> writes:\n>> There are a handful of other compression algorithms very similar to LZO in\n>> performance / compression level under various licenses.\n>> LZO is just the best known and most widely used.\n>\n> And after we get done with the license question, we need to ask about\n> patents.  The compression area is just a minefield of patents.  gzip is\n> known to avoid all older patents (and would be pretty solid prior art\n> against newer ones).  I'm far less confident about lesser-known systems.\n\nI did a little bit of research. LZO and friends are variants of LZW.\nThe main LZW patent died in 2003, and AFAIK there has been no patent\nenforcement cases brought against LZO or it's cousins (LZO dates to\n1996). OK, I'm no attorney, etc, but the internet seems to believe\nthat the algorithms are patent free. 
LZO is quite widely used, in\nboth open source and some relatively high profile commercial projects.\n\nI downloaded the libraries and did some tests.\n2.5 G sql dump:\n\ncompression time:\nzlib: 4m 1s\nlzo: 17s\nfastlz: 28.8s\nliblzf: 26.7s\n\ncompression size:\nzlib: 609M 75%\nlzo: 948M 62%\nfastlz: 936M 62.5%\nliblzf: 916M 63.5%\n\nA couple of quick notes: liblzf produces (possibly) architecture\ndependent archives according to its header, and fastlz is not declared\n'stable' according to its website.\n\nmerlin\n", "msg_date": "Wed, 5 Aug 2009 10:12:58 -0400", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" }, { "msg_contents": "\n\n\nOn 8/5/09 7:12 AM, \"Merlin Moncure\" <[email protected]> wrote:\n\n> On Tue, Aug 4, 2009 at 4:40 PM, Tom Lane<[email protected]> wrote:\n>> Scott Carey <[email protected]> writes:\n>>> There are a handful of other compression algorithms very similar to LZO in\n>>> performance / compression level under various licenses.\n>>> LZO is just the best known and most widely used.\n>> \n>> And after we get done with the license question, we need to ask about\n>> patents.  The compression area is just a minefield of patents.  gzip is\n>> known to avoid all older patents (and would be pretty solid prior art\n>> against newer ones).  I'm far less confident about lesser-known systems.\n> \n> I did a little bit of research. LZO and friends are variants of LZW.\n> The main LZW patent died in 2003, and AFAIK there has been no patent\n> enforcement cases brought against LZO or it's cousins (LZO dates to\n> 1996). OK, I'm no attorney, etc, but the internet seems to believe\n> that the algorithms are patent free. LZO is quite widely used, in\n> both open source and some relatively high profile commercial projects.\n> \n\nThat doesn't sound right to me, LZW is patent protected in a few ways, and\nis a LZ78 scheme.\n\nLZO, zlib, and the others here are LZ77 schemes which avoid the LZW patents.\nThere are some other patents in the territory with respect to how the hash\nlookups are done for the LZ77 'sliding window' approach. Most notably,\nusing a tree is patented, and a couple other (obvious) tricks that are\ngenerally avoided anyway for any algorithms that are trying to be fast\nrather than produce the highest compression.\n\nhttp://en.wikipedia.org/wiki/Lossless_data_compression#Historical_legal_issu\nes\nhttp://en.wikipedia.org/wiki/LZ77_and_LZ78\nhttp://en.wikipedia.org/wiki/Lempel%E2%80%93Ziv%E2%80%93Welch\nhttp://www.faqs.org/faqs/compression-faq/part1/section-7.html\nhttp://www.ross.net/compression/patents.html\n\nNote, US patents are either 17 years after grant, or 20 years after filing.\nA very large chunk of those in this space have expired, but a few were\nfiled/granted in the early 90's -- though those are generally more specific\nand easy to avoid. 
Or very obvious duplicates of previous patents.\n\nMore notably, one of these, if interpreted broadly, would apply to zlib as\nwell (Gibson and Graybill) but the patent mentions LZRW1, and any broader\nscope would have prior art conflicts with ones that are now long expired.\nIts 17 years after grant on that, but not 20 years after filing.\n\n\n\n\n> I downloaded the libraries and did some tests.\n> 2.5 G sql dump:\n> \n> compression time:\n> zlib: 4m 1s\n> lzo: 17s\n> fastlz: 28.8s\n> liblzf: 26.7s\n> \n> compression size:\n> zlib: 609M 75%\n> lzo: 948M 62%\n> fastlz: 936M 62.5%\n> liblzf: 916M 63.5%\n> \n\nInteresting how that conflicts with some other benchmarks out there (where\nLZO ad the others are about the same). But, they're all an order of\nmagnitude faster than gzip/zlib.\n\n\n> A couple of quick notes: liblzf produces (possibly) architecture\n> dependent archives according to its header, and fastlz is not declared\n> 'stable' according to its website.\n> \n\n\n> merlin\n> \n\n", "msg_date": "Wed, 5 Aug 2009 10:00:20 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL 8.4 performance tuning questions" } ]
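The compression thread above reduces to two measurable effects: pg_dump hands deflate() one row at a time (zlib's own lookahead buffering absorbs most of that per-call cost), and the chosen compression level dominates the speed/size trade-off. The short Python sketch below only illustrates those two effects on an arbitrary input file; it is not pg_dump's code path, and the command-line input path is an assumption (any plain-format dump or large text file will do).

import sys
import time
import zlib


def compress_chunked(data, level, chunk):
    # Feed data to a single zlib stream `chunk` bytes at a time,
    # mimicking many small deflate() calls vs. one large buffer.
    # Returns (elapsed_seconds, compressed_size_in_bytes).
    comp = zlib.compressobj(level)
    out_len = 0
    start = time.perf_counter()
    for off in range(0, len(data), chunk):
        out_len += len(comp.compress(data[off:off + chunk]))
    out_len += len(comp.flush())
    return time.perf_counter() - start, out_len


if __name__ == "__main__":
    payload = open(sys.argv[1], "rb").read()   # e.g. a plain-format dump file
    mib = len(payload) / (1024.0 * 1024.0)
    for level in (1, 3, 6, 9):
        for chunk in (100, 1 << 20):           # ~row-sized chunks vs. 1 MiB buffers
            secs, out_len = compress_chunked(payload, level, chunk)
            print("level %d, chunk %8d: %6.1f MiB/s, %.1f%% of original size"
                  % (level, chunk, mib / secs, 100.0 * out_len / len(payload)))

On text-heavy dumps this should reproduce the pattern reported in the thread: level 1 runs several times faster than levels 6-9 for a modestly larger output, while shrinking the chunk size costs comparatively little. Swapping in an LZO-class codec (for example via a binding such as python-lzo, if available) would be expected to shift the whole curve toward higher throughput at lower compression ratios, in line with the lzop timings quoted above.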
[ { "msg_contents": "Hi all,\n\nHas anybody worked on Greenplum MapReduce programming ?\n\nI am facing a problem while trying to execute the below Greenplum \nMapreduce program written in YAML (in blue). \n\nThe error is thrown in the 7th line as:\nError: YAML syntax error - found character that cannot start any token \nwhile scanning for the next token, at line 7 (in red)\n\nIf somebody can explain this and the potential solution\n\n%YAML 1.1\n---\nVERSION: 1.0.0.1 \nDATABASE: test_db1\nUSER: gpadmin\nDEFINE: \n - INPUT:\n NAME: doc\n TABLE: documents \n - INPUT:\n NAME: kw\n TABLE: keywords\n - MAP: \n NAME: doc_map \n LANGUAGE: python \n FUNCTION: |\n i = 0 \n terms = {}\n for term in data.lower().split(): \n i = i + 1\n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n for term in terms: \n yield([doc_id, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - doc_id integer \n - data text \n RETURNS: \n - doc_id integer \n - term text \n - positions text \n - MAP: \n NAME: kw_map \n LANGUAGE: python \n FUNCTION: | \n i = 0 \n terms = {} \n for term in keyword.lower().split(): \n i = i + 1 \n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n yield([keyword_id, i, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - keyword_id integer \n - keyword text \n RETURNS: \n - keyword_id integer \n - nterms integer \n - term text \n - positions text \n - TASK: \n NAME: doc_prep \n SOURCE: doc \n MAP: doc_map\n - TASK: \n NAME: kw_prep \n SOURCE: kw \n MAP: kw_map \n - INPUT: \n NAME: term_join \n QUERY: | \n SELECT doc.doc_id, kw.keyword_id, kw.term, \nkw.nterms, \n doc.positions as doc_positions, \n kw.positions as kw_positions \n FROM doc_prep doc INNER JOIN kw_prep kw ON \n(doc.term = kw.term)\n - REDUCE: \n NAME: term_reducer \n TRANSITION: term_transition \n FINALIZE: term_finalizer \n - TRANSITION: \n NAME: term_transition \n LANGUAGE: python \n PARAMETERS: \n - state text \n - term text \n - nterms integer \n - doc_positions text \n - kw_positions text \n FUNCTION: | \n if state: \n kw_split = state.split(':') \n else: \n kw_split = [] \n for i in range(0,nterms): \n kw_split.append('') \n for kw_p in kw_positions.split(','): \n kw_split[int(kw_p)-1] = doc_positions \n outstate = kw_split[0] \n for s in kw_split[1:]: \n outstate = outstate + ':' + s \n return outstate \n - FINALIZE: \n NAME: term_finalizer \n LANGUAGE: python \n RETURNS: \n - count integer \n MODE: MULTI \n FUNCTION: | \n if not state: \n return 0 \n kw_split = state.split(':') \n previous = None \n for i in range(0,len(kw_split)): \n isplit = kw_split[i].split(',') \n if any(map(lambda(x): x == '', isplit)): \n return 0 \n adjusted = set(map(lambda(x): int(x)-i, \nisplit)) \n if (previous): \n previous = \nadjusted.intersection(previous) \n else: \n previous = adjusted \n if previous: \n return len(previous) \n return 0\n - TASK: \n NAME: term_match \n SOURCE: term_join \n REDUCE: term_reducer \n - INPUT: \n NAME: final_output \n QUERY: | \n SELECT doc.*, kw.*, tm.count \n FROM documents doc, keywords kw, term_match tm \n WHERE doc.doc_id = tm.doc_id \n AND kw.keyword_id = tm.keyword_id \n AND tm.count > 0 \n EXECUTE: \n - RUN: \n SOURCE: final_output \n TARGET: STDOUT\n\n\n\nRegards,\n\nSuvankar Roy\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. 
If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. 
If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Thu, 30 Jul 2009 17:54:46 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Greenplum MapReduce" }, { "msg_contents": "Suvankar Roy wrote:\n> \n> Hi all,\n> \n> Has anybody worked on Greenplum MapReduce programming ?\n\nIt's a commercial product, you need to contact greenplum.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Mon, 03 Aug 2009 12:04:58 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Greenplum MapReduce" }, { "msg_contents": "Hi Robert,\n\nThanks much for your valuable inputs....\n\nThis spaces and tabs problem is killing me in a way, it is pretty \ncumbersome to say the least....\n\nRegards,\n\nSuvankar Roy\n\n\n\n\"Robert Mah\" <[email protected]> \nSent by: Robert Mah <[email protected]>\n08/02/2009 10:52 PM\n\nTo\n\"'Suvankar Roy'\" <[email protected]>, \n<[email protected]>\ncc\n\nSubject\nRE: [PERFORM] Greenplum MapReduce\n\n\n\n\n\n\nSuvankar:\n \nCheck your file for spaces vs tabs (one of them is bad and yes, it \nmatters).\n \nAnd as an personal aside, this is yet another reason I hate YAML.\n \nCheers,\nRob\n \nFrom: [email protected] [\nmailto:[email protected]] On Behalf Of Suvankar Roy\nSent: Thursday, July 30, 2009 8:25 AM\nTo: [email protected]\nSubject: [PERFORM] Greenplum MapReduce\n \n\nHi all, \n\nHas anybody worked on Greenplum MapReduce programming ? \n\nI am facing a problem while trying to execute the below Greenplum \nMapreduce program written in YAML (in blue). 
\n\nThe error is thrown in the 7th line as: \nError: YAML syntax error - found character that cannot start any token \nwhile scanning for the next token, at line 7 (in red) \n\nIf somebody can explain this and the potential solution \n\n%YAML 1.1 \n--- \nVERSION: 1.0.0.1 \nDATABASE: test_db1 \nUSER: gpadmin \nDEFINE: \n - INPUT: \n NAME: doc \n TABLE: documents \n - INPUT: \n NAME: kw \n TABLE: keywords \n - MAP: \n NAME: doc_map \n LANGUAGE: python \n FUNCTION: | \n i = 0 \n terms = {} \n for term in data.lower().split(): \n i = i + 1 \n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n for term in terms: \n yield([doc_id, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - doc_id integer \n - data text \n RETURNS: \n - doc_id integer \n - term text \n - positions text \n - MAP: \n NAME: kw_map \n LANGUAGE: python \n FUNCTION: | \n i = 0 \n terms = {} \n for term in keyword.lower().split(): \n i = i + 1 \n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n yield([keyword_id, i, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - keyword_id integer \n - keyword text \n RETURNS: \n - keyword_id integer \n - nterms integer \n - term text \n - positions text \n - TASK: \n NAME: doc_prep \n SOURCE: doc \n MAP: doc_map \n - TASK: \n NAME: kw_prep \n SOURCE: kw \n MAP: kw_map \n - INPUT: \n NAME: term_join \n QUERY: | \n SELECT doc.doc_id, kw.keyword_id, kw.term, \nkw.nterms, \n doc.positions as doc_positions, \n kw.positions as kw_positions \n FROM doc_prep doc INNER JOIN kw_prep kw ON \n(doc.term = kw.term) \n - REDUCE: \n NAME: term_reducer \n TRANSITION: term_transition \n FINALIZE: term_finalizer \n - TRANSITION: \n NAME: term_transition \n LANGUAGE: python \n PARAMETERS: \n - state text \n - term text \n - nterms integer \n - doc_positions text \n - kw_positions text \n FUNCTION: | \n if state: \n kw_split = state.split(':') \n else: \n kw_split = [] \n for i in range(0,nterms): \n kw_split.append('') \n for kw_p in kw_positions.split(','): \n kw_split[int(kw_p)-1] = doc_positions \n\n outstate = kw_split[0] \n for s in kw_split[1:]: \n outstate = outstate + ':' + s \n return outstate \n - FINALIZE: \n NAME: term_finalizer \n LANGUAGE: python \n RETURNS: \n - count integer \n MODE: MULTI \n FUNCTION: | \n if not state: \n return 0 \n kw_split = state.split(':') \n previous = None \n for i in range(0,len(kw_split)): \n isplit = kw_split[i].split(',') \n if any(map(lambda(x): x == '', isplit)): \n return 0 \n adjusted = set(map(lambda(x): int(x)-i, \nisplit)) \n if (previous): \n previous = \nadjusted.intersection(previous) \n else: \n previous = adjusted \n if previous: \n return len(previous) \n return 0 \n - TASK: \n NAME: term_match \n SOURCE: term_join \n REDUCE: term_reducer \n - INPUT: \n NAME: final_output \n QUERY: | \n SELECT doc.*, kw.*, tm.count \n FROM documents doc, keywords kw, term_match tm \n WHERE doc.doc_id = tm.doc_id \n AND kw.keyword_id = tm.keyword_id \n AND tm.count > 0 \n EXECUTE: \n - RUN: \n SOURCE: final_output \n TARGET: STDOUT \n\n\n\nRegards, \n\nSuvankar Roy\n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. 
If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n \n \nForwardSourceID:NT000058B6 \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you\n\n\n\nHi Robert,\n\nThanks much for your valuable inputs....\n\nThis spaces and tabs problem is killing\nme in a way, it is pretty cumbersome to say the least....\n\nRegards,\n\nSuvankar Roy\n\n\n\n\n\n\"Robert Mah\"\n<[email protected]> \nSent by: Robert Mah <[email protected]>\n08/02/2009 10:52 PM\n\n\n\n\nTo\n\"'Suvankar Roy'\" <[email protected]>,\n<[email protected]>\n\n\ncc\n\n\n\nSubject\nRE: [PERFORM] Greenplum MapReduce\n\n\n\n\n\n\n\n\nSuvankar:\n \nCheck your file for spaces\nvs tabs (one of them is bad and yes, it matters).\n \nAnd as an personal aside,\nthis is yet another reason I hate YAML.\n \nCheers,\nRob\n \nFrom: [email protected]\n[mailto:[email protected]]\nOn Behalf Of Suvankar Roy\nSent: Thursday, July 30, 2009 8:25 AM\nTo: [email protected]\nSubject: [PERFORM] Greenplum MapReduce\n \n\nHi all, \n\nHas anybody worked on Greenplum MapReduce programming ?\n\n\nI am facing a problem while trying to execute the below Greenplum Mapreduce\nprogram written in YAML (in blue). 
\n\nThe error is thrown in the 7th line as:\n\nError: YAML syntax error - found character that cannot start any token\nwhile scanning for the next token, at line 7 (in red)\n\n\nIf somebody can explain this and the potential solution\n\n\n%YAML 1.1 \n--- \nVERSION: 1.0.0.1 \nDATABASE: test_db1 \nUSER: gpadmin \nDEFINE: \n        - INPUT:\n\n                NAME: doc\n\n                TABLE: documents\n\n        - INPUT:\n\n                NAME: kw\n\n                TABLE: keywords\n\n        - MAP: \n                NAME:    \n            doc_map \n                LANGUAGE:  \n      python \n                FUNCTION:  \n       |\n\n                    \n   i = 0 \n                    \n   terms = {} \n                    \n   for term in data.lower().split(): \n                    \n           i = i + 1\n\n                    \n           if term in terms: \n                    \n                   terms[term]\n+= ','+str(i) \n                    \n           else: \n                    \n                   terms[term]\n= str(i) \n                    \n   for term in terms: \n                    \n           yield([doc_id, term, terms[term]])\n         \n\n                OPTIMIZE: STRICT\nIMMUTABLE \n                PARAMETERS: \n                    \n   - doc_id integer \n                    \n   - data text \n                RETURNS: \n                    \n   - doc_id integer \n                    \n   - term text \n                    \n   - positions text         \n        - MAP: \n                NAME:    \n    kw_map \n                LANGUAGE:  \n      python \n                FUNCTION:  \n      | \n                    \n   i = 0 \n                    \n   terms = {} \n                    \n   for term in keyword.lower().split(): \n                    \n           i = i + 1 \n                    \n           if term in terms: \n                    \n                   terms[term]\n+= ','+str(i) \n                    \n           else: \n                    \n                   terms[term]\n= str(i) \n                    \n           yield([keyword_id, i, term, terms[term]])\n\n                OPTIMIZE: STRICT\nIMMUTABLE \n                PARAMETERS: \n                    \n   - keyword_id integer \n                    \n   - keyword text \n                RETURNS: \n                    \n   - keyword_id integer \n                    \n   - nterms integer \n                    \n   - term text \n                    \n   - positions text          \n\n        - TASK: \n                NAME: doc_prep\n\n                SOURCE: doc \n                MAP: doc_map\n\n        - TASK: \n                NAME: kw_prep \n                SOURCE: kw \n                MAP: kw_map  \n       \n\n        - INPUT: \n                NAME: term_join\n\n                QUERY: | \n                    \n   SELECT doc.doc_id, kw.keyword_id, kw.term, kw.nterms, \n                    \n            doc.positions as doc_positions,\n\n                    \n           kw.positions as kw_positions \n                    \n    FROM doc_prep doc INNER JOIN kw_prep kw ON (doc.term = kw.term)\n\n        - REDUCE: \n                NAME: term_reducer\n\n                TRANSITION: term_transition\n\n                FINALIZE: term_finalizer\n        \n        - TRANSITION: \n                NAME: term_transition\n\n                LANGUAGE: python\n\n                PARAMETERS: \n                    \n   - state text \n                    \n   - term text \n                    \n   - nterms integer \n                 
   \n   - doc_positions text \n                    \n   - kw_positions text \n                FUNCTION: | \n                    \n   if state: \n                    \n           kw_split = state.split(':') \n                    \n   else: \n                    \n           kw_split = [] \n                    \n           for i in range(0,nterms): \n                    \n                   kw_split.append('')\n\n                    \n   for kw_p in kw_positions.split(','): \n                    \n           kw_split[int(kw_p)-1] = doc_positions\n         \n\n                    \n   outstate = kw_split[0] \n                    \n   for s in kw_split[1:]: \n                    \n           outstate = outstate + ':' + s\n\n                    \n   return outstate         \n          - FINALIZE: \n                NAME: term_finalizer\n\n                LANGUAGE: python\n\n                RETURNS: \n                    \n   - count integer \n                MODE: MULTI \n                FUNCTION: | \n                    \n   if not state: \n                    \n           return 0 \n                    \n   kw_split = state.split(':') \n                    \n   previous = None \n                    \n   for i in range(0,len(kw_split)): \n                    \n           isplit = kw_split[i].split(',')\n\n                    \n           if any(map(lambda(x): x == '',\nisplit)): \n                    \n                   return\n0 \n                    \n           adjusted = set(map(lambda(x):\nint(x)-i, isplit)) \n                    \n           if (previous): \n                    \n                   previous\n= adjusted.intersection(previous) \n                    \n           else: \n                    \n                   previous\n= adjusted \n                    \n   if previous: \n                    \n           return len(previous) \n                    \n   return 0 \n        - TASK: \n                NAME: term_match\n\n                SOURCE: term_join\n\n                REDUCE: term_reducer\n\n        - INPUT: \n                NAME: final_output\n\n                QUERY: | \n                    \n   SELECT doc.*, kw.*, tm.count \n                    \n   FROM documents doc, keywords kw, term_match tm \n                    \n   WHERE doc.doc_id = tm.doc_id \n                    \n     AND kw.keyword_id = tm.keyword_id \n                    \n     AND tm.count > 0 \n        EXECUTE: \n                - RUN: \n                    \n   SOURCE: final_output \n                    \n   TARGET: STDOUT\n\n\n\n\nRegards, \n\nSuvankar Roy\n=====-----=====-----=====\nNotice: The information contained in\nthis e-mail\nmessage and/or attachments to it may\ncontain \nconfidential or privileged information.\nIf you are \nnot the intended recipient, any dissemination,\nuse, \nreview, distribution, printing or copying\nof the \ninformation contained in this e-mail\nmessage \nand/or attachments to it are strictly\nprohibited. If \nyou have received this communication\nin error, \nplease notify us by reply e-mail or\ntelephone and \nimmediately and permanently delete\nthe message \nand any attachments. Thank you\n \n \nForwardSourceID:NT000058B6\n   \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. 
If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. Thank you", "msg_date": "Mon, 3 Aug 2009 10:27:58 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Greenplum MapReduce" }, { "msg_contents": "Suvankar Roy wrote:\n> Hi all,\n> \n> Has anybody worked on Greenplum MapReduce programming ?\n> \n> I am facing a problem while trying to execute the below Greenplum \n> Mapreduce program written in YAML (in blue). \n\nThe other poster suggested contacting Greenplum and I can only agree.\n\n> The error is thrown in the 7th line as:\n> Error: YAML syntax error - found character that cannot start any token \n> while scanning for the next token, at line 7 (in red)\n\nThere is no red, particularly if viewing messages as plain text (which \nmost people do on mailing lists). Consider indicating a line some other \nway next time (commonly below the line you put something like \"this is \nline 7 ^^^^^\")\n\nThe most common problem I get with YAML files though is when a tab is \naccidentally inserted instead of spaces at the start of a line.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 03 Aug 2009 10:25:27 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Greenplum MapReduce" }, { "msg_contents": "Hi Richard,\n\nI sincerely regret the inconvenience caused.....\n\n%YAML 1.1\n---\nVERSION: 1.0.0.1 \nDATABASE: test_db1\nUSER: gpadmin\nDEFINE: \n - INPUT: #****** This the line which is causing the error ******#\n NAME: doc\n TABLE: documents \n - INPUT:\n NAME: kw\n TABLE: keywords\n - MAP: \n NAME: doc_map \n LANGUAGE: python \n FUNCTION: |\n i = 0 \n terms = {}\n for term in data.lower().split(): \n i = i + 1\n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n for term in terms: \n yield([doc_id, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - doc_id integer \n - data text \n RETURNS: \n - doc_id integer \n - term text \n - positions text \n - MAP: \n NAME: kw_map \n LANGUAGE: python \n FUNCTION: | \n i = 0 \n terms = {} \n for term in keyword.lower().split(): \n i = i + 1 \n if term in terms: \n terms[term] += ','+str(i) \n else: \n terms[term] = str(i) \n yield([keyword_id, i, term, terms[term]]) \n OPTIMIZE: STRICT IMMUTABLE \n PARAMETERS: \n - keyword_id integer \n - keyword text \n RETURNS: \n - keyword_id integer \n - nterms integer \n - term text \n - positions text \n - TASK: \n NAME: doc_prep \n SOURCE: doc \n MAP: doc_map\n - TASK: \n NAME: kw_prep \n SOURCE: kw \n MAP: kw_map \n - INPUT: \n NAME: term_join \n QUERY: | \n SELECT doc.doc_id, kw.keyword_id, kw.term, \nkw.nterms, \n doc.positions as doc_positions, \n kw.positions as kw_positions \n FROM doc_prep doc INNER JOIN kw_prep kw ON \n(doc.term = kw.term)\n - REDUCE: \n NAME: term_reducer \n TRANSITION: term_transition \n FINALIZE: term_finalizer \n - TRANSITION: \n NAME: term_transition \n LANGUAGE: python \n PARAMETERS: \n - state text \n - term text \n - nterms integer \n - doc_positions text \n - kw_positions text \n FUNCTION: | \n if state: \n kw_split = state.split(':') \n else: \n kw_split = [] \n for i in range(0,nterms): \n 
kw_split.append('') \n for kw_p in kw_positions.split(','): \n kw_split[int(kw_p)-1] = doc_positions \n outstate = kw_split[0] \n for s in kw_split[1:]: \n outstate = outstate + ':' + s \n return outstate \n - FINALIZE: \n NAME: term_finalizer \n LANGUAGE: python \n RETURNS: \n - count integer \n MODE: MULTI \n FUNCTION: | \n if not state: \n return 0 \n kw_split = state.split(':') \n previous = None \n for i in range(0,len(kw_split)): \n isplit = kw_split[i].split(',') \n if any(map(lambda(x): x == '', isplit)): \n return 0 \n adjusted = set(map(lambda(x): int(x)-i, \nisplit)) \n if (previous): \n previous = \nadjusted.intersection(previous) \n else: \n previous = adjusted \n if previous: \n return len(previous) \n return 0\n - TASK: \n NAME: term_match \n SOURCE: term_join \n REDUCE: term_reducer \n - INPUT: \n NAME: final_output \n QUERY: | \n SELECT doc.*, kw.*, tm.count \n FROM documents doc, keywords kw, term_match tm \n WHERE doc.doc_id = tm.doc_id \n AND kw.keyword_id = tm.keyword_id \n AND tm.count > 0 \n EXECUTE: \n - RUN: \n SOURCE: final_output \n TARGET: STDOUT\n\n\nI have learnt that unnecessary TABs can the cause of this, so trying to \novercome that, hopefully the problem will subside then....\n\nRegards,\n\nSuvankar Roy\n\n\n\n\nRichard Huxton <[email protected]> \n08/03/2009 02:55 PM\n\nTo\nSuvankar Roy <[email protected]>\ncc\[email protected]\nSubject\nRe: [PERFORM] Greenplum MapReduce\n\n\n\n\n\n\nSuvankar Roy wrote:\n> Hi all,\n> \n> Has anybody worked on Greenplum MapReduce programming ?\n> \n> I am facing a problem while trying to execute the below Greenplum \n> Mapreduce program written in YAML (in blue). \n\nThe other poster suggested contacting Greenplum and I can only agree.\n\n> The error is thrown in the 7th line as:\n> Error: YAML syntax error - found character that cannot start any token \n> while scanning for the next token, at line 7 (in red)\n\nThere is no red, particularly if viewing messages as plain text (which \nmost people do on mailing lists). Consider indicating a line some other \nway next time (commonly below the line you put something like \"this is \nline 7 ^^^^^\")\n\nThe most common problem I get with YAML files though is when a tab is \naccidentally inserted instead of spaces at the start of a line.\n\n-- \n Richard Huxton\n Archonet Ltd\n\nForwardSourceID:NT000058E2 \n=====-----=====-----=====\nNotice: The information contained in this e-mail\nmessage and/or attachments to it may contain \nconfidential or privileged information. If you are \nnot the intended recipient, any dissemination, use, \nreview, distribution, printing or copying of the \ninformation contained in this e-mail message \nand/or attachments to it are strictly prohibited. If \nyou have received this communication in error, \nplease notify us by reply e-mail or telephone and \nimmediately and permanently delete the message \nand any attachments. 
Thank you", "msg_date": "Mon, 3 Aug 2009 15:06:49 +0530", "msg_from": "Suvankar Roy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Greenplum MapReduce" }, { "msg_contents": "Suvankar Roy wrote:\n> Hi Richard,\n> \n> I sincerely regret the inconvenience caused.....\n\nNo big inconvenience, but the lists can be very busy sometimes and the \neasier you make it for people to answer your questions the better the \nanswers you will get.\n\n> %YAML 1.1\n> ---\n> VERSION: 1.0.0.1 \n> DATABASE: test_db1\n> USER: gpadmin\n> DEFINE: \n> - INPUT: #****** This the line which is causing the error ******#\n > NAME: doc\n > TABLE: documents\n\nIf it looks fine, always check for tabs. Oh, and you could have cut out \nall the rest of the file, really.\n\n> I have learnt that unnecessary TABs can the cause of this, so trying to \n> overcome that, hopefully the problem will subside then....\n\nI'm always getting this. It's easy to accidentally introduce a tab \ncharacter when reformatting YAML. It might be worth checking if your \ntext editor has an option to always replace tabs with spaces.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 03 Aug 2009 10:49:47 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Greenplum MapReduce" } ]
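The thread above converges on a tab character in the indentation as the usual trigger for YAML's "found character that cannot start any token" error. The short Python sketch below is a generic way to locate such lines before handing the file to Greenplum's MapReduce runner; it is not part of Greenplum's tooling, and passing file paths on the command line is simply an assumption for the example.

import sys


def find_tab_indents(path):
    # Return the line numbers whose leading whitespace contains a tab,
    # which the YAML tokenizer rejects with "cannot start any token".
    hits = []
    with open(path, "r", encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            indent = line[: len(line) - len(line.lstrip())]
            if "\t" in indent:
                hits.append(lineno)
    return hits


if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno in find_tab_indents(path):
            print("%s:%d: tab found in indentation" % (path, lineno))

Running something like this over the mapreduce YAML (or configuring the editor to expand tabs to spaces, as suggested in the thread) takes the guesswork out of the "line 7" message.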
[ { "msg_contents": "Dear All,\n \n \n We are using Postgres 8.3.7 in our java application. We are doing performances tuning and load testing in our setup. we have noticed that ,some of our queries to the database taking long time to return the results.Please find our setup details belows.\n \nWe observed that postgres is running in windows is slower than the linux .\n \n Machine &amp; Database Details :\n \n Windows configuration:\n 4 GB RAM\n 4*1.6 GHZ\n windows 2008 server standard edition\n \n Postgresql configuration:\n \n shared_buffers: 1 GB\n Effective_cache_size: 2GB\n fsync: off\t (even we tested this parameter is on ,we observed the same slowness )\n \n \n Database Details : \n \n Postgres Database : PostgreSQL 8.3.7.1\n Driver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604)\n We are using 40 database connections.\n \n \n We have few tables which will be having more amount data.While running our application STATSDATA table will be created daily with table name with date.\n like as STATSDATA8_21_2009\n \n Schema for STATSDATA table\n \n create table STATSDATA8_21_2009(\n POLLID Numeric(19),\n INSTANCE varchar(100),\n TTIME Numeric(19),\n VAL Numeric(13)) ;CREATE INDEX POLLID%_ndx on STATSDATA%(POLLID)\n\nSchema for PolledData\n\ncreate table PolledData(\n\"NAME\" varchar(50) NOT NULL ,\n\"ID\" BIGINT NOT NULL ,\n\"AGENT\" varchar(50) NOT NULL ,\n\"COMMUNITY\" varchar(100) NOT NULL ,\n\"PERIOD\" INTEGER NOT NULL,\n\"ACTIVE\" varchar(10),\n\"OID\" varchar(200) NOT NULL,\n\"LOGDIRECTLY\" varchar(10),\n\"LOGFILE\" varchar(100),\n\"SSAVE\" varchar(10),\n\"THRESHOLD\" varchar(10),\n\"ISMULTIPLEPOLLEDDATA\" varchar(10),\n\"PREVIOUSSEVERITY\" INTEGER,\n\"NUMERICTYPE\" INTEGER,\n\"SAVEABSOLUTES\" varchar(10),\n\"TIMEAVG\" varchar(10),\n\"PORT\" INTEGER,\n\"WEBNMS\" varchar(100),\n\"GROUPNAME\" varchar(100),\n\"LASTCOUNTERVALUE\" BIGINT ,\n\"LASTTIMEVALUE\" BIGINT ,\n\"TIMEVAL\" BIGINT NOT NULL ,\n\"POLICYNAME\" varchar(100),\n\"THRESHOLDLIST\" varchar(200),\n\"DNSNAME\" varchar(100),\n\"SUFFIX\" varchar(20),\n\"STATSDATATABLENAME\" varchar(100),\n\"POLLERNAME\" varchar(200),\n\"FAILURECOUNT\" INTEGER,\n\"FAILURETHRESHOLD\" INTEGER,\n\"PARENTOBJ\" varchar(100),\n\"PROTOCOL\" varchar(50),\n\"SAVEPOLLCOUNT\" INTEGER,\n\"CURRENTSAVECOUNT\" INTEGER,\n\"SAVEONTHRESHOLD\" varchar(10),\n\"SNMPVERSION\" varchar(10),\n\"USERNAME\" varchar(30),\n\"CONTEXTNAME\" varchar(30),\nPRIMARY KEY (\"ID\",\"NAME\",\"AGENT\",\"OID\"),\nindex PolledData0_ndx ( \"NAME\"),\nindex PolledData1_ndx ( \"AGENT\"),\nindex PolledData2_ndx ( \"OID\"),\nindex PolledData3_ndx ( \"ID\"),\nindex PolledData4_ndx ( \"PARENTOBJ\"),\n )\n\n \nWe have 300k row's in PolledData Table.In each STATSDATA table ,we have almost 12 to 13 million rows. Every one minute interval ,we insert data into to STATSDATA table. In our application ,we use insert and select query to STATSDATA table at regular interval. Please let us know why the below query takes more time to return the results. is there any thing we need to do to tune the postgres database ? 
\n\n\n\n\nPlease find explain analyze output.\n\n \n First Query :\n\npostgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\n AME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE ( ( PolledDa\n ta.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME &gt;= 1250838027454) AND ( TTIME &lt;=\n 1250838079654) ) ) ) t1;\n QUERY PLAN\n \n --------------------------------------------------------------------------------\n ------------------------------------------------------------------\n Aggregate (cost=773897.12..773897.13 rows=1 width=0) (actual time=17818.410..1\n 7818.412 rows=1 loops=1)\n -&gt; Merge Join (cost=717526.23..767505.06 rows=2556821 width=0) (actual time\n =17560.469..17801.790 rows=13721 loops=1)\n Merge Cond: (statsdata8_21_2009.pollid = ((polleddata.id)::numeric))\n -&gt; Sort (cost=69708.44..69742.49 rows=13619 width=8) (actual time=239\n 2.659..2416.093 rows=13721 loops=1)\n Sort Key: statsdata8_21_2009.pollid\n Sort Method: quicksort Memory: 792kB\n -&gt; Seq Scan on statsdata8_21_2009 (cost=0.00..68773.27 rows=136\n 19 width=8) (actual time=0.077..2333.132 rows=13721 loops=1)\n Filter: ((ttime &gt;= 1250838027454::numeric) AND (ttime &lt;= 12\n 50838079654::numeric))\n -&gt; Materialize (cost=647817.78..688331.92 rows=3241131 width=8) (actu\n al time=15167.767..15282.232 rows=21582 loops=1)\n -&gt; Sort (cost=647817.78..655920.61 rows=3241131 width=8) (actua\n l time=15167.756..15218.645 rows=21574 loops=1)\n Sort Key: ((polleddata.id)::numeric)\n Sort Method: external merge Disk: 736kB\n -&gt; Seq Scan on polleddata (cost=0.00..164380.31 rows=3241\n 131 width=8) (actual time=1197.278..14985.665 rows=23474 loops=1)\n Total runtime: 17826.511 ms\n (14 rows)\n \n Second Query :\n \n postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\n AME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_20_2009 WHERE ( ( PolledDa\n ta.ID=STATSDATA8_20_2009.POLLID) AND ( ( TTIME &gt;= 1250767134601) AND ( TTIME &lt;=\n 1250767384601) ) ) ) t1;\n QUERY PLAN\n \n --------------------------------------------------------------------------------\n -----------------------------------------------------------------\n Aggregate (cost=1238144.31..1238144.32 rows=1 width=0) (actual time=111796.187\n ..111796.188 rows=1 loops=1)\n -&gt; Merge Join (cost=1034863.23..1212780.47 rows=10145533 width=0) (actual t\n ime=111685.204..111783.670 rows=13126 loops=1)\n Merge Cond: (statsdata8_20_2009.pollid = ((polleddata.id)::numeric))\n -&gt; Sort (cost=387045.44..387168.91 rows=49389 width=8) (actual time=1\n 09756.892..109770.670 rows=13876 loops=1)\n Sort Key: statsdata8_20_2009.pollid\n Sort Method: quicksort Memory: 799kB\n -&gt; Seq Scan on statsdata8_20_2009 (cost=0.00..382519.60 rows=49\n 389 width=8) (actual time=16.898..109698.188 rows=13876 loops=1)\n Filter: ((ttime &gt;= 1250767134601::numeric) AND (ttime &lt;= 12\n 50767384601::numeric))\n -&gt; Materialize (cost=647817.78..688331.92 rows=3241131 width=8) (actu\n al time=1928.266..1960.672 rows=13915 loops=1)\n -&gt; Sort (cost=647817.78..655920.61 rows=3241131 width=8) (actua\n l time=1928.253..1941.423 rows=5830 loops=1)\n Sort Key: ((polleddata.id)::numeric)\n Sort Method: external merge Disk: 744kB\n -&gt; Seq Scan on polleddata (cost=0.00..164380.31 rows=3241\n 131 width=8) (actual time=195.961..1724.824 rows=23474 loops=1)\n Total runtime: 111805.644 ms\n (14 rows)\n \n Third Query \n \n postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\n AME, 
INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE ( ( PolledDa\n ta.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME &gt;= 1250838027454) AND ( TTIME &lt;=\n 1250838027454) ) ) union all SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIM\n E, VAL FROM PolledData, STATSDATA8_20_2009 WHERE ( ( PolledData.ID=STATSDATA8_20\n _2009.POLLID) AND ( ( TTIME &gt;= 1250767134601) AND ( TTIME &lt;= 1250767134601) ) )\n )t1 ;\n QUERY PLAN\n \n --------------------------------------------------------------------------------\n -----------------------------------------------------------------\n Aggregate (cost=719553.16..719553.17 rows=1 width=0) (actual time=603669.894..\n 603669.895 rows=1 loops=1)\n -&gt; Append (cost=0.00..719553.15 rows=2 width=0) (actual time=12736.956..603\n 668.946 rows=228 loops=1)\n -&gt; Subquery Scan \"*SELECT* 1\" (cost=0.00..203804.22 rows=1 width=0) (\n actual time=12736.953..506562.673 rows=227 loops=1)\n -&gt; Nested Loop (cost=0.00..203804.20 rows=1 width=78) (actual t\n ime=12736.949..506561.858 rows=227 loops=1)\n Join Filter: ((public.polleddata.id)::numeric = statsdata8_\n 21_2009.pollid)\n -&gt; Seq Scan on statsdata8_21_2009 (cost=0.00..70574.88 ro\n ws=1 width=32) (actual time=0.047..29066.227 rows=227 loops=1)\n Filter: ((ttime &gt;= 1250838027454::numeric) AND (ttime\n &lt;= 1250838027454::numeric))\n -&gt; Seq Scan on polleddata (cost=0.00..132939.93 rows=1929\n 3 width=54) (actual time=362.780..2066.030 rows=23474 loops=227)\n -&gt; Subquery Scan \"*SELECT* 2\" (cost=0.00..515748.94 rows=1 width=0) (\n actual time=4855.541..97105.635 rows=1 loops=1)\n -&gt; Nested Loop (cost=0.00..515748.92 rows=1 width=78) (actual t\n ime=4855.537..97105.628 rows=1 loops=1)\n Join Filter: ((public.polleddata.id)::numeric = statsdata8_\n 20_2009.pollid)\n -&gt; Seq Scan on statsdata8_20_2009 (cost=0.00..382519.60 r\n ows=1 width=32) (actual time=3136.008..93985.540 rows=1 loops=1)\n Filter: ((ttime &gt;= 1250767134601::numeric) AND (ttime\n &lt;= 1250767134601::numeric))\n -&gt; Seq Scan on polleddata (cost=0.00..132939.93 rows=1929\n 3 width=54) (actual time=371.394..3087.391 rows=23474 loops=1)\n Total runtime: 603670.065 ms\n (15 rows)\n \nPlease let me know if you need any more details in this.\n\n\nRegards,\nPari\n\nDear All, We are using Postgres 8.3.7 in our java application. We are doing performances tuning and load testing in our setup. we have noticed that ,some of our queries to the database taking long time to return the results.Please find our setup details belows. We observed that postgres is running in windows is slower than the linux . Machine & Database Details : Windows configuration: 4 GB RAM 4*1.6 GHZ windows 2008 server standard edition Postgresql configuration: shared_buffers: 1 GB Effective_cache_size: 2GB fsync: off  (even we tested this parameter is on ,we observed the same slowness ) Database Details : Postgres Database : PostgreSQL 8.3.7.1 Driver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604) We are using 40 database connections. We have few tables which will be having more amount data.While running our application STATSDATA table will be created daily with table name with date. 
Seq Scan on polleddata  (cost=0.00..164380.31 rows=3241 131 width=8) (actual time=195.961..1724.824 rows=23474 loops=1)  Total runtime: 111805.644 ms (14 rows) Third Query postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N AME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE ( ( PolledDa ta.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME >= 1250838027454) AND ( TTIME <= 1250838027454) ) )  union all  SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIM E, VAL FROM PolledData, STATSDATA8_20_2009 WHERE ( ( PolledData.ID=STATSDATA8_20 _2009.POLLID) AND ( ( TTIME >= 1250767134601) AND ( TTIME <= 1250767134601) ) ) )t1 ;                                                                    QUERY PLAN -------------------------------------------------------------------------------- -----------------------------------------------------------------  Aggregate  (cost=719553.16..719553.17 rows=1 width=0) (actual time=603669.894.. 603669.895 rows=1 loops=1)    ->  Append  (cost=0.00..719553.15 rows=2 width=0) (actual time=12736.956..603 668.946 rows=228 loops=1)          ->  Subquery Scan \"*SELECT* 1\"  (cost=0.00..203804.22 rows=1 width=0) ( actual time=12736.953..506562.673 rows=227 loops=1)                ->  Nested Loop  (cost=0.00..203804.20 rows=1 width=78) (actual t ime=12736.949..506561.858 rows=227 loops=1)                      Join Filter: ((public.polleddata.id)::numeric = statsdata8_ 21_2009.pollid)                      ->  Seq Scan on statsdata8_21_2009  (cost=0.00..70574.88 ro ws=1 width=32) (actual time=0.047..29066.227 rows=227 loops=1)                            Filter: ((ttime >= 1250838027454::numeric) AND (ttime  <= 1250838027454::numeric))                      ->  Seq Scan on polleddata  (cost=0.00..132939.93 rows=1929 3 width=54) (actual time=362.780..2066.030 rows=23474 loops=227)          ->  Subquery Scan \"*SELECT* 2\"  (cost=0.00..515748.94 rows=1 width=0) ( actual time=4855.541..97105.635 rows=1 loops=1)                ->  Nested Loop  (cost=0.00..515748.92 rows=1 width=78) (actual t ime=4855.537..97105.628 rows=1 loops=1)                      Join Filter: ((public.polleddata.id)::numeric = statsdata8_ 20_2009.pollid)                      ->  Seq Scan on statsdata8_20_2009  (cost=0.00..382519.60 r ows=1 width=32) (actual time=3136.008..93985.540 rows=1 loops=1)                            Filter: ((ttime >= 1250767134601::numeric) AND (ttime  <= 1250767134601::numeric))                      ->  Seq Scan on polleddata  (cost=0.00..132939.93 rows=1929 3 width=54) (actual time=371.394..3087.391 rows=23474 loops=1)  Total runtime: 603670.065 ms (15 rows) Please let me know if you need any more details in this.Regards,Pari", "msg_date": "Thu, 30 Jul 2009 05:54:30 -0700", "msg_from": "parimala <[email protected]>", "msg_from_op": true, "msg_subject": "Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version" }, { "msg_contents": "\n> ,some of our queries to the database taking long time to return the \n> results.\n\n> fsync: off\t (even we tested this parameter is on ,we observed the same \n> slowness )\n\n\tIf your queries take long time to return results, I suppose you are \ntalking about SELECTs.\n\n\tfsync = off will not make SELECTs faster (only inserts, updates, deletes) \nbut it is not worth it as you risk data loss.\n\n\tsynchronous_commit = on has about the same advantages (faster...) 
as \nfsync=off, but with no risk of data loss, so it is much better !\n\n\n> We have 300k row's in PolledData Table.In each STATSDATA table ,we have \n> almost 12 to 13 million rows.\n\n\tOK. So you insert 13 million rows per day ?\n\tThat is about 150 rows per second.\n\n> Every one minute interval ,we insert data into to STATSDATA table.\n\n\tI assume you are making an INSERT INTO statsdata VALUES (...... 150 \nvalues .....)\n\tand not 150 inserts, yes ?\n\n> First Query :\n> SELECT COUNT(*) FROM (\n\tSELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL\n\tFROM PolledData, STATSDATA8_21_2009 WHERE\n\t( ( PolledData.ID=STATSDATA8_21_2009.POLLID)\n\tAND ( ( TTIME >= 1250838027454)\n\tAND ( TTIME <=1250838079654) ) ) ) t1;\n\n* You could rewrite as :\n\nSELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL\n FROM PolledData\nJOIN STATSDATA8_21_2009 ON ( PolledData.ID = STATSDATA8_21_2009.POLLID)\nWHERE TTIME BETWEEN ... AND ...\n\n- It is exactly the same query, but much easier to read.\n\n* some ANALYZE-ing of your tables would be useful, since the estimates \n from the planner look suspiciously different from reality\n- ANALYZE is fast, you can run it often if you INSERT rows all the time\n\n* You are joining on POLLID which is a NUMERIC in one table and a BIGINT \nin the other table.\n- Is there any reason for this type difference ?\n- Could you use BIGINT in both tables ?\n- BIGINT is faster than NUMERIC and uses less space.\n- Type conversions use CPU cycles too.\n\n* Should StatsData.ID have a foreign key REFERENCES PolledData.ID ?\n- This won't make the query faster, but if you know all rows in StatsData \nreference rows in PolledData (because of the FK constraint) and you want a \ncount(*) like above, you don't need to JOIN.\n\n* TTIME >= 1250838027454 AND TTIME <=1250838079654\n- TTIME should be TIMESTAMP (with or without TIMEZONE) or BIGINT but \ncertainly not NUMERIC\n- An index on StatsData.TTIME would be useful, it would avoid Seq Scan, \nreplacing it with a Bitmap Scan, much faster\n\n* work_mem\n- since you have few connections you could increase work_mem\n\n> Second Query :\n\n\tSame as first query\n\n> Third Query\n\nSELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, \nVAL\n FROM PolledData, STATSDATA8_21_2009\nWHERE ( ( PolledData.ID=STATSDATA8_21_2009.POLLID)\nAND ( ( TTIME >= 1250838027454) AND ( TTIME <=1250838027454) ) )\n\nunion all SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL FROM \nPolledData, STATSDATA8_20_2009\nWHERE ( ( PolledData.ID=STATSDATA8_20_2009.POLLID) AND ( ( TTIME >= \n1250767134601) AND ( TTIME <= 1250767134601) ) ) )t1 ;\n\nBasically this is, again, exactly the same query as above, but two times, \nand UNION ALL'ed\n\n* You could rewrite it like this :\n\nSELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIME, VAL\n FROM\n( SELECT ... FROM STATSDATA8_21_2009 WHERE TTIME BETWEEN ... AND ... )\nUNION ALL SELECT ... FROM STATSDATA8_20_2009 WHERE TTIME BETWEEN ... AND \n... 
)\n)\nJOIN STATSDATA8_21_2009 ON ( PolledData.ID = STATSDATA8_21_2009.POLLID)\n\n* If TTIME is the current time, and you insert data as it comes, data in \nStatsData tables is probably already ordered on TTIME.\n- If it is not the case, once a table is filled and becomes read-only, \nconsider CLUSTER on the index you created on TTIME\n- It will make range queries on TTIME much faster\n\n* Query plan\nSeq Scan on statsdata8_21_2009 (cost=0.00..70574.88 rows=1 width=32) \n(actual time=0.047..29066.227 rows=227 loops=1)\nSeq Scan on statsdata8_20_2009 (cost=0.00..382519.60 rows=1 width=32) \n(actual time=3136.008..93985.540 rows=1 loops=1)\n\nPostgres thinks there is 1 row in those tables... that's probably not the \ncase !\nThe first one returns 227 rows, so the plan chosen in a catastrophe.\n\nI was a bit intrigued by your query, so I made a little test...\n\nBEGIN;\nCREATE TABLE test( x INT, y INT );\nINSERT INTO test (SELECT n,n FROM generate_series( 1,1000000 ) AS n );\nCREATE INDEX test_x ON test( x );\nCREATE INDEX test_y ON test( y );\nCOMMIT;\n\nANALYZE test;\n\ntest=> EXPLAIN ANALYZE SELECT * FROM test a JOIN test b ON (b.x=a.x) WHERE \na.x BETWEEN 0 AND 10000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=480.53..23759.14 rows=10406 width=16) (actual \ntime=15.614..1085.085 rows=10000 loops=1)\n Hash Cond: (b.x = a.x)\n -> Seq Scan on test b (cost=0.00..14424.76 rows=999976 width=8) \n(actual time=0.013..477.516 rows=1000000 loops=1)\n -> Hash (cost=350.46..350.46 rows=10406 width=8) (actual \ntime=15.581..15.581 rows=10000 loops=1)\n -> Index Scan using test_x on test a (cost=0.00..350.46 \nrows=10406 width=8) (actual time=0.062..8.537 rows=10000 loops=1)\n Index Cond: ((x >= 0) AND (x <= 10000))\n Total runtime: 1088.462 ms\n(7 lignes)\n\ntest=> set enable_seqscan TO 0;\nSET\ntest=> EXPLAIN ANALYZE SELECT * FROM test a JOIN test b ON (b.x=a.x) WHERE \na.x BETWEEN 0 AND 10000;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..30671.03 rows=10406 width=16) (actual \ntime=0.075..85.897 rows=10000 loops=1)\n -> Index Scan using test_x on test a (cost=0.00..350.46 rows=10406 \nwidth=8) (actual time=0.066..8.377 rows=10000 loops=1)\n Index Cond: ((x >= 0) AND (x <= 10000))\n -> Index Scan using test_x on test b (cost=0.00..2.90 rows=1 width=8) \n(actual time=0.005..0.006 rows=1 loops=10000)\n Index Cond: (b.x = a.x)\n Total runtime: 90.160 ms\n(6 lignes)\n\ntest=> set enable_nestloop TO 0;\nSET\ntest=> EXPLAIN ANALYZE SELECT * FROM test a JOIN test b ON (b.x=a.x) WHERE \na.x BETWEEN 0 AND 10000;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..31200.45 rows=10406 width=16) (actual \ntime=0.081..35.735 rows=10000 loops=1)\n Merge Cond: (a.x = b.x)\n -> Index Scan using test_x on test a (cost=0.00..350.46 rows=10406 \nwidth=8) (actual time=0.059..8.093 rows=10000 loops=1)\n Index Cond: ((x >= 0) AND (x <= 10000))\n -> Index Scan using test_x on test b (cost=0.00..28219.98 rows=999976 \nwidth=8) (actual time=0.016..7.494 rows=10001 loops=1)\n Total runtime: 40.013 ms\n(6 lignes)\n\n\nI wonder why it doesn't choose the merge join at first...\n\n\n\n\n", "msg_date": "Sun, 02 Aug 2009 22:33:31 +0200", "msg_from": "PFC <[email protected]>", 
"msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL so slow on Windows ( Postgres 8.3.7)\n version" }, { "msg_contents": "parimala escreveu:\n\n[Don't repeat your answer. It's a PITA to receive multiple identical copies]\n\n> We are using Postgres 8.3.7 in our java application. We are doing\n> performances tuning and load testing in our setup. we have noticed that\n> ,some of our queries to the database taking long time to return the\n> results.Please find our setup details belows.\n> \n> We observed that postgres is running in windows is slower than the linux .\n> \nThat is true and it will be for quite some time. Windows port is very recent\nif we compare it with the long road Unix support.\n\n> Postgresql configuration:\n> \n> shared_buffers: 1 GB\nI don't use Windows but I read some Windows users saying that it isn't\nappropriate to set the shared_buffers too high. Take a look at the archives.\n\n> Effective_cache_size: 2GB\n> fsync: off (even we tested this parameter is on ,we observed the same\n> slowness )\n> \nWhat about the other parameters that are different from default (uncommented\nparameters)? Also, don't turn off the fsync unless you're pretty sure about\nthe consequences.\n\n> We have 300k row's in PolledData Table.In each STATSDATA table ,we have\n> almost 12 to 13 million rows. Every one minute interval ,we insert data\n> into to STATSDATA table. In our application ,we use insert and select\n> query to STATSDATA table at regular interval. Please let us know why the\n> below query takes more time to return the results. is there any thing we\n> need to do to tune the postgres database ?\n> \nIt seems very strange that your queries are not using the indexes. Do you have\nautovacuum turn on? Do you recently analyze your tables?\n\n> Merge Cond: (statsdata8_21_2009.pollid =\n> ((polleddata.id)::numeric))\nOut of curiosity, why does foreign key have different datatype of its primary key?\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Sun, 02 Aug 2009 17:55:29 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL so slow on Windows ( Postgres 8.3.7)\n version" } ]
[ { "msg_contents": "Hi,\n\nI would like to know if my configuration is ok, We run a web application with high transaction rate and the database machine on Mondays / Tuesdays is always at 100% CPU with no IO/Wait . the machine is a Dual Xeon Quad core, 12gb RAM, 4gb/s Fibre Channel on Netapp SAN, with pg_xlog on separate Lun,\nCould you please provide some feedback on the configuration\n\nmaintenance_work_mem = 704MB\nconstraint_exclusion = on\ncheckpoint_completion_target = 0.9\neffective_cache_size = 8GB\nwork_mem = 72MB\nwal_buffers = 8MB\ncheckpoint_segments = 16\nshared_buffers = 2816MB\nmax_connections = 32\n\nI have limited connections down to 32 as if I put up higher the machine load average goes through the roof and will decrease performance even more.\nIn the process of looking at a 4 x AMD 6 core Opteron machine with 32GB Ram to replace if I cannot get any more performance out of this machine\n\nKind Regards\nChristopher Dunn\n\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI would like to know if my configuration is ok, We run a web\napplication with high transaction rate and the database machine on Mondays /\nTuesdays is always at 100% CPU with no IO/Wait . the machine is a Dual Xeon\nQuad core, 12gb RAM, 4gb/s Fibre Channel on Netapp SAN, with pg_xlog on separate\nLun,\nCould you please provide some feedback on the configuration \n \nmaintenance_work_mem = 704MB \nconstraint_exclusion = on \ncheckpoint_completion_target = 0.9 \neffective_cache_size = 8GB \nwork_mem = 72MB \nwal_buffers = 8MB \ncheckpoint_segments = 16\nshared_buffers = 2816MB \nmax_connections = 32 \n \nI have limited connections down to 32 as if I put up higher\nthe machine load average goes through the roof and will decrease performance even\nmore.\nIn the process of looking at a 4 x AMD 6 core Opteron \nmachine with 32GB Ram to replace if I cannot get any more performance out of\nthis machine\n \nKind Regards\nChristopher Dunn", "msg_date": "Fri, 31 Jul 2009 12:22:45 +0800", "msg_from": "Chris Dunn <[email protected]>", "msg_from_op": true, "msg_subject": "Performance 8.4.0" }, { "msg_contents": "Your settings look reasonable, I'd bump up checkpoint_segments to at least \ndouble where you've got it set at now to lower general overhead a bit. I \ndoubt that will help you much though.\n\nIf you're at 100% CPU with no I/O wait, typically that means you have some \nheavy queries running that are gobbling up your CPU time. Finding and \nimproving those will probably buy you more than adjusting server \nparameters given that you already have everything in the right general \nballpark.\n\nSuggestions:\n\n-Set log_min_duration_statement and analyze the results. See \nhttp://wiki.postgresql.org/wiki/Logging_Difficult_Queries for more \ninformation about tools that might help.\n\n-Capture snapshots of what the system is doing when it gets bogged down. \nI like to run the following periodically:\n\ntop -c -b -n 1\npsql -c \"select * from pg_stat_activity\"\n\n\"top -c\" will show you which processes are gobbling CPU time, and then you \ncan see more detail about what those processes are doing by matching them \nup with the corresponding lines in pg_stat_activity.\n\nIf you want more performance out of the hardware you've already, finding \nthe worst queries and seeing if you can speed them up would be where I'd \nstart in your case. 
It only takes one badly written one to drag the whole \nsystem to crawl if a couple of clients get caught up executing it at the \nsame time.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 31 Jul 2009 00:53:11 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance 8.4.0" }, { "msg_contents": "On Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn<[email protected]> wrote:\n> constraint_exclusion = on\n\nThis is critical if you need it, but a waste of CPU time if you don't.\n Other than that your paramaters look good. Are you using the default\npage cost settings? I see you have 12 GB RAM; how big is your\ndatabase?\n\n...Robert\n", "msg_date": "Sun, 2 Aug 2009 11:25:30 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance 8.4.0" }, { "msg_contents": "* Robert Haas ([email protected]) wrote:\n> On Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn<[email protected]> wrote:\n> > constraint_exclusion = on\n> \n> This is critical if you need it, but a waste of CPU time if you don't.\n> Other than that your paramaters look good. Are you using the default\n> page cost settings? I see you have 12 GB RAM; how big is your\n> database?\n\nWith 8.4, you can set 'constraint_exclusion = partition', where it'll\nhandle inheirited tables and UNION ALL queries but not other possible\ncases. It's set that way by default, and is pretty inexpensive to leave\nin place (since it only gets tried when it's likely you want it).\n\nI'd recommend setting it to partition under 8.4 rather than disabling it\nentirely. Under older versions, set it to 'off' if you don't need it.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Sun, 2 Aug 2009 13:29:50 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance 8.4.0" } ]
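A concrete version of the "find the worst queries first" advice above, as it applies to an 8.3/8.4 server; the 250 ms threshold is only an illustration, and procpid/current_query are the pre-9.2 column names used by the version in this thread:

-- postgresql.conf: log every statement that runs longer than 250 ms, then
-- reload the server; the log then names the queries worth tuning.
-- log_min_duration_statement = 250

-- Snapshot of the live backends, longest-running first, matching the
-- pg_stat_activity check suggested above.
SELECT procpid, usename, waiting,
       now() - query_start AS runtime,
       current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
ORDER BY runtime DESC;

On 8.4 the constraint_exclusion = partition setting discussed above is already the default, so unless the configuration explicitly forces it to on or off there is nothing to change there.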
[ { "msg_contents": "Dear All,\n\n\nWe are\nusing Postgres 8.3.7 in our java application. We are doing performances\ntuning and load testing in our setup. we have noticed that ,some of our\nqueries to the database taking long time to return the results.Please\nfind our setup details belows.\n\nWe observed that postgres is running in windows is slower than the linux .\n\nMachine & Database Details :\n\nWindows configuration:\n4 GB RAM\n4*1.6 GHZ\nwindows 2008 server standard edition\n\nPostgresql configuration:\n\nshared_buffers: 1 GB\nEffective_cache_size: 2GB\nfsync: off (even we tested this parameter is on ,we observed the same slowness )\n\n\nDatabase Details : \n\nPostgres Database : PostgreSQL 8.3.7.1\nDriver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604)\nWe are using 40 database connections.\n\n\nWe have few tables which will be having more amount data.While running\nour application STATSDATA table will be created daily with table name\nwith date.\nlike as STATSDATA8_21_2009\n\nSchema for STATSDATA table\n\ncreate table STATSDATA8_21_2009(\nPOLLID Numeric(19),\nINSTANCE varchar(100),\nTTIME Numeric(19),\nVAL Numeric(13)) ;CREATE INDEX POLLID%_ndx on STATSDATA%(POLLID)\n\nSchema for PolledData\n\ncreate table PolledData(\n\"NAME\" varchar(50) NOT NULL ,\n\"ID\" BIGINT NOT NULL ,\n\"AGENT\" varchar(50) NOT NULL ,\n\"COMMUNITY\" varchar(100) NOT NULL ,\n\"PERIOD\" INTEGER NOT NULL,\n\"ACTIVE\" varchar(10),\n\"OID\" varchar(200) NOT NULL,\n\"LOGDIRECTLY\" varchar(10),\n\"LOGFILE\" varchar(100),\n\"SSAVE\" varchar(10),\n\"THRESHOLD\" varchar(10),\n\"ISMULTIPLEPOLLEDDATA\" varchar(10),\n\"PREVIOUSSEVERITY\" INTEGER,\n\"NUMERICTYPE\" INTEGER,\n\"SAVEABSOLUTES\" varchar(10),\n\"TIMEAVG\" varchar(10),\n\"PORT\" INTEGER,\n\"WEBNMS\" varchar(100),\n\"GROUPNAME\" varchar(100),\n\"LASTCOUNTERVALUE\" BIGINT ,\n\"LASTTIMEVALUE\" BIGINT ,\n\"TIMEVAL\" BIGINT NOT NULL ,\n\"POLICYNAME\" varchar(100),\n\"THRESHOLDLIST\" varchar(200),\n\"DNSNAME\" varchar(100),\n\"SUFFIX\" varchar(20),\n\"STATSDATATABLENAME\" varchar(100),\n\"POLLERNAME\" varchar(200),\n\"FAILURECOUNT\" INTEGER,\n\"FAILURETHRESHOLD\" INTEGER,\n\"PARENTOBJ\" varchar(100),\n\"PROTOCOL\" varchar(50),\n\"SAVEPOLLCOUNT\" INTEGER,\n\"CURRENTSAVECOUNT\" INTEGER,\n\"SAVEONTHRESHOLD\" varchar(10),\n\"SNMPVERSION\" varchar(10),\n\"USERNAME\" varchar(30),\n\"CONTEXTNAME\" varchar(30),\nPRIMARY KEY (\"ID\",\"NAME\",\"AGENT\",\"OID\"),\nindex PolledData0_ndx ( \"NAME\"),\nindex PolledData1_ndx ( \"AGENT\"),\nindex PolledData2_ndx ( \"OID\"),\nindex PolledData3_ndx ( \"ID\"),\nindex PolledData4_ndx ( \"PARENTOBJ\"),\n)\n\n\nWe\nhave 300k row's in PolledData Table.In each STATSDATA table ,we have\nalmost 12 to 13 million rows. Every one minute interval ,we insert data\ninto to STATSDATA table. In our application ,we use insert and select\nquery to STATSDATA table at regular interval. Please let us know why\nthe below query takes more time to return the results. is there any\nthing we need to do to tune the postgres database ? 
\n\n\n\n\nPlease find explain analyze output.\n\n\nFirst Query :\n\npostgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\nAME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE ( ( PolledDa\nta.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME >= 1250838027454) AND ( TTIME <=\n1250838079654) ) ) ) t1;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n------------------------------------------------------------------\n Aggregate (cost=773897.12..773897.13 rows=1 width=0) (actual time=17818.410..1\n7818.412 rows=1 loops=1)\n -> Merge Join (cost=717526.23..767505.06 rows=2556821 width=0) (actual time\n=17560.469..17801.790 rows=13721 loops=1)\n Merge Cond: (statsdata8_21_2009.pollid = ((polleddata.id)::numeric))\n -> Sort (cost=69708.44..69742.49 rows=13619 width=8) (actual time=239\n2.659..2416.093 rows=13721 loops=1)\n Sort Key: statsdata8_21_2009.pollid\n Sort Method: quicksort Memory: 792kB\n -> Seq Scan on statsdata8_21_2009 (cost=0.00..68773.27 rows=136\n19 width=8) (actual time=0.077..2333.132 rows=13721 loops=1)\n Filter: ((ttime >= 1250838027454::numeric) AND (ttime <= 12\n50838079654::numeric))\n -> Materialize (cost=647817.78..688331.92 rows=3241131 width=8) (actu\nal time=15167.767..15282.232 rows=21582 loops=1)\n -> Sort (cost=647817.78..655920.61 rows=3241131 width=8) (actua\nl time=15167.756..15218.645 rows=21574 loops=1)\n Sort Key: ((polleddata.id)::numeric)\n Sort Method: external merge Disk: 736kB\n -> Seq Scan on polleddata (cost=0.00..164380.31 rows=3241\n131 width=8) (actual time=1197.278..14985.665 rows=23474 loops=1)\n Total runtime: 17826.511 ms\n(14 rows)\n\nSecond Query :\n\npostgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\nAME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_20_2009 WHERE ( ( PolledDa\nta.ID=STATSDATA8_20_2009.POLLID) AND ( ( TTIME >= 1250767134601) AND ( TTIME <=\n 1250767384601) ) ) ) t1;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n-----------------------------------------------------------------\n Aggregate (cost=1238144.31..1238144.32 rows=1 width=0) (actual time=111796.187\n..111796.188 rows=1 loops=1)\n -> Merge Join (cost=1034863.23..1212780.47 rows=10145533 width=0) (actual t\nime=111685.204..111783.670 rows=13126 loops=1)\n Merge Cond: (statsdata8_20_2009.pollid = ((polleddata.id)::numeric))\n -> Sort (cost=387045.44..387168.91 rows=49389 width=8) (actual time=1\n09756.892..109770.670 rows=13876 loops=1)\n Sort Key: statsdata8_20_2009.pollid\n Sort Method: quicksort Memory: 799kB\n -> Seq Scan on statsdata8_20_2009 (cost=0.00..382519.60 rows=49\n389 width=8) (actual time=16.898..109698.188 rows=13876 loops=1)\n Filter: ((ttime >= 1250767134601::numeric) AND (ttime <= 12\n50767384601::numeric))\n -> Materialize (cost=647817.78..688331.92 rows=3241131 width=8) (actu\nal time=1928.266..1960.672 rows=13915 loops=1)\n -> Sort (cost=647817.78..655920.61 rows=3241131 width=8) (actua\nl time=1928.253..1941.423 rows=5830 loops=1)\n Sort Key: ((polleddata.id)::numeric)\n Sort Method: external merge Disk: 744kB\n -> Seq Scan on polleddata (cost=0.00..164380.31 rows=3241\n131 width=8) (actual time=195.961..1724.824 rows=23474 loops=1)\n Total runtime: 111805.644 ms\n(14 rows)\n\nThird Query \n\npostgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID, PolledData.AGENT, N\nAME, INSTANCE, TTIME, VAL FROM PolledData, STATSDATA8_21_2009 WHERE ( ( 
PolledDa\nta.ID=STATSDATA8_21_2009.POLLID) AND ( ( TTIME >= 1250838027454) AND ( TTIME <=\n1250838027454) ) ) union all SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIM\nE, VAL FROM PolledData, STATSDATA8_20_2009 WHERE ( ( PolledData.ID=STATSDATA8_20\n_2009.POLLID) AND ( ( TTIME >= 1250767134601) AND ( TTIME <= 1250767134601) ) )\n)t1 ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n-----------------------------------------------------------------\n Aggregate (cost=719553.16..719553.17 rows=1 width=0) (actual time=603669.894..\n603669.895 rows=1 loops=1)\n -> Append (cost=0.00..719553.15 rows=2 width=0) (actual time=12736.956..603\n668.946 rows=228 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..203804.22 rows=1 width=0) (\nactual time=12736.953..506562.673 rows=227 loops=1)\n -> Nested Loop (cost=0.00..203804.20 rows=1 width=78) (actual t\nime=12736.949..506561.858 rows=227 loops=1)\n Join Filter: ((public.polleddata.id)::numeric = statsdata8_\n21_2009.pollid)\n -> Seq Scan on statsdata8_21_2009 (cost=0.00..70574.88 ro\nws=1 width=32) (actual time=0.047..29066.227 rows=227 loops=1)\n Filter: ((ttime >= 1250838027454::numeric) AND (ttime\n <= 1250838027454::numeric))\n -> Seq Scan on polleddata (cost=0.00..132939.93 rows=1929\n3 width=54) (actual time=362.780..2066.030 rows=23474 loops=227)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..515748.94 rows=1 width=0) (\nactual time=4855.541..97105.635 rows=1 loops=1)\n -> Nested Loop (cost=0.00..515748.92 rows=1 width=78) (actual t\nime=4855.537..97105.628 rows=1 loops=1)\n Join Filter: ((public.polleddata.id)::numeric = statsdata8_\n20_2009.pollid)\n -> Seq Scan on statsdata8_20_2009 (cost=0.00..382519.60 r\nows=1 width=32) (actual time=3136.008..93985.540 rows=1 loops=1)\n Filter: ((ttime >= 1250767134601::numeric) AND (ttime\n <= 1250767134601::numeric))\n -> Seq Scan on polleddata (cost=0.00..132939.93 rows=1929\n3 width=54) (actual time=371.394..3087.391 rows=23474 loops=1)\n Total runtime: 603670.065 ms\n(15 rows)\n\nPlease let me know if you need any more details in this.\n\n\nRegards,\nPari\n\n\n Yahoo! recommends that you upgrade to the new and safer Internet Explorer 8. http://downloads.yahoo.com/in/internetexplorer/\nDear All, We are\nusing Postgres 8.3.7 in our java application. We are doing performances\ntuning and load testing in our setup. we have noticed that ,some of our\nqueries to the database taking long time to return the results.Please\nfind our setup details belows. We observed that postgres is running in windows is slower than the linux . Machine & Database Details : Windows configuration: 4 GB RAM 4*1.6 GHZ windows 2008 server standard edition Postgresql configuration: shared_buffers: 1 GB Effective_cache_size: 2GB fsync: off  (even we tested this parameter is on ,we observed the same slowness ) Database Details : Postgres Database : PostgreSQL 8.3.7.1 \n Driver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604) We are using 40 database connections. \nWe have few tables which will be having more amount data.While running\nour application STATSDATA table will be created daily with table name\nwith date. 
Local", "msg_date": "Fri, 31 Jul 2009 11:15:55 +0530 (IST)", "msg_from": "pari krishnan <[email protected]>", "msg_from_op": true, "msg_subject": "Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version" }, { "msg_contents": "how about normalizing the schema for start ?\nby the looks of it, you have huge table,with plenty of varchars, that\nsmells like bad design of db.\n", "msg_date": "Mon, 3 Aug 2009 11:48:34 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL so slow on Windows ( Postgres 8.3.7)\n\tversion" }, { "msg_contents": "The few 'obvious' things I see :\n\nID and POLLID aren't of the same type (numeric vs bigint)\n\nTTIME isn't indexed.\n\nAnd as a general matter, you should stick to native datatypes if you don't \nneed numeric.\n\nBut as said in the other answer, maybe you should redo this schema and use \nmore consistent datatypes\n\nAnyway, from what I remenber, it's not advised to set up shared buffers that \nhigh for windows (I don't do so much windows myself, so maybe someone will be \nbetter informed).\n\nAnyway you can start by correcting the schema…\n\nOn Friday 31 July 2009 07:45:55 pari krishnan wrote:\n> Dear All,\n>\n>\n> We are\n> using Postgres 8.3.7 in our java application. We are doing performances\n> tuning and load testing in our setup. we have noticed that ,some of our\n> queries to the database taking long time to return the results.Please\n> find our setup details belows.\n>\n> We observed that postgres is running in windows is slower than the linux .\n>\n> Machine & Database Details :\n>\n> Windows configuration:\n> 4 GB RAM\n> 4*1.6 GHZ\n> windows 2008 server standard edition\n>\n> Postgresql configuration:\n>\n> shared_buffers: 1 GB\n> Effective_cache_size: 2GB\n> fsync: off (even we tested this parameter is on ,we observed the same\n> slowness )\n>\n>\n> Database Details :\n>\n> Postgres Database : PostgreSQL 8.3.7.1\n> Driver Version : PostgreSQL 8.3 JDBC4 with SSL (build 604)\n> We are using 40 database connections.\n>\n>\n> We have few tables which will be having more amount data.While running\n> our application STATSDATA table will be created daily with table name\n> with date.\n> like as STATSDATA8_21_2009\n>\n> Schema for STATSDATA table\n>\n> create table STATSDATA8_21_2009(\n> POLLID Numeric(19),\n> INSTANCE varchar(100),\n> TTIME Numeric(19),\n> VAL Numeric(13)) ;CREATE INDEX POLLID%_ndx on STATSDATA%(POLLID)\n>\n> Schema for PolledData\n>\n> create table PolledData(\n> \"NAME\" varchar(50) NOT NULL ,\n> \"ID\" BIGINT NOT NULL ,\n> \"AGENT\" varchar(50) NOT NULL ,\n> \"COMMUNITY\" varchar(100) NOT NULL ,\n> \"PERIOD\" INTEGER NOT NULL,\n> \"ACTIVE\" varchar(10),\n> \"OID\" varchar(200) NOT NULL,\n> \"LOGDIRECTLY\" varchar(10),\n> \"LOGFILE\" varchar(100),\n> \"SSAVE\" varchar(10),\n> \"THRESHOLD\" varchar(10),\n> \"ISMULTIPLEPOLLEDDATA\" varchar(10),\n> \"PREVIOUSSEVERITY\" INTEGER,\n> \"NUMERICTYPE\" INTEGER,\n> \"SAVEABSOLUTES\" varchar(10),\n> \"TIMEAVG\" varchar(10),\n> \"PORT\" INTEGER,\n> \"WEBNMS\" varchar(100),\n> \"GROUPNAME\" varchar(100),\n> \"LASTCOUNTERVALUE\" BIGINT ,\n> \"LASTTIMEVALUE\" BIGINT ,\n> \"TIMEVAL\" BIGINT NOT NULL ,\n> \"POLICYNAME\" varchar(100),\n> \"THRESHOLDLIST\" varchar(200),\n> \"DNSNAME\" varchar(100),\n> \"SUFFIX\" varchar(20),\n> \"STATSDATATABLENAME\" varchar(100),\n> \"POLLERNAME\" varchar(200),\n> \"FAILURECOUNT\" INTEGER,\n> \"FAILURETHRESHOLD\" INTEGER,\n> \"PARENTOBJ\" varchar(100),\n> \"PROTOCOL\" 
varchar(50),\n> \"SAVEPOLLCOUNT\" INTEGER,\n> \"CURRENTSAVECOUNT\" INTEGER,\n> \"SAVEONTHRESHOLD\" varchar(10),\n> \"SNMPVERSION\" varchar(10),\n> \"USERNAME\" varchar(30),\n> \"CONTEXTNAME\" varchar(30),\n> PRIMARY KEY (\"ID\",\"NAME\",\"AGENT\",\"OID\"),\n> index PolledData0_ndx ( \"NAME\"),\n> index PolledData1_ndx ( \"AGENT\"),\n> index PolledData2_ndx ( \"OID\"),\n> index PolledData3_ndx ( \"ID\"),\n> index PolledData4_ndx ( \"PARENTOBJ\"),\n> )\n>\n>\n> We\n> have 300k row's in PolledData Table.In each STATSDATA table ,we have\n> almost 12 to 13 million rows. Every one minute interval ,we insert data\n> into to STATSDATA table. In our application ,we use insert and select\n> query to STATSDATA table at regular interval. Please let us know why\n> the below query takes more time to return the results. is there any\n> thing we need to do to tune the postgres database ?\n>\n>\n>\n>\n> Please find explain analyze output.\n>\n>\n> First Query :\n>\n> postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID,\n> PolledData.AGENT, N AME, INSTANCE, TTIME, VAL FROM PolledData,\n> STATSDATA8_21_2009 WHERE ( ( PolledDa ta.ID=STATSDATA8_21_2009.POLLID) AND\n> ( ( TTIME >= 1250838027454) AND ( TTIME <= 1250838079654) ) ) ) t1;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------\n>----- ------------------------------------------------------------------\n> Aggregate (cost=773897.12..773897.13 rows=1 width=0) (actual\n> time=17818.410..1 7818.412 rows=1 loops=1)\n> -> Merge Join (cost=717526.23..767505.06 rows=2556821 width=0) (actual\n> time =17560.469..17801.790 rows=13721 loops=1)\n> Merge Cond: (statsdata8_21_2009.pollid =\n> ((polleddata.id)::numeric)) -> Sort (cost=69708.44..69742.49 rows=13619\n> width=8) (actual time=239 2.659..2416.093 rows=13721 loops=1)\n> Sort Key: statsdata8_21_2009.pollid\n> Sort Method: quicksort Memory: 792kB\n> -> Seq Scan on statsdata8_21_2009 (cost=0.00..68773.27\n> rows=136 19 width=8) (actual time=0.077..2333.132 rows=13721 loops=1)\n> Filter: ((ttime >= 1250838027454::numeric) AND (ttime\n> <= 12 50838079654::numeric))\n> -> Materialize (cost=647817.78..688331.92 rows=3241131 width=8)\n> (actu al time=15167.767..15282.232 rows=21582 loops=1)\n> -> Sort (cost=647817.78..655920.61 rows=3241131 width=8)\n> (actua l time=15167.756..15218.645 rows=21574 loops=1)\n> Sort Key: ((polleddata.id)::numeric)\n> Sort Method: external merge Disk: 736kB\n> -> Seq Scan on polleddata (cost=0.00..164380.31\n> rows=3241 131 width=8) (actual time=1197.278..14985.665 rows=23474 loops=1)\n> Total runtime: 17826.511 ms\n> (14 rows)\n>\n> Second Query :\n>\n> postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID,\n> PolledData.AGENT, N AME, INSTANCE, TTIME, VAL FROM PolledData,\n> STATSDATA8_20_2009 WHERE ( ( PolledDa ta.ID=STATSDATA8_20_2009.POLLID) AND\n> ( ( TTIME >= 1250767134601) AND ( TTIME <= 1250767384601) ) ) ) t1;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------\n>----- -----------------------------------------------------------------\n> Aggregate (cost=1238144.31..1238144.32 rows=1 width=0) (actual\n> time=111796.187 ..111796.188 rows=1 loops=1)\n> -> Merge Join (cost=1034863.23..1212780.47 rows=10145533 width=0)\n> (actual t ime=111685.204..111783.670 rows=13126 loops=1)\n> Merge Cond: (statsdata8_20_2009.pollid =\n> ((polleddata.id)::numeric)) -> Sort (cost=387045.44..387168.91 rows=49389\n> width=8) (actual time=1 09756.892..109770.670 rows=13876 loops=1)\n> Sort Key: 
statsdata8_20_2009.pollid\n> Sort Method: quicksort Memory: 799kB\n> -> Seq Scan on statsdata8_20_2009 (cost=0.00..382519.60\n> rows=49 389 width=8) (actual time=16.898..109698.188 rows=13876 loops=1)\n> Filter: ((ttime >= 1250767134601::numeric) AND (ttime\n> <= 12 50767384601::numeric))\n> -> Materialize (cost=647817.78..688331.92 rows=3241131 width=8)\n> (actu al time=1928.266..1960.672 rows=13915 loops=1)\n> -> Sort (cost=647817.78..655920.61 rows=3241131 width=8)\n> (actua l time=1928.253..1941.423 rows=5830 loops=1)\n> Sort Key: ((polleddata.id)::numeric)\n> Sort Method: external merge Disk: 744kB\n> -> Seq Scan on polleddata (cost=0.00..164380.31\n> rows=3241 131 width=8) (actual time=195.961..1724.824 rows=23474 loops=1)\n> Total runtime: 111805.644 ms\n> (14 rows)\n>\n> Third Query\n>\n> postgres=# explain analyze SELECT COUNT(*) FROM ( SELECT ID,\n> PolledData.AGENT, N AME, INSTANCE, TTIME, VAL FROM PolledData,\n> STATSDATA8_21_2009 WHERE ( ( PolledDa ta.ID=STATSDATA8_21_2009.POLLID) AND\n> ( ( TTIME >= 1250838027454) AND ( TTIME <= 1250838027454) ) ) union all \n> SELECT ID, PolledData.AGENT, NAME, INSTANCE, TTIM E, VAL FROM PolledData,\n> STATSDATA8_20_2009 WHERE ( ( PolledData.ID=STATSDATA8_20 _2009.POLLID) AND\n> ( ( TTIME >= 1250767134601) AND ( TTIME <= 1250767134601) ) ) )t1 ;\n> QUERY\n> PLAN\n>\n> ---------------------------------------------------------------------------\n>----- -----------------------------------------------------------------\n> Aggregate (cost=719553.16..719553.17 rows=1 width=0) (actual\n> time=603669.894.. 603669.895 rows=1 loops=1)\n> -> Append (cost=0.00..719553.15 rows=2 width=0) (actual\n> time=12736.956..603 668.946 rows=228 loops=1)\n> -> Subquery Scan \"*SELECT* 1\" (cost=0.00..203804.22 rows=1\n> width=0) ( actual time=12736.953..506562.673 rows=227 loops=1)\n> -> Nested Loop (cost=0.00..203804.20 rows=1 width=78)\n> (actual t ime=12736.949..506561.858 rows=227 loops=1)\n> Join Filter: ((public.polleddata.id)::numeric =\n> statsdata8_ 21_2009.pollid)\n> -> Seq Scan on statsdata8_21_2009 \n> (cost=0.00..70574.88 ro ws=1 width=32) (actual time=0.047..29066.227\n> rows=227 loops=1)\n> Filter: ((ttime >= 1250838027454::numeric) AND\n> (ttime <= 1250838027454::numeric))\n> -> Seq Scan on polleddata (cost=0.00..132939.93\n> rows=1929 3 width=54) (actual time=362.780..2066.030 rows=23474 loops=227)\n> -> Subquery Scan \"*SELECT* 2\" (cost=0.00..515748.94 rows=1\n> width=0) ( actual time=4855.541..97105.635 rows=1 loops=1)\n> -> Nested Loop (cost=0.00..515748.92 rows=1 width=78)\n> (actual t ime=4855.537..97105.628 rows=1 loops=1)\n> Join Filter: ((public.polleddata.id)::numeric =\n> statsdata8_ 20_2009.pollid)\n> -> Seq Scan on statsdata8_20_2009 \n> (cost=0.00..382519.60 r ows=1 width=32) (actual time=3136.008..93985.540\n> rows=1 loops=1)\n> Filter: ((ttime >= 1250767134601::numeric) AND\n> (ttime <= 1250767134601::numeric))\n> -> Seq Scan on polleddata (cost=0.00..132939.93\n> rows=1929 3 width=54) (actual time=371.394..3087.391 rows=23474 loops=1)\n> Total runtime: 603670.065 ms\n> (15 rows)\n>\n> Please let me know if you need any more details in this.\n>\n>\n> Regards,\n> Pari\n>\n>\n> Yahoo! recommends that you upgrade to the new and safer Internet\n> Explorer 8. http://downloads.yahoo.com/in/internetexplorer/\n\n\n", "msg_date": "Mon, 3 Aug 2009 13:46:26 +0200", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is PostgreSQL so slow on Windows ( Postgres 8.3.7) version" } ]
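Every plan quoted above joins on STATSDATA.POLLID = (polleddata.id)::numeric and filters TTIME against numeric literals, which is why the planner falls back to sequential scans plus sorts or nested loops over the whole of PolledData. A sketch of indexes that match those expressions follows; table and column names are taken from the plans, and the assumption that POLLID and TTIME are numeric rests only on the casts shown there:

    -- expression index so a join on (id)::numeric can use an index scan
    CREATE INDEX polleddata_id_numeric_idx ON polleddata ((id::numeric));

    -- indexes matching the TTIME range filter and the join key
    -- (repeat per daily STATSDATA table)
    CREATE INDEX statsdata8_21_2009_ttime_idx  ON statsdata8_21_2009 (ttime);
    CREATE INDEX statsdata8_21_2009_pollid_idx ON statsdata8_21_2009 (pollid);

    -- cleaner still: give POLLID the same type as PolledData.ID so no cast is
    -- needed at all (the exact target type depends on the real DDL), e.g.
    -- ALTER TABLE statsdata8_21_2009 ALTER COLUMN pollid TYPE bigint;

With matching types or expression indexes in place the planner can at least consider index scans for the time-range and join steps instead of the repeated sequential scans visible in all three plans.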
[ { "msg_contents": "\nHi\n\nCan anyone tell me what are the tools to monitor postgres server. ? I am\nrunning my Postgres server on RHEL 5 machine.\n-- \nView this message in context: http://www.nabble.com/Performance-Monitoring-tool-tp24751382p24751382.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 30 Jul 2009 22:49:52 -0700 (PDT)", "msg_from": "mukeshp <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Monitoring tool" }, { "msg_contents": "depending on what you mean with 'monitor'. for up/down monitoring use nagios\n(http://www.nagios.org)for performance monitoring (and I guess the reason\nwhy you ask this on the postgresql performance list), use pgstatspack: (\nhttp://pgfoundry.org/projects/pgstatspack/)\n\nfrits\n\nOn Fri, Jul 31, 2009 at 7:49 AM, mukeshp <[email protected]> wrote:\n\n>\n> Hi\n>\n> Can anyone tell me what are the tools to monitor postgres server. ? I am\n> running my Postgres server on RHEL 5 machine.\n> --\n> View this message in context:\n> http://www.nabble.com/Performance-Monitoring-tool-tp24751382p24751382.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\ndepending on what you mean with 'monitor'. for up/down monitoring use nagios (http://www.nagios.org)for performance monitoring (and I guess the reason why you ask this on the postgresql performance list), use pgstatspack: (http://pgfoundry.org/projects/pgstatspack/)\nfritsOn Fri, Jul 31, 2009 at 7:49 AM, mukeshp <[email protected]> wrote:\n\nHi\n\nCan anyone tell me what are the tools to monitor postgres server. ? I am\nrunning my Postgres server on RHEL 5 machine.\n--\nView this message in context: http://www.nabble.com/Performance-Monitoring-tool-tp24751382p24751382.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sun, 2 Aug 2009 20:08:07 +0200", "msg_from": "Frits Hoogland <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Monitoring tool" } ]
[ { "msg_contents": "Hi,\n\nEveryone says \"load test using your app\" - out of interest how does \neveryone do that at the database level?\n\nI've tried playr (https://area51.myyearbook.com/trac.cgi/wiki/Playr) but \nhaven't been able to get it working properly. I'm not sure what other \ntools are available.\n\nTIA.\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Fri, 31 Jul 2009 16:50:05 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": true, "msg_subject": "load / stress testing" }, { "msg_contents": "Chris:\n\nThere are a number of solutions on the market. A company called Soasta \nhas a cloud based testing solution, my company has a product/solution \nthat's highly customizable called StressWalk. On another computer I \nhave a larger list with some ranking info I can send if you are interested.\n\nHaving been involved in a number of stress tests for customers, I think \nthe most important questions in picking a service to help you get this \ndone are:\n\n * How much value am I likely to get by doing the testing? (If the\n app fails in production because your team missed something\n fundamental and it takes a day to fix via EIP, how much will that\n cost your company in lost revenue and intangibles like reputation?)\n * How much money can I afford to spend?\n * Do I have good instrumentation in place for my application,\n database, infrastructure, etc that I can easily correlate with the\n graphs produced by the stress testing application.\n * Am I looking for a rubber stamp or real analysis of what's going\n on -- do I or my team have the time, expertise and experience in\n application, database and infrastructure performance optimization\n to do the analysis ourselves?\n * Is my application UI difficult to stress test? (ie. Active X\n controls, Flex, Java Applet, SilverLight, Cross-site AJAX)\n * To create a realistic load profile, do I need to simulate one user\n scenario or several?\n * Does my application have user session and permission caching\n code? If yes, you may need to utilize a solution that can use\n large number of user credentials with different permission sets. \n If you do not have the users created already and it's a large\n enough population, you may want to use the stress testing\n service/tool to create and configure the users for your test in\n an automated way.\n\nThis should get you started. However, if you are leaning toward a \nservice -- my analysis is that there are a few services priced under \n$1000 but they do not look very useful. The services we have found that \nlook like they can provide some good value to their customers are in the \n$2500 - $15,000 range. For really complex stuff like application \ncomponents interacting with mainframes and synchronized data extract and \nload to facilitate the work, you can expect things from the top end of \nthat range up to $25-30K.\n\nI'd be happy to have a conversation if you are serious about solving \nthis problem.\n\n-Jerry\n\nJerry Champlin\nAbsolute Performance Inc.\nO: (303) 443-7000 x501\nC: (303) 588-2547\[email protected]\n\n\n\nChris wrote:\n> Hi,\n>\n> Everyone says \"load test using your app\" - out of interest how does \n> everyone do that at the database level?\n>\n> I've tried playr (https://area51.myyearbook.com/trac.cgi/wiki/Playr) \n> but haven't been able to get it working properly. I'm not sure what \n> other tools are available.\n>\n> TIA.\n\n\n\n\n\n\nChris:\n\nThere are a number of solutions on the market.  
A company called Soasta\nhas a cloud based testing solution, my company has a product/solution\nthat's highly customizable  called StressWalk.  On another computer I\nhave a larger list with some ranking info I can send if you are\ninterested.\n\nHaving been involved in a number of stress tests for customers, I think\nthe most important questions in picking a service to help you get this\ndone are:\n\nHow much value am I likely to get by doing the testing?  (If the\napp fails in production because your team missed something fundamental\nand it takes a day to fix via EIP, how much will that cost your company\nin lost revenue and intangibles like reputation?) \n\nHow much money can I afford to spend?\nDo I have good instrumentation in place for my application,\ndatabase, infrastructure, etc that I can easily correlate with the\ngraphs produced by the stress testing application.\nAm I looking for a rubber stamp or real analysis of what's going\non -- do I or my team have the time, expertise and experience in\napplication, database and infrastructure performance optimization to do\nthe analysis ourselves?\nIs my application UI difficult to stress test?  (ie.  Active X\ncontrols, Flex, Java Applet, SilverLight, Cross-site AJAX)\nTo create a realistic load profile, do I need to simulate one\nuser scenario or several?\nDoes my application have user session and permission caching\ncode?  If yes, you may need to utilize a solution that can use large\nnumber of user credentials with different permission sets.  If you do\nnot have the users created already and it's a large enough population,\nyou may want to use the stress testing service/tool to create and\nconfigure the users  for your test in an automated way.\n\nThis should get you started.  However, if you are leaning toward a\nservice -- my analysis is that there are a few services priced under\n$1000 but they do not look very useful.  The services we have found\nthat look like they can provide some good value to their customers are\nin the $2500 - $15,000 range.  For really complex stuff like\napplication components interacting with mainframes and synchronized\ndata extract and load to facilitate the work, you can expect things\nfrom the top end of that range up to $25-30K.\n\nI'd be happy to have a conversation if you are serious about solving\nthis problem.\n\n-Jerry\nJerry Champlin\nAbsolute Performance Inc.\nO: (303) 443-7000 x501\nC: (303) 588-2547\[email protected]\n\n\n\nChris wrote:\nHi,\n \n\nEveryone says \"load test using your app\" - out of interest how does\neveryone do that at the database level?\n \n\nI've tried playr (https://area51.myyearbook.com/trac.cgi/wiki/Playr)\nbut haven't been able to get it working properly. I'm not sure what\nother tools are available.\n \n\nTIA.", "msg_date": "Fri, 31 Jul 2009 08:04:45 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: load / stress testing" }, { "msg_contents": "Try tsung, dig the archives for a pg specific howto. Tsung is open \nsource and supports multiple protocols.\n\nRegards,\n-- \ndim\n\nLe 31 juil. 2009 à 08:50, Chris <[email protected]> a écrit :\n\n> Hi,\n>\n> Everyone says \"load test using your app\" - out of interest how does \n> everyone do that at the database level?\n>\n> I've tried playr (https://area51.myyearbook.com/trac.cgi/wiki/Playr) \n> but haven't been able to get it working properly. 
I'm not sure what \n> other tools are available.\n>\n> TIA.\n> -- \n> Postgresql & php tutorials\n> http://www.designmagick.com/\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 1 Aug 2009 17:57:04 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: load / stress testing" } ]
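For load that stays strictly at the database level, pgbench with a custom script file is one more option next to tsung and playr. A minimal sketch; the table name and id range are placeholders, not taken from any schema in this thread:

    \setrandom id 1 100000
    SELECT * FROM some_table WHERE id = :id;

Saved as, say, read_test.sql, it can be driven with something like pgbench -n -c 10 -t 1000 -f read_test.sql yourdb against a copy of the database, which at least answers the raw queries-per-second question even if it does not replay real application traffic.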
[ { "msg_contents": "Hi folks,\n\nWe have problems with performance of a simple SQL statement.\n\nIf we add a LIMIT 50, the query is about 6 times slower than without a limit\n(query returns 2 rows).\n\nI have read this discussion:\nhttp://archives.postgresql.org/pgsql-performance/2008-09/msg00005.php but\nthere seems to be no solution in it.\n\nI tried this things:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server but changing\nsettings doesn't have significant effect.\n\nThe DDL statements (create tables, indices) are attached.\n\nThe events_events table contains 375K rows, the events_event_types contains\n71 rows.\n\nThe query:\nselect events_events.id FROM events_events\nleft join events_event_types on events_events.eventType_id=\nevents_event_types.id\nwhere events_event_types.severity=70\nand events_events.cleared='f'\norder by events_events.dateTime DESC\n\nIt takes 155ms to run this query (returning 2 rows)\n\nAfter adding LIMIT 10, it takes 950 ms to run.\n\nQuery plan: without limit:\n\"Sort (cost=20169.62..20409.50 rows=95952 width=16)\"\n\" Sort Key: events_events.datetime\"\n\" -> Hash Join (cost=2.09..12229.58 rows=95952 width=16)\"\n\" Hash Cond: (events_events.eventtype_id = events_event_types.id)\"\n\" -> Seq Scan on events_events (cost=0.00..9918.65 rows=359820\nwidth=24)\"\n\" Filter: (NOT cleared)\"\n\" -> Hash (cost=1.89..1.89 rows=16 width=8)\"\n\" -> Seq Scan on events_event_types (cost=0.00..1.89 rows=16\nwidth=8)\"\n\" Filter: (severity = 70)\"\n\nQuery plan: with limit:\n\"Limit (cost=0.00..12.50 rows=10 width=16)\"\n\" -> Nested Loop (cost=0.00..119932.21 rows=95952 width=16)\"\n\" -> Index Scan Backward using events_events_datetime_ind on\nevents_events (cost=0.00..18242.28 rows=359820 width=24)\"\n\" Filter: (NOT cleared)\"\n\" -> Index Scan using events_event_types_pkey on events_event_types\n(cost=0.00..0.27 rows=1 width=8)\"\n\" Index Cond: (events_event_types.id =\nevents_events.eventtype_id)\"\n\" Filter: (events_event_types.severity = 70)\"\n\nSo postgres seems to handle a query with limit different internally. Tried\nto set default_statistics_target to 10, 100, 200, but no significant\ndifferences.\n\nThis problem appears on both Postgres 8.3 and 8.4.\n\nAny suggestions?\n\nThanks in advance!\n\nBest regards,\n\nKees van Dieren\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nChamber of commerce Rotterdam: 22048547", "msg_date": "Fri, 31 Jul 2009 13:06:39 +0200", "msg_from": "Kees van Dieren <[email protected]>", "msg_from_op": true, "msg_subject": "SQL select query becomes slow when using limit (with no offset)" } ]
[ { "msg_contents": "Hi folks,\n\nWe have problems with performance of a simple SQL statement.\n\nIf we add a LIMIT 50, the query is about 6 times slower than without a limit\n(query returns 2 rows).\n\nI have read this discussion:\nhttp://archives.postgresql.org/pgsql-performance/2008-09/msg00005.php but\nthere seems to be no solution in it.\n\nI tried this things:\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server but changing\nsettings doesn't have significant effect.\n\nThe DDL statements (create tables, indices) are attached.\n\nThe events_events table contains 375K rows, the events_event_types contains\n71 rows.\n\nThe query:\nselect events_events.id FROM events_events\nleft join events_event_types on events_events.eventType_id=\nevents_event_types.id\nwhere events_event_types.severity=70\nand events_events.cleared='f'\norder by events_events.dateTime DESC\n\nIt takes 155ms to run this query (returning 2 rows)\n\nAfter adding LIMIT 10, it takes 950 ms to run.\n\nQuery plan: without limit:\n\"Sort (cost=20169.62..20409.50 rows=95952 width=16)\"\n\" Sort Key: events_events.datetime\"\n\" -> Hash Join (cost=2.09..12229.58 rows=95952 width=16)\"\n\" Hash Cond: (events_events.eventtype_id = events_event_types.id)\"\n\" -> Seq Scan on events_events (cost=0.00..9918.65 rows=359820\nwidth=24)\"\n\" Filter: (NOT cleared)\"\n\" -> Hash (cost=1.89..1.89 rows=16 width=8)\"\n\" -> Seq Scan on events_event_types (cost=0.00..1.89 rows=16\nwidth=8)\"\n\" Filter: (severity = 70)\"\n\nQuery plan: with limit:\n\"Limit (cost=0.00..12.50 rows=10 width=16)\"\n\" -> Nested Loop (cost=0.00..119932.21 rows=95952 width=16)\"\n\" -> Index Scan Backward using events_events_datetime_ind on\nevents_events (cost=0.00..18242.28 rows=359820 width=24)\"\n\" Filter: (NOT cleared)\"\n\" -> Index Scan using events_event_types_pkey on events_event_types\n(cost=0.00..0.27 rows=1 width=8)\"\n\" Index Cond: (events_event_types.id =\nevents_events.eventtype_id)\"\n\" Filter: (events_event_types.severity = 70)\"\n\nSo postgres seems to handle a query with limit different internally. 
Tried\nto set default_statistics_target to 10, 100, 200, but no significant\ndifferences.\n\nThis problem appears on both Postgres 8.3 and 8.4.\n\nAny suggestions?\n\nThanks in advance!\n\nBest regards,\n\nKees van Dieren\n\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nChamber of commerce Rotterdam: 22048547", "msg_date": "Fri, 31 Jul 2009 14:11:14 +0200", "msg_from": "Kees van Dieren <[email protected]>", "msg_from_op": true, "msg_subject": "SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "On Fri, Jul 31, 2009 at 1:11 PM, Kees van Dieren<[email protected]> wrote:\n> It takes 155ms to run this query (returning 2 rows)\n>\n> Query plan: without limit:\n> \"Sort  (cost=20169.62..20409.50 rows=95952 width=16)\"\n\nCould you send the results of EXPLAIN ANALYZE for both queries?\nEvidently the planner is expecting a lot more rows than the 2 rows\nyou're expecting but it's not clear where it's gone wrong.\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 31 Jul 2009 13:42:26 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "\n> The query:\n> select events_events.id FROM events_events\n> left join events_event_types on events_events.eventType_id=\n> events_event_types.id\n> where events_event_types.severity=70\n> and events_events.cleared='f'\n> order by events_events.dateTime DESC\n\n\tThe main problem seems to be lack of a suitable index...\n\n- Try creating an index on events_events( eventType_id, cleared )\n- Or the other way around : events_events( cleared, eventType_id )\n\n\t(depends on your other queries)\n\n\tPlease try both and report EXPLAIN ANALYZE.\n", "msg_date": "Fri, 31 Jul 2009 14:46:45 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no\n offset)" }, { "msg_contents": "Hi Folks,\n\nThanks for your response.\n\nI have added the following index (suggested by other post):\n\nCREATE INDEX events_events_cleared_eventtype\n ON events_events\n USING btree\n (eventtype_id, cleared)\n WHERE cleared = false;\n\nAlso with columns in reversed order.\n\nNo changes in response time noticed.\n\nIndex on cleared column already is there (indices are in sql file attached\nto initial post.). 
eventtype_id has a foreign key constraint, which adds an\nindex automatically I believe?\n\nThe explain analyze results for both queries:\nexplain analyze select events_events.id FROM events_events\nleft join events_event_types on events_events.eventType_id=\nevents_event_types.id\nwhere events_event_types.severity=70\nand not events_events.cleared\norder by events_events.dateTime DESC LIMIT 100\n>>>\n\"Limit (cost=0.00..125.03 rows=100 width=16) (actual time=0.046..3897.094\nrows=77 loops=1)\"\n\" -> Nested Loop (cost=0.00..120361.40 rows=96269 width=16) (actual\ntime=0.042..3896.881 rows=77 loops=1)\"\n\" -> Index Scan Backward using events_events_datetime_ind on\nevents_events (cost=0.00..18335.76 rows=361008 width=24) (actual\ntime=0.025..720.345 rows=360637 loops=1)\"\n\" Filter: (NOT cleared)\"\n\" -> Index Scan using events_event_types_pkey on events_event_types\n(cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=0\nloops=360637)\"\n\" Index Cond: (events_event_types.id =\nevents_events.eventtype_id)\"\n\" Filter: (events_event_types.severity = 70)\"\n\"Total runtime: 3897.268 ms\"\n\nexplain analyze select events_events.id FROM events_events\nleft join events_event_types on events_events.eventType_id=\nevents_event_types.id\nwhere events_event_types.severity=70\nand not events_events.cleared\norder by events_events.dateTime DESC\n>>>\n\"Sort (cost=20255.18..20495.85 rows=96269 width=16) (actual\ntime=1084.842..1084.951 rows=77 loops=1)\"\n\" Sort Key: events_events.datetime\"\n\" Sort Method: quicksort Memory: 20kB\"\n\" -> Hash Join (cost=2.09..12286.62 rows=96269 width=16) (actual\ntime=1080.789..1084.696 rows=77 loops=1)\"\n\" Hash Cond: (events_events.eventtype_id = events_event_types.id)\"\n\" -> Seq Scan on events_events (cost=0.00..9968.06 rows=361008\nwidth=24) (actual time=0.010..542.946 rows=360637 loops=1)\"\n\" Filter: (NOT cleared)\"\n\" -> Hash (cost=1.89..1.89 rows=16 width=8) (actual\ntime=0.077..0.077 rows=16 loops=1)\"\n\" -> Seq Scan on events_event_types (cost=0.00..1.89 rows=16\nwidth=8) (actual time=0.010..0.046 rows=16 loops=1)\"\n\" Filter: (severity = 70)\"\n\"Total runtime: 1085.145 ms\"\n\nAny suggestions?\n\nThanks in advance!\n\nBest regards,\n\nKees van Dieren\n\n\[email protected]\n\n2009/7/31 Greg Stark <[email protected]>\n\n> On Fri, Jul 31, 2009 at 1:11 PM, Kees van Dieren<[email protected]>\n> wrote:\n> > It takes 155ms to run this query (returning 2 rows)\n> >\n> > Query plan: without limit:\n> > \"Sort (cost=20169.62..20409.50 rows=95952 width=16)\"\n>\n> Could you send the results of EXPLAIN ANALYZE for both queries?\n> Evidently the planner is expecting a lot more rows than the 2 rows\n> you're expecting but it's not clear where it's gone wrong.\n>\n>\n> --\n> greg\n> http://mit.edu/~gsstark/resume.pdf <http://mit.edu/%7Egsstark/resume.pdf>\n>\n\n\n\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nChamber of commerce Rotterdam: 22048547\n\nHi Folks,Thanks for your response.I have added the following index (suggested by other post):CREATE INDEX events_events_cleared_eventtype  ON events_events  USING btree  (eventtype_id, cleared)\n  WHERE cleared = false;Also with columns in reversed order.No changes in response time noticed.Index on cleared column already is there (indices are in sql file attached to initial post.). 
eventtype_id has a foreign key constraint, which adds an index automatically I believe?\nThe explain analyze results for both queries:explain analyze select events_events.id FROM events_eventsleft join events_event_types on events_events.eventType_id=events_event_types.id\nwhere events_event_types.severity=70and not events_events.cleared order by events_events.dateTime DESC LIMIT 100>>>\"Limit  (cost=0.00..125.03 rows=100 width=16) (actual time=0.046..3897.094 rows=77 loops=1)\"\n\"  ->  Nested Loop  (cost=0.00..120361.40 rows=96269 width=16) (actual time=0.042..3896.881 rows=77 loops=1)\"\"        ->  Index Scan Backward using events_events_datetime_ind on events_events  (cost=0.00..18335.76 rows=361008 width=24) (actual time=0.025..720.345 rows=360637 loops=1)\"\n\"              Filter: (NOT cleared)\"\"        ->  Index Scan using events_event_types_pkey on events_event_types  (cost=0.00..0.27 rows=1 width=8) (actual time=0.003..0.003 rows=0 loops=360637)\"\n\"              Index Cond: (events_event_types.id = events_events.eventtype_id)\"\"              Filter: (events_event_types.severity = 70)\"\"Total runtime: 3897.268 ms\"\nexplain analyze select events_events.id FROM events_eventsleft join events_event_types on events_events.eventType_id=events_event_types.id\nwhere events_event_types.severity=70and not events_events.clearedorder by events_events.dateTime DESC>>>\"Sort  (cost=20255.18..20495.85 rows=96269 width=16) (actual time=1084.842..1084.951 rows=77 loops=1)\"\n\"  Sort Key: events_events.datetime\"\"  Sort Method:  quicksort  Memory: 20kB\"\"  ->  Hash Join  (cost=2.09..12286.62 rows=96269 width=16) (actual time=1080.789..1084.696 rows=77 loops=1)\"\n\"        Hash Cond: (events_events.eventtype_id = events_event_types.id)\"\"        ->  Seq Scan on events_events  (cost=0.00..9968.06 rows=361008 width=24) (actual time=0.010..542.946 rows=360637 loops=1)\"\n\"              Filter: (NOT cleared)\"\"        ->  Hash  (cost=1.89..1.89 rows=16 width=8) (actual time=0.077..0.077 rows=16 loops=1)\"\"              ->  Seq Scan on events_event_types  (cost=0.00..1.89 rows=16 width=8) (actual time=0.010..0.046 rows=16 loops=1)\"\n\"                    Filter: (severity = 70)\"\"Total runtime: 1085.145 ms\"\nAny suggestions?Thanks in advance!Best regards,Kees van [email protected]/7/31 Greg Stark <[email protected]>\nOn Fri, Jul 31, 2009 at 1:11 PM, Kees van Dieren<[email protected]> wrote:\n\n> It takes 155ms to run this query (returning 2 rows)\n>\n> Query plan: without limit:\n> \"Sort  (cost=20169.62..20409.50 rows=95952 width=16)\"\n\nCould you send the results of EXPLAIN ANALYZE for both queries?\nEvidently the planner is expecting a lot more rows than the 2 rows\nyou're expecting but it's not clear where it's gone wrong.\n\n\n--\ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n-- Squins | IT, HonestlyOranjestraat 232983 HL RidderkerkThe NetherlandsPhone: +31 (0)180 414520Mobile: +31 (0)6 30413841www.squins.com\nChamber of commerce Rotterdam: 22048547", "msg_date": "Wed, 5 Aug 2009 07:01:53 +0200", "msg_from": "Kees van Dieren <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "Kees van Dieren wrote:\n> Hi Folks,\n>\n> Thanks for your response.\n>\n> I have added the following index (suggested by other post):\n>\n> CREATE INDEX events_events_cleared_eventtype\n> ON events_events\n> USING btree\n> (eventtype_id, cleared)\n> WHERE cleared = false;\n>\n> Also with columns in reversed 
order.\n>\n> No changes in response time noticed.\n>\n> Index on cleared column already is there (indices are in sql file\n> attached to initial post.). eventtype_id has a foreign key constraint,\n> which adds an index automatically I believe?\n>\n> The explain analyze results for both queries:\n> explain analyze select events_events.id <http://events_events.id> FROM\n> events_events\n> left join events_event_types on\n> events_events.eventType_id=events_event_types.id\n> <http://events_event_types.id>\n> where events_event_types.severity=70\n> and not events_events.cleared\n> order by events_events.dateTime DESC LIMIT 100\n> >>>\n> \"Limit (cost=0.00..125.03 rows=100 width=16) (actual\n> time=0.046..3897.094 rows=77 loops=1)\"\n> \" -> Nested Loop (cost=0.00..120361.40 rows=96269 width=16) (actual\n> time=0.042..3896.881 rows=77 loops=1)\"\n> \" -> Index Scan Backward using events_events_datetime_ind on\n> events_events (cost=0.00..18335.76 rows=361008 width=24) (actual\n> time=0.025..720.345 rows=360637 loops=1)\"\n> \" Filter: (NOT cleared)\"\n> \" -> Index Scan using events_event_types_pkey on\n> events_event_types (cost=0.00..0.27 rows=1 width=8) (actual\n> time=0.003..0.003 rows=0 loops=360637)\"\n> \" Index Cond: (events_event_types.id\n> <http://events_event_types.id> = events_events.eventtype_id)\"\n> \" Filter: (events_event_types.severity = 70)\"\n> \"Total runtime: 3897.268 ms\"\n>\nThe plan here is guessing that we will find the 100 rows we want pretty\nquickly by scanning the dateTime index. As we aren't expecting to have\nto look through many rows to find 100 that match the criteria. With no\ncross column statistics it's more a guess than a good calculation. So\nthe guess is bad and we end up scanning 360k rows from the index before\nwe find what we want. My skills are not up to giving specific advise\non how to avert this problem. Maybe somebody else can help there.\n> explain analyze select events_events.id <http://events_events.id> FROM\n> events_events\n> left join events_event_types on\n> events_events.eventType_id=events_event_types.id\n> <http://events_event_types.id>\n> where events_event_types.severity=70\n> and not events_events.cleared\n> order by events_events.dateTime DESC\n> >>>\n> \"Sort (cost=20255.18..20495.85 rows=96269 width=16) (actual\n> time=1084.842..1084.951 rows=77 loops=1)\"\n> \" Sort Key: events_events.datetime\"\n> \" Sort Method: quicksort Memory: 20kB\"\n> \" -> Hash Join (cost=2.09..12286.62 rows=96269 width=16) (actual\n> time=1080.789..1084.696 rows=77 loops=1)\"\n> \" Hash Cond: (events_events.eventtype_id =\n> events_event_types.id <http://events_event_types.id>)\"\n> \" -> Seq Scan on events_events (cost=0.00..9968.06\n> rows=361008 width=24) (actual time=0.010..542.946 rows=360637 loops=1)\"\n> \" Filter: (NOT cleared)\"\n> \" -> Hash (cost=1.89..1.89 rows=16 width=8) (actual\n> time=0.077..0.077 rows=16 loops=1)\"\n> \" -> Seq Scan on events_event_types (cost=0.00..1.89\n> rows=16 width=8) (actual time=0.010..0.046 rows=16 loops=1)\"\n> \" Filter: (severity = 70)\"\n> \"Total runtime: 1085.145 ms\"\n>\n> Any suggestions?\nThis plan is faster as you avoid the index scan. The planner is\npreferring to do a tablescan to find what it needs. This is much faster\nthan the 360k random I/O index lookups. 
You can force this type of plan\nwith a subquery and the OFFSET 0 trick, but I'm not sure it's the best\nsolution.\n\neg\n\nexplain analyze SELECT * FROM\n (SELECT events_events.id <http://events_events.id> FROM events_events\n LEFT JOIN events_event_types on\nevents_events.eventType_id=events_event_types.id\n<http://events_event_types.id>\n WHERE events_event_types.severity=70\n AND not events_events.cleared\n ORDER BY events_events.dateTime DESC OFFSET 0) AS a LIMIT 100\n\nRegards\n\nRussell\n", "msg_date": "Wed, 05 Aug 2009 20:16:41 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with\n no offset)" }, { "msg_contents": "Thanks for your response.\n\nI think your analysis is correct, When there are more than 100 rows that\nmatch this query, limit 100 is fast.\n\nHowever, we often have less than hundred rows, so this is not sufficient for\nus.\n\nThis suggestion ('OFFSET 0' trick) did not show differences in response time\n(runs in 947ms).\n\nOne thing that helps, is limiting the set by adding this to where clause:\nand events_events.dateTime > '2009-07-24'.\n(query now runs in approx 500ms)\n\nThe workaround we implemented, is query caching in our application (Java,\nwith JPA / Hibernate second level query cache). This actually solves the\nproblem for us, but I'd prefer to get this query perform in postgres as\nwell. I'd think that the Postgresql query planner should be smarter in\nhandling LIMIT statements.\n\nWould it get attention if I submit this to\nhttp://www.postgresql.org/support/submitbug ? (in fact it is not really a\nbug, but an improvement request).\n\nBest regards,\n\nKees\n\n\n2009/8/5 Russell Smith <[email protected]>\n\n> Kees van Dieren wrote:\n> > Hi Folks,\n> >\n> > Thanks for your response.\n> >\n> > I have added the following index (suggested by other post):\n> >\n> > CREATE INDEX events_events_cleared_eventtype\n> > ON events_events\n> > USING btree\n> > (eventtype_id, cleared)\n> > WHERE cleared = false;\n> >\n> > Also with columns in reversed order.\n> >\n> > No changes in response time noticed.\n> >\n> > Index on cleared column already is there (indices are in sql file\n> > attached to initial post.). 
eventtype_id has a foreign key constraint,\n> > which adds an index automatically I believe?\n> >\n> > The explain analyze results for both queries:\n> > explain analyze select events_events.id <http://events_events.id> FROM\n> > events_events\n> > left join events_event_types on\n> > events_events.eventType_id=events_event_types.id\n> > <http://events_event_types.id>\n> > where events_event_types.severity=70\n> > and not events_events.cleared\n> > order by events_events.dateTime DESC LIMIT 100\n> > >>>\n> > \"Limit (cost=0.00..125.03 rows=100 width=16) (actual\n> > time=0.046..3897.094 rows=77 loops=1)\"\n> > \" -> Nested Loop (cost=0.00..120361.40 rows=96269 width=16) (actual\n> > time=0.042..3896.881 rows=77 loops=1)\"\n> > \" -> Index Scan Backward using events_events_datetime_ind on\n> > events_events (cost=0.00..18335.76 rows=361008 width=24) (actual\n> > time=0.025..720.345 rows=360637 loops=1)\"\n> > \" Filter: (NOT cleared)\"\n> > \" -> Index Scan using events_event_types_pkey on\n> > events_event_types (cost=0.00..0.27 rows=1 width=8) (actual\n> > time=0.003..0.003 rows=0 loops=360637)\"\n> > \" Index Cond: (events_event_types.id\n> > <http://events_event_types.id> = events_events.eventtype_id)\"\n> > \" Filter: (events_event_types.severity = 70)\"\n> > \"Total runtime: 3897.268 ms\"\n> >\n> The plan here is guessing that we will find the 100 rows we want pretty\n> quickly by scanning the dateTime index. As we aren't expecting to have\n> to look through many rows to find 100 that match the criteria. With no\n> cross column statistics it's more a guess than a good calculation. So\n> the guess is bad and we end up scanning 360k rows from the index before\n> we find what we want. My skills are not up to giving specific advise\n> on how to avert this problem. Maybe somebody else can help there.\n> > explain analyze select events_events.id <http://events_events.id> FROM\n> > events_events\n> > left join events_event_types on\n> > events_events.eventType_id=events_event_types.id\n> > <http://events_event_types.id>\n> > where events_event_types.severity=70\n> > and not events_events.cleared\n> > order by events_events.dateTime DESC\n> > >>>\n> > \"Sort (cost=20255.18..20495.85 rows=96269 width=16) (actual\n> > time=1084.842..1084.951 rows=77 loops=1)\"\n> > \" Sort Key: events_events.datetime\"\n> > \" Sort Method: quicksort Memory: 20kB\"\n> > \" -> Hash Join (cost=2.09..12286.62 rows=96269 width=16) (actual\n> > time=1080.789..1084.696 rows=77 loops=1)\"\n> > \" Hash Cond: (events_events.eventtype_id =\n> > events_event_types.id <http://events_event_types.id>)\"\n> > \" -> Seq Scan on events_events (cost=0.00..9968.06\n> > rows=361008 width=24) (actual time=0.010..542.946 rows=360637 loops=1)\"\n> > \" Filter: (NOT cleared)\"\n> > \" -> Hash (cost=1.89..1.89 rows=16 width=8) (actual\n> > time=0.077..0.077 rows=16 loops=1)\"\n> > \" -> Seq Scan on events_event_types (cost=0.00..1.89\n> > rows=16 width=8) (actual time=0.010..0.046 rows=16 loops=1)\"\n> > \" Filter: (severity = 70)\"\n> > \"Total runtime: 1085.145 ms\"\n> >\n> > Any suggestions?\n> This plan is faster as you avoid the index scan. The planner is\n> preferring to do a tablescan to find what it needs. This is much faster\n> than the 360k random I/O index lookups. 
You can force this type of plan\n> with a subquery and the OFFSET 0 trick, but I'm not sure it's the best\n> solution.\n>\n> eg\n>\n> explain analyze SELECT * FROM\n> (SELECT events_events.id <http://events_events.id> FROM events_events\n> LEFT JOIN events_event_types on\n> events_events.eventType_id=events_event_types.id\n> <http://events_event_types.id>\n> WHERE events_event_types.severity=70\n> AND not events_events.cleared\n> ORDER BY events_events.dateTime DESC OFFSET 0) AS a LIMIT 100\n>\n> Regards\n>\n> Russell\n>\n\n\n\n-- \nSquins | IT, Honestly\nOranjestraat 23\n2983 HL Ridderkerk\nThe Netherlands\nPhone: +31 (0)180 414520\nMobile: +31 (0)6 30413841\nwww.squins.com\nChamber of commerce Rotterdam: 22048547\n\nThanks for your response.I think your analysis is correct, When there are more than 100 rows that match this query, limit 100 is fast. However, we often have less than hundred rows, so this is not sufficient for us.\nThis suggestion ('OFFSET 0' trick) did not show differences in response time (runs in 947ms).One thing that helps, is limiting the set by adding this to where clause: and events_events.dateTime > '2009-07-24'.\n(query now runs in approx 500ms)The workaround we implemented,  is query caching in our application (Java, with JPA / Hibernate second level query cache). This actually solves the problem for us, but I'd prefer to get this query perform in postgres as well. I'd think that the Postgresql query planner should be smarter in handling LIMIT statements. \nWould it get attention if I submit this to http://www.postgresql.org/support/submitbug ? (in fact it is not really a bug, but an improvement request).Best regards,\nKees2009/8/5 Russell Smith <[email protected]>\nKees van Dieren wrote:\n> Hi Folks,\n>\n> Thanks for your response.\n>\n> I have added the following index (suggested by other post):\n>\n> CREATE INDEX events_events_cleared_eventtype\n>   ON events_events\n>   USING btree\n>   (eventtype_id, cleared)\n>   WHERE cleared = false;\n>\n> Also with columns in reversed order.\n>\n> No changes in response time noticed.\n>\n> Index on cleared column already is there (indices are in sql file\n> attached to initial post.). 
eventtype_id has a foreign key constraint,\n> which adds an index automatically I believe?\n>\n> The explain analyze results for both queries:\n> explain analyze select events_events.id <http://events_events.id> FROM\n> events_events\n> left join events_event_types on\n> events_events.eventType_id=events_event_types.id\n> <http://events_event_types.id>\n> where events_event_types.severity=70\n> and not events_events.cleared\n> order by events_events.dateTime DESC LIMIT 100\n> >>>\n> \"Limit  (cost=0.00..125.03 rows=100 width=16) (actual\n> time=0.046..3897.094 rows=77 loops=1)\"\n> \"  ->  Nested Loop  (cost=0.00..120361.40 rows=96269 width=16) (actual\n> time=0.042..3896.881 rows=77 loops=1)\"\n> \"        ->  Index Scan Backward using events_events_datetime_ind on\n> events_events  (cost=0.00..18335.76 rows=361008 width=24) (actual\n> time=0.025..720.345 rows=360637 loops=1)\"\n> \"              Filter: (NOT cleared)\"\n> \"        ->  Index Scan using events_event_types_pkey on\n> events_event_types  (cost=0.00..0.27 rows=1 width=8) (actual\n> time=0.003..0.003 rows=0 loops=360637)\"\n> \"              Index Cond: (events_event_types.id\n> <http://events_event_types.id> = events_events.eventtype_id)\"\n> \"              Filter: (events_event_types.severity = 70)\"\n> \"Total runtime: 3897.268 ms\"\n>\nThe plan here is guessing that we will find the 100 rows we want pretty\nquickly by scanning the dateTime index.  As we aren't expecting to have\nto look through many rows to find 100 that match the criteria.  With no\ncross column statistics it's more a guess than a good calculation.  So\nthe guess is bad and we end up scanning 360k rows from the index before\nwe find what we want.   My skills are not up to giving specific advise\non how to avert this problem.  Maybe somebody else can help there.\n> explain analyze select events_events.id <http://events_events.id> FROM\n> events_events\n> left join events_event_types on\n> events_events.eventType_id=events_event_types.id\n> <http://events_event_types.id>\n> where events_event_types.severity=70\n> and not events_events.cleared\n> order by events_events.dateTime DESC\n> >>>\n> \"Sort  (cost=20255.18..20495.85 rows=96269 width=16) (actual\n> time=1084.842..1084.951 rows=77 loops=1)\"\n> \"  Sort Key: events_events.datetime\"\n> \"  Sort Method:  quicksort  Memory: 20kB\"\n> \"  ->  Hash Join  (cost=2.09..12286.62 rows=96269 width=16) (actual\n> time=1080.789..1084.696 rows=77 loops=1)\"\n> \"        Hash Cond: (events_events.eventtype_id =\n> events_event_types.id <http://events_event_types.id>)\"\n> \"        ->  Seq Scan on events_events  (cost=0.00..9968.06\n> rows=361008 width=24) (actual time=0.010..542.946 rows=360637 loops=1)\"\n> \"              Filter: (NOT cleared)\"\n> \"        ->  Hash  (cost=1.89..1.89 rows=16 width=8) (actual\n> time=0.077..0.077 rows=16 loops=1)\"\n> \"              ->  Seq Scan on events_event_types  (cost=0.00..1.89\n> rows=16 width=8) (actual time=0.010..0.046 rows=16 loops=1)\"\n> \"                    Filter: (severity = 70)\"\n> \"Total runtime: 1085.145 ms\"\n>\n> Any suggestions?\nThis plan is faster as you avoid the index scan.  The planner is\npreferring to do a tablescan to find what it needs.  This is much faster\nthan the 360k random I/O index lookups.  
You can force this type of plan\nwith a subquery and the OFFSET 0 trick, but I'm not sure it's the best\nsolution.\n\neg\n\nexplain analyze SELECT * FROM\n    (SELECT events_events.id <http://events_events.id> FROM events_events\n         LEFT JOIN events_event_types on\nevents_events.eventType_id=events_event_types.id\n<http://events_event_types.id>\n        WHERE events_event_types.severity=70\n                     AND not events_events.cleared\n        ORDER BY events_events.dateTime DESC OFFSET 0) AS a LIMIT 100\n\nRegards\n\nRussell\n-- Squins | IT, HonestlyOranjestraat 232983 HL RidderkerkThe NetherlandsPhone: +31 (0)180 414520Mobile: +31 (0)6 30413841www.squins.com\nChamber of commerce Rotterdam: 22048547", "msg_date": "Fri, 7 Aug 2009 10:00:28 +0200", "msg_from": "Kees van Dieren <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren<[email protected]> wrote:\n> Would it get attention if I submit this to\n> http://www.postgresql.org/support/submitbug ? (in fact it is not really a\n> bug, but an improvement request).\n\nI think that many of the people who read that mailing list also read\nthis one, including, critically, Tom Lane, and you're not the first\nperson to run into a problem caused by lack of cross-column\nstatistics. I don't think you're going to get very far by submitting\nthis as a bug. There are already several people interested in this\nproblem, but as most of us don't get paid to hack on PostgreSQL, it's\na question of finding enough round tuits; this is not an easy thing to\nfix.\n\nIf you are sufficiently bothered by this problem that you are willing\nto pay someone to fix it for you, there are several companies with\nwhom you can contract to get this feature developed and committed for\nthe next release of PostgreSQL.\n\n...Robert\n", "msg_date": "Fri, 7 Aug 2009 08:53:43 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "\nOn 8/7/09 5:53 AM, \"Robert Haas\" <[email protected]> wrote:\n\n> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren<[email protected]>\n> wrote:\n>> Would it get attention if I submit this to\n>> http://www.postgresql.org/support/submitbug ? (in fact it is not really a\n>> bug, but an improvement request).\n> \n> I think that many of the people who read that mailing list also read\n> this one, including, critically, Tom Lane, and you're not the first\n> person to run into a problem caused by lack of cross-column\n> statistics. \n\nCritically, it should be understood that this general problem is not just\nborn from lack of cross-column statistics.\n\nIt is also one of using the statistical expected value to calculate cost\nwithout consideration of the variance. Often, the cost of a plan varies\nwidely and nonlinearly with a small change in the expected value the stats\nused to estimate cost.\n\nThe problem is acute with LIMIT and various other boundary conditions where\nthere is a steep 'cost cliff' for certain plan types. When LIMIT is\napplied, the planner changes its estimates, but does not take into account\nthe _greatly_ increased uncertainty of those estimates.\n\nImagine a case where the planner's estimate is 100% correct, and on average\none side of a join will have 2 tuples. The planner chooses nested loops to\ndo that join. 
But the distribution of the number of tuples at this node is\nskewed, so although the expected value is 2, a values of 10 and 0 are both\ncommon. When values of 10 occur, the execution time goes up significantly.\nAn alternate plan might be slower for the case where the actual values in\nexecution equal the expected values, but faster in the average case!\nThe average cost of a plan is NOT that cost of the query with average\nstatistics, due to variance, nonlinearity, and skew. And even if they were\nequal, it might not be preferable to choose the plan that is best on average\nover the one that is best at the 90th percentile.\n\nGetting back to the case above with the nestloop -- if the planner estimated\nthe cost of the nestloop join versus other joins with some idea of the\nvariance in the estimations it could favor more 'stable' execution plans.\n\nSo in short, cross-column statistics don't have to be gathered and used to\nmake this problem less acute. The planner just has to be more aware of\nvariance and the things that lead to it, such as column correlation.\nThus with a LIMIT applied, the expected value for the number of tuples\nscanned before a match will shrink, but the uncertainty of this estimate\ngrows significantly as well, so the right plan to choose is one that hedges\nagainst the uncertainty, not the one that assumes the expected value is\ncorrect.\nGathering and using cross column statistics will change the expected value\nfor some plans, but more importantly will allow the uncertainty to be\nreduced. Better stats go hand in hand with the uncertainty analysis because\none can never have all cross column statistics, across all tables and all\njoin-function spaces, analyzed. Stats gathering can never be complete or\nwithout flaws. The planner can never be perfect.\n\n\n \n\n> I don't think you're going to get very far by submitting\n> this as a bug. There are already several people interested in this\n> problem, but as most of us don't get paid to hack on PostgreSQL, it's\n> a question of finding enough round tuits; this is not an easy thing to\n> fix.\n> \n> If you are sufficiently bothered by this problem that you are willing\n> to pay someone to fix it for you, there are several companies with\n> whom you can contract to get this feature developed and committed for\n> the next release of PostgreSQL.\n> \n> ...Robert\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 7 Aug 2009 14:09:23 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "On Fri, Aug 7, 2009 at 5:09 PM, Scott Carey<[email protected]> wrote:\n> On 8/7/09 5:53 AM, \"Robert Haas\" <[email protected]> wrote:\n>\n>> On Fri, Aug 7, 2009 at 4:00 AM, Kees van Dieren<[email protected]>\n>> wrote:\n>>> Would it get attention if I submit this to\n>>> http://www.postgresql.org/support/submitbug ? 
(in fact it is not really a\n>>> bug, but an improvement request).\n>>\n>> I think that many of the people who read that mailing list also read\n>> this one, including, critically, Tom Lane, and you're not the first\n>> person to run into a problem caused by lack of cross-column\n>> statistics.\n>\n> Critically, it should be understood that this general problem is not just\n> born from lack of cross-column statistics.\n>\n> It is also one of using the statistical expected value to calculate cost\n> without consideration of the variance.  Often, the cost of a plan varies\n> widely and nonlinearly with a small change in the expected value the stats\n> used to estimate cost.\n>\n> The problem is acute with LIMIT and various other boundary conditions where\n> there is a steep 'cost cliff' for certain plan types.  When LIMIT is\n> applied, the planner changes its estimates, but does not take into account\n> the _greatly_ increased uncertainty of those estimates.\n>\n> Imagine a case where the planner's estimate is 100% correct, and on average\n> one side of a join will have 2 tuples.  The planner chooses nested loops to\n> do that join.  But the distribution of the number of tuples at this node is\n> skewed, so although the expected value is 2, a values of 10 and 0 are both\n> common.  When values of 10 occur, the execution time goes up significantly.\n> An alternate plan might be slower for the case where the actual values in\n> execution equal the expected values, but faster in the average case!\n> The average cost of a plan is NOT that cost of the query with average\n> statistics, due to variance, nonlinearity, and skew.  And even if they were\n> equal, it might not be preferable to choose the plan that is best on average\n> over the one that is best at the 90th percentile.\n>\n> Getting back to the case above with the nestloop -- if the planner estimated\n> the cost of the nestloop join versus other joins with some idea of the\n> variance in the estimations it could favor more 'stable' execution plans.\n>\n> So in short, cross-column statistics don't have to be gathered and used to\n> make this problem less acute.  The planner just has to be more aware of\n> variance and the things that lead to it, such as column correlation.\n> Thus with a LIMIT applied, the expected value for the number of tuples\n> scanned before a match will shrink, but the uncertainty of this estimate\n> grows significantly as well, so the right plan to choose is one that hedges\n> against the uncertainty, not the one that assumes the expected value is\n> correct.\n\nThis is a good analysis. I think I proposed some kind of idea about\ntracking uncertainty in the planner a while back, but I never got very\nfar with it. The problem is, first, figuring out how to estimate the\nuncertainty, and second, figuring out what to do with the result once\nyou've estimated it. The concept of hedging against uncertainty is\nthe right one, I think, but it's not obvious how to fit that into the\ncost-comparison algorithm that the planner uses. A sticking point\nhere too is that the comparison of path costs is already a hot spot;\nmaking it more complex will likely have a noticeable negative impact\non query planning time. For queries against huge databases that may\nnot matter much, but for OLTP queries it certainly does.\n\nThere are a couple of methods that have been proposed to deal with\nthis in the past. 
The most obvious one that's been talked about a\ncouple of times is switching a nested loop to a hash join if the\nnumber of iterations exceeds some bound, which would require some\nexecutor support.\n\nEmpirically, it seems to me that the planner generally follows a\npretty consistent pattern. If the inner relation is tiny or the\nnumber of loops is estimated to be very small, it uses a nested loop.\nWhen the inner rel is a bit bigger, or the number of iterations is\nnontrivial, it switches to a hash join. When the inner relation gets\ntoo big to fit in work_mem, or just big enough that hashing it looks\ntoo slow, it switches to a nested loop with inner index-scan or,\nespecially if a useful sort is available, a merge join.\n\nJust handling better the case where we pick a straight nested loop\nrather than a hash join would help a lot of people. Some basic\nconservatism about the number of outer rows would be useful here (in\nparticular, we should probably assume that there will be at least 2\nwhen costing a nest loop, unless the outer side is known unique), and\nit's also worth thinking about the fact that a hash join won't build\nthe table unless there is at least 1 outer row, which I don't think\nthe current costing algorithm takes into account. Or maybe we should\nestimate the amount by which the nested loop figures to beat out the\nhash join and only accepted the nested loop plan if the win exceeds\nsome threshold percentage.\n\n...Robert\n", "msg_date": "Fri, 7 Aug 2009 23:37:35 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "Robert Haas <[email protected]> wrote: \n \n> Just handling better the case where we pick a straight nested loop\n> rather than a hash join would help a lot of people. Some basic\n> conservatism about the number of outer rows would be useful here (in\n> particular, we should probably assume that there will be at least 2\n> when costing a nest loop, unless the outer side is known unique),\n> and it's also worth thinking about the fact that a hash join won't\n> build the table unless there is at least 1 outer row, which I don't\n> think the current costing algorithm takes into account. Or maybe we\n> should estimate the amount by which the nested loop figures to beat\n> out the hash join and only accepted the nested loop plan if the win\n> exceeds some threshold percentage.\n \nBut in our environment the most common cause of a sub-optimal planning\nchoice is over-estimating the cost of a nested loop. We've never been\nable to get good plans overall without dropping random_page_cost to\ntwice the seq_page_cost or less -- in highly cached situations we\nroutinely drop both to 0.1. Creating further bias against nested loop\nplans just means we'll have to push the numbers further from what the\npurportedly represent. It seems to me a significant unsolved problem\nis properly accounting for caching effects.\n \nThat said, I have not really clear idea on how best to solve that\nproblem. The only ideas which recur when facing these issues are:\n \n(1) The the planner refused to deal with fractional estimates of how\nmany rows will be returned in a loop -- it treats 0.01 as 1 on the\nbasis that you can't read a fractional row, rather than as a 1% chance\nthat you will read a row and need to do the related work. 
I have\nalways thought that changing this might allow more accurate estimates;\nperhaps I should hack a version which behaves that way and test it as\na \"proof of concept.\" Note that this is diametrically opposed to your\nsuggestion that we always assume at least two rows in the absence of a\nunique index.\n \n(2) Somehow use effective_cache_size in combination with some sort of\ncurrent activity metrics to dynamically adjust random access costs. \n(I know, that one's total hand-waving, but it seems to have some\npossibility of better modeling reality than what we currently do.)\n \n-Kevin\n", "msg_date": "Mon, 10 Aug 2009 10:19:18 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "On Mon, Aug 10, 2009 at 11:19 AM, Kevin\nGrittner<[email protected]> wrote:\n> Robert Haas <[email protected]> wrote:\n>\n>> Just handling better the case where we pick a straight nested loop\n>> rather than a hash join would help a lot of people.  Some basic\n>> conservatism about the number of outer rows would be useful here (in\n>> particular, we should probably assume that there will be at least 2\n>> when costing a nest loop, unless the outer side is known unique),\n>> and it's also worth thinking about the fact that a hash join won't\n>> build the table unless there is at least 1 outer row, which I don't\n>> think the current costing algorithm takes into account.  Or maybe we\n>> should estimate the amount by which the nested loop figures to beat\n>> out the hash join and only accepted the nested loop plan if the win\n>> exceeds some threshold percentage.\n>\n> But in our environment the most common cause of a sub-optimal planning\n> choice is over-estimating the cost of a nested loop.  We've never been\n> able to get good plans overall without dropping random_page_cost to\n> twice the seq_page_cost or less -- in highly cached situations we\n> routinely drop both to 0.1.\n\nEven if our statistics were perfect, you'd still need to do this if\nyour database is mostly cached. That's routine tuning.\n\n> Creating further bias against nested loop\n> plans just means we'll have to push the numbers further from what the\n> purportedly represent.\n\nNot at all. You'd have to set them closer to their real values i.e.\nthe cost of reading a page from cache.\n\n> It seems to me a significant unsolved problem\n> is properly accounting for caching effects.\n\nDefinitely true.\n\n> That said, I have not really clear idea on how best to solve that\n> problem.  The only ideas which recur when facing these issues are:\n>\n> (1)  The the planner refused to deal with fractional estimates of how\n> many rows will be returned in a loop -- it treats 0.01 as 1 on the\n> basis that you can't read a fractional row, rather than as a 1% chance\n> that you will read a row and need to do the related work.  I have\n> always thought that changing this might allow more accurate estimates;\n> perhaps I should hack a version which behaves that way and test it as\n> a \"proof of concept.\"  Note that this is diametrically opposed to your\n> suggestion that we always assume at least two rows in the absence of a\n> unique index.\n\nYou're right. The problem is that you can get hosed in both\ndirections. My queries are always referencing a column that they have\na foreign key towards, so this never happens to me. 
But it does\nhappen to other people.\n\n> (2)  Somehow use effective_cache_size in combination with some sort of\n> current activity metrics to dynamically adjust random access costs.\n> (I know, that one's total hand-waving, but it seems to have some\n> possibility of better modeling reality than what we currently do.)\n\nYeah, I gave a lightning talk on this at PGcon, but I haven't had time\nto do anything with it. There are a couple of problems. One is that\nyou have to have a source for your current activity metrics. Since a\nlot of the pages of interest will be in the OS buffer pool rather than\nPG shared buffers, there's no easy way to handle this, and you also\nhave to keep in mind that plans can be cached and reused, so you need\nthe estimates not to change too fast or you'll have horrible plan\nstability problems.\n\nThe other is that right now, the page costs are constant. Making them\nper-relation will mean that they require syscache lookups. I'm not\nsure whether that's expensive enough to impact planning time, and if\nso what to do about it.\n\n...Robert\n", "msg_date": "Mon, 10 Aug 2009 12:21:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no offset)" }, { "msg_contents": "Robert Haas wrote:\n> On Mon, Aug 10, 2009 at 11:19 AM, Kevin Grittner<[email protected]> wrote:\n>> (2) Somehow use effective_cache_size in combination with some sort of\n>> current activity metrics to dynamically adjust random access costs.\n>> (I know, that one's total hand-waving, but it seems to have some\n>> possibility of better modeling reality than what we currently do.)\n\nI was disappointed when I learned that effective_cache_size doesn't get \ngenerally used to predict the likelihood of a buffer fetch requiring \nphysical io.\n\n> Yeah, I gave a lightning talk on this at PGcon, but I haven't had time\n> to do anything with it. There are a couple of problems. One is that\n> you have to have a source for your current activity metrics. Since a\n> lot of the pages of interest will be in the OS buffer pool rather than\n> PG shared buffers, there's no easy way to handle this\n\nWhile there are portability concerns, mmap + mincore works across BSD, \nLinux, Solaris and will return a vector of file pages in the OS buffer \npool. So it's certainly possible that on supported systems, an activity \nmonitor can have direct knowledge of OS caching effectiveness on a per \nrelation/index basis.\n\n-- \n-Devin\n", "msg_date": "Mon, 10 Aug 2009 11:41:53 -0700", "msg_from": "Devin Ben-Hur <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL select query becomes slow when using limit (with no \toffset)" } ]
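A quick way to see the trade-off discussed in this thread on a concrete query is to price both plans and compare the estimates against the actual runtimes. This is only a diagnostic sketch: the enable_* parameter and RESET are standard, but the query itself is a placeholder to be replaced with the slow statement.

  EXPLAIN ANALYZE SELECT ...;        -- the plan the optimizer prefers today

  SET enable_nestloop = off;         -- temporarily forbid nested loops
  EXPLAIN ANALYZE SELECT ...;        -- what the alternative plan really costs
  RESET enable_nestloop;

If the forced plan wins by a wide margin even though it was priced higher, that is the row-estimate and page-cost mismatch Kevin and Robert describe, and adjusting seq_page_cost/random_page_cost or the statistics is the place to start rather than leaving enable_nestloop off.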
[ { "msg_contents": "Hello,\n\n\nI have a problem with an inner join + count().\n\nmy query is:\n\nexplain analyze select \nk.idn,k.kerdes_subject,k.kerdes_text,u.vezeteknev,u.keresztnev,u.idn as \nuser_id, kg.kategoria_neve, count(v.idn)\n\nFROM kategoriak as kg\n\nINNER JOIN kerdesek as k on kg.idn = k.kategoria_id\nINNER JOIN users as u ON k.user_id = u.idn\nINNER JOIN valaszok as v ON k.idn = v.kerdes_id\n\nwhere kg.idn=15 group by k.idn, k.kerdes_subject,k.kerdes_text, \nu.idn,u.vezeteknev,u.keresztnev,kg.kategoria_neve\n\nThe problem is with the count(v.idn).\n\nThis column has a relation with: v.kerdes_id = k.idn => k.kategoria_id = \n kg.idn\n\nand the WHERE says: kg.idn = 15.\n\nWhy does it run through all lines in v?\n\nthe explain sais:\n\n GroupAggregate (cost=103238.59..103602.66 rows=10402 width=1382) \n(actual time=8531.405..8536.633 rows=73 loops=1)\n -> Sort (cost=103238.59..103264.59 rows=10402 width=1382) (actual \ntime=8531.339..8533.199 rows=1203 loops=1)\n Sort Key: k.idn, k.kerdes_subject, k.kerdes_text, u.idn, \nu.vezeteknev, u.keresztnev, kg.kategoria_neve\n -> Hash Join (cost=3827.79..89951.54 rows=10402 width=1382) \n(actual time=1778.590..8523.015 rows=1203 loops=1)\n Hash Cond: (v.kerdes_id = k.idn)\n -> Seq Scan on valaszok v (cost=0.00..78215.98 \nrows=2080998 width=8) (actual time=59.714..5009.171 rows=2080998 loops=1)\n -> Hash (cost=3823.42..3823.42 rows=350 width=1378) \n(actual time=12.553..12.553 rows=74 loops=1)\n -> Nested Loop (cost=14.98..3823.42 rows=350 \nwidth=1378) (actual time=0.714..12.253 rows=74 loops=1)\n -> Nested Loop (cost=14.98..1056.38 \nrows=350 width=830) (actual time=0.498..5.952 rows=117 loops=1)\n -> Seq Scan on kategoriak kg \n(cost=0.00..1.30 rows=1 width=278) (actual time=0.066..0.076 rows=1 loops=1)\n Filter: (idn = 15)\n -> Bitmap Heap Scan on kerdesek k \n(cost=14.98..1051.58 rows=350 width=560) (actual time=0.374..5.430 \nrows=117 loops=1)\n Recheck Cond: (15 = kategoria_id)\n -> Bitmap Index Scan on \nkategoria_id_id_idx (cost=0.00..14.89 rows=350 width=0) (actual \ntime=0.212..0.212 rows=117 loops=1)\n Index Cond: (15 = \nkategoria_id)\n -> Index Scan using users_pkey on users u \n(cost=0.00..7.89 rows=1 width=552) (actual time=0.047..0.048 rows=1 \nloops=117)\n Index Cond: (k.user_id = u.idn)\n Total runtime: 8536.936 ms\n\n\n\nSo it run through more than 2 mill lines... but why? It should only \ncount those lines which has the category_id = 15...\n\nWhat am I doing wrong?\n\n\n\n-- \nAdam PAPAI\n", "msg_date": "Sun, 02 Aug 2009 18:28:37 +0200", "msg_from": "Adam PAPAI <[email protected]>", "msg_from_op": true, "msg_subject": "select count(idn) is slow (Seq Scan) instead of Bitmap Heap.. why?" 
}, { "msg_contents": "2009/8/2 Adam PAPAI <[email protected]>:\n> Hello,\n>\n>\n> I have a problem with an inner join + count().\n>\n> my query is:\n>\n> explain analyze select\n> k.idn,k.kerdes_subject,k.kerdes_text,u.vezeteknev,u.keresztnev,u.idn as\n> user_id, kg.kategoria_neve, count(v.idn)\n>\n> FROM kategoriak as kg\n>\n> INNER JOIN kerdesek as k on kg.idn = k.kategoria_id\n> INNER JOIN users as u ON k.user_id = u.idn\n> INNER JOIN valaszok as v ON k.idn = v.kerdes_id\n>\n> where kg.idn=15 group by k.idn, k.kerdes_subject,k.kerdes_text,\n> u.idn,u.vezeteknev,u.keresztnev,kg.kategoria_neve\n>\n> The problem is with the count(v.idn).\n>\n> This column has a relation with: v.kerdes_id = k.idn => k.kategoria_id =\n>  kg.idn\n>\n> and the WHERE says: kg.idn = 15.\n>\n> Why does it run through all lines in v?\n>\n> the explain sais:\n>\n>  GroupAggregate  (cost=103238.59..103602.66 rows=10402 width=1382) (actual\n> time=8531.405..8536.633 rows=73 loops=1)\n>   ->  Sort  (cost=103238.59..103264.59 rows=10402 width=1382) (actual\n> time=8531.339..8533.199 rows=1203 loops=1)\n>         Sort Key: k.idn, k.kerdes_subject, k.kerdes_text, u.idn,\n> u.vezeteknev, u.keresztnev, kg.kategoria_neve\n>         ->  Hash Join  (cost=3827.79..89951.54 rows=10402 width=1382)\n> (actual time=1778.590..8523.015 rows=1203 loops=1)\n>               Hash Cond: (v.kerdes_id = k.idn)\n>               ->  Seq Scan on valaszok v  (cost=0.00..78215.98 rows=2080998\n> width=8) (actual time=59.714..5009.171 rows=2080998 loops=1)\n>               ->  Hash  (cost=3823.42..3823.42 rows=350 width=1378) (actual\n> time=12.553..12.553 rows=74 loops=1)\n>                     ->  Nested Loop  (cost=14.98..3823.42 rows=350\n> width=1378) (actual time=0.714..12.253 rows=74 loops=1)\n>                           ->  Nested Loop  (cost=14.98..1056.38 rows=350\n> width=830) (actual time=0.498..5.952 rows=117 loops=1)\n>                                 ->  Seq Scan on kategoriak kg\n> (cost=0.00..1.30 rows=1 width=278) (actual time=0.066..0.076 rows=1 loops=1)\n>                                       Filter: (idn = 15)\n>                                 ->  Bitmap Heap Scan on kerdesek k\n> (cost=14.98..1051.58 rows=350 width=560) (actual time=0.374..5.430 rows=117\n> loops=1)\n>                                       Recheck Cond: (15 = kategoria_id)\n>                                       ->  Bitmap Index Scan on\n> kategoria_id_id_idx  (cost=0.00..14.89 rows=350 width=0) (actual\n> time=0.212..0.212 rows=117 loops=1)\n>                                             Index Cond: (15 = kategoria_id)\n>                           ->  Index Scan using users_pkey on users u\n> (cost=0.00..7.89 rows=1 width=552) (actual time=0.047..0.048 rows=1\n> loops=117)\n>                                 Index Cond: (k.user_id = u.idn)\n>  Total runtime: 8536.936 ms\n>\n>\n>\n> So it run through more than 2 mill lines... but why? It should only count\n> those lines which has the category_id = 15...\n>\n> What am I doing wrong?\n\nWell, I'm not sure if you're doing anything wrong, but you're\ndefinitely thinking about it wrong. There's no way to skip the lines\nin v that have kg.idn != 15 just by looking at v, because the idn\ncolumn is in kg, not in v. Obviously you have to look through kg\nfirst and find the lines where kg.idn = 15. Or since kg.idn =\nk.kategoria_id, you can alternatively start by scanning k for\nkategoria_id = 15, which is what the planner chose to do here. 
Once\nyou know which lines from k you need, then you can go through v and\nlook for lines that have a match in k based on the join condition\nk.idn = v.kerdes_id.\n\nDo you have an index on valaszok (kerdes_id)? Might be worth investigating.\n\n...Robert\n", "msg_date": "Tue, 4 Aug 2009 09:46:33 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(idn) is slow (Seq Scan) instead of Bitmap\n\tHeap.. why?" } ]
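A minimal sketch of the index Robert asks about above; the index name is invented here, only the table and column come from the thread.

  CREATE INDEX valaszok_kerdes_id_idx ON valaszok (kerdes_id);
  ANALYZE valaszok;
  -- then re-run the EXPLAIN ANALYZE from the first post

With roughly 117 matching rows coming out of kerdesek for kategoria_id = 15, an index on valaszok(kerdes_id) gives the planner the option of per-row index lookups instead of hashing the whole two-million-row table; whether that actually wins is exactly what the repeated EXPLAIN ANALYZE will show.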
[ { "msg_contents": "The database is 8gb currently. Use to be a lot bigger but we removed all large objects out and developed a file server storage for it, and using default page costs for 8.4, I did have it changed in 8.1.4\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]]\nSent: Sunday, 2 August 2009 11:26 PM\nTo: Chris Dunn\nCc: [email protected]\nSubject: Re: [PERFORM] Performance 8.4.0\n\nOn Fri, Jul 31, 2009 at 12:22 AM, Chris Dunn<[email protected]> wrote:\n> constraint_exclusion = on\n\nThis is critical if you need it, but a waste of CPU time if you don't.\n Other than that your paramaters look good. Are you using the default\npage cost settings? I see you have 12 GB RAM; how big is your\ndatabase?\n\n...Robert\n", "msg_date": "Mon, 3 Aug 2009 10:04:22 +0800", "msg_from": "Chris Dunn <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Performance 8.4.0" }, { "msg_contents": "On Sun, Aug 2, 2009 at 10:04 PM, Chris Dunn<[email protected]> wrote:\n> The database is 8gb currently. Use to be a lot bigger but we removed all large objects out and developed a file server storage for it, and using default page costs for 8.4, I did have it changed in 8.1.4\n\nYou might want to play with lowering them. The default page costs\nmake page accesses expensive relative to per-tuple operations, which\nis appropriate if you are I/O-bound but not so much if you are CPU\nbound, and especially if the whole database is memory resident. I'd\ntry something like random_page_cost = seq_page_cost = 0.1 for\nstarters, or whatever values were working for you in 8.1, but the\nsweet spot may be higher or lower.\n\n...Robert\n", "msg_date": "Sun, 2 Aug 2009 22:16:50 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Performance 8.4.0" } ]
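A low-risk way to try the page costs Robert suggests before touching postgresql.conf is to set them for a single transaction; the 0.1 values are just the starting point named above, not a recommendation.

  BEGIN;
  SET LOCAL seq_page_cost = 0.1;
  SET LOCAL random_page_cost = 0.1;
  EXPLAIN ANALYZE SELECT ...;    -- a representative query from the application
  ROLLBACK;

If plans and timings improve for a mostly cached 8 GB database, the same values can be made permanent in postgresql.conf or per database with ALTER DATABASE ... SET.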
[ { "msg_contents": "All,\n \nNot sure what's wrong in below execution plan but at times the query\nruns for 5 minutes to complete and after a while it runs within a second\nor two.\n \nHere is explain analyze out of the query.\n \nSELECT\nOBJECTS.ID,OBJECTS.NAME,OBJECTS.TYPE,OBJECTS.STATUS,OBJECTS.ALTNAME,OBJE\nCTS.DOMAINID,OBJECTS.ASSIGNEDTO,OBJECTS.USER1,OBJECTS.USER2,\nOBJECTS.KEY1,OBJECTS.KEY2,OBJECTS.KEY3,OBJECTS.OUI,OBJECTS.PRODCLASS,OBJ\nECTS.STATUS2,OBJECTS.LASTMODIFIED,OBJECTS.LONGDATA,OBJECTS.DATA0,\nOBJECTS.DATA1 \nFROM OBJECTS \nWHERE OBJECTS.DOMAINID IN\n('HY3XGEzC0E9JxRwoXLOLbjNsghEA','3330000000000000000000000000') \nAND OBJECTS.TYPE IN ('cpe') \nORDER BY OBJECTS.LASTMODIFIED DESC LIMIT 501\n \n \nQUERY PLAN\n\n------------------------------------------------------------------------\n------------------------------------------------------------------------\n---------------------\n Limit (cost=0.00..9235.11 rows=501 width=912) (actual\ntime=0.396..2741.803 rows=501 loops=1)\n -> Index Scan Backward using ix_objects_type_lastmodified on objects\n(cost=0.00..428372.71 rows=23239 width=912) (actual time=0.394..2741.608\nrows=501 loops=1)\n Index Cond: ((\"type\")::text = 'cpe'::text)\n Filter: ((domainid)::text = ANY\n(('{HY3XGEzC0E9JxRwoXLOLbjNsghEA,3330000000000000000000000000}'::charact\ner varying[])::text[]))\n Total runtime: 2742.126 ms\n\n \nThe table is auto vaccumed regularly. I have enabled log_min_messages to\ndebug2 but nothing stands out during the times when the query took 5+\nminutes. Is rebuild of the index necessary here.\n \nThanks in Advance,\n \nStalin\n \nPg 8.2.7, Sol10.\n \n \n \n\n\n\n\n\nAll,\n \nNot sure what's \nwrong in below execution plan but at times the query runs for 5 minutes to \ncomplete and after a while it runs within a second or two.\n \nHere is explain \nanalyze out of the query.\n \nSELECT \nOBJECTS.ID,OBJECTS.NAME,OBJECTS.TYPE,OBJECTS.STATUS,OBJECTS.ALTNAME,OBJECTS.DOMAINID,OBJECTS.ASSIGNEDTO,OBJECTS.USER1,OBJECTS.USER2,OBJECTS.KEY1,OBJECTS.KEY2,OBJECTS.KEY3,OBJECTS.OUI,OBJECTS.PRODCLASS,OBJECTS.STATUS2,OBJECTS.LASTMODIFIED,OBJECTS.LONGDATA,OBJECTS.DATA0,OBJECTS.DATA1 \nFROM OBJECTS WHERE OBJECTS.DOMAINID IN \n('HY3XGEzC0E9JxRwoXLOLbjNsghEA','3330000000000000000000000000') AND \nOBJECTS.TYPE  IN ('cpe') ORDER BY OBJECTS.LASTMODIFIED DESC LIMIT \n501\n \n                                                                             \nQUERY \nPLAN                                                                 \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  \n(cost=0.00..9235.11 rows=501 width=912) (actual time=0.396..2741.803 rows=501 \nloops=1)   ->  Index Scan Backward using \nix_objects_type_lastmodified on objects  (cost=0.00..428372.71 rows=23239 \nwidth=912) (actual time=0.394..2741.608 rows=501 \nloops=1)         Index Cond: \n((\"type\")::text = \n'cpe'::text)         Filter: \n((domainid)::text = ANY \n(('{HY3XGEzC0E9JxRwoXLOLbjNsghEA,3330000000000000000000000000}'::character \nvarying[])::text[])) Total runtime: 2742.126 ms\n \nThe table is auto \nvaccumed regularly. I have enabled log_min_messages to debug2 but nothing stands \nout during the times when the query took 5+ minutes. 
Is rebuild of the \nindex necessary here.\n \nThanks in \nAdvance,\n \nStalin\n \nPg 8.2.7, \nSol10.", "msg_date": "Mon, 3 Aug 2009 13:09:40 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query help" }, { "msg_contents": "\"Subbiah Stalin-XCGF84\" <[email protected]> wrote: \n \n> Not sure what's wrong in below execution plan but at times the query\n> runs for 5 minutes to complete and after a while it runs within a\n> second or two.\n \nThe plan doesn't look entirely unreasonable for the given query,\nalthough it's hard to be sure of that without seeing the table\ndefinitions. Given the plan, the times look to be about what I'd\nexpect for uncached and cached timings. (That is, on subsequent runs,\nthe data is sitting in RAM, so you don't need to access the hard\ndrives.)\n \nIf the initial run time is unacceptable for your environment, and\nthere's no way to have the cached \"primed\" when it matters, please\ngive more details on your table layouts, and perhaps someone can make\na useful suggestion.\n \n> Pg 8.2.7, Sol10.\n \nOne quick suggestion -- upgrade your PostgreSQL version if at all\npossible. The latest bug-fix version of 8.2 is currently 8.2.13, and\nthere are significant performance improvements in 8.3 and the newly\nreleased 8.4.\n \n-Kevin\n", "msg_date": "Mon, 03 Aug 2009 14:48:00 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query help" }, { "msg_contents": "Sure I can provide those details. I have seen this query running 5+\nminutes for different values for doaminID too. Its just that it happens\nat random and gets fixed within few mins.\n\nShared buffer=8G, effective cache size=4G. Optimizer/autovaccum settings\nare defaults\n\n relname | relpages | reltuples\n------------------------------+----------+-----------\n ct_objects_id_u1 | 11906 | 671919\n ix_objects_altname | 13327 | 671919\n ix_objects_domainid_name | 24714 | 671919\n ix_objects_key3 | 9891 | 671919\n ix_objects_name | 11807 | 671919\n ix_objects_type_lastmodified | 38640 | 671919\n ix_objects_user1 | 20796 | 671919\n ix_objects_user2 | 20842 | 671919\n objects | 111873 | 671919\n\nThis database resides on a RAID 1+0 storage with 10 disks (5+5).\n\nLet me know if you need any other information.\n\nThanks Kevin.\n\nStalin\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Monday, August 03, 2009 12:48 PM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: Re: [PERFORM] Query help\n\n\"Subbiah Stalin-XCGF84\" <[email protected]> wrote: \n \n> Not sure what's wrong in below execution plan but at times the query \n> runs for 5 minutes to complete and after a while it runs within a \n> second or two.\n \nThe plan doesn't look entirely unreasonable for the given query,\nalthough it's hard to be sure of that without seeing the table\ndefinitions. Given the plan, the times look to be about what I'd expect\nfor uncached and cached timings. (That is, on subsequent runs, the data\nis sitting in RAM, so you don't need to access the hard\ndrives.)\n \nIf the initial run time is unacceptable for your environment, and\nthere's no way to have the cached \"primed\" when it matters, please give\nmore details on your table layouts, and perhaps someone can make a\nuseful suggestion.\n \n> Pg 8.2.7, Sol10.\n \nOne quick suggestion -- upgrade your PostgreSQL version if at all\npossible. 
The latest bug-fix version of 8.2 is currently 8.2.13, and\nthere are significant performance improvements in 8.3 and the newly\nreleased 8.4.\n \n-Kevin\n", "msg_date": "Mon, 3 Aug 2009 16:12:50 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query help" }, { "msg_contents": "\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> Shared buffer=8G, effective cache size=4G.\n \nThat is odd; if your shared buffers are at 8G, you must have more than\n4G of cache. How much RAM is used for cache at the OS level? \nNormally you would add that to the shared buffers to get your\neffective cache size, or at least take the larger of the two.\n \nHow much RAM is on this machine in total? Do you have any other\nprocesses which use a lot of RAM or might access a lot of disk from\ntime to time?\n \n> Let me know if you need any other information.\n \nThe \\d output for the object table, or the CREATE for it and its\nindexes, would be good. Since it's getting through the random reads\nby the current plan at the rate of about one every 5ms, I'd say your\ndrive array is OK. If you want to make this query faster you've\neither got to have the data in cache or it has to have reason to\nbelieve that a different plan is faster.\n \nOne thing which might help is to boost your work_mem setting to\nsomewhere in the 32MB to 64MB range, provided that won't drive you\ninto swapping. You could also try dropping the random_page_cost to\nmaybe 2 to see if that gets you a different plan. You can do a quick\ncheck on what plans these generate by changing them on a given\nconnection and then requesting just an EXPLAIN of the plan, to see if\nit's different. (This doesn't actually run the query, so it's fast.)\n \n-Kevin\n", "msg_date": "Mon, 03 Aug 2009 15:44:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query help" }, { "msg_contents": "Server has 32G memory and it's a dedicated to run PG and no other\napplication is sharing this database. I have checked checkpoints and\nthey don't occur during those slow query runtimes. Checkpoint_segments\nis set 128. 
here is quick snap from vmstat.\n\n # vmstat 5 5\n kthr memory page disk faults\ncpu\n r b w swap free re mf pi po fr de sr 1m 1m 1m m1 in sy cs us\nsy id\n 0 0 0 56466032 25908072 59 94 516 13 13 0 0 10 3 59 1 480 443 500 1\n1 98\n 0 0 0 51377520 20294328 6 8 0 32 32 0 0 0 4 1 0 368 185 361 0\n1 99\n 0 0 0 56466032 25908064 59 94 516 13 13 0 0 1 10 3 59 480 443 500 1\n1 98\n 0 0 0 51376984 20294168 57 427 0 16 16 0 0 0 0 1 0 380 781 396 1\n1 98\n 0 0 0 51376792 20294208 112 1131 2 50 50 0 0 0 0 5 2 398 2210 541 4\n3 92 \n\n\\d output --\n\n Table \"public.objects\"\n Column | Type | Modifiers\n--------------+-----------------------------+-----------\n id | character varying(28) | not null\n name | character varying(50) | not null\n altname | character varying(50) |\n type | character varying(3) |\n domainid | character varying(28) | not null\n status | smallint |\n dbver | integer |\n created | timestamp without time zone |\n lastmodified | timestamp without time zone |\n assignedto | character varying(28) |\n status2 | smallint |\n key1 | character varying(25) |\n key2 | character varying(25) |\n key3 | character varying(64) |\n oui | character varying(6) |\n prodclass | character varying(64) |\n user1 | character varying(50) |\n user2 | character varying(50) |\n data0 | character varying(2000) |\n data1 | character varying(2000) |\n longdata | character varying(1) |\nIndexes:\n \"ct_objects_id_u1\" PRIMARY KEY, btree (id), tablespace\n\"nbbs_index_data\"\n \"ix_objects_altname\" btree (altname), tablespace \"nbbs_index_data\"\n \"ix_objects_domainid_name\" btree (domainid, upper(name::text)),\ntablespace \"nbbs_index_data\"\n \"ix_objects_key3\" btree (upper(key3::text)), tablespace\n\"nbbs_index_data\"\n \"ix_objects_name\" btree (upper(name::text) varchar_pattern_ops),\ntablespace \"nbbs_index_data\"\n \"ix_objects_type_lastmodified\" btree (\"type\", lastmodified),\ntablespace \"nbbs_index_data\"\n \"ix_objects_user1\" btree (upper(user1::text)), tablespace\n\"nbbs_index_data\"\n \"ix_objects_user2\" btree (upper(user2::text)), tablespace\n\"nbbs_index_data\"\n\nWork_mem=64mb, r_p_c = 2 on the session gave similar execution plan\nexcept the cost different due to change r_p_c.\n\n QUERY\nPLAN\n------------------------------------------------------------------------\n-----------------------------------------------------------------\n Limit (cost=0.00..5456.11 rows=501 width=912)\n -> Index Scan Backward using ix_objects_type_lastmodified on objects\n(cost=0.00..253083.03 rows=23239 width=912)\n Index Cond: ((\"type\")::text = 'cpe'::text)\n Filter: ((domainid)::text = ANY\n(('{HY3XGEzC0E9JxRwoXLOLbjNsghEA,3330000000000000000000000000}'::charact\ner varying[])::text[]))\n(4 rows)\n\n\nGiven the nature of the ix_objects_type_lastmodified index, wondering if\nthe index requires rebuilt. I tested rebuilding it in another db, and it\ncame to 2500 pages as opposed to 38640 pages.\n\nThe puzzle being why the same query with same filters, runs most of\ntimes faster but at times runs 5+ mintues and it switches back to fast\nmode. 
If it had used a different execution plan than the above, how do I\nlist all execution plans executed for a given SQL.\n\nThanks,\nStalin\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Monday, August 03, 2009 1:45 PM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: RE: [PERFORM] Query help\n\n\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> Shared buffer=8G, effective cache size=4G.\n \nThat is odd; if your shared buffers are at 8G, you must have more than\n4G of cache. How much RAM is used for cache at the OS level? \nNormally you would add that to the shared buffers to get your effective\ncache size, or at least take the larger of the two.\n \nHow much RAM is on this machine in total? Do you have any other\nprocesses which use a lot of RAM or might access a lot of disk from time\nto time?\n \n> Let me know if you need any other information.\n \nThe \\d output for the object table, or the CREATE for it and its\nindexes, would be good. Since it's getting through the random reads by\nthe current plan at the rate of about one every 5ms, I'd say your drive\narray is OK. If you want to make this query faster you've either got to\nhave the data in cache or it has to have reason to believe that a\ndifferent plan is faster.\n \nOne thing which might help is to boost your work_mem setting to\nsomewhere in the 32MB to 64MB range, provided that won't drive you into\nswapping. You could also try dropping the random_page_cost to maybe 2\nto see if that gets you a different plan. You can do a quick check on\nwhat plans these generate by changing them on a given connection and\nthen requesting just an EXPLAIN of the plan, to see if it's different.\n(This doesn't actually run the query, so it's fast.)\n \n-Kevin\n", "msg_date": "Mon, 3 Aug 2009 17:17:32 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query help" }, { "msg_contents": "\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> Server has 32G memory and it's a dedicated to run PG and no other\n> application is sharing this database.\n \nIt's not likely to help with this particular problem, but it's\ngenerally best to start from a position of letting the optimizer know\nwhat it's really got for resources. An effective cache size of\nsomewhere around 30GB would probably be best here.\n \n> Given the nature of the ix_objects_type_lastmodified index,\n> wondering if the index requires rebuilt. I tested rebuilding it in\n> another db, and it came to 2500 pages as opposed to 38640 pages.\n \nThat's pretty serious bloat. Any idea how that happened? Have you\nhad long running database transaction which might have prevented\nnormal maintenance from working? If not, you may need more aggressive\nsettings for autovacuum. Anyway, sure, try this with the index\nrebuilt. If you don't want downtime, use CREATE INDEX CONCURRENTLY\nand then drop the old index. (You could then rename the new index to\nmatch the old, if needed.)\n \n> The puzzle being why the same query with same filters, runs most of\n> times faster but at times runs 5+ mintues and it switches back to\n> fast mode.\n \nIt is likely either that something has pushed the relevant data out of\ncache before the slow runs, or there is blocking. How big is this\ndatabase? 
Can you get a list of pg_stat_activity and pg_locks during\nan episode of slow run time?\n \n> If it had used a different execution plan than the above, how do I\n> list all execution plans executed for a given SQL.\n \nIt's unlikely that the slow runs are because of a different plan being\nchosen. I was wondering if a better plan might be available, but this\none looks pretty good with your current indexes. I can think of an\nindexing change or two which *might* cause the optimizer to pick a\ndifferent plan, but that is far from certain, and without knowing the\ncause of the occasional slow runs, it's hard to be sure that the new\nplan wouldn't get stalled for the same reasons.\n \nIf it's possible to gather more data during an episode of a slow run,\nparticularly the pg_stat_activity and pg_locks lists, run as the\ndatabase superuser, it would help pin down the cause. A vmstat during\nsuch an episode, to compare to a \"normal\" one, might also be\ninstructive.\n \n-Kevin\n", "msg_date": "Tue, 04 Aug 2009 10:56:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query help" }, { "msg_contents": "Thanks for the response kevin.\n\nDB size is about 30G. Bloat could have been due to recent load testing\nthat was done. Autovaccum wasn't aggressive enough to catch up with load\ntesting. I will rebuild those indexes if possible reload the table\nitself as they are bloated too.\n\nSure I will collect necessary stats on the next occurrence of the slow\nquery. \n\nStalin\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Tuesday, August 04, 2009 8:57 AM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: RE: [PERFORM] Query help\n\n\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> Server has 32G memory and it's a dedicated to run PG and no other \n> application is sharing this database.\n \nIt's not likely to help with this particular problem, but it's generally\nbest to start from a position of letting the optimizer know what it's\nreally got for resources. An effective cache size of somewhere around\n30GB would probably be best here.\n \n> Given the nature of the ix_objects_type_lastmodified index, wondering \n> if the index requires rebuilt. I tested rebuilding it in another db, \n> and it came to 2500 pages as opposed to 38640 pages.\n \nThat's pretty serious bloat. Any idea how that happened? Have you had\nlong running database transaction which might have prevented normal\nmaintenance from working? If not, you may need more aggressive settings\nfor autovacuum. Anyway, sure, try this with the index rebuilt. If you\ndon't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old\nindex. (You could then rename the new index to match the old, if\nneeded.)\n \n> The puzzle being why the same query with same filters, runs most of \n> times faster but at times runs 5+ mintues and it switches back to fast\n\n> mode.\n \nIt is likely either that something has pushed the relevant data out of\ncache before the slow runs, or there is blocking. How big is this\ndatabase? Can you get a list of pg_stat_activity and pg_locks during an\nepisode of slow run time?\n \n> If it had used a different execution plan than the above, how do I \n> list all execution plans executed for a given SQL.\n \nIt's unlikely that the slow runs are because of a different plan being\nchosen. I was wondering if a better plan might be available, but this\none looks pretty good with your current indexes. 
I can think of an\nindexing change or two which *might* cause the optimizer to pick a\ndifferent plan, but that is far from certain, and without knowing the\ncause of the occasional slow runs, it's hard to be sure that the new\nplan wouldn't get stalled for the same reasons.\n \nIf it's possible to gather more data during an episode of a slow run,\nparticularly the pg_stat_activity and pg_locks lists, run as the\ndatabase superuser, it would help pin down the cause. A vmstat during\nsuch an episode, to compare to a \"normal\" one, might also be\ninstructive.\n \n-Kevin\n", "msg_date": "Tue, 4 Aug 2009 13:07:06 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query help" }, { "msg_contents": "We have found the problem. Apparently there was a query doing count on\n45 million rows table run prior to the episode of slow query. Definitely\ncached data is pushed out the memory. Is there way to assign portion of\nmemory to recycling purposes like in oracle, so the cached data doesn't\nget affected by queries like these.\n\nStalin\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Tuesday, August 04, 2009 8:57 AM\nTo: Subbiah Stalin-XCGF84; [email protected]\nSubject: RE: [PERFORM] Query help\n\n\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> Server has 32G memory and it's a dedicated to run PG and no other \n> application is sharing this database.\n \nIt's not likely to help with this particular problem, but it's generally\nbest to start from a position of letting the optimizer know what it's\nreally got for resources. An effective cache size of somewhere around\n30GB would probably be best here.\n \n> Given the nature of the ix_objects_type_lastmodified index, wondering \n> if the index requires rebuilt. I tested rebuilding it in another db, \n> and it came to 2500 pages as opposed to 38640 pages.\n \nThat's pretty serious bloat. Any idea how that happened? Have you had\nlong running database transaction which might have prevented normal\nmaintenance from working? If not, you may need more aggressive settings\nfor autovacuum. Anyway, sure, try this with the index rebuilt. If you\ndon't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old\nindex. (You could then rename the new index to match the old, if\nneeded.)\n \n> The puzzle being why the same query with same filters, runs most of \n> times faster but at times runs 5+ mintues and it switches back to fast\n\n> mode.\n \nIt is likely either that something has pushed the relevant data out of\ncache before the slow runs, or there is blocking. How big is this\ndatabase? Can you get a list of pg_stat_activity and pg_locks during an\nepisode of slow run time?\n \n> If it had used a different execution plan than the above, how do I \n> list all execution plans executed for a given SQL.\n \nIt's unlikely that the slow runs are because of a different plan being\nchosen. I was wondering if a better plan might be available, but this\none looks pretty good with your current indexes. 
I can think of an\nindexing change or two which *might* cause the optimizer to pick a\ndifferent plan, but that is far from certain, and without knowing the\ncause of the occasional slow runs, it's hard to be sure that the new\nplan wouldn't get stalled for the same reasons.\n \nIf it's possible to gather more data during an episode of a slow run,\nparticularly the pg_stat_activity and pg_locks lists, run as the\ndatabase superuser, it would help pin down the cause. A vmstat during\nsuch an episode, to compare to a \"normal\" one, might also be\ninstructive.\n \n-Kevin\n", "msg_date": "Wed, 5 Aug 2009 15:16:04 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query help" }, { "msg_contents": "\n\n\nOn 8/5/09 12:16 PM, \"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n\n> We have found the problem. Apparently there was a query doing count on\n> 45 million rows table run prior to the episode of slow query. Definitely\n> cached data is pushed out the memory. Is there way to assign portion of\n> memory to recycling purposes like in oracle, so the cached data doesn't\n> get affected by queries like these.\n> \n> Stalin\n\nIn Postgres 8.3 and above, large sequential scans don't evict other things\nfrom shared_buffers. But they can push things out of the OS page cache.\n\n> \n> -----Original Message-----\n> From: Kevin Grittner [mailto:[email protected]]\n> Sent: Tuesday, August 04, 2009 8:57 AM\n> To: Subbiah Stalin-XCGF84; [email protected]\n> Subject: RE: [PERFORM] Query help\n> \n> \"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n> \n>> Server has 32G memory and it's a dedicated to run PG and no other\n>> application is sharing this database.\n> \n> It's not likely to help with this particular problem, but it's generally\n> best to start from a position of letting the optimizer know what it's\n> really got for resources. An effective cache size of somewhere around\n> 30GB would probably be best here.\n> \n>> Given the nature of the ix_objects_type_lastmodified index, wondering\n>> if the index requires rebuilt. I tested rebuilding it in another db,\n>> and it came to 2500 pages as opposed to 38640 pages.\n> \n> That's pretty serious bloat. Any idea how that happened? Have you had\n> long running database transaction which might have prevented normal\n> maintenance from working? If not, you may need more aggressive settings\n> for autovacuum. Anyway, sure, try this with the index rebuilt. If you\n> don't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old\n> index. (You could then rename the new index to match the old, if\n> needed.)\n> \n>> The puzzle being why the same query with same filters, runs most of\n>> times faster but at times runs 5+ mintues and it switches back to fast\n> \n>> mode.\n> \n> It is likely either that something has pushed the relevant data out of\n> cache before the slow runs, or there is blocking. How big is this\n> database? Can you get a list of pg_stat_activity and pg_locks during an\n> episode of slow run time?\n> \n>> If it had used a different execution plan than the above, how do I\n>> list all execution plans executed for a given SQL.\n> \n> It's unlikely that the slow runs are because of a different plan being\n> chosen. I was wondering if a better plan might be available, but this\n> one looks pretty good with your current indexes. 
I can think of an\n> indexing change or two which *might* cause the optimizer to pick a\n> different plan, but that is far from certain, and without knowing the\n> cause of the occasional slow runs, it's hard to be sure that the new\n> plan wouldn't get stalled for the same reasons.\n> \n> If it's possible to gather more data during an episode of a slow run,\n> particularly the pg_stat_activity and pg_locks lists, run as the\n> database superuser, it would help pin down the cause. A vmstat during\n> such an episode, to compare to a \"normal\" one, might also be\n> instructive.\n> \n> -Kevin\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 5 Aug 2009 12:30:38 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query help" }, { "msg_contents": "\"Subbiah Stalin-XCGF84\" <[email protected]> wrote:\n \n> We have found the problem.\n \nGreat news!\n \n> Apparently there was a query doing count on 45 million rows table\n> run prior to the episode of slow query. Definitely cached data is\n> pushed out the memory.\n \nYeah, that would completely explain your symptoms.\n \n> Is there way to assign portion of memory to recycling purposes like\n> in oracle, so the cached data doesn't get affected by queries like\n> these.\n \nRight now you have 8GB in shared buffers and 22GB in OS cache. You\ncould try playing with that ratio, but benefit there would be iffy. \nTo solve this particular problem, you might want to see if they\nactually need an exact count, versus a reasonable estimate. The\nestimated tuple count from the latest statistics run is often close\nenough (and a lot faster).\n \n-Kevin\n", "msg_date": "Wed, 05 Aug 2009 14:42:30 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query help" } ]
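For reference, a sketch of the two follow-ups proposed in this thread. The rebuild uses Kevin's CREATE INDEX CONCURRENTLY recipe (available in 8.2); the temporary index name is invented, while the column list and tablespace are taken from the \d output above.

  CREATE INDEX CONCURRENTLY ix_objects_type_lastmodified_new
      ON objects ("type", lastmodified) TABLESPACE nbbs_index_data;
  DROP INDEX ix_objects_type_lastmodified;
  ALTER INDEX ix_objects_type_lastmodified_new
      RENAME TO ix_objects_type_lastmodified;

  -- the snapshot Kevin asks for during a slow episode, run as the superuser:
  SELECT * FROM pg_stat_activity;
  SELECT * FROM pg_locks;

CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and the DROP still takes a brief exclusive lock on the table, so the swap is best done in a quiet period.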
[ { "msg_contents": "\nI'm seeing an interesting phenomenon while I'm trying to \nperformance-optimise a GiST index. Basically, running a performance test \nappears to be the same thing as running a random number generator. For \nexample, here I'm running the same statement eight times in quick \nsuccession:\n\n> modmine_overlap_test=# \\timing\n> Timing is on.\n> modmine_overlap_test=# select count(*) from (select * FROM \n> locatedsequencefeatureoverlappingfeatures limit 1000000) AS a;\n> count\n> ---------\n> 1000000\n> (1 row)\n>\n> Time: 138583.140 ms\n>\n> Time: 153769.152 ms\n>\n> Time: 127518.574 ms\n>\n> Time: 49629.036 ms\n>\n> Time: 70926.034 ms\n>\n> Time: 7625.034 ms\n>\n> Time: 7382.609 ms\n>\n> Time: 7985.379 ms\n\n\"locatedsequencefeatureoverlappingfeatures\" is a view, which performs a \njoin with a GiST index. The machine was otherwise idle, and has plenty of \nRAM free.\n\nShouldn't the data be entirely in cache the second time I run the \nstatement? However, it's worse than that, because while the long-running \nstatements were running, I saw significant CPU usage in top - more than \neight seconds worth. Again, one one test there was no io-wait, but on a \nsubsequent test there was lots of io-wait.\n\nHow can this be so inconsistent?\n\nMatthew\n\n-- \n \"Interwoven alignment preambles are not allowed.\"\n If you have been so devious as to get this message, you will understand\n it, and you deserve no sympathy. -- Knuth, in the TeXbook\n", "msg_date": "Tue, 4 Aug 2009 17:06:17 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "GiST, caching, and consistency" }, { "msg_contents": "On Tue, Aug 4, 2009 at 12:06 PM, Matthew Wakeling<[email protected]> wrote:\n>\n> I'm seeing an interesting phenomenon while I'm trying to\n> performance-optimise a GiST index. Basically, running a performance test\n> appears to be the same thing as running a random number generator. For\n> example, here I'm running the same statement eight times in quick\n> succession:\n>\n>> modmine_overlap_test=# \\timing\n>> Timing is on.\n>> modmine_overlap_test=# select count(*) from (select * FROM\n>> locatedsequencefeatureoverlappingfeatures limit 1000000) AS a;\n>>  count\n>> ---------\n>>  1000000\n>> (1 row)\n>>\n>> Time: 138583.140 ms\n>>\n>> Time: 153769.152 ms\n>>\n>> Time: 127518.574 ms\n>>\n>> Time: 49629.036 ms\n>>\n>> Time: 70926.034 ms\n>>\n>> Time: 7625.034 ms\n>>\n>> Time: 7382.609 ms\n>>\n>> Time: 7985.379 ms\n>\n> \"locatedsequencefeatureoverlappingfeatures\" is a view, which performs a join\n> with a GiST index. The machine was otherwise idle, and has plenty of RAM\n> free.\n>\n> Shouldn't the data be entirely in cache the second time I run the statement?\n> However, it's worse than that, because while the long-running statements\n> were running, I saw significant CPU usage in top - more than eight seconds\n> worth. Again, one one test there was no io-wait, but on a subsequent test\n> there was lots of io-wait.\n>\n> How can this be so inconsistent?\n\nBeats me. It looks like the first few queries are pulling stuff into\ncache, and then after that it settles down, but I'm not sure why it\ntakes 5 repetitions to do that. Is the plan changing?\n\n...Robert\n", "msg_date": "Tue, 4 Aug 2009 18:56:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST, caching, and consistency" }, { "msg_contents": "On Tue, Aug 4, 2009 at 11:56 PM, Robert Haas<[email protected]> wrote:\n> Beats me.  
It looks like the first few queries are pulling stuff into\n> cache, and then after that it settles down, but I'm not sure why it\n> takes 5 repetitions to do that.  Is the plan changing?\n\nYeah, we're just guessing without the explain analyze output.\n\nBut as long as we're guessing, perhaps it's doing a sequential scan on\none of the tables and each query is reading in new parts of the table\nuntil the whole table is in cache. Is this a machine with lots of RAM\nbut a small setting for shared_buffers?\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 5 Aug 2009 00:27:44 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST, caching, and consistency" }, { "msg_contents": "On Wed, 5 Aug 2009, Greg Stark wrote:\n> On Tue, Aug 4, 2009 at 11:56 PM, Robert Haas<[email protected]> wrote:\n>> Beats me.  It looks like the first few queries are pulling stuff into\n>> cache, and then after that it settles down, but I'm not sure why it\n>> takes 5 repetitions to do that.  Is the plan changing?\n>\n> Yeah, we're just guessing without the explain analyze output.\n>\n> But as long as we're guessing, perhaps it's doing a sequential scan on\n> one of the tables and each query is reading in new parts of the table\n> until the whole table is in cache. Is this a machine with lots of RAM\n> but a small setting for shared_buffers?\n\nmodmine_overlap_test=# explain analyse select count(*) from (select * FROM \nlocatedsequencefeatureoverlappingfeatures limit 1000000) AS a;\n\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=478847.24..478847.25 rows=1 width=0)\n (actual time=27546.424..27546.428 rows=1 loops=1)\n -> Limit (cost=0.01..466347.23 rows=1000000 width=8)\n (actual time=0.104..24349.407 rows=1000000 loops=1)\n -> Nested Loop\n (cost=0.01..9138533.31 rows=19595985 width=8)\n (actual time=0.099..17901.571 rows=1000000 loops=1)\n Join Filter: (l1.subjectid <> l2.subjectid)\n -> Seq Scan on location l1\n (cost=0.00..90092.22 rows=4030122 width=16)\n (actual time=0.013..11.467 rows=3396 loops=1)\n -> Index Scan using location_object_bioseg on location l2\n (cost=0.01..1.46 rows=35 width=16)\n (actual time=0.130..3.339 rows=295 loops=3396)\n Index Cond: ((l2.objectid = l1.objectid) AND (bioseg_create(l1.intermine_start, l1.intermine_end) && bioseg_create(l2.intermine_start, l2.intermine_end)))\n Total runtime: 27546.534 ms\n(8 rows)\n\nTime: 27574.164 ms\n\nIt is certainly doing a sequential scan. So are you saying that it will \nstart a sequential scan from a different part of the table each time, even \nin the absence of other simultaneous sequential scans? Looks like I'm \ngoing to have to remove the limit to get sensible results - I only added \nthat to make the query return in a sensible time for performance testing.\n\nSome trivial testing with \"select * from location limit 10;\" indicates \nthat it starts the sequential scan in the same place each time - but is \nthis different from the above query?\n\nTo answer your question:\n\nshared_buffers = 450MB\nMachine has 16GB or RAM\nThe location table is 389 MB\nThe location_object_bioseg index is 182 MB\n\nMatthew\n\n-- \n What goes up must come down. 
Ask any system administrator.", "msg_date": "Wed, 5 Aug 2009 11:20:18 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST, caching, and consistency" }, { "msg_contents": "On Wed, Aug 5, 2009 at 6:20 AM, Matthew Wakeling<[email protected]> wrote:\n> It is certainly doing a sequential scan. So are you saying that it will\n> start a sequential scan from a different part of the table each time, even\n> in the absence of other simultaneous sequential scans? Looks like I'm going\n> to have to remove the limit to get sensible results - I only added that to\n> make the query return in a sensible time for performance testing.\n>\n> Some trivial testing with \"select * from location limit 10;\" indicates that\n> it starts the sequential scan in the same place each time - but is this\n> different from the above query?\n\nMaybe it's because of this?\n\nhttp://www.postgresql.org/docs/8.3/static/runtime-config-compatible.html#GUC-SYNCHRONIZE-SEQSCANS\n\n...Robert\n", "msg_date": "Wed, 5 Aug 2009 09:42:54 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST, caching, and consistency" }, { "msg_contents": "On Wed, 5 Aug 2009, Robert Haas wrote:\n> On Wed, Aug 5, 2009 at 6:20 AM, Matthew Wakeling<[email protected]> wrote:\n>> It is certainly doing a sequential scan. So are you saying that it will\n>> start a sequential scan from a different part of the table each time, even\n>> in the absence of other simultaneous sequential scans? Looks like I'm going\n>> to have to remove the limit to get sensible results - I only added that to\n>> make the query return in a sensible time for performance testing.\n>>\n>> Some trivial testing with \"select * from location limit 10;\" indicates that\n>> it starts the sequential scan in the same place each time - but is this\n>> different from the above query?\n>\n> Maybe it's because of this?\n>\n> http://www.postgresql.org/docs/8.3/static/runtime-config-compatible.html#GUC-SYNCHRONIZE-SEQSCANS\n\nThanks, we had already worked that one out. What I'm surprised about is \nthat it will start the sequential scan from a different part of the table \nwhen there aren't any simultaneous scans, but not when I do the trivial \ntesting.\n\nHaving reduced the data quantity (so I can throw away the limit) makes my \ntests produce much more consistent results. I label this problem as \nsolved. Thanks all.\n\nMatthew\n\n-- \n $ rm core\n Segmentation Fault (core dumped)\n", "msg_date": "Wed, 5 Aug 2009 14:53:50 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: GiST, caching, and consistency" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> It is certainly doing a sequential scan. So are you saying that it will \n> start a sequential scan from a different part of the table each time, even \n> in the absence of other simultaneous sequential scans?\n\nYeah, that's the syncscan logic biting you. You can turn it off if you\nwant.\n\n> Some trivial testing with \"select * from location limit 10;\" indicates \n> that it starts the sequential scan in the same place each time - but is \n> this different from the above query?\n\nYup, you're not scanning enough of the table to move the syncscan start\npointer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Aug 2009 10:11:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: GiST, caching, and consistency " } ]
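For anyone repeating this kind of benchmark, the two sources of variation identified here can be removed per session; a sketch using the query from the thread:

  SET synchronize_seqscans = off;    -- the 8.3 setting behind the moving scan start point
  EXPLAIN ANALYZE
  SELECT count(*) FROM (
      SELECT * FROM locatedsequencefeatureoverlappingfeatures LIMIT 1000000
  ) AS a;

Dropping the LIMIT, or shrinking the data set as Matthew did, removes the other source of noise, since each limited run otherwise reads a different slice of the outer table into cache.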
[ { "msg_contents": "Hi All,\n\nI encountered an odd issue regarding check constraints complaining \nwhen they're not really violated.\n\nFor this particular machine, I am running 8.3.7, but on a machine \nrunning 8.3.5, it seems to have succeeded. I also upgraded a third \nmachine from 8.3.5 to 8.3.7, and the query succeeded (so I'm thinking \nit's not related to different postgres versions)\n\nI have a table called \"m_class\" and the definition is something like \nthis:\n\n> tii=# \\d m_class\n> Table \"public.m_class\"\n> Column | Type \n> | Modifiers\n> -------------------------+-------------------------- \n> +--------------------------------------------------------------\n> id | integer | not null \n> default nextval(('m_class_id_seq'::text)::regclass)\n> class_type | smallint | not null\n> title | character varying(100) | not null\n> ...snip...\n> date_setup | timestamp with time zone | not null \n> default ('now'::text)::date\n> date_start | timestamp with time zone | not null\n> date_end | timestamp with time zone | not null\n> term_length | interval | not null \n> default '5 years'::interval\n> ...snip...\n> max_portfolio_file_size | integer |\n> Indexes:\n> \"m_class_pkey\" PRIMARY KEY, btree (id)\n> \"m_class_account_idx\" btree (account)\n> \"m_class_instructor_idx\" btree (instructor)\n> Check constraints:\n> \"end_after_start_check\" CHECK (date_end >= date_start)\n> \"end_within_term_length\" CHECK (date_end <= (date_start + \n> term_length))\n> \"min_password_length_check\" CHECK \n> (length(enrollment_password::text) >= 4)\n> \"positive_term_length\" CHECK (term_length > '00:00:00'::interval)\n> \"start_after_setup_check\" CHECK (date_start >= date_setup)\n> ...snip...\n\nWhen I run my update, it fails:\n> tii=# begin; update only \"public\".\"m_class\" set date_end='2009-09-03 \n> 05:38:24.030331-07',term_length='177 days 17:59:09.868431' where \n> id='2652020';\n> BEGIN\n> ERROR: new row for relation \"m_class\" violates check constraint \n> \"end_within_term_length\"\n> tii=# rollback;\n> ROLLBACK\n\nThe data reads:\n> tii=# select date_start, date_end, term_length, '2009-09-03 \n> 05:38:24.030331-07'::timestamptz - date_start AS new_term_length \n> from m_class where id = 2652020;\n> date_start | date_end | \n> term_length | new_term_length\n> -----------------------------+----------------------------- \n> +-------------+--------------------------\n> 2009-03-09 11:39:14.1619-07 | 2009-04-08 11:39:14.1619-07 | 30 \n> days | 177 days 17:59:09.868431\n\n\nBased on new_term_length, the update should succeed. However, it \ndoesn't. Would anyone have an explanation?\n\nThanks for your help!\n--Richard\n", "msg_date": "Tue, 4 Aug 2009 09:49:17 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "CHECK constraint fails when it's not supposed to" }, { "msg_contents": "On Tue, Aug 4, 2009 at 5:49 PM, Richard Yen<[email protected]> wrote:\n>\n> The data reads:\n>>\n>> tii=# select date_start, date_end, term_length, '2009-09-03\n>> 05:38:24.030331-07'::timestamptz - date_start AS new_term_length from\n>> m_class where id = 2652020;\n>>         date_start          |          date_end           | term_length |\n>>     new_term_length\n>>\n>> -----------------------------+-----------------------------+-------------+--------------------------\n>>  2009-03-09 11:39:14.1619-07 | 2009-04-08 11:39:14.1619-07 | 30 days     |\n>> 177 days 17:59:09.868431\n>\n\nIs the machine where it's failing Windows? 
Windows builds have used\nfloating point dates in the past. Floating point arithmetic can be\nfunny and result in numbers that are not perfectly precise and compare\nsuprisingly, especially when -- as you're effectively doing here --\nthe you're testing for equality.\n\nYou could rebuild with 64-bit integer timestamps which represent\nmilliseconds precisely. 8.4 defaults to integer timestamps even on\nWindows.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 5 Aug 2009 00:36:38 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CHECK constraint fails when it's not supposed to" } ]
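Two quick checks that follow from Greg's diagnosis; the literal values are the ones quoted earlier in the thread, and this only confirms the float-versus-integer timestamp theory rather than fixing it.

  SHOW integer_datetimes;    -- 'off' means float timestamps, where this rounding can bite
  SELECT '2009-09-03 05:38:24.030331-07'::timestamptz
             <= (date_start + '177 days 17:59:09.868431'::interval) AS end_within_term
    FROM m_class
   WHERE id = 2652020;

If the SELECT returns false on the failing machine but true on the others, the constraint is behaving consistently with that build's timestamp representation, and rebuilding with --enable-integer-datetimes (or moving to 8.4, where it is the default) is the usual way out.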
[ { "msg_contents": "Hi,\n\nI am using postgresql 8.3 with FreeBSD. FreeBSD is using syslog by\ndefault for postgresql logs.\nI would like to disable syslog in postgresql.conf. Does this change\nincrease the performance?\nWhat is the impact of using syslog on postgresql performance?\n\nThanks.\n", "msg_date": "Wed, 5 Aug 2009 00:16:05 +0300", "msg_from": "Ibrahim Harrani <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql and syslog" }, { "msg_contents": "On Tue, Aug 4, 2009 at 5:16 PM, Ibrahim\nHarrani<[email protected]> wrote:\n> Hi,\n>\n> I am using postgresql 8.3 with FreeBSD. FreeBSD is using syslog by\n> default for postgresql logs.\n> I would like to disable syslog in postgresql.conf. Does this change\n> increase the performance?\n> What is the impact of using syslog on postgresql performance?\n>\n> Thanks.\n\nI suspect it wouldn't make much difference one way or the other, but I\nsuppose the thing to do is try it out and see. If you come up with\nany useful benchmarks, post 'em back here for the benefit of the next\nperson who asks...\n\n...Robert\n", "msg_date": "Thu, 6 Aug 2009 15:33:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql and syslog" }, { "msg_contents": "we have actually gone the opposite way and switched to using syslog\nfor logging purposes some time ago, with no performance issues.\n\nsyslog files are easily read by a lot of applications out there. We have\nbeen using rsyslog for aggregating logs from multiple servers, splunk\nfor analysis purposes and pgfouine for routine reports.\n\nI would be very surprised if logging had a significant overhead any method\nyou choose. there's probably something very wrong with your setup if this\nis the case.\n\njust another dimension, Michael\n\nwe have actually gone the opposite way and switched to using syslogfor logging purposes some time ago, with no performance issues.syslog files are easily read by a lot of applications out there. We havebeen using rsyslog for aggregating logs from multiple servers, splunk\nfor analysis purposes and pgfouine for routine reports.I would be very surprised if logging had a significant overhead any methodyou choose. there's probably something very wrong with your setup if this\nis the case.just another dimension, Michael", "msg_date": "Fri, 7 Aug 2009 14:06:34 +0100", "msg_from": "Michael Nacos <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql and syslog" }, { "msg_contents": "Michael Nacos escribi�:\n\n> I would be very surprised if logging had a significant overhead any method\n> you choose. there's probably something very wrong with your setup if this\n> is the case.\n\nEither something very wrong, or the load is extremely high. In the\nlatter case perhaps it would make sense to ship syslog to a remote\nmachine. Since it uses UDP sockets, it wouldn't block when overloaded\nbut rather lose messages (besides, it means it has low overhead).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 7 Aug 2009 15:56:50 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql and syslog" } ]
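For anyone who wants to measure this rather than guess, the usual comparison is to point logging at stderr instead of syslog and rerun the same workload. The parameter names below are the standard 8.3 ones and the values are only an example; note that redirect_stderr can only be changed with a server restart.

  SHOW log_destination;      -- 'syslog' in the FreeBSD setup described above
  -- in postgresql.conf, to take syslog out of the picture:
  --   log_destination = 'stderr'
  --   redirect_stderr = on
  --   log_directory   = 'pg_log'

Timing the same workload under both settings, or with rsyslog shipping to a remote host as suggested above, puts an actual number on the overhead for a given log volume.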
[ { "msg_contents": "I have a database (699221). It contains of 1.8GB data (707710). I am\ndoing a complex query. Which required to load a 80MB index (732287).\n\nI restarted Postgresql so the cache is empty and it has to read the\ntable and index from disk. Which I understand is an expensive process.\nBut what I don't understand is even I split the index into a different\ntablespace located on a completely separate disk (mounted on /hdd2)\nthere is still a very long I/O wait time. That index is only thing\nexist on that disk. Any idea why? Or any way I can find out what it is\nwaiting for? Thanks.\n\n(running DTrace tool kit iofile.d script to show I/O wait time by\nfilename and process)\n\nbash-3.00# ./iofile.d\nTracing... Hit Ctrl-C to end.\n^C\n PID CMD TIME FILE\n 2379 postgres 23273 /export/space/postgres8.3/lib/plpgsql.so\n 2224 metacity 24403 /lib/libm.so.2\n 2379 postgres 32345\n/export/space/pg_data/pg_data/data/base/699221/2619 2379 postgres\n 40992 /export/space/pg_data/pg_data/data/base/699221/2691 0\nsched 82205 <none>\n 2379 postgres 273205 /export/space/postgres8.3/bin/postgres\n 2379 postgres 1092140 <none>\n 2379 postgres 59461649 /hdd2/indexes/699221/732287\n\n(running DTrace tool kit iofildb.d script to show I/O bytes by\nfilename and process)\n\nbash-3.00# ./iofileb.d\nTracing... Hit Ctrl-C to end.\n^C\n PID CMD KB FILE\n 2379 postgres 8256\n/export/space/pg_data/pg_data/data/base/699221/699473 2379 postgres\n 87760 /hdd2/indexes/699221/732287\n 2379 postgres 832472\n/export/space/pg_data/pg_data/data/base/699221/707710.1\n 2379 postgres 1048576\n/export/space/pg_data/pg_data/data/base/699221/707710\n\n\n\n\n\n-- \nJohn\n", "msg_date": "Thu, 6 Aug 2009 12:50:51 +1000", "msg_from": "Ip Wing Kin John <[email protected]>", "msg_from_op": true, "msg_subject": "Bottleneck?" }, { "msg_contents": "On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin John<[email protected]> wrote:\n> I have a database (699221). It contains of 1.8GB data (707710). I am\n> doing a complex query. Which required to load a 80MB index (732287).\n>\n> I restarted Postgresql so the cache is empty and it has to read the\n> table and index from disk. Which I understand is an expensive process.\n> But what I don't understand is even I split the index into a different\n> tablespace located on a completely separate disk (mounted on /hdd2)\n> there is still a very long I/O wait time. That index is only thing\n> exist on that disk. Any idea why? Or any way I can find out what it is\n> waiting for? Thanks.\n\nOK before DTrace, did you run explain analyze on the query? I think\nthe output of that would be interesting.\n\nLooking at the DTrace output it looks to me like you're reading at\nleast one > 1GB table. since you're accessing a file with a .1 on it.\n", "msg_date": "Wed, 5 Aug 2009 22:19:38 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "Hi Scott,\n\nYes I did that. And you are right because I restart my machine, so the \npostgres cache is empty. 
And I think postgresql is reading all 1.8GB of \ndata back into the cache when it does a seq scan on the status table.\n\nQUERY PLAN\n-------------------------------------------------------------------------------- \n-------------------------------------------------------------------------------- \n-------------------\nSort (cost=390162.53..390162.54 rows=3 width=567) (actual \ntime=106045.453..106 078.238 rows=80963 loops=1)\nSort Key: rec.startdatetime, rec.id\nSort Method: quicksort Memory: 43163kB\n-> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual \ntime=41205.683..105523.303 rows=80963 loops=1)\nJoin Filter: ((rec.parentguid)::text = (resolve.resolve)::text)\n-> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual \ntime=41127.859..105256.069 rows=80963 loops=1)\nJoin Filter: ((rec.guid)::text = (status.guid)::text)\n-> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual \ntime=41089.852..41177.137 rows=80000 loops=1)\n-> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual \ntime=36401.247..38505.042 rows=4000000 loops=1)\nHash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text)\n-> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) \n(actual time=1009.161..1029.849 rows=80000 loops=1)\n-> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual \ntime=35391.697..35391.697 rows=4000000 loops=1)\n-> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) (actual \ntime=5.095..32820.746 rows=4000000 loops =1)\nFilter: (startdatetime <= 1249281281666:: bigint)\n-> Index Scan using index_status_startdatetime on status rec \n(cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r ows=1 \nloops=80000)\nIndex Cond: (rec.startdatetime = (max(status.startdatetime)))\n-> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual \ntime=0.001..0.001 rows=1 loops=80963)\nTotal runtime: 106227.356 ms\n(18 rows)\n\n\n\nOn Aug 6, 2009 2:19pm, Scott Marlowe <[email protected]> wrote:\n> On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin [email protected]> wrote:\n\n> > I have a database (699221). It contains of 1.8GB data (707710). I am\n\n> > doing a complex query. Which required to load a 80MB index (732287).\n\n> >\n\n> > I restarted Postgresql so the cache is empty and it has to read the\n\n> > table and index from disk. Which I understand is an expensive process.\n\n> > But what I don't understand is even I split the index into a different\n\n> > tablespace located on a completely separate disk (mounted on /hdd2)\n\n> > there is still a very long I/O wait time. That index is only thing\n\n> > exist on that disk. Any idea why? Or any way I can find out what it is\n\n> > waiting for? Thanks.\n\n\n\n> OK before DTrace, did you run explain analyze on the query? I think\n\n> the output of that would be interesting.\n\n\n\n> Looking at the DTrace output it looks to me like you're reading at\n\n> least one > 1GB table. since you're accessing a file with a .1 on it.\n\n\nHi Scott,Yes I did that. And you are right because I restart my machine, so the postgres cache is empty. And I think postgresql is reading all 1.8GB of data back into the cache when it does a seq scan on the status table. 
QUERY PLAN -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ------------------- Sort (cost=390162.53..390162.54 rows=3 width=567) (actual time=106045.453..106 078.238 rows=80963 loops=1) Sort Key: rec.startdatetime, rec.id Sort Method: quicksort Memory: 43163kB -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual time=41205.683..105523.303 rows=80963 loops=1) Join Filter: ((rec.parentguid)::text = (resolve.resolve)::text) -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual time=41127.859..105256.069 rows=80963 loops=1) Join Filter: ((rec.guid)::text = (status.guid)::text) -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual time=41089.852..41177.137 rows=80000 loops=1) -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual time=36401.247..38505.042 rows=4000000 loops=1) Hash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text) -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) (actual time=1009.161..1029.849 rows=80000 loops=1) -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual time=35391.697..35391.697 rows=4000000 loops=1) -> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) (actual time=5.095..32820.746 rows=4000000 loops =1) Filter: (startdatetime <= 1249281281666:: bigint) -> Index Scan using index_status_startdatetime on status rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r ows=1 loops=80000) Index Cond: (rec.startdatetime = (max(status.startdatetime))) -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual time=0.001..0.001 rows=1 loops=80963) Total runtime: 106227.356 ms(18 rows)On Aug 6, 2009 2:19pm, Scott Marlowe <[email protected]> wrote:> On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin [email protected]> wrote:> > > I have a database (699221). It contains of 1.8GB data (707710). I am> > > doing a complex query. Which required to load a 80MB index (732287).> > >> > > I restarted Postgresql so the cache is empty and it has to read the> > > table and index from disk. Which I understand is an expensive process.> > > But what I don't understand is even I split the index into a different> > > tablespace located on a completely separate disk (mounted on /hdd2)> > > there is still a very long I/O wait time. That index is only thing> > > exist on that disk. Any idea why? Or any way I can find out what it is> > > waiting for? Thanks.> > > > OK before DTrace, did you run explain analyze on the query? I think> > the output of that would be interesting.> > > > Looking at the DTrace output it looks to me like you're reading at> > least one > 1GB table. since you're accessing a file with a .1 on it.>", "msg_date": "Thu, 06 Aug 2009 04:30:45 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "Could you possibly attach that in plain text format? Your email\nclient seems to have eaten any text formatting / indentation.\n", "msg_date": "Wed, 5 Aug 2009 22:53:00 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" 
}, { "msg_contents": "Is this alright?\n\n> QUERY PLAN\n> -------------------------------------------------------------------------------- \n> -------------------------------------------------------------------------------- \n> -------------------\n> Sort (cost=390162.53..390162.54 rows=3 width=567) (actual \n> time=106045.453..106 078.238 rows=80963 loops=1)\n> Sort Key: rec.startdatetime, rec.id\n> Sort Method: quicksort Memory: 43163kB\n> -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) \n> (actual time=41205.683..105523.303 rows=80963 loops=1)\n> Join Filter: ((rec.parentguid)::text = (resolve.resolve)::text)\n> -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual \n> time=41127.859..105256.069 rows=80963 loops=1)\n> Join Filter: ((rec.guid)::text = (status.guid)::text)\n> -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual \n> time=41089.852..41177.137 rows=80000 loops=1)\n> -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual \n> time=36401.247..38505.042 rows=4000000 loops=1)\n> Hash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text)\n> -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 \n> width=32) (actual time=1009.161..1029.849 rows=80000 loops=1)\n> -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual \n> time=35391.697..35391.697 rows=4000000 loops=1)\n> -> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) \n> (actual time=5.095..32820.746 rows=4000000 loops =1)\n> Filter: (startdatetime -> Index Scan using index_status_startdatetime on \n> status rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r \n> ows=1 loops=80000)\n> Index Cond: (rec.startdatetime = (max(status.startdatetime)))\n> -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a \n> ctual time=0.001..0.001 rows=1 loops=80963)\n> Total runtime: 106227.356 ms\n> (18 rows)\n\n\nOn Aug 6, 2009 2:30pm, [email protected] wrote:\n> Hi Scott,\n\n> Yes I did that. And you are right because I restart my machine, so the \n> postgres cache is empty. 
And I think postgresql is reading all 1.8GB of \n> data back into the cache when it does a seq scan on the status table.\n\n> QUERY PLAN\n> -------------------------------------------------------------------------------- \n> -------------------------------------------------------------------------------- \n> -------------------\n> Sort (cost=390162.53..390162.54 rows=3 width=567) (actual \n> time=106045.453..106 078.238 rows=80963 loops=1)\n> Sort Key: rec.startdatetime, rec.id\n> Sort Method: quicksort Memory: 43163kB\n> -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) \n> (actual time=41205.683..105523.303 rows=80963 loops=1)\n> Join Filter: ((rec.parentguid)::text = (resolve.resolve)::text)\n> -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual \n> time=41127.859..105256.069 rows=80963 loops=1)\n> Join Filter: ((rec.guid)::text = (status.guid)::text)\n> -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual \n> time=41089.852..41177.137 rows=80000 loops=1)\n> -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual \n> time=36401.247..38505.042 rows=4000000 loops=1)\n> Hash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text)\n> -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 \n> width=32) (actual time=1009.161..1029.849 rows=80000 loops=1)\n> -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual \n> time=35391.697..35391.697 rows=4000000 loops=1)\n> -> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) \n> (actual time=5.095..32820.746 rows=4000000 loops =1)\n> Filter: (startdatetime -> Index Scan using index_status_startdatetime on \n> status rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r \n> ows=1 loops=80000)\n> Index Cond: (rec.startdatetime = (max(status.startdatetime)))\n> -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a \n> ctual time=0.001..0.001 rows=1 loops=80963)\n> Total runtime: 106227.356 ms\n> (18 rows)\n\n\n\n> On Aug 6, 2009 2:19pm, Scott Marlowe [email protected]> wrote:\n> > On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin [email protected]> \n> wrote:\n> >\n> > > I have a database (699221). It contains of 1.8GB data (707710). I am\n> >\n> > > doing a complex query. Which required to load a 80MB index (732287).\n> >\n> > >\n> >\n> > > I restarted Postgresql so the cache is empty and it has to read the\n> >\n> > > table and index from disk. Which I understand is an expensive process.\n> >\n> > > But what I don't understand is even I split the index into a different\n> >\n> > > tablespace located on a completely separate disk (mounted on /hdd2)\n> >\n> > > there is still a very long I/O wait time. That index is only thing\n> >\n> > > exist on that disk. Any idea why? Or any way I can find out what it is\n> >\n> > > waiting for? Thanks.\n> >\n> >\n> >\n> > OK before DTrace, did you run explain analyze on the query? I think\n> >\n> > the output of that would be interesting.\n> >\n> >\n> >\n> > Looking at the DTrace output it looks to me like you're reading at\n> >\n> > least one > 1GB table. 
since you're accessing a file with a .1 on it.\n> >\n\nIs this alright?> QUERY PLAN > -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -------------------> Sort (cost=390162.53..390162.54 rows=3 width=567) (actual time=106045.453..106 078.238 rows=80963 loops=1)> Sort Key: rec.startdatetime, rec.id> Sort Method: quicksort Memory: 43163kB> -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual time=41205.683..105523.303 rows=80963 loops=1)> Join Filter: ((rec.parentguid)::text = (resolve.resolve)::text)> -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual time=41127.859..105256.069 rows=80963 loops=1)> Join Filter: ((rec.guid)::text = (status.guid)::text)> -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual time=41089.852..41177.137 rows=80000 loops=1)> -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual time=36401.247..38505.042 rows=4000000 loops=1)> Hash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text)> -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) (actual time=1009.161..1029.849 rows=80000 loops=1)> -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual time=35391.697..35391.697 rows=4000000 loops=1)> -> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) (actual time=5.095..32820.746 rows=4000000 loops =1)> Filter: (startdatetime -> Index Scan using index_status_startdatetime on status rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r ows=1 loops=80000)> Index Cond: (rec.startdatetime = (max(status.startdatetime)))> -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual time=0.001..0.001 rows=1 loops=80963)> Total runtime: 106227.356 ms> (18 rows)> On Aug 6, 2009 2:30pm, [email protected] wrote:> Hi Scott,> > Yes I did that. And you are right because I restart my machine, so the postgres cache is empty. 
And I think postgresql is reading all 1.8GB of data back into the cache when it does a seq scan on the status table.> > QUERY PLAN > -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -------------------> Sort (cost=390162.53..390162.54 rows=3 width=567) (actual time=106045.453..106 078.238 rows=80963 loops=1)> Sort Key: rec.startdatetime, rec.id> Sort Method: quicksort Memory: 43163kB> -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual time=41205.683..105523.303 rows=80963 loops=1)> Join Filter: ((rec.parentguid)::text = (resolve.resolve)::text)> -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual time=41127.859..105256.069 rows=80963 loops=1)> Join Filter: ((rec.guid)::text = (status.guid)::text)> -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual time=41089.852..41177.137 rows=80000 loops=1)> -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual time=36401.247..38505.042 rows=4000000 loops=1)> Hash Cond: ((getcurrentguids.getcurrentguids)::text = (status.guid)::text)> -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) (actual time=1009.161..1029.849 rows=80000 loops=1)> -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual time=35391.697..35391.697 rows=4000000 loops=1)> -> Seq Scan on status (cost=0.00..2 85135.53 rows=3999962 width=16) (actual time=5.095..32820.746 rows=4000000 loops =1)> Filter: (startdatetime -> Index Scan using index_status_startdatetime on status rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.796..0.797 r ows=1 loops=80000)> Index Cond: (rec.startdatetime = (max(status.startdatetime)))> -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual time=0.001..0.001 rows=1 loops=80963)> Total runtime: 106227.356 ms> (18 rows)> > > > On Aug 6, 2009 2:19pm, Scott Marlowe [email protected]> wrote:> > On Wed, Aug 5, 2009 at 8:50 PM, Ip Wing Kin [email protected]> wrote:> > > > > I have a database (699221). It contains of 1.8GB data (707710). I am> > > > > doing a complex query. Which required to load a 80MB index (732287).> > > > >> > > > > I restarted Postgresql so the cache is empty and it has to read the> > > > > table and index from disk. Which I understand is an expensive process.> > > > > But what I don't understand is even I split the index into a different> > > > > tablespace located on a completely separate disk (mounted on /hdd2)> > > > > there is still a very long I/O wait time. That index is only thing> > > > > exist on that disk. Any idea why? Or any way I can find out what it is> > > > > waiting for? Thanks.> > > > > > > > OK before DTrace, did you run explain analyze on the query? I think> > > > the output of that would be interesting.> > > > > > > > Looking at the DTrace output it looks to me like you're reading at> > > > least one > 1GB table. since you're accessing a file with a .1 on it.> >", "msg_date": "Thu, 06 Aug 2009 05:13:54 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" 
}, { "msg_contents": "Sorry post again.\n\nQUERY PLAN\n-------------------------------------------------------------------------------- \n-------------------------------------------------------------------------------- \n-------------------\nSort (cost=390162.53..390162.54 rows=3 width=567) (actual \ntime=105726.803..105 756.743 rows=80963 loops=1)\nSort Key: rec.startdatetime, rec.id\nSort Method: quicksort Memory: 43163kB\n-> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual \ntime=41332.430..105220.859 rows=80963 loops=1)\nJoin Filter: ((rec.acsguid)::text = (resolve.resolve)::text)\n-> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual t \nime=41252.145..104952.438 rows=80963 loops=1)\nJoin Filter: ((rec.volumeguid)::text = (dummymediastatus.volumegu id)::text)\n-> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual \ntime=41212.903..41299.709 rows=80000 loops=1)\n-> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual \ntime=36360.938..38540.426 rows=4000000 loops=1)\nHash Cond: ((getcurrentguids.getcurrentguids)::text = \n(dummymediastatus.volumeguid)::text)\n-> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) \n(actual time=977.013..997.404 rows=80000 loops=1)\n-> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual \ntime=35383.529..35383.529 rows=4000000 loops=1)\n-> Seq Scan on dummymediastatus (cost=0.00..2 85135.53 rows=3999962 \nwidth=16) (actual time=5.081..32821.253 rows=4000000 loops =1)\nFilter: (startdatetime <= 1249281281666:: bigint)\n-> Index Scan using index_dummymediastatus_startdatetime on dumm \nymediastatus rec (cost=0.00..8.15 rows=3 width=414) (actual \ntime=0.791..0.792 r ows=1 loops=80000)\nIndex Cond: (rec.startdatetime = (max(dummymediastatus.star tdatetime)))\n-> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual \ntime=0.001..0.001 rows=1 loops=80963)\nTotal runtime: 105906.467 ms\n(18 rows)\n\nSorry post again. 
QUERY PLAN -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -------------------Sort (cost=390162.53..390162.54 rows=3 width=567) (actual time=105726.803..105 756.743 rows=80963 loops=1) Sort Key: rec.startdatetime, rec.id Sort Method: quicksort Memory: 43163kB -> Nested Loop IN Join (cost=360360.86..390162.51 rows=3 width=567) (actual time=41332.430..105220.859 rows=80963 loops=1) Join Filter: ((rec.acsguid)::text = (resolve.resolve)::text) -> Nested Loop (cost=360360.86..389999.01 rows=3 width=567) (actual t ime=41252.145..104952.438 rows=80963 loops=1) Join Filter: ((rec.volumeguid)::text = (dummymediastatus.volumegu id)::text) -> HashAggregate (cost=360360.86..360405.96 rows=3608 width=16) (actual time=41212.903..41299.709 rows=80000 loops=1) -> Hash Join (cost=335135.05..354817.67 rows=1108637 widt h=16) (actual time=36360.938..38540.426 rows=4000000 loops=1) Hash Cond: ((getcurrentguids.getcurrentguids)::text = (dummymediastatus.volumeguid)::text) -> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000 width=32) (actual time=977.013..997.404 rows=80000 loops=1) -> Hash (cost=285135.53..285135.53 rows=3999962 wid th=16) (actual time=35383.529..35383.529 rows=4000000 loops=1) -> Seq Scan on dummymediastatus (cost=0.00..2 85135.53 rows=3999962 width=16) (actual time=5.081..32821.253 rows=4000000 loops =1) Filter: (startdatetime <= 1249281281666:: bigint) -> Index Scan using index_dummymediastatus_startdatetime on dumm ymediastatus rec (cost=0.00..8.15 rows=3 width=414) (actual time=0.791..0.792 r ows=1 loops=80000) Index Cond: (rec.startdatetime = (max(dummymediastatus.star tdatetime))) -> Function Scan on resolve (cost=0.00..260.00 rows=1000 width=32) (a ctual time=0.001..0.001 rows=1 loops=80963)Total runtime: 105906.467 ms(18 rows)", "msg_date": "Thu, 06 Aug 2009 05:21:10 +0000", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Wed, Aug 5, 2009 at 11:21 PM, <[email protected]> wrote:\n> Sorry post again.\n\nNope, still mangled. Can you attach it?\n", "msg_date": "Thu, 6 Aug 2009 00:15:15 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "Hi scott\n\nI attached the query plan with this email. The top one is the first\nrun after I restarted my machine. And the bottom one is the second\nrun.\n\nI am using PostgreSQL 8.3 on Solaris 10.\n\ncheers\n\nOn Thu, Aug 6, 2009 at 4:15 PM, Scott Marlowe<[email protected]> wrote:\n> On Wed, Aug 5, 2009 at 11:21 PM, <[email protected]> wrote:\n>> Sorry post again.\n>\n> Nope, still mangled. Can you attach it?\n>\n\n\n\n-- \nJohn\n", "msg_date": "Thu, 6 Aug 2009 16:23:22 +1000", "msg_from": "Ip Wing Kin John <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "OK, two things. First the row estimate starts going way off around\nthe time it gets to the hash aggregate / nested loop which seems to be\nmaking the planner use a bad plan for this many rows. 
You can try\nissuing\n\nset enable_nestloop = off;\n\nbefore running the query and see if that makes it any faster.\n\nSecondly, the first time you run this query you are reading the 1.8G\ntable sequentially, and at about 55MB/s, which isn't gonna get faster\nwithout more / faster drives under your machine.\n\nOn Thu, Aug 6, 2009 at 12:50 AM, Ip Wing Kin John<[email protected]> wrote:\n> Here u go. Both in the same file.\n>\n> On Thu, Aug 6, 2009 at 4:48 PM, Scott Marlowe<[email protected]> wrote:\n>> Much better... Looks like I got the second one...\n>>\n>> Can I get the first one too?  Thx.\n>>\n>> On Thu, Aug 6, 2009 at 12:46 AM, Ip Wing Kin John<[email protected]> wrote:\n>>> Hope you can get it this time.\n>>>\n>>> John\n>>>\n>>> On Thu, Aug 6, 2009 at 4:34 PM, Scott Marlowe<[email protected]> wrote:\n>>>> Sorry man, it's not coming through.  Try it this time addressed just to me.\n>>>>\n>>>> On Thu, Aug 6, 2009 at 12:23 AM, Ip Wing Kin John<[email protected]> wrote:\n>>>>> Hi scott\n>>>>>\n>>>>> I attached the query plan with this email. The top one is the first\n>>>>> run after I restarted my machine. And the bottom one is the second\n>>>>> run.\n>>>>>\n>>>>> I am using PostgreSQL 8.3 on Solaris 10.\n>>>>>\n>>>>> cheers\n>>>>>\n>>>>> On Thu, Aug 6, 2009 at 4:15 PM, Scott Marlowe<[email protected]> wrote:\n>>>>>> On Wed, Aug 5, 2009 at 11:21 PM, <[email protected]> wrote:\n>>>>>>> Sorry post again.\n>>>>>>\n>>>>>> Nope, still mangled.  Can you attach it?\n>>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> --\n>>>>> John\n>>>>>\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>> When fascism comes to America, it will be intolerance sold as diversity.\n>>>>\n>>>\n>>>\n>>>\n>>> --\n>>> John\n>>>\n>>\n>>\n>>\n>> --\n>> When fascism comes to America, it will be intolerance sold as diversity.\n>>\n>\n>\n>\n> --\n> John\n>\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n", "msg_date": "Thu, 6 Aug 2009 01:03:25 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "Hi Scott,\n\nThanks for you suggestion. I have follow your suggestion by disable\nnestloop and have a substantial improvement. Takes 51s now. I have\nattached the new query plan in another file.\n\nWhat I want to ask is, is there any other way to hint the planner to\nchoose to use merge join rather than nested loop by modifying my SQL?\nI did try to sort my second inner join by the join condition, but the\nplanner still prefer to use nested loop.\n\nAs I am afraid changing the system wide configuration will have some\nside effect on my other queries.\n\nHere is my SQL.\n\nselect * from dummymediastatus rec INNER JOIN ( SELECT volumeGUID ,\nMAX(startDatetime) AS msdt FROM dummymediastatus INNER JOIN ( select *\nfrom getcurrentguids(1249281281666,'hardware.volume',null,null) ) AS\ncfg ON ( cfg.getcurrentguids = volumeGUID) WHERE startDatetime <=\n1249281281666 GROUP BY volumeGUID ) AS rec2 ON ( rec.volumeGUID =\nrec2.volumeGUID AND rec.startDatetime = rec2.msdt ) where ( ( 1>0\nand 1>0 ) and rec.acsGUID in ( SELECT * FROM resolve('acs0') ) )\norder by rec.startDatetime DESC,rec.id DESC;\n\nthanks\n\n\n\n\nOn Thu, Aug 6, 2009 at 5:03 PM, Scott Marlowe<[email protected]> wrote:\n> OK, two things. First the row estimate starts going way off around\n> the time it gets to the hash aggregate / nested loop which seems to be\n> making the planner use a bad plan for this many rows. 
You can try\n> issuing\n>\n> set enable_nestloop = off;\n>\n> before running the query and see if that makes it any faster.\n>\n> Secondly, the first time you run this query you are reading the 1.8G\n> table sequentially, and at about 55MB/s, which isn't gonna get faster\n> without more / faster drives under your machine.\n>\n> On Thu, Aug 6, 2009 at 12:50 AM, Ip Wing Kin John<[email protected]> wrote:\n>> Here u go. Both in the same file.\n>>\n>> On Thu, Aug 6, 2009 at 4:48 PM, Scott Marlowe<[email protected]> wrote:\n>>> Much better... Looks like I got the second one...\n>>>\n>>> Can I get the first one too? Thx.\n>>>\n>>> On Thu, Aug 6, 2009 at 12:46 AM, Ip Wing Kin John<[email protected]> wrote:\n>>>> Hope you can get it this time.\n>>>>\n>>>> John\n>>>>\n>>>> On Thu, Aug 6, 2009 at 4:34 PM, Scott Marlowe<[email protected]> wrote:\n>>>>> Sorry man, it's not coming through. Try it this time addressed just to me.\n>>>>>\n>>>>> On Thu, Aug 6, 2009 at 12:23 AM, Ip Wing Kin John<[email protected]> wrote:\n>>>>>> Hi scott\n>>>>>>\n>>>>>> I attached the query plan with this email. The top one is the first\n>>>>>> run after I restarted my machine. And the bottom one is the second\n>>>>>> run.\n>>>>>>\n>>>>>> I am using PostgreSQL 8.3 on Solaris 10.\n>>>>>>\n>>>>>> cheers\n>>>>>>\n>>>>>> On Thu, Aug 6, 2009 at 4:15 PM, Scott Marlowe<[email protected]> wrote:\n>>>>>>> On Wed, Aug 5, 2009 at 11:21 PM, <[email protected]> wrote:\n>>>>>>>> Sorry post again.\n>>>>>>>\n>>>>>>> Nope, still mangled. Can you attach it?\n>>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> --\n>>>>>> John\n>>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> --\n>>>>> When fascism comes to America, it will be intolerance sold as diversity.\n>>>>>\n>>>>\n>>>>\n>>>>\n>>>> --\n>>>> John\n>>>>\n>>>\n>>>\n>>>\n>>> --\n>>> When fascism comes to America, it will be intolerance sold as diversity.\n>>>\n>>\n>>\n>>\n>> --\n>> John\n>>\n>\n>\n>\n> --\n> When fascism comes to America, it will be intolerance sold as diversity.\n>\n\n\n\n-- \nJohn", "msg_date": "Mon, 10 Aug 2009 16:22:00 +1000", "msg_from": "Ip Wing Kin John <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Mon, Aug 10, 2009 at 12:22 AM, Ip Wing Kin John<[email protected]> wrote:\n> Hi Scott,\n>\n> Thanks for you suggestion. I have follow your suggestion by disable\n> nestloop and have a substantial improvement. Takes 51s now. I have\n> attached the new query plan in another file.\n>\n> What I want to ask is, is there any other way to hint the planner to\n> choose to use merge join rather than nested loop by modifying my SQL?\n> I did try to sort my second inner join by the join condition, but the\n> planner still prefer to use nested loop.\n>\n> As I am afraid changing the system wide configuration will have some\n> side effect on my other queries.\n\nYeah, that's more of a troubleshooting procedure than something you'd\nwant to institute system wide. If you must set it for this query, you\ncan do so just before you run it in your connection, then turn it back\non for the rest of your queries. I.e.:\n\nset enable_nestloop=off;\nselect ....;\nset enable_nestloop=on;\n\nI've had one or two big queries in the past that no amount of tuning\nand setting stats target higher and analyzing could force to choose\nthe right plan.\n\nIf you haven't already, try setting the default statistic target\nhigher and re-analyzing to see if that helps. After that you can play\naround a bit with the cost parameters to see what helps. 
Note that\njust like setting enable_nestloop on or off, you can do so for the\ncurrent connection only and not globally, especially while just\ntesting.\n", "msg_date": "Mon, 10 Aug 2009 01:07:40 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Mon, Aug 10, 2009 at 2:22 AM, Ip Wing Kin John<[email protected]> wrote:\n> Hi Scott,\n>\n> Thanks for you suggestion. I have follow your suggestion by disable\n> nestloop and have a substantial improvement. Takes 51s now. I have\n> attached the new query plan in another file.\n>\n> What I want to ask is, is there any other way to hint the planner to\n> choose to use merge join rather than nested loop by modifying my SQL?\n> I did try to sort my second inner join by the join condition, but the\n> planner still prefer to use nested loop.\n>\n> As I am afraid changing the system wide configuration will have some\n> side effect on my other queries.\n>\n> Here is my SQL.\n>\n> select * from dummymediastatus rec INNER JOIN ( SELECT volumeGUID ,\n> MAX(startDatetime) AS msdt FROM dummymediastatus INNER JOIN ( select *\n> from getcurrentguids(1249281281666,'hardware.volume',null,null) ) AS\n> cfg ON ( cfg.getcurrentguids = volumeGUID) WHERE startDatetime <=\n> 1249281281666 GROUP BY volumeGUID ) AS rec2 ON (  rec.volumeGUID =\n> rec2.volumeGUID AND  rec.startDatetime = rec2.msdt ) where  (  ( 1>0\n> and 1>0 )  and  rec.acsGUID in ( SELECT * FROM resolve('acs0') ) )\n> order by rec.startDatetime DESC,rec.id DESC;\n\nIt looks to me like a big chunk of your problem is here:\n\n-> Function Scan on getcurrentguids (cost=0.00..260 .00 rows=1000\nwidth=32) (actual time=977.013..997.404 rows=80000 loops=1)\n\nThe planner's estimate of the number of rows is off by a factor of 80\nhere. You should probably think about inlining the SQL contained\ninside that function, if possible. You might also want to look at the\n\"rows\" setting of CREATE OR REPLACE FUNCTION.\n\nAs tempting as it is to encapsulate some of your logic into a\nset-returning function of some sort, as you've done here, I've found\nthat it tends to suck. Even if you fix the row estimate, the planner\nwill still estimate join selectivity etc. poorly for those rows\nbecause, of course, there are no statistics.\n\n...Robert\n", "msg_date": "Mon, 10 Aug 2009 08:02:02 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" } ]
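Combining the two suggestions above without touching the server-wide configuration, a sketch only: the always-true 1>0 filters are dropped, the function signatures are guesses (match them to the real CREATE FUNCTION statements), and the ROWS attribute needs 8.3 or later:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK
    EXPLAIN ANALYZE
    SELECT *
    FROM dummymediastatus rec
    INNER JOIN ( SELECT volumeGUID, MAX(startDatetime) AS msdt
                 FROM dummymediastatus
                 INNER JOIN ( SELECT * FROM getcurrentguids(1249281281666,'hardware.volume',NULL,NULL) ) AS cfg
                         ON cfg.getcurrentguids = volumeGUID
                 WHERE startDatetime <= 1249281281666
                 GROUP BY volumeGUID ) AS rec2
            ON rec.volumeGUID = rec2.volumeGUID AND rec.startDatetime = rec2.msdt
    WHERE rec.acsGUID IN ( SELECT * FROM resolve('acs0') )
    ORDER BY rec.startDatetime DESC, rec.id DESC;
    COMMIT;

    -- give the planner realistic row counts for the set-returning functions,
    -- which addresses the rows=1000 vs rows=80000 mis-estimate noted above
    ALTER FUNCTION getcurrentguids(bigint, text, text, text) ROWS 80000;
    ALTER FUNCTION resolve(text) ROWS 1;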
[ { "msg_contents": "On Thu, Aug 06, 2009 at 12:50:51PM +1000, Ip Wing Kin John wrote:\n> (running DTrace tool kit iofile.d script to show I/O wait time by\n> filename and process)\n\nIs the dtrace toolkit a viable product for a linux environment or\nis it strickly Sun/Oracle?\n", "msg_date": "Thu, 6 Aug 2009 10:53:07 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "Ray Stell <[email protected]> writes:\n> On Thu, Aug 06, 2009 at 12:50:51PM +1000, Ip Wing Kin John wrote:\n>> (running DTrace tool kit iofile.d script to show I/O wait time by\n>> filename and process)\n\n> Is the dtrace toolkit a viable product for a linux environment or\n> is it strickly Sun/Oracle?\n\ndtrace is available on Solaris and Mac OS X and probably a couple\nother platforms, but not Linux. For Linux there is SystemTap,\nwhich does largely the same kinds of things but has a different\nscripting syntax ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Aug 2009 11:01:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck? " }, { "msg_contents": "On Thu, Aug 06, 2009 at 11:01:52AM -0400, Tom Lane wrote:\n> \n> dtrace is available on Solaris and Mac OS X and probably a couple\n> other platforms, but not Linux. \n\nI wondered if anyone had given this a go:\n\n http://amitksaha.blogspot.com/2009/03/dtrace-on-linux.html\n", "msg_date": "Thu, 6 Aug 2009 11:43:05 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "I wasn't able to compile dtrace on either CentOS 5.3 or Fedora 11. But \nthe author is responsive and the problem doesn't look hard to fix. It \nsits in my inbox awaiting some hacking time...\n\nKen\n\nOn Thu, 06 Aug 2009 11:43:05 -0400, Ray Stell <[email protected]> wrote:\n\n> On Thu, Aug 06, 2009 at 11:01:52AM -0400, Tom Lane wrote:\n>>\n>> dtrace is available on Solaris and Mac OS X and probably a couple\n>> other platforms, but not Linux.\n>\n> I wondered if anyone had given this a go:\n>\n> http://amitksaha.blogspot.com/2009/03/dtrace-on-linux.html\n>\n\n\n\n-- \nUsing Opera's revolutionary e-mail client: http://www.opera.com/mail/\n", "msg_date": "Thu, 06 Aug 2009 11:57:47 -0400", "msg_from": "\"Kenneth Cox\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Thu, 2009-08-06 at 11:57 -0400, Kenneth Cox wrote:\n> I wasn't able to compile dtrace on either CentOS 5.3 or Fedora 11. But \n> the author is responsive and the problem doesn't look hard to fix. It \n> sits in my inbox awaiting some hacking time...\n\nWhy aren't you using systemtap again? As I recall it uses the same\ninterface as dtrace. The front end is just different.\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Thu, 06 Aug 2009 09:12:22 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Thu, Aug 06, 2009 at 09:12:22AM -0700, Joshua D. Drake wrote:\n> Why aren't you using systemtap again? \n\n1. significant solaris responsibilites\n2. significant linux responsibilities\n3. 
tool consolidation delusions\n\nCan you drive dtrace toolkit via systemtap?\n", "msg_date": "Thu, 6 Aug 2009 12:38:13 -0400", "msg_from": "Ray Stell <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bottleneck?" }, { "msg_contents": "On Thu, 2009-08-06 at 12:38 -0400, Ray Stell wrote:\n> On Thu, Aug 06, 2009 at 09:12:22AM -0700, Joshua D. Drake wrote:\n> > Why aren't you using systemtap again? \n> \n> 1. significant solaris responsibilites\n\nThere is your problem right there ;)\n\n> 2. significant linux responsibilities\n> 3. tool consolidation delusions\n\nHah! I know this one.\n\n> \n> Can you drive dtrace toolkit via systemtap?\n> \n\nI don't know. Tom?\n\nJoshua D. Drake\n\n-- \nPostgreSQL - XMPP: [email protected]\n Consulting, Development, Support, Training\n 503-667-4564 - http://www.commandprompt.com/\n The PostgreSQL Company, serving since 1997\n\n", "msg_date": "Thu, 06 Aug 2009 09:52:39 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bottleneck?" } ]
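For completeness, a rough and untested SystemTap analogue of the iofile.d idea quoted at the top of this thread: total bytes read by postgres backends over a ten-second window. It assumes the systemtap runtime plus matching kernel debuginfo are installed (usually the fiddly part), and the probe and aggregate names come from the standard tapset:

    # pg_reads.stp -- untested sketch; run as root with: stap pg_reads.stp
    global bytes
    probe syscall.read.return {
        if (execname() == "postgres" && $return > 0)
            bytes <<< $return
    }
    probe timer.s(10) {
        if (@count(bytes))
            printf("postgres backends read %d bytes in 10s\n", @sum(bytes))
        exit()
    }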
[ { "msg_contents": "PostgreSQL 8.3\nLinux RedHat 4.X\n24G of memory\n\nWhen loading a file generated from pg_dumpall is there a key setting in\nthe configuration file that would allow the load to work faster.\n\nThanks,\n\nLance Campbell\nProject Manager/Software Architect/DBA\nWeb Services at Public Affairs\n217-333-0382\n\n\n\n\n\n\nBest settings to load a fresh database\n\n\n\nPostgreSQL 8.3\nLinux RedHat 4.X\n24G of memory\n\nWhen loading a file generated from pg_dumpall is there a key setting in the configuration file that would allow the load to work faster.\nThanks,\n\nLance Campbell\nProject Manager/Software Architect/DBA\nWeb Services at Public Affairs\n217-333-0382", "msg_date": "Thu, 6 Aug 2009 13:42:06 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best settings to load a fresh database" }, { "msg_contents": "On Thu, Aug 06, 2009 at 01:42:06PM -0500, Campbell, Lance wrote:\n> PostgreSQL 8.3\n> Linux RedHat 4.X\n> 24G of memory\n> \n> When loading a file generated from pg_dumpall is there a key setting in\n> the configuration file that would allow the load to work faster.\n> \n> Thanks,\n> \n> Lance Campbell\n> Project Manager/Software Architect/DBA\n> Web Services at Public Affairs\n> 217-333-0382\n> \n\nI have found that increasing maintenance_work_mem speeds\nindex rebuilds, turn off synchronous_commit or fsync if\nyou really can afford to start over. Another big help is\nto use the parallel pg_restore from PostgreSQL 8.4.0 to\nperform the restore.\n\nCheers,\nKen\n", "msg_date": "Thu, 6 Aug 2009 14:02:03 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best settings to load a fresh database" }, { "msg_contents": "On Thu, Aug 6, 2009 at 12:42 PM, Campbell, Lance<[email protected]> wrote:\n> PostgreSQL 8.3\n> Linux RedHat 4.X\n> 24G of memory\n>\n> When loading a file generated from pg_dumpall is there a key setting in the\n> configuration file that would allow the load to work faster.\n\nThe ones I can think of are cranking up work_mem and\nmaintenance_work_mem and disabling fsync. Be sure to renable fsync\nafterwards if you value your data.\n", "msg_date": "Thu, 6 Aug 2009 13:02:03 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best settings to load a fresh database" }, { "msg_contents": "Kenneth Marshall escreveu:\n> I have found that increasing maintenance_work_mem speeds\n> index rebuilds, turn off synchronous_commit or fsync if\n> you really can afford to start over. Another big help is\n> to use the parallel pg_restore from PostgreSQL 8.4.0 to\n> perform the restore.\n> \nAnd make sure archive mode is turned off. Otherwise, you can't use the WAL\nbypass facility.\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Fri, 07 Aug 2009 04:30:34 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best settings to load a fresh database" } ]
[ { "msg_contents": "Just stumbled across this recent article published in the\nCommunications of the ACM:\n\nhttp://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext\n\nThe author shares some insights relating to difficulties processing a\n6.75 billion-row\ntable, a dummy table representing census-type data for everyone on earth, in\nPostgres.\n\nI'd really like to replicate the author's experiment, but it's not clear from\nthe article what his table definition looks like. He claims to be using a\n16-byte record to store the several columns he needs for each row, so perhaps\nhe's using a user-defined type?\n\nThe author implies with his definition of \"big data\" that the dataset he\nanalyzed is \"... too large to be placed in a relational database... \". From\nFig. 2, the SELECT query he ran took just under 10^5 seconds (~28 hours) when\nrun on 6.75 billion rows. This amount of time for the query didn't seem\nsurprising to me given how many rows he has to process, but in a recent post\non comp.databases.ingres someone claimed that on a far-inferior PC, Ingres\nran the same SELECT query in 105 minutes! This would be very impressive (a\n10-fold improvement over Postgres) if true.\n\nThe author complained that \"on larger tables [Postgres' planner] switched to\nsorting by grouping columns\", which he blamed for the slow query execution. I\ndon't personally see this plan as a problem, but maybe someone can enlighten\nme.\n\nOne intriguing tidbit I picked up from the article: \"in modern systems, as\ndemonstrated in the figure, random access to memory is typically slower than\nsequential access to disk.\" In hindsight, this seems plausible (since modern\ndisks can sustain sequential reads at well over 100MB/sec).\n\nAnyway, it would be very interesting to attempt to speed up the author's query\nif at all possible.\n", "msg_date": "Fri, 7 Aug 2009 16:17:12 -0400", "msg_from": "Josh Kupershmidt <[email protected]>", "msg_from_op": true, "msg_subject": "PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "On Fri, Aug 7, 2009 at 9:17 PM, Josh Kupershmidt<[email protected]> wrote:\n> Just stumbled across this recent article published in the\n> Communications of the ACM:\n>\n> http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext\n>\n> The author shares some insights relating to difficulties processing a\n> 6.75 billion-row\n> table, a dummy table representing census-type data for everyone on earth, in\n> Postgres.\n>\n> I'd really like to replicate the author's experiment, but it's not clear from\n> the article what his table definition looks like. He claims to be using a\n> 16-byte record to store the several columns he needs for each row, so perhaps\n> he's using a user-defined type?\n\nor four integers, or who knows. Postgres's per-row overhead is 24\nbytes plus a 16-bit line pointer so you're talking about 42 bytes per\nrow. There's per-page overhead and alignment but in this case it\nshouldn't be much.\n\n\n\n> The author implies with his definition of \"big data\" that the dataset he\n> analyzed is \"... too large to be placed in a relational database... \". From\n> Fig. 2, the SELECT query he ran took just under 10^5 seconds (~28 hours) when\n> run on 6.75 billion rows. This amount of time for the query didn't seem\n> surprising to me given how many rows he has to process, but in a recent post\n> on comp.databases.ingres someone claimed that on a far-inferior PC, Ingres\n> ran the same SELECT query in 105 minutes! 
This would be very impressive (a\n> 10-fold improvement over Postgres) if true.\n\n6.75 billion * 42 bytes is 283.5GB.\n\nAssuming you stick that on a single spindle capable of 100MB/s:\n\nYou have: 283.5GB / (100MB/s)\nYou want: min\n * 47.25\n\nSo something's not adding up.\n\n> One intriguing tidbit I picked up from the article: \"in modern systems, as\n> demonstrated in the figure, random access to memory is typically slower than\n> sequential access to disk.\" In hindsight, this seems plausible (since modern\n> disks can sustain sequential reads at well over 100MB/sec).\n\nSure, but the slowest PCIe bus can sustain 1GB/s and your memory\nbandwidth is probably at least 8GB/s.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Sat, 8 Aug 2009 01:28:20 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "On Fri, Aug 7, 2009 at 2:17 PM, Josh Kupershmidt<[email protected]> wrote:\n> Just stumbled across this recent article published in the\n> Communications of the ACM:\n>\n> http://cacm.acm.org/magazines/2009/8/34493-the-pathologies-of-big-data/fulltext\n>\n> The author shares some insights relating to difficulties processing a\n> 6.75 billion-row\n> table, a dummy table representing census-type data for everyone on earth, in\n> Postgres.\n>\n> I'd really like to replicate the author's experiment, but it's not clear from\n> the article what his table definition looks like. He claims to be using a\n> 16-byte record to store the several columns he needs for each row, so perhaps\n> he's using a user-defined type?\n>\n> The author implies with his definition of \"big data\" that the dataset he\n> analyzed is \"... too large to be placed in a relational database... \". From\n> Fig. 2, the SELECT query he ran took just under 10^5 seconds (~28 hours) when\n> run on 6.75 billion rows. This amount of time for the query didn't seem\n> surprising to me given how many rows he has to process, but in a recent post\n> on comp.databases.ingres someone claimed that on a far-inferior PC, Ingres\n> ran the same SELECT query in 105 minutes! This would be very impressive (a\n> 10-fold improvement over Postgres) if true.\n\nWell, from the article, I got the feeling he never showed up here on\nthe list to ask for help, and he just assumed he knew enough about\npostgresql to say it couldn't scale well. I just checked the\narchives, and his name doesn't show up.\n\nWhen you look at his slides, this one makes we wonder about a few points:\n\nhttp://deliveryimages.acm.org/10.1145/1540000/1536632/figs/f3.jpg\n\nHe was using 8 15kSAS in RAID-5. Just the fact that he's using RAID-5\nto test makes me wonder, but for his mostly-read workload it's useful.\n But on his machine he was only getting 53MB/second sequential reads?\nThat makes no sense. I was getting 50MB/s from a 4 disk SATA RAID on\nolder 120G hard drives years ago. SAS drives haven't been around that\nlong really, so I can't imagine having 7 disks (1 for parity) and only\ngetting 53/7 or 7.5MB/second from them. That's horrible. I had 9 Gig\n5.25 full height drives faster than that back in the day, on eight bit\nscsi controllers. His memory read speed was pretty bad too at only\n350MB/s. 
I have a 12 drive RAID-10 that can outrun his memory reads.\nSo I tend to think his OS was setup poorly, or his hardware was\nbroken, or something like that.\n\n> The author complained that \"on larger tables [Postgres' planner] switched to\n> sorting by grouping columns\", which he blamed for the slow query execution. I\n> don't personally see this plan as a problem, but maybe someone can enlighten\n> me.\n\nI'm sure that if he was on faster hardware it might have been quite a\nbit faster. I'd love to try his test on a real server with RAID-10\nand lots of memory. I'm certain I could get the run time down by a\ncouple factors.\n\nI wonder if he cranked up work_mem? I wonder if he even upped shared_buffers?\n\n> One intriguing tidbit I picked up from the article: \"in modern systems, as\n> demonstrated in the figure, random access to memory is typically slower than\n> sequential access to disk.\" In hindsight, this seems plausible (since modern\n> disks can sustain sequential reads at well over 100MB/sec).\n\nThis is generally always true. But his numbers are off by factors for\na modern system. Pentium IIs could sequentially read in the several\nhundreds of megs per second from memory. Any modern piece of kit,\nincluding my laptop, can do much much better than 350Meg/second from\nmemory.\n\nI wonder if he'd make his work available to mess with, as it seems he\ndid a pretty poor job setting up his database server / OS for this\ntest. At the very least I wonder if he has a colleague on this list\nwho might point him to us so we can try to help him improve the dismal\nperformance he seems to be getting. Or maybe he could just google\n\"postgresql performance tuning\" and take it from there...\n", "msg_date": "Fri, 7 Aug 2009 18:40:55 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "Oh I just noticed his graphic is \"values per second\" but he had\noriginally said they were 16 bit values. Even if they were 32 or 64\nbit values, I'd expect way more than what he's getting there.\n\nOn Fri, Aug 7, 2009 at 6:40 PM, Scott Marlowe<[email protected]> wrote:\n> Well, from the article, I got the feeling he never showed up here on\n> the list to ask for help, and he just assumed he knew enough about\n> postgresql to say it couldn't scale well.  I just checked the\n> archives, and his name doesn't show up.\n>\n> When you look at his slides, this one makes we wonder about a few points:\n>\n> http://deliveryimages.acm.org/10.1145/1540000/1536632/figs/f3.jpg\n>\n", "msg_date": "Fri, 7 Aug 2009 18:42:30 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "Well, there is CPU overhead for reading postgres pages and tuples. On a\ndisk subsystem that gets 1GB/sec sequential reads, I can't get more than\nabout 700MB/sec of I/O and on a select count(*) query on very large tables\nwith large rows (600 bytes) and its closer to 300MB/sec if the rows are\nsmaller (75 bytes). In both cases it is CPU bound with little i/o wait and\ndisk utilization under 65% in iostat.\n\nI also get over 13GB/sec to RAM from a single thread (Nehalem processor).\n\nI don't see how on any recent hardware, random access to RAM is slower than\nsequential from disk. 
RAM access, random or not, is measured in GB/sec...\n\n\n\nOn 8/7/09 5:42 PM, \"Scott Marlowe\" <[email protected]> wrote:\n\n> Oh I just noticed his graphic is \"values per second\" but he had\n> originally said they were 16 bit values. Even if they were 32 or 64\n> bit values, I'd expect way more than what he's getting there.\n> \n> On Fri, Aug 7, 2009 at 6:40 PM, Scott Marlowe<[email protected]> wrote:\n>> Well, from the article, I got the feeling he never showed up here on\n>> the list to ask for help, and he just assumed he knew enough about\n>> postgresql to say it couldn't scale well.  I just checked the\n>> archives, and his name doesn't show up.\n>> \n>> When you look at his slides, this one makes we wonder about a few points:\n>> \n>> http://deliveryimages.acm.org/10.1145/1540000/1536632/figs/f3.jpg\n>> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Fri, 7 Aug 2009 18:34:41 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "On Fri, Aug 7, 2009 at 7:34 PM, Scott Carey<[email protected]> wrote:\n> Well, there is CPU overhead for reading postgres pages and tuples.  On a\n> disk subsystem that gets 1GB/sec sequential reads, I can't get more than\n> about 700MB/sec of I/O and on a select count(*) query on very large tables\n> with large rows (600 bytes) and its closer to 300MB/sec if the rows are\n> smaller (75 bytes). In both cases it is CPU bound with little i/o wait and\n> disk utilization under 65% in iostat.\n>\n> I also get over 13GB/sec to RAM from a single thread (Nehalem processor).\n>\n> I don't see how on any recent hardware, random access to RAM is slower than\n> sequential from disk.  RAM access, random or not, is measured in GB/sec...\n\nI don't think anybody's arguing that.\n", "msg_date": "Fri, 7 Aug 2009 21:03:45 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" }, { "msg_contents": "\n>> I don't see how on any recent hardware, random access to RAM is slower \n>> than\n>> sequential from disk.  RAM access, random or not, is measured in \n>> GB/sec...\n>\n> I don't think anybody's arguing that.\n\nhttp://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2795&p=5\n\nThese guys mention about 50 ns memory latency ; this would translate into \n20 million memory \"seeks\" per second, which is in the same ballpark as the \nnumbers given by the article...\n\nIf you count 10GB/s bandwidth, 50 ns is the time to fetch 500 bytes.\n", "msg_date": "Sat, 08 Aug 2009 11:26:34 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG-related ACM Article: \"The Pathologies of Big Data\"" } ]
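One way to sanity-check the arithmetic in this thread is to measure the per-row footprint directly. The table below is purely hypothetical (the article never shows the real definition), but four integer columns match the 16-byte payload under discussion, and pg_relation_size reports the on-disk cost including tuple headers and page overhead:

    -- hypothetical 16-byte-payload rows; measures actual bytes per row on disk
    CREATE TABLE bigdata_sample (a integer, b integer, c integer, d integer);
    INSERT INTO bigdata_sample
        SELECT i, i, i, i FROM generate_series(1, 1000000) AS g(i);
    SELECT pg_size_pretty(pg_relation_size('bigdata_sample')) AS table_size,
           pg_relation_size('bigdata_sample') / 1000000.0     AS bytes_per_row;

Scaled to 6.75 billion rows this lands in the 280-300 GB range computed above, i.e. on the order of an hour of pure sequential reading at 100 MB/s, which is part of why the thread suspects the plan and the hardware setup rather than the row count itself.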
[ { "msg_contents": "Hi Everyone,\n\nI manage a freeBSD server that is dedicated to postgresql. The\nmachine has 4 gigs of ram and there is a single database powering a\nweb application that is hosted on a neighboring machine. The web\napplication is mostly reading the database but there are considerable\nwrites and I don't want to tune the machine exclusively for writes. I\nrealize more information would be needed to optimally tune the machine\nbut I am seeking advice on making some sane kernel settings for a\ngeneral purpose database on a dedicated system. Currently I have:\n\n$ cat /etc/sysctl.conf\n\nkern.ipc.shmmax=268435456\nkern.ipc.shmall=65536\n\nand\n\n$ cat /boot/loader.conf\nkern.ipc.semmni=\"256\"\nkern.ipc.semmns=\"512\"\nkern.ipc.semmnu=\"256\"\n\nIn postgresql.conf I have:\n\nmax_connections = 180\nshared_buffers = 28MB\n\nI would like to increase this to 256 connections and make sure the\nkernel settings are giving postgresql enough breathing room without.\nI suspect my settings are conservative and since the machine is\ndedicated to postgresql I would like to give it more resources if they\ncould be used. Any suggestions?\n\nculley\n", "msg_date": "Fri, 7 Aug 2009 14:24:21 -0700", "msg_from": "Culley Harrelson <[email protected]>", "msg_from_op": true, "msg_subject": "Need suggestions on kernel settings for dedicated FreeBSD/Postgresql\n\tmachine" }, { "msg_contents": "On Fri, Aug 7, 2009 at 5:24 PM, Culley Harrelson<[email protected]> wrote:\n> Hi Everyone,\n>\n> I manage a freeBSD server that is dedicated to postgresql.  The\n> machine has 4 gigs of ram and there is a single database powering a\n> web application that is hosted on a neighboring machine.  The web\n> application is mostly reading the database but there are considerable\n> writes and I don't want to tune the machine exclusively for writes.  I\n> realize more information would be needed to optimally tune the machine\n> but I am seeking advice on making some sane kernel settings for a\n> general purpose database on a dedicated system.  Currently I have:\n>\n> $ cat /etc/sysctl.conf\n>\n> kern.ipc.shmmax=268435456\n> kern.ipc.shmall=65536\n>\n> and\n>\n> $ cat /boot/loader.conf\n> kern.ipc.semmni=\"256\"\n> kern.ipc.semmns=\"512\"\n> kern.ipc.semmnu=\"256\"\n>\n> In postgresql.conf I have:\n>\n> max_connections = 180\n> shared_buffers = 28MB\n>\n> I would like to increase this to 256 connections and make sure the\n> kernel settings are giving postgresql enough breathing room without.\n> I suspect my settings are conservative and since the machine is\n> dedicated to postgresql I would like to give it more resources if they\n> could be used.  Any suggestions?\n\nThis might be worth a look, for starters.\n\nhttp://pgfoundry.org/projects/pgtune/\n\n...Robert\n", "msg_date": "Sat, 8 Aug 2009 23:40:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need suggestions on kernel settings for dedicated\n\tFreeBSD/Postgresql machine" }, { "msg_contents": "I will definitely look into this. I suspect I need to tune my kernel\nsettings first though...\n\nculley\n\nOn Sat, Aug 8, 2009 at 8:40 PM, Robert Haas<[email protected]> wrote:\n> On Fri, Aug 7, 2009 at 5:24 PM, Culley Harrelson<[email protected]> wrote:\n>> Hi Everyone,\n>>\n>> I manage a freeBSD server that is dedicated to postgresql.  The\n>> machine has 4 gigs of ram and there is a single database powering a\n>> web application that is hosted on a neighboring machine.  
The web\n>> application is mostly reading the database but there are considerable\n>> writes and I don't want to tune the machine exclusively for writes.  I\n>> realize more information would be needed to optimally tune the machine\n>> but I am seeking advice on making some sane kernel settings for a\n>> general purpose database on a dedicated system.  Currently I have:\n>>\n>> $ cat /etc/sysctl.conf\n>>\n>> kern.ipc.shmmax=268435456\n>> kern.ipc.shmall=65536\n>>\n>> and\n>>\n>> $ cat /boot/loader.conf\n>> kern.ipc.semmni=\"256\"\n>> kern.ipc.semmns=\"512\"\n>> kern.ipc.semmnu=\"256\"\n>>\n>> In postgresql.conf I have:\n>>\n>> max_connections = 180\n>> shared_buffers = 28MB\n>>\n>> I would like to increase this to 256 connections and make sure the\n>> kernel settings are giving postgresql enough breathing room without.\n>> I suspect my settings are conservative and since the machine is\n>> dedicated to postgresql I would like to give it more resources if they\n>> could be used.  Any suggestions?\n>\n> This might be worth a look, for starters.\n>\n> http://pgfoundry.org/projects/pgtune/\n>\n> ...Robert\n>\n", "msg_date": "Sun, 9 Aug 2009 06:37:48 -0700", "msg_from": "Culley Harrelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need suggestions on kernel settings for dedicated\n\tFreeBSD/Postgresql machine" }, { "msg_contents": "Culley Harrelson wrote:\n> I will definitely look into this. I suspect I need to tune my kernel\n> settings first though...\n\nNo, not much. Sysctl and loader.conf settings are enough.\n\n>>> $ cat /etc/sysctl.conf\n>>>\n>>> kern.ipc.shmmax=268435456\n>>> kern.ipc.shmall=65536\n\nshmmax is in bytes, so this is 256 MB - way too low.\nshmall is in pages, so this is 256 MB also - which is in sync with the \nabove but will fall apart if some other service needs shm memory.\n\nSet shmall to 2 GB and shmmax to 1.9 GB.\n\n>>> $ cat /boot/loader.conf\n>>> kern.ipc.semmni=\"256\"\n>>> kern.ipc.semmns=\"512\"\n>>> kern.ipc.semmnu=\"256\"\n\nI think these are way too low also. I use 10240 and 16384 for semmni and \nsemmns habitually but these might be overtuned :)\n\n>>> In postgresql.conf I have:\n>>>\n>>> max_connections = 180\n>>> shared_buffers = 28MB\n\nDefinitely too low and out of sync with the above settings.\n\nSet shared_buffers to around 1800 MB or 1900 MB.\n\nThese settings are a good start, but you can find many tutorials and \ndocuments on tuning pgsql if you search around.\n\n", "msg_date": "Tue, 18 Aug 2009 16:12:17 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need suggestions on kernel settings for dedicated\n\tFreeBSD/Postgresql machine" } ]
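For anyone wanting to turn the advice above into concrete files, here is one possible end state for the 4 GB machine described in the original post. It is only a sketch following the numbers suggested in the last reply, and the shmall arithmetic assumes a 4 kB page size; leave headroom for the OS and anything else running on the box. The loader.conf tunables only take effect after a reboot, and PostgreSQL must be restarted to pick up a larger shared_buffers.

# /etc/sysctl.conf  (sketch for a 4 GB box, per the advice above)
kern.ipc.shmmax=2040109465       # about 1.9 GB, in bytes
kern.ipc.shmall=524288           # 2 GB expressed in 4096-byte pages

# /boot/loader.conf  (suggested semaphore limits; possibly more than needed)
kern.ipc.semmni="10240"
kern.ipc.semmns="16384"

# postgresql.conf
max_connections = 256
shared_buffers = 1800MB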
[ { "msg_contents": "Hello,\n\nI'm trying to optimize the follow query which returns the top users\nordered by ranking. I'll show you my schema and \"explain analyze\" for\neach case.\n\nSo, i'm asking two things:\n\n1) Why \"ranking\" index is not used in the second query when sorting.\n2) Am i missing some obvious optimization like a missing index? :)\n\nSchemas:\n\n# \\d ranking\n Table \"public.ranking\"\n Column | Type | Modifiers\n-----------+-----------------------+-----------\n ranking | bigint |\n score | double precision |\n username | character varying(20) | not null\n variation | bigint |\nIndexes:\n \"ranking_tmp_pkey1\" PRIMARY KEY, btree (username)\n \"idxrank_6057\" btree (ranking) CLUSTER\n\n\n# \\d user\n Table \"public.user\"\n Column | Type | Modifiers\n------------+-----------------------+---------------------------------------------------\n id | integer | not null default\nnextval('user_id_seq'::regclass)\n username | character varying(20) | not null\n about | text |\n name | character varying(50) |\n photo | text |\n country_id | integer |\nIndexes:\n \"user_pkey\" PRIMARY KEY, btree (username)\n \"country_ranking_user_idx\" btree (country_id)\n\n\nExplain:\n\n# explain analyze SELECT * FROM \"ranking\" INNER JOIN \"user\" ON\n(\"ranking\".\"username\" = \"user\".\"username\") WHERE \"user\".\"country_id\" =\n1 ORDER BY \"ranking\".\"ranking\" ASC LIMIT 100;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=13.03..13.04 rows=1 width=180) (actual\ntime=965.229..965.302 rows=100 loops=1)\n -> Sort (cost=13.03..13.04 rows=1 width=180) (actual\ntime=965.227..965.256 rows=100 loops=1)\n Sort Key: ranking.ranking\n Sort Method: top-N heapsort Memory: 56kB\n -> Nested Loop (cost=0.00..13.02 rows=1 width=180) (actual\ntime=0.049..900.847 rows=57309 loops=1)\n -> Index Scan using country_ranking_user_idx on \"user\"\n (cost=0.00..6.49 rows=1 width=145) (actual time=0.023..57.633\nrows=57309 loops=1)\n Index Cond: (country_id = 1)\n -> Index Scan using ranking_tmp_pkey1 on ranking\n(cost=0.00..6.52 rows=1 width=35) (actual time=0.013..0.013 rows=1\nloops=57309)\n Index Cond: ((ranking.username)::text =\n(\"user\".username)::text)\n Total runtime: 965.412 ms\n(10 rows)\n\n# explain analyze SELECT * FROM \"ranking\" INNER JOIN \"user\" ON\n(\"ranking\".\"username\" = \"user\".\"username\") ORDER BY\n\"ranking\".\"ranking\" ASC LIMIT 100;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..137.02 rows=100 width=180) (actual\ntime=0.056..1.973 rows=100 loops=1)\n -> Nested Loop (cost=0.00..3081316.65 rows=2248753 width=180)\n(actual time=0.055..1.921 rows=100 loops=1)\n -> Index Scan using idxrank_6057 on ranking\n(cost=0.00..70735.73 rows=2248753 width=35) (actual time=0.021..0.076\nrows=100 loops=1)\n -> Index Scan using user_pkey on \"user\" (cost=0.00..1.33\nrows=1 width=145) (actual time=0.016..0.017 rows=1 loops=100)\n Index Cond: ((\"user\".username)::text = (ranking.username)::text)\n Total runtime: 2.043 ms\n(6 rows)\n\n\nThanks!\nFz\n", "msg_date": "Sat, 8 Aug 2009 03:02:47 -0300", "msg_from": "Fizu <[email protected]>", "msg_from_op": true, "msg_subject": "ORDER BY ... 
LIMIT and JOIN" }, { "msg_contents": "On Saturday 08 August 2009 08:02:47 Fizu wrote:\n> -> Index Scan using country_ranking_user_idx on \"user\"\n> (cost=0.00..6.49 rows=1 width=145) (actual time=0.023..57.633\n> rows=57309 loops=1)\n> Index Cond: (country_id = 1)\n\nThe planner is expecting one user with country_id = 1, but instead there are \n57309. Have you analyzed recently? Maybe increasing the statistics target will \nhelp.\n\n/Michael\n", "msg_date": "Sat, 8 Aug 2009 19:09:47 +0200", "msg_from": "Michael Andreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT and JOIN" }, { "msg_contents": "On Sat, Aug 8, 2009 at 2:09 PM, Michael Andreen<[email protected]> wrote:\n> The planner is expecting one user with country_id = 1, but instead there are\n> 57309. Have you analyzed recently? Maybe increasing the statistics target will\n> help.\n>\n> /Michael\n\n\nJust after analyze user and ranking it still taking so long to order\nby an indexed field.\n\n# explain analyze SELECT * FROM \"ranking\" INNER JOIN \"user\" ON\n(\"ranking\".\"username\" = \"user\".\"username\") WHERE \"user\".\"country_id\" =\n5 ORDER BY \"ranking\".\"ranking\" ASC LIMIT 100;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=15340.13..15340.38 rows=100 width=178) (actual\ntime=4955.795..4955.865 rows=100 loops=1)\n -> Sort (cost=15340.13..15343.69 rows=1425 width=178) (actual\ntime=4955.794..4955.820 rows=100 loops=1)\n Sort Key: ranking.ranking\n Sort Method: top-N heapsort Memory: 56kB\n -> Nested Loop (cost=0.00..15285.67 rows=1425 width=178)\n(actual time=20.951..4952.337 rows=1972 loops=1)\n -> Index Scan using country_ranking_user_idx on \"user\"\n (cost=0.00..4807.25 rows=1710 width=143) (actual\ntime=20.923..4898.931 rows=1972 loops=1)\n Index Cond: (country_id = 5)\n -> Index Scan using ranking_tmp_pkey on ranking\n(cost=0.00..6.12 rows=1 width=35) (actual time=0.024..0.025 rows=1\nloops=1972)\n Index Cond: ((ranking.username)::text =\n(\"user\".username)::text)\n Total runtime: 4955.974 ms\n(10 rows)\n\n# explain analyze SELECT * FROM \"ranking\" INNER JOIN \"user\" ON\n(\"ranking\".\"username\" = \"user\".\"username\") ORDER BY\n\"ranking\".\"ranking\" ASC LIMIT 100;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..136.78 rows=100 width=178) (actual\ntime=0.058..1.870 rows=100 loops=1)\n -> Nested Loop (cost=0.00..3116910.51 rows=2278849 width=178)\n(actual time=0.056..1.818 rows=100 loops=1)\n -> Index Scan using idxrank_6224 on ranking\n(cost=0.00..71682.17 rows=2278849 width=35) (actual time=0.022..0.065\nrows=100 loops=1)\n -> Index Scan using user_pkey on \"user\" (cost=0.00..1.32\nrows=1 width=143) (actual time=0.015..0.016 rows=1 loops=100)\n Index Cond: ((\"user\".username)::text = (ranking.username)::text)\n Total runtime: 1.946 ms\n(6 rows)\n\n\nThank you!\nM\n", "msg_date": "Sun, 9 Aug 2009 16:26:08 -0300", "msg_from": "Fizu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: ORDER BY ... 
LIMIT and JOIN" }, { "msg_contents": "On Sunday 09 August 2009 21:26:08 Fizu wrote:\n> -> Index Scan using country_ranking_user_idx on \"user\"\n> (cost=0.00..4807.25 rows=1710 width=143) (actual\n> time=20.923..4898.931 rows=1972 loops=1)\n> Index Cond: (country_id = 5)\n\nThe statistics looks good now, but almost all the time is still spent on \nfetching users with country_id = 5. The actual ordering is only a tiny part of \nthe full cost. Why it takes time probably depends on your hardware in relation \nto database size. I guess the database doesn't fit in ram? What settings have \nyou changed?\n\nClustering users on country_ranking_user_idx would probably help for this \nspecific case, but if it is a good idea depends on what other queries need to \nbe fast. If the table or indexes are bloated then clustering on any index or \ndoing reindex might do it.\n\n/Michael\n", "msg_date": "Mon, 10 Aug 2009 01:03:57 +0200", "msg_from": "Michael Andreen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT and JOIN" }, { "msg_contents": "On Sun, Aug 9, 2009 at 3:26 PM, Fizu<[email protected]> wrote:\n>               ->  Index Scan using country_ranking_user_idx on \"user\"\n>  (cost=0.00..4807.25 rows=1710 width=143) (actual\n> time=20.923..4898.931 rows=1972 loops=1)\n>                     Index Cond: (country_id = 5)\n\nAn index scan that picks up 1972 rows is taking 5 seconds? I think\nthere must be something wrong with this index. Is it possible that\nsince you apparently weren't analyzing this database, that maybe you\ndidn't vacuum it either? If so, you should probably do a VACUUM FULL\non your database and then a database-wide REINDEX, but at a minimum\nyou should try reindexing this particular index.\n\n...Robert\n", "msg_date": "Sun, 9 Aug 2009 21:30:38 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ORDER BY ... LIMIT and JOIN" } ]
[ { "msg_contents": "All,\n\nI've just been tweaking some autovac settings for a large database, and\ncame to wonder: why does vacuum_max_freeze_age default to such a high\nnumber? What's the logic behind that?\n\nAFAIK, you want max_freeze_age to be the largest possible interval of\nXIDs where an existing transaction might still be in scope, but no\nlarger. Yes?\n\nIf that's the case, I'd assert that users who do actually go through\n100M XIDs within a transaction window are probably doing some\nhand-tuning. And we could lower the default for most users\nconsiderably, such as to 1 million.\n\nHave I missed something?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 11 Aug 2009 14:14:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "On 8/11/09 2:14 PM, Josh Berkus wrote:\n> All,\n> \n> I've just been tweaking some autovac settings for a large database, and\n> came to wonder: why does vacuum_max_freeze_age default to such a high\n> number? What's the logic behind that?\n> \n> AFAIK, you want max_freeze_age to be the largest possible interval of\n> XIDs where an existing transaction might still be in scope, but no\n> larger. Yes?\n> \n> If that's the case, I'd assert that users who do actually go through\n> 100M XIDs within a transaction window are probably doing some\n> hand-tuning. And we could lower the default for most users\n> considerably, such as to 1 million.\n\n(replying to myself) actually, we don't want to set FrozenXID until the\nrow is not likely to be modified again. However, for most small-scale\ninstallations (ones where the user has not done any tuning) that's still\nlikely to be less than 100m transactions.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 11 Aug 2009 14:23:59 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "On Tue, Aug 11, 2009 at 5:23 PM, Josh Berkus<[email protected]> wrote:\n> On 8/11/09 2:14 PM, Josh Berkus wrote:\n>> All,\n>>\n>> I've just been tweaking some autovac settings for a large database, and\n>> came to wonder: why does vacuum_max_freeze_age default to such a high\n>> number?  What's the logic behind that?\n>>\n>> AFAIK, you want max_freeze_age to be the largest possible interval of\n>> XIDs where an existing transaction might still be in scope, but no\n>> larger.  Yes?\n>>\n>> If that's the case, I'd assert that users who do actually go through\n>> 100M XIDs within a transaction window are probably doing some\n>> hand-tuning.  And we could lower the default for most users\n>> considerably, such as to 1 million.\n>\n> (replying to myself) actually, we don't want to set FrozenXID until the\n> row is not likely to be modified again.  However, for most small-scale\n> installations (ones where the user has not done any tuning) that's still\n> likely to be less than 100m transactions.\n\nI don't think that's the name of the parameter, since a Google search\ngives zero hits. There are so many fiddly parameters for this thing\nthat I don't want to speculate about which one you meant.\n\n...Robert\n", "msg_date": "Tue, 11 Aug 2009 17:45:55 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" 
}, { "msg_contents": "\n> I don't think that's the name of the parameter, since a Google search\n> gives zero hits. There are so many fiddly parameters for this thing\n> that I don't want to speculate about which one you meant.\n\nSorry, subject line had it correct.\n\nhttp://www.postgresql.org/docs/8.4/static/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Tue, 11 Aug 2009 15:06:54 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> I've just been tweaking some autovac settings for a large database, and\n> came to wonder: why does vacuum_max_freeze_age default to such a high\n> number? What's the logic behind that?\n\n(1) not destroying potentially useful forensic evidence too soon;\n(2) there's not really much to be gained by reducing it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Aug 2009 20:54:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m? " }, { "msg_contents": "On Tue, Aug 11, 2009 at 6:06 PM, Josh Berkus<[email protected]> wrote:\n>\n>> I don't think that's the name of the parameter, since a Google search\n>> gives zero hits.  There are so many fiddly parameters for this thing\n>> that I don't want to speculate about which one you meant.\n>\n> Sorry, subject line had it correct.\n>\n> http://www.postgresql.org/docs/8.4/static/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE\n\nAh. Yeah, I agree with Tom: how would it help to make this smaller?\nIt seems like that could possibly increase I/O, if the old data is\nchanging at all, but even if it doesn't it I don't see that it saves\nyou anything to freeze it sooner. Generally freezing is unnecessary\npain: if we had 128-bit transaction IDs, I'm guessing that we wouldn't\ncare about freezing or wraparound at all. (Of course that would\ncreate other problems, which is why we don't, but the point is\nfreezing is at best a necessary evil.)\n\n...Robert\n", "msg_date": "Tue, 11 Aug 2009 21:11:37 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Tom Lane schrieb:\n> Josh Berkus <[email protected]> writes:\n>> I've just been tweaking some autovac settings for a large database, and\n>> came to wonder: why does vacuum_max_freeze_age default to such a high\n>> number? What's the logic behind that?\n> \n> (1) not destroying potentially useful forensic evidence too soon;\n> (2) there's not really much to be gained by reducing it.\n\nIf there is not really much to gain by changing the value, why do not \nremove the parameter?\n\nGreetings from germany,\nTorsten\n", "msg_date": "Wed, 12 Aug 2009 08:48:45 +0200", "msg_from": "=?UTF-8?B?VG9yc3RlbiBaw7xobHNkb3JmZg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n \n> (2) there's not really much to be gained by reducing it.\n \nThat depends. The backup techniques I recently posted, using hard\nlinks and rsync, saved us the expense of another ten or twenty TB of\nmirrored SAN archival storage space, and expensive WAN bandwidth\nupgrades. 
In piloting this we found that we were sending our\ninsert-only data over the wire twice -- once after it was inserted and\nonce after it aged sufficiently to be frozen. Aggressive freezing\neffectively cut our bandwidth and storage needs for backup down almost\nby half. (Especially after we made sure we left enough time for the\nVACUUM FREEZE to complete before starting that night's backup\nprocess.)\n \nNot that most people have the same issue, but there are at least\n*some* situations where there is something significant to be gained by\naggressive freezing. Not that this is an argument for changing the\n*default*, of course; if someone is going to venture into these backup\ntechniques, they'd better have the technical savvy to deal with\ntweaking their freeze strategy.\n \n-Kevin\n", "msg_date": "Wed, 12 Aug 2009 13:17:05 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote: \n>> (2) there's not really much to be gained by reducing it.\n \n> That depends. The backup techniques I recently posted, using hard\n> links and rsync, saved us the expense of another ten or twenty TB of\n> mirrored SAN archival storage space, and expensive WAN bandwidth\n> upgrades. In piloting this we found that we were sending our\n> insert-only data over the wire twice -- once after it was inserted and\n> once after it aged sufficiently to be frozen. Aggressive freezing\n> effectively cut our bandwidth and storage needs for backup down almost\n> by half. (Especially after we made sure we left enough time for the\n> VACUUM FREEZE to complete before starting that night's backup\n> process.)\n\nHmmm ... if you're using VACUUM FREEZE, its behavior is unaffected by\nthis GUC anyway --- that option makes it use a freeze age of zero.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Aug 2009 17:22:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m? " }, { "msg_contents": "Tom Lane <[email protected]> wrote: \n \n> Hmmm ... if you're using VACUUM FREEZE, its behavior is unaffected\n> by this GUC anyway --- that option makes it use a freeze age of\n> zero.\n \nYeah, I know, but feel like I'm being a bit naughty in using VACUUM\nFREEZE -- the documentation says:\n \n| Selects aggressive \"freezing\" of tuples. Specifying FREEZE is\n| equivalent to performing VACUUM with the vacuum_freeze_min_age\n| parameter set to zero. The FREEZE option is deprecated and will be\n| removed in a future release; set the parameter instead.\n \nSo I figure that since it is deprecated, at some point I'll be setting\nthe vacuum_freeze_min_age option rather than leaving it at the default\nand using VACUUM FREEZE in the nightly maintenance run.\n \n-Kevin\n", "msg_date": "Wed, 12 Aug 2009 16:33:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Yeah, I know, but feel like I'm being a bit naughty in using VACUUM\n> FREEZE -- the documentation says:\n \n> | Selects aggressive \"freezing\" of tuples. Specifying FREEZE is\n> | equivalent to performing VACUUM with the vacuum_freeze_min_age\n> | parameter set to zero. 
The FREEZE option is deprecated and will be\n> | removed in a future release; set the parameter instead.\n \n> So I figure that since it is deprecated, at some point I'll be setting\n> the vacuum_freeze_min_age option rather than leaving it at the default\n> and using VACUUM FREEZE in the nightly maintenance run.\n\nI might be mistaken, but I think the reason we're planning to remove the\noption is mainly so we can get rid of FREEZE as a semi-reserved keyword.\nThe GUC isn't going anywhere.\n\nAnyway, the bottom line is what you said: fooling with this setting\nseems like something that's only needed by advanced users.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Aug 2009 17:57:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m? " }, { "msg_contents": "On Wed, Aug 12, 2009 at 5:57 PM, Tom Lane<[email protected]> wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n>> Yeah, I know, but feel like I'm being a bit naughty in using VACUUM\n>> FREEZE -- the documentation says:\n>\n>> | Selects aggressive \"freezing\" of tuples. Specifying FREEZE is\n>> | equivalent to performing VACUUM with the vacuum_freeze_min_age\n>> | parameter set to zero. The FREEZE option is deprecated and will be\n>> | removed in a future release; set the parameter instead.\n>\n>> So I figure that since it is deprecated, at some point I'll be setting\n>> the vacuum_freeze_min_age option rather than leaving it at the default\n>> and using VACUUM FREEZE in the nightly maintenance run.\n>\n> I might be mistaken, but I think the reason we're planning to remove the\n> option is mainly so we can get rid of FREEZE as a semi-reserved keyword.\n> The GUC isn't going anywhere.\n>\n> Anyway, the bottom line is what you said: fooling with this setting\n> seems like something that's only needed by advanced users.\n\nSomeone had the idea a while back of pre-freezing inserted tuples in\nthe WAL-bypass case.\n\nIt seems like in theory you could have a background process that would\niterate through dirty shared buffers and freeze tuples\nopportunistically before they are written back to disk, but I'm not\nsure that it would really be worth it.\n\n...Robert\n", "msg_date": "Wed, 12 Aug 2009 19:49:00 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Robert Haas <[email protected]> wrote: \n \n> Someone had the idea a while back of pre-freezing inserted tuples in\n> the WAL-bypass case.\n \nI'm sure I'm not the one who thought up the idea and first posted\nabout it, but I'm certainly an advocate for it.\n \n> It seems like in theory you could have a background process that\n> would iterate through dirty shared buffers and freeze tuples\n> opportunistically before they are written back to disk, but I'm not\n> sure that it would really be worth it.\n \nWe have routinely been doing a database-level VACUUM FREEZE after a\npg_dump | psql copy of a database, because:\n \n(1) Otherwise, users experience abysmal performance running routine\nqueries as every tuple scanned has its hint bits set during simple\nSELECT statements. 
The massive disk write levels during SELECTs was\nvery confusing at first, and if you search the archives, I'm sure\nyou'll find that I'm not the only one who's been confused by it.\n \n(2) Otherwise, there looms a point where every tuple restored, which\nis not subsequently updated or deleted, will need to be frozen by\nautovacuum -- all at the same time. Unless you're paying\nextraordinary attention to the issue, you won't know when it is\ncoming, but the day will come. Probably in the middle of some\ntime-critical process which is doing a lot of work.\n \n(3) We want to get this done before starting the WAL archiving, to\nprevent having massive quantities of WAL to transmit across the WAN.\n \n(4) With our improved backup processes we have another reason -- our\nPITR base backup space requirements and WAN bandwidth usage will be\nhigher if we don't start from a frozen state and stay frozen.\n \nSo really, we'd be pretty silly *not* to make sure that all tuples are\nfrozen and have hint bits set after a pg_dump | psql copy. It would\nspeed the process somewhat if the tuples could be written in that\nstate to start with.\n \n-Kevin\n", "msg_date": "Thu, 13 Aug 2009 09:32:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Robert,\n\n> Ah. Yeah, I agree with Tom: how would it help to make this smaller?\n> It seems like that could possibly increase I/O, if the old data is\n> changing at all, but even if it doesn't it I don't see that it saves\n> you anything to freeze it sooner. \n\nBefore 8.4, it actually does on tables which are purely cumulative\n(WORM). Within a short time, say, 10,000 transactions, the rows to be\nfrozen are still in the cache. By 100m transactions, they are in an\narchive partition which will need to be dragged from disk. So if I know\nthey won't be altered, then freezing them sooner would be better.\n\nHowever, I can easily manage this through the autovacuum settings. I\njust wanted confirmation of what I was thinking.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 13 Aug 2009 14:15:19 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "[ moving to -hackers ]\n\nIf this topic has been discussed previously, please point me to the\nearlier threads.\n\nWhy aren't we more opportunistic about freezing tuples? For instance, if\nwe already have a dirty buffer in cache, we should be more aggressive\nabout freezing those tuples than freezing tuples on disk.\n\nI looked at the code, and it looks like if we freeze one tuple on the\npage during VACUUM, we mark it dirty. Wouldn't that be a good\nopportunity to freeze all the other tuples on the page that we can?\n\nOr, perhaps when the bgwriter is flushing dirty buffers, it can look for\nopportunities to set hint bits or freeze tuples.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 13 Aug 2009 14:33:00 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "freezing tuples ( was: Why is vacuum_freeze_min_age 100m? )" }, { "msg_contents": "Jeff Davis wrote:\n\n> Why aren't we more opportunistic about freezing tuples? For instance, if\n> we already have a dirty buffer in cache, we should be more aggressive\n> about freezing those tuples than freezing tuples on disk.\n\nThe most widely cited reason is that you lose forensics data. 
Although\nthey are increasingly rare, there are still situations in which the heap\ntuple machinery messes up and the xmin/xmax/etc fields of the tuple are\nthe best/only way to find out what happened and thus fix the bug. If\nyou freeze early, there's just no way to know.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 13 Aug 2009 17:58:04 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "Alvaro Herrera <[email protected]> wrote: \n> Jeff Davis wrote:\n> \n>> Why aren't we more opportunistic about freezing tuples? For\n>> instance, if we already have a dirty buffer in cache, we should be\n>> more aggressive about freezing those tuples than freezing tuples on\n>> disk.\n> \n> The most widely cited reason is that you lose forensics data. \n> Although they are increasingly rare, there are still situations in\n> which the heap tuple machinery messes up and the xmin/xmax/etc\n> fields of the tuple are the best/only way to find out what happened\n> and thus fix the bug. If you freeze early, there's just no way to\n> know.\n \nAlthough I find it hard to believe that this is compelling argument in\nthe case where an entire table or database is loaded in a single\ndatabase transaction.\n \nIn the more general case, I'm not sure why this argument applies here\nbut not to cassert and other diagnostic options. It wouldn't surprise\nme to find workloads where writing data three times (once for the\ndata, once for hint bits, and once to freeze the tid) affects\nperformance more than cassert.\n \n-Kevin\n", "msg_date": "Thu, 13 Aug 2009 17:17:28 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "[PERFORM] Re: freezing tuples ( was: Why is\n\tvacuum_freeze_min_age100m? )" }, { "msg_contents": "On Thu, 2009-08-13 at 17:58 -0400, Alvaro Herrera wrote:\n> The most widely cited reason is that you lose forensics data. Although\n> they are increasingly rare, there are still situations in which the heap\n> tuple machinery messes up and the xmin/xmax/etc fields of the tuple are\n> the best/only way to find out what happened and thus fix the bug. If\n> you freeze early, there's just no way to know.\n\nAs it stands, it looks like it's not just one extra write for each\nbuffer, but potentially many (theoretically, as many as there are tuples\non a page). I suppose the reasoning is that tuples on the same page have\napproximately the same xmin, and are likely to be frozen at the same\ntime. But it seems entirely reasonable that the xmins on one page span\nseveral VACUUM runs, and that seems more likely with the FSM. That means\nthat a few tuples on the page are older than 100M and get frozen, and\nthe rest are only about 95M transactions old, so we have to come back\nand freeze them again, later.\n\nLet's say that we had a range like 50-100M, where if it's older than\n100M, we freeze it, and if it's older than 50M we freeze it only if it's\non a dirty page. We would still have forensic evidence, but we could\nmake a range such that we avoid writing multiple times.\n\nAnd people who don't care about forensic evidence can set it to 0-100M.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 13 Aug 2009 15:20:43 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? 
)" }, { "msg_contents": "On Thu, 2009-08-13 at 17:17 -0500, Kevin Grittner wrote:\n> It wouldn't surprise\n> me to find workloads where writing data three times (once for the\n> data, once for hint bits, and once to freeze the tid)\n\nI'm not sure that we're limited to 3 times, here. I could be missing\nsomething, but if you have tuples with different xmins on the same page,\nsome might be older than 100M, which you freeze, and then you will have\nto come back later to freeze the rest. As far as I can tell, the maximum\nnumber of writes is the number of tuples that fit on the page.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 13 Aug 2009 15:24:21 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Re: freezing tuples ( was: Why is\n\tvacuum_freeze_min_age100m? )" }, { "msg_contents": "On Thu, Aug 13, 2009 at 5:33 PM, Jeff Davis<[email protected]> wrote:\n> Or, perhaps when the bgwriter is flushing dirty buffers, it can look for\n> opportunities to set hint bits or freeze tuples.\n\nOne of the tricky things here is that the time you are mostly likely\nto want to do this is when you are loading a lot of data. But in that\ncase shared buffers are likely to be written back to disk before\ntransaction commit, so it'll be too early to do anything.\n\n...Robert\n", "msg_date": "Thu, 13 Aug 2009 18:25:16 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n\t100m? )" }, { "msg_contents": "\n>> Why aren't we more opportunistic about freezing tuples? For instance, if\n>> we already have a dirty buffer in cache, we should be more aggressive\n>> about freezing those tuples than freezing tuples on disk.\n> \n> The most widely cited reason is that you lose forensics data. Although\n> they are increasingly rare, there are still situations in which the heap\n> tuple machinery messes up and the xmin/xmax/etc fields of the tuple are\n> the best/only way to find out what happened and thus fix the bug. If\n> you freeze early, there's just no way to know.\n\nThat argument doesn't apply. If the page is in memory and is being\nwritten anyway, and some of the rows are past vacuum_freeze_min_age,\nthen why not freeze them rather than waiting for a vacuum process to\nread them off disk and rewrite them?\n\nWe're not talking about freezing every tuple as soon as it's out of\nscope. Just the ones which are more that 100m (or whatever the setting\nis) old. I seriously doubt that anyone is doing useful forensics using\nxids which are 100m old.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 13 Aug 2009 15:35:32 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n\t100m? )" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> Let's say that we had a range like 50-100M, where if it's older than\n> 100M, we freeze it, and if it's older than 50M we freeze it only if it's\n> on a dirty page. We would still have forensic evidence, but we could\n> make a range such that we avoid writing multiple times.\n\nYeah, making the limit \"slushy\" would doubtless save some writes, with\nnot a lot of downside.\n\n> And people who don't care about forensic evidence can set it to 0-100M.\n\nEverybody *thinks* they don't care about forensic evidence. 
Until they\nneed it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Aug 2009 18:46:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) " }, { "msg_contents": "On Thu, 2009-08-13 at 18:46 -0400, Tom Lane wrote:\n> Yeah, making the limit \"slushy\" would doubtless save some writes, with\n> not a lot of downside.\n\nOK, then should we make this a TODO? I'll make an attempt at this.\n\n> > And people who don't care about forensic evidence can set it to 0-100M.\n> \n> Everybody *thinks* they don't care about forensic evidence. Until they\n> need it.\n\nWe already allow setting vacuum_freeze_min_age to zero, so I don't see a\nsolution here other than documentation.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 13 Aug 2009 16:01:08 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> On Thu, 2009-08-13 at 18:46 -0400, Tom Lane wrote:\n>> Everybody *thinks* they don't care about forensic evidence. Until they\n>> need it.\n\n> We already allow setting vacuum_freeze_min_age to zero, so I don't see a\n> solution here other than documentation.\n\nYeah, we allow it. I just don't want to encourage it ... and definitely\nnot make it default.\n\nWhat are you envisioning exactly? If vacuum finds any reason to dirty\na page (or it's already dirty), then freeze everything on the page that's\ngot age > some lower threshold?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Aug 2009 19:05:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) " }, { "msg_contents": "Jeff, Tom,\n\n>> Let's say that we had a range like 50-100M, where if it's older than\n>> 100M, we freeze it, and if it's older than 50M we freeze it only if it's\n>> on a dirty page. We would still have forensic evidence, but we could\n>> make a range such that we avoid writing multiple times.\n> \n> Yeah, making the limit \"slushy\" would doubtless save some writes, with\n> not a lot of downside.\n\nThis would mean two settings: vacuum_freeze_min_age and\nvacuum_freeze_dirty_age. And we'd need to add those to the the\nautovacuum settings for each table as well. While we could just make\none setting 1/2 of the other, that prevents me from saying:\n\n\"freeze this table agressively if it's in memory, but wait a long time\nto vaccuum if it's on disk\"\n\nI can completely imagine a table which has a vacuum_freeze_dirty_age of\n10000 and a vacuum_freeze_min_age of 1m.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 13 Aug 2009 16:07:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "\n> What are you envisioning exactly? 
If vacuum finds any reason to dirty\n> a page (or it's already dirty), then freeze everything on the page that's\n> got age > some lower threshold?\n\nI was envisioning, if the page is already dirty and in memory *for any\nreason*, the freeze rows at below some threshold.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nwww.pgexperts.com\n", "msg_date": "Thu, 13 Aug 2009 16:16:37 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On Thu, 2009-08-13 at 19:05 -0400, Tom Lane wrote:\n> What are you envisioning exactly? If vacuum finds any reason to dirty\n> a page (or it's already dirty), then freeze everything on the page that's\n> got age > some lower threshold?\n\nYes. There are two ways to do the threshold:\n 1. Constant fraction of vacuum_freeze_min_age\n 2. Extra GUC\n\nI lean toward #1, because it avoids an extra GUC*, and it avoids the\nawkwardness when the \"lower\" setting is higher than the \"higher\"\nsetting.\n\nHowever, #2 might be nice for people who want to live on the edge or\nexperiment with new values. But I suspect most of the advantage would be\nhad just by saying that we opportunistically freeze tuples older than\n50% of vacuum_freeze_min_age.\n\nRegards,\n\tJeff Davis\n\n*: As an aside, these GUCs already have incredibly confusing names, and\nan extra variable would increase the confusion. For instance, they seem\nto use \"min\" and \"max\" interchangeably.\n\n", "msg_date": "Thu, 13 Aug 2009 16:20:23 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On Thu, 2009-08-13 at 18:25 -0400, Robert Haas wrote:\n> On Thu, Aug 13, 2009 at 5:33 PM, Jeff Davis<[email protected]> wrote:\n> > Or, perhaps when the bgwriter is flushing dirty buffers, it can look for\n> > opportunities to set hint bits or freeze tuples.\n> \n> One of the tricky things here is that the time you are mostly likely\n> to want to do this is when you are loading a lot of data. But in that\n> case shared buffers are likely to be written back to disk before\n> transaction commit, so it'll be too early to do anything.\n\nI think it would be useful in other cases, like avoiding repeated\nfreezing of different tuples on the same page.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 13 Aug 2009 16:21:09 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On Fri, Aug 14, 2009 at 12:07 AM, Josh Berkus<[email protected]> wrote:\n> \"freeze this table agressively if it's in memory, but wait a long time\n> to vaccuum if it's on disk\"\n\nWaitasec, \"in memory\"?\n\nThere are two projects here:\n\n1) Make vacuum when it's freezing tuples freeze every tuple > lesser\nage if it finds any tuples which are > max_age (or I suppose if the\npage is already dirty due to vacuum or something else). Vacuum still\nhas to read in all the pages before it finds out that they don't need\nfreezing so it doesn't mean distinguishing \"in memory\" from \"needs to\nbe read in\".\n\n2) Have something like bgwriter check if the page is dirty and vacuum\nand freeze things based on the lesser threshold. This would\neffectively only be vacuuming things that are \"in memory\"\n\nHowever the latter is a more complex and frought project. 
We looked at\nthis a while back in EDB and we found that the benefits were less than\nwe expected and the complexities more than we expected. I would\nrecommend sticking with (1) for now and only looking at (2) if we have\na more detailed plan and solid testable use cases.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 14 Aug 2009 00:21:47 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? )" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n>> What are you envisioning exactly? If vacuum finds any reason to dirty\n>> a page (or it's already dirty), then freeze everything on the page that's\n>> got age > some lower threshold?\n\n> I was envisioning, if the page is already dirty and in memory *for any\n> reason*, the freeze rows at below some threshold.\n\nI believe we've had this discussion before. I do *NOT* want freezing\noperations pushed into any random page access, and in particular will\ndo my best to veto any attempt to put them into the bgwriter. Freezing\nrequires accessing the clog and emitting a WAL record, and neither is\nappropriate for low-level code like bgwriter. The deadlock potential\nalone is sufficient reason why not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Aug 2009 19:21:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) " }, { "msg_contents": "On Fri, Aug 14, 2009 at 12:21 AM, Tom Lane<[email protected]> wrote:\n>> I was envisioning, if the page is already dirty and in memory *for any\n>> reason*, the freeze rows at below some threshold.\n>\n> I believe we've had this discussion before.  I do *NOT* want freezing\n> operations pushed into any random page access, and in particular will\n> do my best to veto any attempt to put them into the bgwriter.\n\nIt's possible Josh accidentally waved this red flag and really meant\njust to make it conditional on whether the page is dirty rather than\non whether vacuum dirtied it.\n\nHowever he did give me a thought....\n\nWith the visibility map vacuum currently only covers pages that are\nknown to have in-doubt tuples. That's why we have the anti-wraparound\nvacuums. However it could also check if the pages its skipping are in\nmemory and process them if they are even if they don't have in-doubt\ntuples.\n\nOr it could first go through ram and process any pages that are in\ncache before going to the visibility map and starting from page 0,\nwhich would hopefully avoid having to read them in later when we get\nto them and find they've been flushed out.\n\nI'm just brainstorming here. I'm not sure if either of these are\nactually worth the complexity and danger of finding new bottlenecks in\nspecial case optimization codepaths.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 14 Aug 2009 00:31:15 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? )" }, { "msg_contents": "On Thu, Aug 13, 2009 at 5:15 PM, Josh Berkus<[email protected]> wrote:\n> Robert,\n>\n>> Ah.  
Yeah, I agree with Tom: how would it help to make this smaller?\n>> It seems like that could possibly increase I/O, if the old data is\n>> changing at all, but even if it doesn't it I don't see that it saves\n>> you anything to freeze it sooner.\n>\n> Before 8.4, it actually does on tables which are purely cumulative\n> (WORM).  Within a short time, say, 10,000 transactions, the rows to be\n> frozen are still in the cache.  By 100m transactions, they are in an\n> archive partition which will need to be dragged from disk.  So if I know\n> they won't be altered, then freezing them sooner would be better.\n>\n> However, I can easily manage this through the autovacuum settings.  I\n> just wanted confirmation of what I was thinking.\n\nInteresting. Thanks for the explanation.\n\n...Robert\n", "msg_date": "Thu, 13 Aug 2009 23:11:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is vacuum_freeze_min_age 100m?" }, { "msg_contents": "Jeff Davis <[email protected]> writes:\n> Yes. There are two ways to do the threshold:\n> 1. Constant fraction of vacuum_freeze_min_age\n> 2. Extra GUC\n\n> I lean toward #1, because it avoids an extra GUC*, and it avoids the\n> awkwardness when the \"lower\" setting is higher than the \"higher\"\n> setting.\n\nI tend to agree with Josh that you do need to offer two knobs. But\nexpressing the second knob as a fraction (with range 0 to 1) might be\nbetter than an independent \"min\" parameter. As you say, that'd be\nuseful to prevent people from setting them inconsistently.\n\n> *: As an aside, these GUCs already have incredibly confusing names, and\n> an extra variable would increase the confusion. For instance, they seem\n> to use \"min\" and \"max\" interchangeably.\n\nSome of them are in fact max's, I believe. They are complicated :-(.\nIt might be worth somebody taking two steps back and seeing if we need\nquite so many knobs. I think we got here partly by not wanting to\npredetermine vacuuming strategies, but it doesn't help to offer\nflexibility if people can't figure out how to use it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Aug 2009 14:37:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age 100m? ) " }, { "msg_contents": "On Fri, 2009-08-14 at 14:37 -0400, Tom Lane wrote:\n> I tend to agree with Josh that you do need to offer two knobs. But\n> expressing the second knob as a fraction (with range 0 to 1) might be\n> better than an independent \"min\" parameter. As you say, that'd be\n> useful to prevent people from setting them inconsistently.\n\nOk. Any ideas for a name?\n\nJosh suggests \"vacuum_freeze_dirty_age\" (or perhaps he was using at as a\nplaceholder). I don't particularly like that name, but I can't think of\nanything better without renaming vacuum_freeze_min_age.\n\n> > *: As an aside, these GUCs already have incredibly confusing names, and\n> > an extra variable would increase the confusion. For instance, they seem\n> > to use \"min\" and \"max\" interchangeably.\n> \n> Some of them are in fact max's, I believe.\n\nLooking at the definitions of vacuum_freeze_min_age and\nautovacuum_freeze_max_age there seems to be almost no distinction\nbetween \"min\" and \"max\" in those two names. 
I've complained about this\nbefore:\n\nhttp://archives.postgresql.org/pgsql-hackers/2008-12/msg01731.php\n\nI think both are essentially thresholds, so giving them two names with\nopposite meaning is misleading.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Fri, 14 Aug 2009 13:57:07 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On fre, 2009-08-14 at 13:57 -0700, Jeff Davis wrote:\n> Looking at the definitions of vacuum_freeze_min_age and\n> autovacuum_freeze_max_age there seems to be almost no distinction\n> between \"min\" and \"max\" in those two names.\n\nFor min, the action happens at or above the min values. For max, the\naction happens at or below the max value.\n\nWith those two particular parameters, the freezing happens exactly\nbetween the min and the max value.\n\n\n", "msg_date": "Sun, 16 Aug 2009 02:02:03 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On Sun, 2009-08-16 at 02:02 +0300, Peter Eisentraut wrote:\n> For min, the action happens at or above the min values. For max, the\n> action happens at or below the max value.\n\n>From the docs, 23.1.4:\n\n\"autovacuum is invoked on any table that might contain XIDs older than\nthe age specified by the configuration parameter\nautovacuum_freeze_max_age\"\n\nI interpret that to mean that the forced autovacuum run happens above\nthe value. You could reasonably call it the \"minimum age of relfrozenxid\nthat will cause autovacuum to forcibly run a vacuum\". \n\nSimilarly, you could call vacuum_freeze_min_age \"the maximum age a tuple\ncan be before a vacuum will freeze it\".\n\nI'm not trying to be argumentative, I'm just trying to show that it can\nbe confusing if you interpret it the wrong way. The first time I saw\nthose configuration names, I was confused, and ever since, I have to\nthink about it: \"is that variable called min or max?\".\n\nMy general feeling is that both of these are thresholds. The only real\nmaximum happens near wraparound.\n\n> With those two particular parameters, the freezing happens exactly\n> between the min and the max value.\n\nThanks, that's a helpful way to remember it.\n\nIt may be a little obsolete because now the freezing will normally\nhappen between vacuum_freeze_min_age and vacuum_freeze_table_age; but at\nleast I should be able to remember which of the other parameters is\n\"min\" and which one is \"max\".\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Sat, 15 Aug 2009 16:55:41 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "On lör, 2009-08-15 at 16:55 -0700, Jeff Davis wrote:\n> Similarly, you could call vacuum_freeze_min_age \"the maximum age a\n> tuple\n> can be before a vacuum will freeze it\".\n\nHeh, you could also call max_connections the \"minimum number of\nconnections before the server will refuse new connection attempts\".\n\nIt's not easy ... ;-)\n\n", "msg_date": "Sun, 16 Aug 2009 15:14:16 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is vacuum_freeze_min_age\n 100m? )" }, { "msg_contents": "Jeff Davis <[email protected]> wrote: \n \n> There are two ways to do the threshold:\n> 1. 
Constant fraction of vacuum_freeze_min_age\n> 2. Extra GUC\n \nI appreciate that there may be room to improve this while protecting\nthe forensic values; but there are already strategies for managing the\nday-to-day performance issues as long as you have adequate backup to\nnot need to rely on old XID information for recovery. What we don't\nhave covered is loading a database from pg_dump without having to\nrewrite all pages at least once afterward -- and likely two more\ntimes, with most maintenance strategies.\n \nI seem to remember that someone took a shot at making a special case\nof WAL-bypassed inserts, but there was a problem or two that were hard\nto overcome. Does anyone recall the details? Was that about\npre-setting the hint bits for a successful commit (based on the fact\nthat the entire table will be empty if rolled back and no data will be\nvisible to any other transaction until commit), or was it about\nsetting the frozen XID in the inserted tuples (based on the fact that\nthis is no less useful for forensic purposes than having all rows set\nto any other value)?\n \nShould we have a TODO item for this special case, or is it \"not\nwanted\" or viewed as having intractable problems?\n \n-Kevin\n", "msg_date": "Mon, 17 Aug 2009 09:38:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: freezing tuples ( was: Why is\n\t vacuum_freeze_min_age100m? )" } ]
[ { "msg_contents": "Hello All,\n\nI'm developing specialized message switching system and I've chosen to \nuse PostgreSQL as general tool to handle transactions, store and manage \nall the data.\nThis system has pretty strong timing requirements. For example, it must \nprocess not less than 10 messages per second. FYI: messages are short \n(aprx. 400-500 bytes length). Incoming message should be routed via \nspecial routing system to its destinations (i.e. one incoming message \nmay be routed in dozens of channels at once).\nNormally this system works excellent with PostgreSQL database, the \nperfomance is quite impressive.\nBUT sometimes bad things happen (delays). For example:\nI have \"log\" table which contains all log entries for the system \n(warnings, errors, detailed routing info, rejections, etc).\nThe table includes \"timestamp\" field and this field defaults to \"now()\":\nCREATE TABLE log\n(\n id bigserial NOT NULL,\n \"timestamp\" timestamp without time zone NOT NULL DEFAULT now(),\n.. etc.\nSo when incoming message is being processed, I do start new transaction \nand generate outgoing and log messages in this single transaction.\nNormally, viewing the log sorted by ID it comes in right timing order:\nID timestamp\n1 2009-08-08 00:00:00.111\n2 2009-08-08 00:00:00.211\n3 2009-08-08 00:01:00.311\netc.\nBUT it seems that rarely this transaction is being delayed to apply and \nlog entry is being inserted in wrong order:\nID timestamp\n1 2009-08-08 00:00:00.111\n2 2009-08-08 00:00:30.311\n3 2009-08-08 00:00:00.211\nYep, that's right - sometimes for 30 seconds or even more.\nI do understand that there should be some delays with the database, but \n30 seconds is unacceptable!\nDoes anybody know any way to solve this? I did monitor the system \nrunning at full load (~20 messages per second) - postmaster's processes \ndidn't eat more than 10-20% of CPU and memory. Neither did any of my \napplication's processes.\n\nBest regards, Nick.\n", "msg_date": "Wed, 12 Aug 2009 20:24:52 +0400", "msg_from": "Nickolay <[email protected]>", "msg_from_op": true, "msg_subject": "transaction delays to apply" }, { "msg_contents": "\n> Does anybody know any way to solve this? I did monitor the system \n> running at full load (~20 messages per second) - postmaster's processes \n> didn't eat more than 10-20% of CPU and memory. Neither did any of my \n> application's processes.\n\nnow() like current_timestamp is the time of transaction start. If your \nclient BEGINs, then idles for 30 seconds, then INSERTs, the timestamp in \nthe insert will be from 30 second ago. Try statement_timestamp() or \nclock_timestamp().\n", "msg_date": "Wed, 12 Aug 2009 23:34:19 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: transaction delays to apply" }, { "msg_contents": "Nickolay <[email protected]> writes:\n> BUT it seems that rarely this transaction is being delayed to apply and \n> log entry is being inserted in wrong order:\n> ID timestamp\n> 1 2009-08-08 00:00:00.111\n> 2 2009-08-08 00:00:30.311\n> 3 2009-08-08 00:00:00.211\n> Yep, that's right - sometimes for 30 seconds or even more.\n\nYou haven't provided enough information to let anyone guess at the\nproblem. 
Have you checked to see if one of the processes is blocking\non a lock, or perhaps there's a sudden spike in system load, or what?\nWatching pg_stat_activity, pg_locks, and/or \"vmstat 1\" output during\none of these events might help narrow down what's happening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Aug 2009 17:37:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: transaction delays to apply " }, { "msg_contents": "Tom Lane wrote:\n> Nickolay <[email protected]> writes:\n> \n>> BUT it seems that rarely this transaction is being delayed to apply and \n>> log entry is being inserted in wrong order:\n>> ID timestamp\n>> 1 2009-08-08 00:00:00.111\n>> 2 2009-08-08 00:00:30.311\n>> 3 2009-08-08 00:00:00.211\n>> Yep, that's right - sometimes for 30 seconds or even more.\n>> \n>\n> You haven't provided enough information to let anyone guess at the\n> problem. Have you checked to see if one of the processes is blocking\n> on a lock, or perhaps there's a sudden spike in system load, or what?\n> Watching pg_stat_activity, pg_locks, and/or \"vmstat 1\" output during\n> one of these events might help narrow down what's happening.\n>\n> \t\t\tregards, tom lane\n>\n> \n\nThe problem is that such thing happens very rare, and NOT at full load. \nI can't monitor the system all the time. Is there any way to investigate \nthe situation by any of pgsql logs or enable something like full debug? \nI do have a row-level lock (SELECT...FOR UPDATE) on another table during \nthis transaction, but one row are handled by not more than 2 processes \nat once and it should be very quick (SELECT, parse data and UPDATE).\nThank you very much for you help!\n\nBest regards, Nick.\n", "msg_date": "Thu, 13 Aug 2009 15:54:47 +0400", "msg_from": "Nickolay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: transaction delays to apply" }, { "msg_contents": "Tom Lane wrote:\n> Nickolay <[email protected]> writes:\n> \n>> BUT it seems that rarely this transaction is being delayed to apply and \n>> log entry is being inserted in wrong order:\n>> ID timestamp\n>> 1 2009-08-08 00:00:00.111\n>> 2 2009-08-08 00:00:30.311\n>> 3 2009-08-08 00:00:00.211\n>> Yep, that's right - sometimes for 30 seconds or even more.\n>> \n>\n> You haven't provided enough information to let anyone guess at the\n> problem. Have you checked to see if one of the processes is blocking\n> on a lock, or perhaps there's a sudden spike in system load, or what?\n> Watching pg_stat_activity, pg_locks, and/or \"vmstat 1\" output during\n> one of these events might help narrow down what's happening.\n>\n> \t\t\tregards, tom lane\n>\n> \nThank you, guys. Problem's solved. I'm guilty and stupid :-)\nOne of the SELECT's in the transaction was wrong. Its job was to select \nmessages from archive by several conditions, including:\ndate_time::date = now()::date\n(i.e. timestamp field \"date_time\" was being converted to date type). \nAfter first run, postgresql seems to fix my mistake by cache or \nsomething else and futher SELECT's are being executed in a matter of \nmilliseconds.\nFixed the statement to:\ndate_time >= now()::date\nand now everything seems to work just fine even at first run.\n\nBest regards, Nick.\n", "msg_date": "Thu, 13 Aug 2009 17:31:29 +0400", "msg_from": "Nickolay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: transaction delays to apply" } ]
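The rewrite that resolved this thread, spelled out against a hypothetical table (the real table and column list are not shown in the thread, so every name below except date_time is made up; only the shape of the WHERE clause matters). Casting the indexed column to date has to be evaluated row by row and keeps a plain index on date_time from being used, while a range test on the raw column stays index-friendly.

-- Hypothetical schema, only so the example is self-contained:
CREATE TABLE message_archive (
    id        bigserial PRIMARY KEY,
    date_time timestamp without time zone NOT NULL DEFAULT now(),
    body      text
);
CREATE INDEX message_archive_date_time_idx ON message_archive (date_time);

-- Slow form from the thread: the cast on the column defeats the index.
--   ... WHERE date_time::date = now()::date
-- Index-friendly form, keeping the original "equals today" meaning:
SELECT id
FROM message_archive
WHERE date_time >= now()::date
  AND date_time <  now()::date + 1;

The fix actually applied in the thread keeps only the lower bound (date_time >= now()::date), which returns the same rows as long as no entries carry future timestamps.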
[ { "msg_contents": "The writer process seems to be using inordinate amounts of memory:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\nCOMMAND\n11088 postgres 13 -2 3217m 2.9g 2.9g S 0 38.7 0:10.46 postgres:\nwriter process\n20190 postgres 13 -2 3219m 71m 68m S 0 0.9 0:52.48 postgres:\ncribq cribq [local] idle\n\nI am writing moderately large (~3k) records to my database a few times\na second. Even when I stop doing that, the process continues to take\nup all of that memory.\n\nAm I reading this right? Why is it using so much memory?\n\n", "msg_date": "Wed, 12 Aug 2009 21:44:20 -0700 (PDT)", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Memory usage of writer process" }, { "msg_contents": "Alex wrote:\n> The writer process seems to be using inordinate amounts of memory:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 11088 postgres 13 -2 3217m 2.9g 2.9g S 0 38.7 0:10.46 postgres:\n> writer process\n> 20190 postgres 13 -2 3219m 71m 68m S 0 0.9 0:52.48 postgres:\n> cribq cribq [local] idle\n> \n> I am writing moderately large (~3k) records to my database a few times\n> a second. Even when I stop doing that, the process continues to take\n> up all of that memory.\n> \n> Am I reading this right? Why is it using so much memory?\n\nshared_buffers?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 13 Aug 2009 16:29:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory usage of writer process" }, { "msg_contents": "This is postgres 8.4 BTW.\n\nIt says 2.9Gb of RESIDENT memory, that also seems to be shared. Is\nthis the writer sharing the records it wrote in a shared buffer?\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11088 postgres 13 -2 3217m 3.0g 3.0g S 0 39.5 0:14.23 postgres:\nwriter process\n 968 postgres 14 -2 3219m 1.4g 1.4g S 0 18.8 4:37.57 postgres:\ncribq cribq [local] idle\n24593 postgres 13 -2 3219m 331m 327m S 0 4.3 0:10.12 postgres:\ncribq cribq [local] idle\n26181 postgres 13 -2 3219m 323m 319m S 0 4.2 0:06.48 postgres:\ncribq cribq [local] idle\n12504 postgres 14 -2 3219m 297m 293m S 0 3.9 0:02.71 postgres:\ncribq cribq [local] idle\n13565 postgres 14 -2 3219m 292m 288m S 0 3.8 0:02.75 postgres:\ncribq cribq [local] idle\n 623 postgres 13 -2 3219m 292m 287m S 0 3.8 0:02.28 postgres:\ncribq cribq [local] idle\n\n\nOn Thu, Aug 13, 2009 at 1:29 PM, Alvaro\nHerrera<[email protected]> wrote:\n> Alex wrote:\n>> The writer process seems to be using inordinate amounts of memory:\n>>\n>>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+\n>> COMMAND\n>> 11088 postgres  13  -2 3217m 2.9g 2.9g S    0 38.7   0:10.46 postgres:\n>> writer process\n>> 20190 postgres  13  -2 3219m  71m  68m S    0  0.9   0:52.48 postgres:\n>> cribq cribq [local] idle\n>>\n>> I am writing moderately large (~3k) records to my database a few times\n>> a second.  Even when I stop doing that, the process continues to take\n>> up all of that memory.\n>>\n>> Am I reading this right?  
Why is it using so much memory?\n>\n> shared_buffers?\n>\n> --\n> Alvaro Herrera                                http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n>\n\n\n\n-- \nAlex Neth\nLiivid, Inc\nwww.liivid.com\n+1 206 499 4995\n+86 13761577188\n\nStephen Leacock - \"I detest life-insurance agents: they always argue\nthat I shall some day die, which is not so.\" -\nhttp://www.brainyquote.com/quotes/authors/s/stephen_leacock.html\n", "msg_date": "Thu, 13 Aug 2009 14:01:26 -0700", "msg_from": "Alex Neth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory usage of writer process" }, { "msg_contents": "\n\n\nOn 8/12/09 9:44 PM, \"Alex\" <[email protected]> wrote:\n\n> The writer process seems to be using inordinate amounts of memory:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n> 11088 postgres 13 -2 3217m 2.9g 2.9g S 0 38.7 0:10.46 postgres:\n> writer process\n> 20190 postgres 13 -2 3219m 71m 68m S 0 0.9 0:52.48 postgres:\n> cribq cribq [local] idle\n> \n> I am writing moderately large (~3k) records to my database a few times\n> a second. Even when I stop doing that, the process continues to take\n> up all of that memory.\n> \n> Am I reading this right? Why is it using so much memory?\n> \n\nIt is exclusively using the difference between the RES and SHR columns. So\n... Less than ~50MB and likely much less than that (2.9g - 2.9g with\nrounding error).\n\nSHR is the shared memory the process has touched, and is shared amongst all\npostgres processes. Typically, this maxes out at the value of your\nshared_buffers setting.\n\nBased on the above, I'd wager your shared_buffers setting is 3000MB.\n\n\nIf your writer process or any other process has a value for (RES - SHR) that\nis very large, then be concerned. For example, the second postgres process\nin the above top output is using about 3MB exclusively, but has touched\nabout 68MB of the shared space, and so it shows up as 68 + 3 = 71m in the\nRES column. 3MB is not much so this is not a concern.\n\n\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Thu, 13 Aug 2009 14:42:41 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory usage of writer process" }, { "msg_contents": "I am confused about what the OS is reporting for memory usage on CentOS 5.3 Linux. Looking at the resident memory size of the processes. Looking at the resident size of all postgres processes, the system should be using around 30Gb of physical ram. I know that it states that it is using a lot of shared memory. My question is how to I determine how much physical RAM postgres is using at any point in time?\n\nThis server has 24Gb of ram, and is reporting that 23GB is free for use. See calculation below\n\n(Memory Total - Used) + (Buffers + Cached) = Free Memory\n(24675740 - 24105052) + (140312 + 22825616) = 23,536,616 or ~23 Gigabytes\n\n\nSo if my server has 23Gb of ram that is free for use, why is postgres reporting resident sizes of 30GB? 
Shared memory is reporting the same values, so how is the OS reporting that only 1Gb of RAM is being used?\n\nHelp?\n\ntop - 12:43:41 up 2 days, 19:04, 2 users, load average: 4.99, 4.81, 4.33\nTasks: 245 total, 4 running, 241 sleeping, 0 stopped, 0 zombie\nCpu(s): 26.0%us, 0.0%sy, 0.0%ni, 73.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 24675740k total, 24105052k used, 570688k free, 140312k buffers\nSwap: 2097144k total, 272k used, 2096872k free, 22825616k cached\n---------------------\nPID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n19469 postgres 15 0 8324m 7.9g 7.9g S 0.0 33.7 0:54.30 postgres: writer process\n29763 postgres 25 0 8329m 4.5g 4.5g R 99.8 19.0 24:53.02 postgres: niadmin database x.x.x.49(51136) UPDATE\n29765 postgres 25 0 8329m 4.4g 4.4g R 99.8 18.8 24:42.77 postgres: niadmin database x.x.x.49(51138) UPDATE\n31778 postgres 25 0 8329m 4.2g 4.2g R 99.5 17.8 17:56.95 postgres: niadmin database x.x.x.49(51288) UPDATE\n31779 postgres 25 0 8329m 4.2g 4.2g R 99.1 17.8 17:59.62 postgres: niadmin database x.x.x.49(51289) UPDATE\n31780 postgres 23 0 8329m 4.1g 4.1g R 100.1 17.5 17:52.53 postgres: niadmin database x.x.x.49(51290) UPDATE\n19467 postgres 15 0 8320m 160m 160m S 0.0 0.7 0:00.24 /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data\n19470 postgres 15 0 8324m 2392 1880 S 0.0 0.0 0:01.72 postgres: wal writer process\n\n\n\nMemory reporting on CentOS Linux\n\n\nI am confused about what the OS is reporting for memory usage on CentOS 5.3 Linux. Looking at the resident memory size of the processes. Looking at the resident size of all postgres processes, the system should be using around 30Gb of physical ram. I know that it states that it is using a lot of shared memory. My question is how to I determine how much physical RAM postgres is using at any point in time?\n\nThis server has 24Gb of ram, and is reporting that 23GB is free for use. See calculation below\n\n(Memory Total –  Used) + (Buffers + Cached) = Free Memory\n(24675740 – 24105052) +  (140312 + 22825616) = 23,536,616 or ~23 Gigabytes\n\n\nSo if my server has 23Gb of ram that is free for use, why is postgres reporting resident sizes of 30GB? 
Shared memory is reporting the same values, so how is the OS reporting that only 1Gb of RAM is being used?\n\nHelp?\n\ntop - 12:43:41 up 2 days, 19:04,  2 users,  load average: 4.99, 4.81, 4.33\nTasks: 245 total,   4 running, 241 sleeping,   0 stopped,   0 zombie\nCpu(s): 26.0%us,  0.0%sy,  0.0%ni, 73.9%id,  0.1%wa,  0.0%hi,  0.0%si,  0.0%st\nMem:  24675740k total, 24105052k used,   570688k free,   140312k buffers\nSwap:  2097144k total,      272k used,  2096872k free, 22825616k cached\n---------------------\nPID     USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND \n19469 postgres  15   0 8324m 7.9g 7.9g S  0.0 33.7   0:54.30 postgres: writer process                                                           \n29763 postgres  25   0 8329m 4.5g 4.5g R 99.8 19.0  24:53.02 postgres: niadmin database x.x.x.49(51136) UPDATE\n29765 postgres  25   0 8329m 4.4g 4.4g R 99.8 18.8  24:42.77 postgres: niadmin database x.x.x.49(51138) UPDATE                        \n31778 postgres  25   0 8329m 4.2g 4.2g R 99.5 17.8  17:56.95 postgres: niadmin database x.x.x.49(51288) UPDATE\n31779 postgres  25   0 8329m 4.2g 4.2g R 99.1 17.8  17:59.62 postgres: niadmin database x.x.x.49(51289) UPDATE                        \n31780 postgres  23   0 8329m 4.1g 4.1g R 100.1 17.5  17:52.53 postgres: niadmin database x.x.x.49(51290) UPDATE\n19467 postgres  15   0 8320m 160m 160m S  0.0  0.7   0:00.24 /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data                        \n19470 postgres  15   0 8324m 2392 1880 S  0.0  0.0   0:01.72 postgres: wal writer process", "msg_date": "Fri, 14 Aug 2009 14:00:44 -0400", "msg_from": "Jeremy Carroll <[email protected]>", "msg_from_op": false, "msg_subject": "Memory reporting on CentOS Linux" }, { "msg_contents": "On Fri, 2009-08-14 at 14:00 -0400, Jeremy Carroll wrote:\n> I am confused about what the OS is reporting for memory usage on\n> CentOS 5.3 Linux. Looking at the resident memory size of the\n> processes. Looking at the resident size of all postgres processes, the\n> system should be using around 30Gb of physical ram. I know that it\n> states that it is using a lot of shared memory. My question is how to\n> I determine how much physical RAM postgres is using at any point in\n> time?\n> \n> This server has 24Gb of ram, and is reporting that 23GB is free for\n> use. See calculation below\n> \n> (Memory Total – Used) + (Buffers + Cached) = Free Memory\n> (24675740 – 24105052) + (140312 + 22825616) = 23,536,616 or ~23\n> Gigabytes\n> \nyou're using cached swap in your calculation ( 22825616 ) swap is not\nRAM -- it's disk\n\n\n> \n> So if my server has 23Gb of ram that is free for use, why is postgres\n> reporting resident sizes of 30GB? 
Shared memory is reporting the same\n> values, so how is the OS reporting that only 1Gb of RAM is being used?\n\nyou have 570688k free RAM + 140312k buffers RAM\nThis looks to me like the OS is saying that you are using 24105052k used\n> \n> Help?\n> \n> top - 12:43:41 up 2 days, 19:04, 2 users, load average: 4.99, 4.81,\n> 4.33\n> Tasks: 245 total, 4 running, 241 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 26.0%us, 0.0%sy, 0.0%ni, 73.9%id, 0.1%wa, 0.0%hi, 0.0%\n> si, 0.0%st\n> Mem: 24675740k total, 24105052k used, 570688k free, 140312k\n> buffers\n> Swap: 2097144k total, 272k used, 2096872k free, 22825616k\n> cached\n> ---------------------\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND \n> 19469 postgres 15 0 8324m 7.9g 7.9g S 0.0 33.7 0:54.30 postgres:\n> writer process\n> \n> 29763 postgres 25 0 8329m 4.5g 4.5g R 99.8 19.0 24:53.02 postgres:\n> niadmin database x.x.x.49(51136) UPDATE\n> 29765 postgres 25 0 8329m 4.4g 4.4g R 99.8 18.8 24:42.77 postgres:\n> niadmin database x.x.x.49(51138) UPDATE \n> 31778 postgres 25 0 8329m 4.2g 4.2g R 99.5 17.8 17:56.95 postgres:\n> niadmin database x.x.x.49(51288) UPDATE\n> 31779 postgres 25 0 8329m 4.2g 4.2g R 99.1 17.8 17:59.62 postgres:\n> niadmin database x.x.x.49(51289) UPDATE \n> 31780 postgres 23 0 8329m 4.1g 4.1g R 100.1 17.5 17:52.53\n> postgres: niadmin database x.x.x.49(51290) UPDATE\n> 19467 postgres 15 0 8320m 160m 160m S 0.0 0.7\n> 0:00.24 /opt/PostgreSQL/8.3/bin/postgres\n> -D /opt/PostgreSQL/8.3/data \n> 19470 postgres 15 0 8324m 2392 1880 S 0.0 0.0 0:01.72 postgres:\n> wal writer process \n\n\n", "msg_date": "Fri, 14 Aug 2009 15:43:04 -0400", "msg_from": "Reid Thompson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "But the kernel can take back any of the cache memory if it wants to. Therefore it is free memory.\n\nThis still does not explain why the top command is reporting ~9GB of resident memory, yet the top command does not suggest that any physical memory is being used.\n\n\nOn 8/14/09 2:43 PM, \"Reid Thompson\" <[email protected]> wrote:\n\nyou're using cached swap in your calculation ( 22825616 ) swap is not\nRAM -- it's disk\n\n\n\nRe: [PERFORM] Memory reporting on CentOS Linux\n\n\nBut the kernel can take back any of the cache memory if it wants to. Therefore it is free memory.\n\nThis still does not explain why the top command is reporting ~9GB of resident memory, yet the top command does not suggest that any physical memory is being used.\n\n\nOn 8/14/09 2:43 PM, \"Reid Thompson\" <[email protected]> wrote:\n\nyou're using cached swap in your calculation ( 22825616 )  swap is not\nRAM -- it's disk", "msg_date": "Fri, 14 Aug 2009 16:20:58 -0400", "msg_from": "Jeremy Carroll <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "On Fri, Aug 14, 2009 at 12:00 PM, Jeremy\nCarroll<[email protected]> wrote:\n> I am confused about what the OS is reporting for memory usage on CentOS 5.3\n> Linux. Looking at the resident memory size of the processes. Looking at the\n> resident size of all postgres processes, the system should be using around\n> 30Gb of physical ram. I know that it states that it is using a lot of shared\n> memory. My question is how to I determine how much physical RAM postgres is\n> using at any point in time?\n\nOK, take the first pg process, and write down its RES size. For all\nthe rest, write down RES-SHR for how much more it's using. 
Since they\nuse a lot of shared memory, and since you're showing something like\n7.9G shared, I'm gonna guess that's the size of your shared_buffers.\nWith those numbers you should get something just over a shade of 7.9G\nused, and most of that is shared_buffers. Also, a quick check is to\nlook at this number:\n\n22825616k cached\n\nwhich tells you how much memory the OS is using for cache, which is ~22G.\n\nI note that you've got 2G swapped out, this might well be\nshared_buffers or something you'd rather not have swapped out. Look\ninto setting your swappiness lower (5 or so should do) to stop the OS\nfrom swapping so much out.\n\n/sbin/sysctl -a|grep swappiness\nvm.swappiness = 60\n\nis the default. You can change it permanently by editing your\n/etc/sysctl.conf file (or wherever it lives) and rebooting, or running\n/sbin/sysctl -p to process the entries and make them stick this\nsession. My big servers run with swappiness of 1 with no problems.\n\n>\n> This server has 24Gb of ram, and is reporting that 23GB is free for use. See\n> calculation below\n>\n> (Memory Total –  Used) + (Buffers + Cached) = Free Memory\n> (24675740 – 24105052) +  (140312 + 22825616) = 23,536,616 or ~23 Gigabytes\n>\n>\n> So if my server has 23Gb of ram that is free for use, why is postgres\n> reporting resident sizes of 30GB? Shared memory is reporting the same\n> values, so how is the OS reporting that only 1Gb of RAM is being used?\n>\n> Help?\n>\n> top - 12:43:41 up 2 days, 19:04,  2 users,  load average: 4.99, 4.81, 4.33\n> Tasks: 245 total,   4 running, 241 sleeping,   0 stopped,   0 zombie\n> Cpu(s): 26.0%us,  0.0%sy,  0.0%ni, 73.9%id,  0.1%wa,  0.0%hi,  0.0%si,\n>  0.0%st\n> Mem:  24675740k total, 24105052k used,   570688k free,   140312k buffers\n> Swap:  2097144k total,      272k used,  2096872k free, 22825616k cached\n> ---------------------\n> PID     USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n> 19469 postgres  15   0 8324m 7.9g 7.9g S  0.0 33.7   0:54.30 postgres:\n> writer process\n> 29763 postgres  25   0 8329m 4.5g 4.5g R 99.8 19.0  24:53.02 postgres:\n> niadmin database x.x.x.49(51136) UPDATE\n> 29765 postgres  25   0 8329m 4.4g 4.4g R 99.8 18.8  24:42.77 postgres:\n> niadmin database x.x.x.49(51138) UPDATE\n> 31778 postgres  25   0 8329m 4.2g 4.2g R 99.5 17.8  17:56.95 postgres:\n> niadmin database x.x.x.49(51288) UPDATE\n> 31779 postgres  25   0 8329m 4.2g 4.2g R 99.1 17.8  17:59.62 postgres:\n> niadmin database x.x.x.49(51289) UPDATE\n> 31780 postgres  23   0 8329m 4.1g 4.1g R 100.1 17.5  17:52.53 postgres:\n> niadmin database x.x.x.49(51290) UPDATE\n> 19467 postgres  15   0 8320m 160m 160m S  0.0  0.7   0:00.24\n> /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data\n>\n> 19470 postgres  15   0 8324m 2392 1880 S  0.0  0.0   0:01.72 postgres: wal\n> writer process\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n", "msg_date": "Fri, 14 Aug 2009 14:21:42 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "I'm betting it's shared_buffers that have been swapped out (2G swapped\nout on his machine) for kernel cache. 
The RES and SHR being the\nsame says the actual processes are using hardly any ram, just hitting\nshared_buffers.\n\nOn Fri, Aug 14, 2009 at 2:20 PM, Jeremy\nCarroll<[email protected]> wrote:\n> But the kernel can take back any of the cache memory if it wants to.\n> Therefore it is free memory.\n>\n> This still does not explain why the top command is reporting ~9GB of\n> resident memory, yet the top command does not suggest that any physical\n> memory is being used.\n>\n>\n> On 8/14/09 2:43 PM, \"Reid Thompson\" <[email protected]> wrote:\n>\n> you're using cached swap in your calculation ( 22825616 )  swap is not\n> RAM -- it's disk\n>\n\n\n\n-- \nWhen fascism comes to America, it will be intolerance sold as diversity.\n", "msg_date": "Fri, 14 Aug 2009 14:23:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "\nOn 8/14/09 11:00 AM, \"Jeremy Carroll\" <[email protected]>\nwrote:\n\n> I am confused about what the OS is reporting for memory usage on CentOS 5.3\n> Linux. Looking at the resident memory size of the processes. Looking at the\n> resident size of all postgres processes, the system should be using around\n> 30Gb of physical ram. I know that it states that it is using a lot of shared\n> memory. My question is how to I determine how much physical RAM postgres is\n> using at any point in time?\n\nResident includes Shared. Shared is shared. So you have to subtract it\nfrom all the processes to see what they use on their own. What you really\nwant is RES-SHR, or some of the other columns available in top. Hit 'h' in\ntop to get some help on the other columns available, and 'f' and 'o'\nmanipulate them. In particular, you might find the \"DATA\" column useful.\nIt is approximately RES-SHR-CODE\n\n> \n> This server has 24Gb of ram, and is reporting that 23GB is free for use. See\n> calculation below\n> \n> (Memory Total ­ Used) + (Buffers + Cached) = Free Memory\n> (24675740 ­ 24105052) + (140312 + 22825616) = 23,536,616 or ~23 Gigabytes\n> \n> \n> So if my server has 23Gb of ram that is free for use, why is postgres\n> reporting resident sizes of 30GB? Shared memory is reporting the same values,\n> so how is the OS reporting that only 1Gb of RAM is being used?\n> \n> Help?\n> \n> top - 12:43:41 up 2 days, 19:04, 2 users, load average: 4.99, 4.81, 4.33\n> Tasks: 245 total, 4 running, 241 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 26.0%us, 0.0%sy, 0.0%ni, 73.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st\n> Mem: 24675740k total, 24105052k used, 570688k free, 140312k buffers\n> Swap: 2097144k total, 272k used, 2096872k free, 22825616k cached\n> ---------------------\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 19469 postgres 15 0 8324m 7.9g 7.9g S 0.0 33.7 0:54.30 postgres: writer\n> process \n> 29763 postgres 25 0 8329m 4.5g 4.5g R 99.8 19.0 24:53.02 postgres: niadmin\n> database x.x.x.49(51136) UPDATE\n\n\nLets just take the two above and pretend that they are the only postgres\nprocesses.\nThe RAM used by each exclusively is RES-SHR. Or, close to nothing for these\ntwo, aside from the rounding error.\n\nThe memory used by postgres for shared memory is the largest of all SHR\ncolumns for postgres columns. Or, about 7.9GB. So, postgres is using\nabout 7.9GB for shared memory, and very little for anything else.\n\nIn formula form, its close to\nSUM(RES) - SUM(SHR) + MAX(SHR).\nThat doesn't cover everything, but is very close. 
See the other columns\navailable in top.\n\n\n", "msg_date": "Fri, 14 Aug 2009 13:37:51 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "If I have 10GB of ram, and I see a process using 5Gb of RES size. Then TOP should at least report 5GB physical memory used (AKA: Not available in CACHED, or FREE). If I run a 'free -m', I should only see 5GB of ram available. I can understand with virtual memory that some of it may be on disk, therefore I may not see this memory being taken away from the physical memory available.\n\nI am thoroughly confused that TOP is reporting that I have 99% of my physical RAM free, while the process list suggests that some are taking ~8Gb of Resident (Physical) Memory. Any explanation as to why TOP is reporting this? I have a PostgreSQL 8.3 server with 48Gb of RAM on a Dell R610 server that is reporting that 46.5GB of RAM is free. This confuses me to no end. Why is it not reporting much more physical memory used?\n\n\n[root@pg6 jcarroll]# free -m\n total used free shared buffers cached\nMem: 48275 48136 138 0 141 46159\n-/+ buffers/cache: 1835 46439\nSwap: 2047 12 2035\nThanks!\n\ntop - 09:24:38 up 17:05, 1 user, load average: 1.09, 1.08, 1.18\nTasks: 239 total, 2 running, 237 sleeping, 0 stopped, 0 zombie\nCpu(s): 6.2%us, 0.1%sy, 0.0%ni, 93.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 49433916k total, 49295460k used, 138456k free, 145308k buffers\nSwap: 2097144k total, 12840k used, 2084304k free, 47267056k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n13200 postgres 15 0 16.1g 15g 15g R 2.3 33.6 2:36.78 postgres: writer process\n29029 postgres 25 0 16.1g 13g 13g R 99.9 29.4 36:35.13 postgres: dbuser database 192.168.200.8(36979) UPDATE\n13198 postgres 15 0 16.1g 317m 316m S 0.0 0.7 0:00.57 /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data\n13201 postgres 15 0 16.1g 2300 1824 S 0.0 0.0 0:00.39 postgres: wal writer process\n13202 postgres 15 0 98.7m 1580 672 S 0.0 0.0 0:15.12 postgres: stats collector process\n\n-----Original Message-----\nFrom: Scott Carey [mailto:[email protected]] \nSent: Friday, August 14, 2009 3:38 PM\nTo: Jeremy Carroll; [email protected]\nSubject: Re: [PERFORM] Memory reporting on CentOS Linux\n\n\nOn 8/14/09 11:00 AM, \"Jeremy Carroll\" <[email protected]>\nwrote:\n\n> I am confused about what the OS is reporting for memory usage on CentOS 5.3\n> Linux. Looking at the resident memory size of the processes. Looking at the\n> resident size of all postgres processes, the system should be using around\n> 30Gb of physical ram. I know that it states that it is using a lot of shared\n> memory. My question is how to I determine how much physical RAM postgres is\n> using at any point in time?\n\nResident includes Shared. Shared is shared. So you have to subtract it\nfrom all the processes to see what they use on their own. What you really\nwant is RES-SHR, or some of the other columns available in top. Hit 'h' in\ntop to get some help on the other columns available, and 'f' and 'o'\nmanipulate them. In particular, you might find the \"DATA\" column useful.\nIt is approximately RES-SHR-CODE\n\n> \n> This server has 24Gb of ram, and is reporting that 23GB is free for use. 
See\n> calculation below\n> \n> (Memory Total Used) + (Buffers + Cached) = Free Memory\n> (24675740 24105052) + (140312 + 22825616) = 23,536,616 or ~23 Gigabytes\n> \n> \n> So if my server has 23Gb of ram that is free for use, why is postgres\n> reporting resident sizes of 30GB? Shared memory is reporting the same values,\n> so how is the OS reporting that only 1Gb of RAM is being used?\n> \n> Help?\n> \n> top - 12:43:41 up 2 days, 19:04, 2 users, load average: 4.99, 4.81, 4.33\n> Tasks: 245 total, 4 running, 241 sleeping, 0 stopped, 0 zombie\n> Cpu(s): 26.0%us, 0.0%sy, 0.0%ni, 73.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st\n> Mem: 24675740k total, 24105052k used, 570688k free, 140312k buffers\n> Swap: 2097144k total, 272k used, 2096872k free, 22825616k cached\n> ---------------------\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 19469 postgres 15 0 8324m 7.9g 7.9g S 0.0 33.7 0:54.30 postgres: writer\n> process \n> 29763 postgres 25 0 8329m 4.5g 4.5g R 99.8 19.0 24:53.02 postgres: dbuser\n> database x.x.x.49(51136) UPDATE\n\n\nLets just take the two above and pretend that they are the only postgres\nprocesses.\nThe RAM used by each exclusively is RES-SHR. Or, close to nothing for these\ntwo, aside from the rounding error.\n\nThe memory used by postgres for shared memory is the largest of all SHR\ncolumns for postgres columns. Or, about 7.9GB. So, postgres is using\nabout 7.9GB for shared memory, and very little for anything else.\n\nIn formula form, its close to\nSUM(RES) - SUM(SHR) + MAX(SHR).\nThat doesn't cover everything, but is very close. See the other columns\navailable in top.\n\n\n", "msg_date": "Sat, 15 Aug 2009 10:25:45 -0400", "msg_from": "Jeremy Carroll <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "Jeremy Carroll <[email protected]> writes:\n> I am thoroughly confused that TOP is reporting that I have 99% of my\n> physical RAM free, while the process list suggests that some are\n> taking ~8Gb of Resident (Physical) Memory. Any explanation as to why\n> TOP is reporting this? I have a PostgreSQL 8.3 server with 48Gb of RAM\n> on a Dell R610 server that is reporting that 46.5GB of RAM is free.\n\nExactly where do you draw that conclusion from? I see \"free 138M\".\n\nIt does look like there's something funny about top's accounting for\nshared memory --- maybe it's counting it as \"cached\"? It's hardly\nunusual for top to give bogus numbers in the presence of shared memory,\nof course, but this seems odd :-(. With such large amounts of RAM\ninvolved I wonder if there could be an overflow problem. You might file\na bug against top in whatever distro you are using.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Aug 2009 11:24:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux " }, { "msg_contents": "Linux strives to always use 100% of memory at any given time. Therefore the system will always throw free memory into swap cache. The kernel will (and can) take any memory away from the swap cache at any time for resident (physical) memory for processes.\n\nThat's why they have the column \"-/+ buffers/cache:\". 
That shows 46Gb Free RAM.\n\nI cannot be the only person that has asked this question.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Saturday, August 15, 2009 10:25 AM\nTo: Jeremy Carroll\nCc: Scott Carey; [email protected]\nSubject: Re: [PERFORM] Memory reporting on CentOS Linux \n\nJeremy Carroll <[email protected]> writes:\n> I am thoroughly confused that TOP is reporting that I have 99% of my\n> physical RAM free, while the process list suggests that some are\n> taking ~8Gb of Resident (Physical) Memory. Any explanation as to why\n> TOP is reporting this? I have a PostgreSQL 8.3 server with 48Gb of RAM\n> on a Dell R610 server that is reporting that 46.5GB of RAM is free.\n\nExactly where do you draw that conclusion from? I see \"free 138M\".\n\nIt does look like there's something funny about top's accounting for\nshared memory --- maybe it's counting it as \"cached\"? It's hardly\nunusual for top to give bogus numbers in the presence of shared memory,\nof course, but this seems odd :-(. With such large amounts of RAM\ninvolved I wonder if there could be an overflow problem. You might file\na bug against top in whatever distro you are using.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Aug 2009 11:39:40 -0400", "msg_from": "Jeremy Carroll <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux " }, { "msg_contents": "On 08/15/2009 11:39 AM, Jeremy Carroll wrote:\n> Linux strives to always use 100% of memory at any given time. Therefore the system will always throw free memory into swap cache. The kernel will (and can) take any memory away from the swap cache at any time for resident (physical) memory for processes.\n>\n> That's why they have the column \"-/+ buffers/cache:\". That shows 46Gb Free RAM.\n>\n> I cannot be the only person that has asked this question.\n> \n\nI vote for screwed up reporting over some PostgreSQL-specific \nexplanation. My understanding of RSS is the same as you suggested \nearlier - if 50% RAM is listed as resident, then there should not be \n90%+ RAM free. I cannot think of anything PostgreSQL might be doing into \ninfluencing this to be false.\n\nDo you get the same results after reboot? :-) I'm serious. Memory can be \ncorrupted, and Linux can have bugs.\n\nI would not think that cache memory shows up as RSS for a particular \nprocess. Cache memory is shared by the entire system, and is not \nallocated towards any specific process. Or, at least, this is my \nunderstanding.\n\nJust for kicks, I tried an mmap() scenario (I do not think PostgreSQL \nuses mmap()), and it showed a large RSS, but it did NOT show free memory.\n\nCheers,\nmark\n\n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Saturday, August 15, 2009 10:25 AM\n> To: Jeremy Carroll\n> Cc: Scott Carey; [email protected]\n> Subject: Re: [PERFORM] Memory reporting on CentOS Linux\n>\n> Jeremy Carroll<[email protected]> writes:\n> \n>> I am thoroughly confused that TOP is reporting that I have 99% of my\n>> physical RAM free, while the process list suggests that some are\n>> taking ~8Gb of Resident (Physical) Memory. Any explanation as to why\n>> TOP is reporting this? I have a PostgreSQL 8.3 server with 48Gb of RAM\n>> on a Dell R610 server that is reporting that 46.5GB of RAM is free.\n>> \n> Exactly where do you draw that conclusion from? 
I see \"free 138M\".\n>\n> It does look like there's something funny about top's accounting for\n> shared memory --- maybe it's counting it as \"cached\"? It's hardly\n> unusual for top to give bogus numbers in the presence of shared memory,\n> of course, but this seems odd :-(. With such large amounts of RAM\n> involved I wonder if there could be an overflow problem. You might file\n> a bug against top in whatever distro you are using.\n>\n> \t\t\tregards, tom lane\n>\n> \n\n\n-- \nMark Mielke<[email protected]>\n\n", "msg_date": "Sat, 15 Aug 2009 12:18:23 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "On Fri, Aug 14, 2009 at 3:43 PM, Reid Thompson<[email protected]> wrote:\n> On Fri, 2009-08-14 at 14:00 -0400, Jeremy Carroll wrote:\n>> I am confused about what the OS is reporting for memory usage on\n>> CentOS 5.3 Linux. Looking at the resident memory size of the\n>> processes. Looking at the resident size of all postgres processes, the\n>> system should be using around 30Gb of physical ram. I know that it\n>> states that it is using a lot of shared memory. My question is how to\n>> I determine how much physical RAM postgres is using at any point in\n>> time?\n>>\n>> This server has 24Gb of ram, and is reporting that 23GB is free for\n>> use. See calculation below\n>>\n>> (Memory Total –  Used) + (Buffers + Cached) = Free Memory\n>> (24675740 – 24105052) +  (140312 + 22825616) = 23,536,616 or ~23\n>> Gigabytes\n>>\n> you're using cached swap in your calculation ( 22825616 )  swap is not\n> RAM -- it's disk\n\nAs far as I know, cached is only on the next line as a formatting\nconvenience. It is unrelated to swap and not on disk. What could\n\"cached swap\" possibly mean anyway?\n\nHaving said that, as a Linux user of many years, I have found that\nper-process memory reporting is completely worthless. All you can\nreally do is look at the overall statistics for the box and try to get\nsome sense as to whether the box, over all, is struggling. Trying to\nunderstand how individual processes are contributing to that is an\ninexact science when it's not a complete waste of time.\n\n...Robert\n", "msg_date": "Sat, 15 Aug 2009 16:10:57 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "On Sat, 15 Aug 2009, Mark Mielke wrote:\n> I vote for screwed up reporting over some PostgreSQL-specific explanation. My \n> understanding of RSS is the same as you suggested earlier - if 50% RAM is \n> listed as resident, then there should not be 90%+ RAM free. I cannot think of \n> anything PostgreSQL might be doing into influencing this to be false.\n\nThe only thing I would have thought that would allow this would be mmap.\n\n> Just for kicks, I tried an mmap() scenario (I do not think PostgreSQL uses \n> mmap()), and it showed a large RSS, but it did NOT show free memory.\n\nMore details please. What did you do, and what happened? 
I would have \nthought that a large read-only mmapped file that has been read (and \ntherefore is in RAM) would be counted as VIRT and RES of the process in \ntop, but can clearly be evicted from the cache at any time, and therefore \nwould show up as buffer or cache rather than process memory in the totals.\n\n+1 on the idea that Linux memory reporting is incomprehensible nowadays.\n\nMatthew\n\n-- \n There once was a limerick .sig\n that really was not very big\n It was going quite fine\n Till it reached the fourth line\n", "msg_date": "Mon, 17 Aug 2009 12:03:29 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "I believe this is exactly what is happening. I see that the TOP output lists a large amount ov VIRT & RES size being used, but the kernel does not report this memory as being reserved and instead lists it as free memory or cached.\n\nIf this is indeed the case, how does one determine if a PostgreSQL instance requires more memory? Or how to determine if the system is using memory efficiently?\n\nThanks for the responses.\n\n\nOn 8/17/09 6:03 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\nOn Sat, 15 Aug 2009, Mark Mielke wrote:\n> I vote for screwed up reporting over some PostgreSQL-specific explanation. My\n> understanding of RSS is the same as you suggested earlier - if 50% RAM is\n> listed as resident, then there should not be 90%+ RAM free. I cannot think of\n> anything PostgreSQL might be doing into influencing this to be false.\n\nThe only thing I would have thought that would allow this would be mmap.\n\n> Just for kicks, I tried an mmap() scenario (I do not think PostgreSQL uses\n> mmap()), and it showed a large RSS, but it did NOT show free memory.\n\nMore details please. What did you do, and what happened? I would have\nthought that a large read-only mmapped file that has been read (and\ntherefore is in RAM) would be counted as VIRT and RES of the process in\ntop, but can clearly be evicted from the cache at any time, and therefore\nwould show up as buffer or cache rather than process memory in the totals.\n\n+1 on the idea that Linux memory reporting is incomprehensible nowadays.\n\nMatthew\n\n--\n There once was a limerick .sig\n that really was not very big\n It was going quite fine\n Till it reached the fourth line\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\nRe: [PERFORM] Memory reporting on CentOS Linux\n\n\nI believe this is exactly what is happening. I see that the TOP output lists a large amount ov VIRT & RES size being used, but the kernel does not report this memory as being reserved and instead lists it as free memory or cached.\n\nIf this is indeed the case, how does one determine if a PostgreSQL instance requires more memory? Or how to determine if the system is using memory efficiently?\n\nThanks for the responses.\n\n\nOn 8/17/09 6:03 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n\nOn Sat, 15 Aug 2009, Mark Mielke wrote:\n> I vote for screwed up reporting over some PostgreSQL-specific explanation. My\n> understanding of RSS is the same as you suggested earlier - if 50% RAM is\n> listed as resident, then there should not be 90%+ RAM free. 
I cannot think of\n> anything PostgreSQL might be doing into influencing this to be false.\n\nThe only thing I would have thought that would allow this would be mmap.\n\n> Just for kicks, I tried an mmap() scenario (I do not think PostgreSQL uses\n> mmap()), and it showed a large RSS, but it did NOT show free memory.\n\nMore details please. What did you do, and what happened? I would have\nthought that a large read-only mmapped file that has been read (and\ntherefore is in RAM) would be counted as VIRT and RES of the process in\ntop, but can clearly be evicted from the cache at any time, and therefore\nwould show up as buffer or cache rather than process memory in the totals.\n\n+1 on the idea that Linux memory reporting is incomprehensible nowadays.\n\nMatthew\n\n--\n There once was a limerick .sig\n that really was not very big\n It was going quite fine\n Till it reached the fourth line\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 17 Aug 2009 13:24:36 -0400", "msg_from": "Jeremy Carroll <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "\n\n\nOn 8/17/09 10:24 AM, \"Jeremy Carroll\" <[email protected]>\nwrote:\n\n> I believe this is exactly what is happening. I see that the TOP output lists a\n> large amount ov VIRT & RES size being used, but the kernel does not report\n> this memory as being reserved and instead lists it as free memory or cached.\n\nOh! I recall I found that fun behaviour Linux and thought it was a Postgres\nbug a while back. It has lot of other bad effects on how the kernel chooses\nto swap. I really should have recalled that one. Due to this behavior, I\nhad initially blamed postgres for \"pinning\" memory in shared_buffers in the\ndisk cache. But that symptom is one of linux thinking somehow that pages\nread into shared memory are still cached (or something similar).\n\nBasically, it thinks that there is more free memory than there is when there\nis a lot of shared memory. Run a postgres instance with over 50% memory\nassigned to shared_buffers and when memory pressure builds kswapd will go\nNUTS in CPU use, apparently confused. With high OS 'swappiness' value it\nwill swap in and out too much, and with low 'swappiness' it will CPU spin,\naware on one hand that it is low on memory, but confused by the large\napparent amount free so it doesn't free up much and kswapd chews up all the\nCPU and the system almost hangs. It behaves as if the logic that determines\nwhere to get memory from for a process knows that its almost out, but the\nlogic that decides what to swap out thinks that there is plenty free. The\nlarger the ratio of shared memory to total memory in the system, the higher\nthe CPU use by the kernel when managing the buffer cache.\n\nBottom line is that Linux plus lots of SYSV shared mem doesn't work as well\nas it should. Setting shared_buffers past 35% RAM doesn't work well on\nLinux. Shared memory accounting is fundamentally broken in Linux (see some\nother threads on how the OOM killer works WRT shared memory for other\nexamples).\n\n\n> \n> If this is indeed the case, how does one determine if a PostgreSQL instance\n> requires more memory? 
Or how to determine if the system is using memory\n> efficiently?\n\nJust be aware that the definite memory used per process is RES-SHR, and that\nthe max SHR value is mostly duplicated in the 'cached' or 'free' columns.\nThat mas SHR value IS used by postgres, and not the OS cache.\nIf cached + memory free is on the order of your shared_buffers/SHR size,\nyou're pretty much out of memory.\n\nAdditionally, the OS will start putting things into swap before you reach\nthat point, so pay attention to the swap used column in top or free. That\nis a more reliable indicator than anything else at the system level.\n\nIf you want to know what postgres process is using the most memory on its\nown look at the DATA and CODE top columns, or calculate RES-SHR.\n\n\nI have no idea if more recent Linux Kernels have fixed this at all.\n\n> \n> Thanks for the responses.\n> \n> \n> On 8/17/09 6:03 AM, \"Matthew Wakeling\" <[email protected]> wrote:\n> \n>> On Sat, 15 Aug 2009, Mark Mielke wrote:\n>>> I vote for screwed up reporting over some PostgreSQL-specific explanation.\n>>> My\n>>> understanding of RSS is the same as you suggested earlier - if 50% RAM is\n>>> listed as resident, then there should not be 90%+ RAM free. I cannot think\n>>> of\n>>> anything PostgreSQL might be doing into influencing this to be false.\n>> \n>> The only thing I would have thought that would allow this would be mmap.\n>> \n>>> Just for kicks, I tried an mmap() scenario (I do not think PostgreSQL uses\n>>> mmap()), and it showed a large RSS, but it did NOT show free memory.\n>> \n>> More details please. What did you do, and what happened? I would have\n>> thought that a large read-only mmapped file that has been read (and\n>> therefore is in RAM) would be counted as VIRT and RES of the process in\n>> top, but can clearly be evicted from the cache at any time, and therefore\n>> would show up as buffer or cache rather than process memory in the totals.\n>> \n>> +1 on the idea that Linux memory reporting is incomprehensible nowadays.\n>> \n>> Matthew\n>> \n>> --\n>> There once was a limerick .sig\n>> that really was not very big\n>> It was going quite fine\n>> Till it reached the fourth line\n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n> \n\n", "msg_date": "Mon, 17 Aug 2009 16:43:16 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "\n\nOn 8/17/09 4:43 PM, \"Scott Carey\" <[email protected]> wrote:\n> \n> \n> On 8/17/09 10:24 AM, \"Jeremy Carroll\" <[email protected]>\n> wrote:\n> \n>> I believe this is exactly what is happening. I see that the TOP output lists\n>> a\n>> large amount ov VIRT & RES size being used, but the kernel does not report\n>> this memory as being reserved and instead lists it as free memory or cached.\n> \n> Oh! I recall I found that fun behaviour Linux and thought it was a Postgres\n> bug a while back. It has lot of other bad effects on how the kernel chooses\n> to swap. I really should have recalled that one. Due to this behavior, I\n> had initially blamed postgres for \"pinning\" memory in shared_buffers in the\n> disk cache. But that symptom is one of linux thinking somehow that pages\n> read into shared memory are still cached (or something similar).\n> \n> Basically, it thinks that there is more free memory than there is when there\n> is a lot of shared memory. 
Run a postgres instance with over 50% memory\n> assigned to shared_buffers and when memory pressure builds kswapd will go\n> NUTS in CPU use, apparently confused. With high OS 'swappiness' value it\n> will swap in and out too much, and with low 'swappiness' it will CPU spin,\n> aware on one hand that it is low on memory, but confused by the large\n> apparent amount free so it doesn't free up much and kswapd chews up all the\n> CPU and the system almost hangs. It behaves as if the logic that determines\n> where to get memory from for a process knows that its almost out, but the\n> logic that decides what to swap out thinks that there is plenty free. The\n> larger the ratio of shared memory to total memory in the system, the higher\n> the CPU use by the kernel when managing the buffer cache.\n> \n\n\nBased on a little digging, I'd say that this patch to the kernel probably\nalleviates the performance problems I've seen with swapping when shared mem\nis high:\n\nhttp://lwn.net/Articles/286472/\n\nOther patches have improved the shared memory tracking, but its not clear if\ntools like top have taken advantage of the new info available in /proc.\n\n\n", "msg_date": "Mon, 17 Aug 2009 19:35:28 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" }, { "msg_contents": "On Fri, 14 Aug 2009, Scott Carey wrote:\n\n> The memory used by postgres for shared memory is the largest of all SHR\n> columns for postgres columns. Or, about 7.9GB. So, postgres is using\n> about 7.9GB for shared memory, and very little for anything else.\n\nIt's a good idea to check this result against the actual shared memory \nblock allocated. If the server has been up long enough to go through all \nof shared_buffers once, the results should be close. You can look at the \nblock under Linux using \"ipcs -m\"; the one you want should look something \nlike this:\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x0052e2c1 21757972 gsmith 600 548610048 10\n\nThat represents a bit over 512MB worth of allocated memory for the server. \nAlternately, you can use \"pmap -d\" on a PostgreSQL process to find the \nblock, something like this works:\n\n$ pmap -d 13961 | egrep \"^Address|shmid\"\nAddress Kbytes Mode Offset Device Mapping\n96c41000 535752 rw-s- 0000000000000000 000:00009 [ shmid=0x14c0014 ]\n\nI have given up on presuming the summary values top shows are good for \nanything on Linux. I look at /proc/meminfo to see how much RAM is free, \nand to figure out what's going on with the server processes I use:\n\nps -e -o pid,rss,vsz,size,cmd | grep postgres\n\nAnd compute my own totals (one of these days I'm going to script that \nprocess). Useful reading on this topic:\n\nhttp://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html\nhttp://mail.nl.linux.org/linux-mm/2003-03/msg00077.html\nhttp://forums.gentoo.org/viewtopic.php?t=175419\n\nMost confusion about what's going on here can be resolved by spending some \nquality time with pmap.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 18 Aug 2009 00:33:01 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Memory reporting on CentOS Linux" } ]
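One way to cross-check the ipcs -m and pmap figures Greg shows is to
compute the buffer pool size from inside the database via pg_settings.
This is only a rough sketch: the SysV segment reported by ipcs will come
out somewhat larger than the number below, because it also holds the lock
tables and other shared structures beyond the buffer pool itself.

SHOW shared_buffers;   -- human-readable value, e.g. 8GB

SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'block_size');

-- shared_buffers is stored as a number of blocks, so multiplying it by
-- block_size gives the approximate buffer pool size in bytes:
SELECT s.setting::bigint * b.setting::bigint AS buffer_pool_bytes
FROM pg_settings s, pg_settings b
WHERE s.name = 'shared_buffers' AND b.name = 'block_size';

Comparing that number with the largest SHR value in top and with the
segment size from ipcs -m makes it much easier to tell how much of a
backend's RES figure is really just the shared buffer pool it has touched,
rather than memory the process is using on its own.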
[ { "msg_contents": "\nHi, \n\nAs of today, we are still enjoying our Informatica tool but in a few months\nwe will need to change. Basically we do not use our software at its full\ncapacity and don't feel we need it anymore. \nSo we are trying to find a less expensive solution that would have the same\nfeatures (or almost...). \n\nWe are looking at less expensive tools and Open source software. We have\npretty much targeted a few companies and would like to know which ones would\nbe the better solution compared to Informatica. \n\n-Apatar \n-Expressor \n-Pentaho \n-Talend \n\nSome are paying software, some are open source (but not free...), so i'm\nasking you to know which is the best software on the market.\n\nThanks.\n-- \nView this message in context: http://www.nabble.com/Less-expensive-proprietary-or-Open-source-ETL-tools-tp24951714p24951714.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 13 Aug 2009 02:22:28 -0700 (PDT)", "msg_from": "Rstat <[email protected]>", "msg_from_op": true, "msg_subject": "Less expensive proprietary or Open source ETL tools" }, { "msg_contents": "Rstat a �crit :\n> Hi, \n> \n> As of today, we are still enjoying our Informatica tool but in a few months\n> we will need to change. Basically we do not use our software at its full\n> capacity and don't feel we need it anymore. \n> So we are trying to find a less expensive solution that would have the same\n> features (or almost...). \n> \n> We are looking at less expensive tools and Open source software. We have\n> pretty much targeted a few companies and would like to know which ones would\n> be the better solution compared to Informatica. \n> \n> -Apatar \n> -Expressor \n> -Pentaho \n> -Talend \n> \n> Some are paying software, some are open source (but not free...), so i'm\n> asking you to know which is the best software on the market.\n\nTalend is a great Open Source tool and is OK with Postgres databases\n\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source http://www.ckr-solutions.com\n", "msg_date": "Sun, 16 Aug 2009 15:02:15 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less expensive proprietary or Open source ETL tools" } ]
[ { "msg_contents": "\nI'm trying to execute a query to take a row from a table, and return \nmultiple rows, one per integer in the range between two of the fields in \nthat row, for all rows in the table. Perhaps a better explanation would be \nthe query:\n\nSELECT id, objectid, bin\nFROM locationbintemp, generate_series(0, 100000) AS bin\nWHERE s <= bin AND e >= bin;\n\nNow, this query is planned as a horrendous nested loop. For each row in \nthe source table, it will iterate through 100000 rows of generate_series \nto find the couple of rows which match.\n\n QUERY PLAN\n------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..110890441.22 rows=447791333 width=12)\n Join Filter: ((locationbintemp.s <= bin.bin) AND (locationbintemp.e >= bin.bin))\n -> Seq Scan on locationbintemp (cost=0.00..62086.22 rows=4030122 width=16)\n -> Function Scan on generate_series bin (cost=0.00..12.50 rows=1000 width=4)\n(4 rows)\n\nNow, I'd like to get this done this side of Christmas, so I was wondering \nif there's a neat trick I can use to get it to only consider the rows from \ns to e, instead of having to iterate through them all. I tried this, but \ngot an error message:\n\nSELECT id, objectid, bin\nFROM locationbintemp, generate_series(s, e) AS bin;\n\nERROR: function expression in FROM cannot refer to other relations of same query level\nLINE 1: ...jectid, bin FROM locationbintemp, generate_series(s, e) AS b...\n\nAny help appreciated.\n\nMatthew\n\n-- \n If you're thinking \"Oh no, this lecturer thinks Turing Machines are a feasible\n method of computation, where's the door?\", then you are in luck. There are\n some there, there, and by the side there. Oxygen masks will not drop from the\n ceiling... -- Computer Science Lecturer\n", "msg_date": "Thu, 13 Aug 2009 15:16:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "How to run this in reasonable time:" }, { "msg_contents": "On Thu, Aug 13, 2009 at 3:16 PM, Matthew Wakeling<[email protected]> wrote:\n> Now, I'd like to get this done this side of Christmas, so I was wondering if\n> there's a neat trick I can use to get it to only consider the rows from s to\n> e, instead of having to iterate through them all. I tried this, but got an\n> error message:\n>\n> SELECT id, objectid, bin\n> FROM locationbintemp, generate_series(s, e) AS bin;\n\n\nsomething like:\n\nselect id, objectid, generate_series(s,e) as bin\n from locationbintemp\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 13 Aug 2009 15:25:31 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to run this in reasonable time:" }, { "msg_contents": "On Thu, 13 Aug 2009, Greg Stark wrote:\n> On Thu, Aug 13, 2009 at 3:16 PM, Matthew Wakeling<[email protected]> wrote:\n>> Now, I'd like to get this done this side of Christmas, so I was wondering if\n>> there's a neat trick I can use to get it to only consider the rows from s to\n>> e, instead of having to iterate through them all. I tried this, but got an\n>> error message:\n>>\n>> SELECT id, objectid, bin\n>> FROM locationbintemp, generate_series(s, e) AS bin;\n>\n> select id, objectid, generate_series(s,e) as bin\n> from locationbintemp\n\nThanks. 
That looks like it shouldn't work, but it does.\n\nMatthew\n\n-- \n\"Beware the lightning that lurketh in an undischarged capacitor, lest it\n cause thee to be bounced upon thy buttocks in a most ungentlemanly manner.\"\n -- The Ten Commandments of Electronics\n", "msg_date": "Thu, 13 Aug 2009 15:36:15 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to run this in reasonable time:" } ]
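For anyone finding this thread in the archives, here is a self-contained
sketch of the trick Greg suggests, with a small VALUES list standing in
for the real locationbintemp table:

SELECT id, objectid, generate_series(s, e) AS bin
FROM (VALUES (1, 10, 3, 5),
             (2, 20, 7, 8)) AS locationbintemp(id, objectid, s, e);

--  id | objectid | bin
-- ----+----------+-----
--   1 |       10 |   3
--   1 |       10 |   4
--   1 |       10 |   5
--   2 |       20 |   7
--   2 |       20 |   8

Because the set-returning function sits in the target list, each input row
is expanded into one output row per value of generate_series(s, e) and the
remaining columns are simply repeated, so there is no join against a
100000-row series and no nested loop to grind through.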
[ { "msg_contents": " developer came by and asked me an interesting question.\n\nIf he has a view with 20 columns in it, and he selects a specific column from the view\nin his query. Does the engine when accessing the view return all columns? or is it \nsmart enough to know to just retrive the one?\n\nexample:\n\ncreate view test as\nselect a,b,c,d,e,f,g from testtable;\n\n\nselect a from test;\n\n(does the engine retrieve b-g?)\n\nThanks\n\nDave\n", "msg_date": "Thu, 13 Aug 2009 09:07:48 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Under the hood of views" }, { "msg_contents": "David Kerr wrote:\n> \n> create view test as\n> select a,b,c,d,e,f,g from testtable;\n> \n> select a from test;\n> \n> (does the engine retrieve b-g?)\n\nShouldn't - the query just gets rewritten macro-style. I don't think it \neliminates joins if you don't need any columns, but that's not possible \nwithout a bit of analysis.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 13 Aug 2009 17:28:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Under the hood of views" }, { "msg_contents": "On Thu, Aug 13, 2009 at 05:28:01PM +0100, Richard Huxton wrote:\n- David Kerr wrote:\n- >\n- >create view test as\n- >select a,b,c,d,e,f,g from testtable;\n- >\n- >select a from test;\n- >\n- >(does the engine retrieve b-g?)\n- \n- Shouldn't - the query just gets rewritten macro-style. I don't think it \n- eliminates joins if you don't need any columns, but that's not possible \n- without a bit of analysis.\n\nPerfect, thanks!\n\nDave\n", "msg_date": "Thu, 13 Aug 2009 16:04:00 -0700", "msg_from": "David Kerr <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Under the hood of views" }, { "msg_contents": "On Fri, Aug 14, 2009 at 12:04 AM, David Kerr<[email protected]> wrote:\n> On Thu, Aug 13, 2009 at 05:28:01PM +0100, Richard Huxton wrote:\n> - David Kerr wrote:\n> - >\n> - >create view test as\n> - >select a,b,c,d,e,f,g from testtable;\n> - >\n> - >select a from test;\n> - >\n> - >(does the engine retrieve b-g?)\n> -\n> - Shouldn't - the query just gets rewritten macro-style. I don't think it\n> - eliminates joins if you don't need any columns, but that's not possible\n> - without a bit of analysis.\n\nIn the case above everything is simple enough that the planner will\ncertainly collapse everything and it'll be exactly as if you juts\nwrote the first query. In more complex cases involving LIMIT or GROUP\nBY etc that may not be true.\n\nHowever there's an underlying question here, what do you mean by\n\"retrieve\"? The database always reads the entire row from disk\nanyways. In fact it reads the whole block that the row is on.\n\nIf there are large values which have been toasted they're never\nretrieved unless you actually need their values either for some\noperation such as a function or operator or because they're in the\nfinal output to send to the client.\n\nIf you mean what is sent over the wire to the client then only the\ncolumns listed in the final select list get sent to the client.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 14 Aug 2009 00:14:00 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Under the hood of views" } ]
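A quick way to confirm the macro-style rewriting described above is to
look at the plan's target list. This sketch assumes 8.4 or later, where
EXPLAIN VERBOSE prints the output columns of every plan node; the cost and
row estimates are omitted from the expected output shown in the comments.

CREATE TABLE testtable (a int, b int, c int, d int, e int, f int, g int);
CREATE VIEW test AS SELECT a, b, c, d, e, f, g FROM testtable;

EXPLAIN VERBOSE SELECT a FROM test;
-- The view has been flattened away and only column a is projected:
--   Seq Scan on public.testtable
--     Output: testtable.a

The other six view columns never appear in the plan's output list or on
the wire to the client, although, as Greg explains, the whole heap row is
still read from its block and any TOASTed values for the unused columns
stay untouched on disk.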
[ { "msg_contents": "On Thu, 4 Jun 2009 06:57:57 -0400, Robert Haas <[email protected]> wrote\nin http://archives.postgresql.org/pgsql-performance/2009-06/msg00065.php :\n\n> I think I see the distinction you're drawing here. IIUC, you're\n> arguing that other database products use connection pooling to handle\n> rapid connect/disconnect cycles and to throttle the number of\n> simultaneous queries, but not to cope with the possibility of large\n> numbers of idle sessions. My limited understanding of why PostgreSQL\n> has a problem in this area is that it has to do with the size of the\n> process array which must be scanned to derive an MVCC snapshot. I'd\n> be curious to know if anyone thinks that's correct, or not.\n>\n> Assuming for the moment that it's correct, databases that don't use\n> MVCC won't have this problem, but they give up a significant amount of\n> scalability in other areas due to increased blocking (in particular,\n> writers will block readers). So how do other databases that *do* use\n> MVCC mitigate this problem?\n\nI apologize if it is bad form to respond to a message that is two months old,\nbut I did not see this question answered elsewhere and thought it\nwould be helpful\nto have it answered. This my rough understanding. Oracle never\n\"takes\" a snapshot,\nit computes one the fly, if and when it is needed. It maintains a\nstructure of recently\ncommitted transactions, with the XID for when they committed. If a\nprocess runs into\na tuple that is neither from the future nor from the deep past, it\nconsults this structure\nto see if that transaction has committed, and if so whether it did so before or\nafter the current query was started. The structure is partionable so\nit does not have\none global lock to serialize on, and the lock is short as it only gets\nthe info it needs, not the\nentire set of global open transactions.\n\n> The only one that we've discussed here is\n> Oracle, which seems to get around the problem by having a built-in\n> connection pooler.\n\nThere are several reasons to have something like Oracle's shared\nserver (or whatever they\ncall it now), and I don't think global serialization on snapshots is\nhigh among them, at\nleast not for Oracle. With shared server, you can (theoretically)\ncontrol memory usage so that 10,000 independent processes don't all\ndecide to do a large in-memory sort or hash join at the same time.\n\nIt is also a bit more than a standard connection pooler, because\nmultiple connections can\nbe in the middle of non-read-only transactions on the same backend at\nthe same time. I\ndon't think client-based pools allow that.\n\nJeff\n", "msg_date": "Thu, 13 Aug 2009 18:18:10 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> I apologize if it is bad form to respond to a message that is two\n> months old, but I did not see this question answered elsewhere and\n> thought it would be helpful to have it answered. This my rough\n> understanding. Oracle never \"takes\" a snapshot, it computes one the\n> fly, if and when it is needed. It maintains a structure of recently\n> committed transactions, with the XID for when they committed. If a\n> process runs into a tuple that is neither from the future nor from the\n> deep past, it consults this structure to see if that transaction has\n> committed, and if so whether it did so before or after the current\n> query was started. 
The structure is partionable so it does not have\n> one global lock to serialize on, and the lock is short as it only gets\n> the info it needs, not the entire set of global open transactions.\n\nAre you sure it's partitionable? I've been told that Oracle's\ntransaction log is a serious scalability bottleneck. (But I think\nI first heard that in 2001, so maybe they've improved it in recent\nreleases.) We know that Postgres' WAL log is a bottleneck --- check\nfor recent discussions involving XLogInsert. But the WAL log is\nonly touched by read-write transactions, whereas in Oracle even\nread-only transactions often have to go to the transaction log.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Aug 2009 19:21:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres " }, { "msg_contents": "On 14 August 2009 at 03:18 Jeff Janes <[email protected]> wrote:\n\n\n> This my rough understanding.  Oracle never\n> \"takes\" a snapshot, it computes one the fly, if and when it is needed.  It\n> maintains a\n> structure of recently committed transactions, with the XID for when they\n> committed.  If a\n> process runs into a tuple that is neither from the future nor from the deep\n> past, it\n> consults this structure to see if that transaction has committed, and if so\n> whether it did so before or\n> after the current query was started.  The structure is partionable so\n> it does not have one global lock to serialize on, and the lock is short as it\n> only gets\n> the info it needs, not the entire set of global open transactions.\n\n\nIf this is the way Oracle does it then the data structure you describe would\nneed to be updated on both transaction start and transaction commit, as well as\nbeing locked while it was read. Transaction commits would need to be serialized\nso that the commit order was maintained. \n\n\nThe Oracle structure would be read much less often, yet updated twice as often\nat snapshot point and at commit. It could be partitionable, but that would\nincrease the conflict associated with reads of the data structure.\n\n\nOracle's structure works well for an \"ideal workload\" such as TPC-C where the\ndata is physically separated and so the reads on this structure are almost nil.\nIt would work very badly on data that continuously conflicted, which may account\nfor the fact that no Oracle benchmark has been published on TPC-E. This bears\nout the experience of many Oracle DBAs, including myself. I certainly wouldn't\nassume Oracle have solved every problem.\n\n\n\nThe Postgres procarray structure is read often, yet only exclusively locked\nduring commit. As Tom said, we optimize away the lock at xid assignment and also\noptimize away many xid assignments altogether. We don't have any evidence that\nthe size of the procarray reduces the speed of reads, but we do know that the\nincreased queue length you get from having many concurrent sessions increases\ntime to record commit.\n\n\nWe might be able to do something similar to Oracle with Postgres, but it would\nrequire significant changes and much complex thought. The reason for doing so\nwould be to reduce the number of reads on the \"MVCC structure\", making mild\npartitioning more palatable. The good thing about the current Postgres structure\nis that it doesn't increase contention when accessing concurrently updated data.\n\n\nOn balance it would appear that Oracle gains a benchmark win by giving up some\nreal world usefulness. 
That's never been something anybody here has been willing\nto trade. \n\n\nFurther thought in this area could prove useful, but it seems a lower priority\nfor development simply because of the code complexity required to make this sort\nof change.\n\n\nBest Regards, Simon Riggs", "msg_date": "Sun, 16 Aug 2009 11:10:28 +0200 (CEST)", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Scalability in postgres" },
{ "msg_contents": "On Fri, Aug 14, 2009 at 4:21 PM, Tom Lane<[email protected]> wrote:\n> Jeff Janes <[email protected]> writes:\n>> I apologize if it is bad form to respond to a message that is two\n>> months old, but I did not see this question answered elsewhere and\n>> thought it would be helpful to have it answered. This my rough\n>> understanding. Oracle never \"takes\" a snapshot, it computes one the\n>> fly, if and when it is needed. It maintains a structure of recently\n>> committed transactions, with the XID for when they committed. If a\n>> process runs into a tuple that is neither from the future nor from the\n>> deep past, it consults this structure to see if that transaction has\n>> committed, and if so whether it did so before or after the current\n>> query was started. The structure is partionable so it does not have\n>> one global lock to serialize on, and the lock is short as it only gets\n>> the info it needs, not the entire set of global open transactions.\n>\n> Are you sure it's partitionable?\n\nI don't have inside knowledge, but I'm pretty sure that that structure is\npartionable. Each data block has in its header a list of in-doubt\ntransactions touching that block, and a link to where in the rollback/UNDO\nto find info on each one. The UNDO header knows that transaction's status.\n\nOf course there is still the global serialization on obtaining\nthe SCN, but the work involved in obtaining that (other than\nfighting over the lock) is constant, it doesn't increase with the number of\nbackends. Real Applications Clusters must have solved that somehow,\nI don't recall how. But I think it does make compromises, like in read\ncommitted mode a change made by another transaction might be invisible\nto your simple select statements for up to 3 seconds or so. I've never\nhad the opportunity to play with a RAC.\n\nFor all I know, the work of scanning ProcArray is trivial compared to the\nwork of obtaining the lock, even if the array is large. If I had the talent\nto write and run stand alone programs that could attach themselves to\nthe shared memory structure and then run my arbitrary code, I would\ntest that out.\n\n> I've been told that Oracle's\n> transaction log is a serious scalability bottleneck. (But I think\n> I first heard that in 2001, so maybe they've improved it in recent\n> releases.)\n\nWell, something always has to be the bottleneck. Do you know at what\ndegree of scaling that became a major issue? 
I don't think that there\nis a point of global serialization, other than taking SCNs, but if\nthere is enough pair-wise fighting, it could still add up to a lot\nof contention.\n\n> We know that Postgres' WAL log is a bottleneck --- check\n> for recent discussions involving XLogInsert.\n\nWould these two be good places for me to start looking into that:\n\nhttp://archives.postgresql.org/pgsql-hackers/2009-06/msg01205.php\nhttp://archives.postgresql.org/pgsql-hackers/2009-06/msg01019.php\n\nOr is bulk-copy (but with WAL logging) to specific to apply findings to the\ngeneral case?\n\n> But the WAL log is\n> only touched by read-write transactions, whereas in Oracle even\n> read-only transactions often have to go to the transaction log.\n\nThat's true, but any given read only transaction shouldn't have to make heavy\nuse of the transaction log just to decide if a transaction has committed\nor not. It should be able to look that up once and cache it for the rest\nof that subtran. Of course if it actually has to construct a consistent\nread from the UNDO on many different buffers due to the same interfering\ntransaction, that is more work and more contention.\n\nCheers,\n\nJeff\n", "msg_date": "Sun, 16 Aug 2009 14:55:08 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Scalability in postgres" } ]
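For anyone who wants to look at the snapshot data this thread is arguing about, PostgreSQL exposes it at the SQL level. txid_current_snapshot() returns xmin:xmax:xip_list, where xip_list is the set of transactions still in progress when the snapshot was taken -- the information a backend gathers by scanning the shared ProcArray. This is only a rough illustration of the structure, not a measurement of the locking costs discussed above; column names are as of 8.3/8.4.

-- current snapshot, e.g. 1000:1010:1002,1007
SELECT txid_current_snapshot();

-- backends whose open transactions can show up in that in-progress list
SELECT procpid, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL;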
[ { "msg_contents": "8.4 has vastly improved the warm-standby features, but it looks to me like this is still an installation-wide backup, not a per-database backup. That is, if you have (say) a couple hundred databases, and you only want warm-backup on one of them, you can't do it (except using other solutions like Slony). Is that right?\n\nThanks,\nCraig\n", "msg_date": "Fri, 14 Aug 2009 14:20:31 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Per-database warm standby?" }, { "msg_contents": "Craig James <[email protected]> writes:\n> 8.4 has vastly improved the warm-standby features, but it looks to me like this is still an installation-wide backup, not a per-database backup. That is, if you have (say) a couple hundred databases, and you only want warm-backup on one of them, you can't do it (except using other solutions like Slony). Is that right?\n\nCorrect, and that's always going to be true of any WAL-based solution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Aug 2009 18:05:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database warm standby? " }, { "msg_contents": "Tom Lane wrote:\n> Craig James <[email protected]> writes:\n> > 8.4 has vastly improved the warm-standby features, but it looks to me like this is still an installation-wide backup, not a per-database backup. That is, if you have (say) a couple hundred databases, and you only want warm-backup on one of them, you can't do it (except using other solutions like Slony). Is that right?\n> \n> Correct, and that's always going to be true of any WAL-based solution.\n\nExcept that we could create a \"WAL filter\" to restore only relevant\nstuff to particular databases ... Would that work? Of course, it would\nhave to ensure that global objects are also recovered, but we could\nsimply ignore commands for other databases.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 14 Aug 2009 18:09:49 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database warm standby?" }, { "msg_contents": "I also have a question about warm standby replication.\nWhat'd be the best solution for the system with 2 db servers (nodes), 1 \ndatabase and 10 seconds max to switch between them (ready to switch time).\nCurrently I'm using Slony, but it's kind of slow when doing subscribe \nafter failover on the failed node (database can be really huge and it \nwould take a few hours to COPY tables using Slony).\nMay be WAL replication would be better?\n\nBest regards, Nick.\n\nTom Lane wrote:\n> Craig James <[email protected]> writes:\n> \n>> 8.4 has vastly improved the warm-standby features, but it looks to me like this is still an installation-wide backup, not a per-database backup. That is, if you have (say) a couple hundred databases, and you only want warm-backup on one of them, you can't do it (except using other solutions like Slony). Is that right?\n>> \n>\n> Correct, and that's always going to be true of any WAL-based solution.\n>\n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Sat, 15 Aug 2009 12:11:14 +0400", "msg_from": "Nickolay <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Per-database warm standby?" } ]
[ { "msg_contents": "Dear users,\n\nI try to optimize the time of my Postgresql-requests, but for that, the first step,\nI of course need to get that time.\n\nI know that with:\n\nEXPLAIN ANALYSE SELECT bundesland from\n bundesland WHERE ST_Contains(the_geom, $punktgeometrie_start) AND\n ST_Contains(the_geom, $punktgeometrie_ende)\n\nI can get that time on command line.\n\nBut I would like to get it in a php-script, like\n\n$timerequest_result=pg_result($timerequest,0);\n\n(well, that does not work).\n\nI wonder: Is there another way to get the time a request needs?\nHow do you handle this?\n\nThank you very much, Kai\n\n-- \nGRATIS f�r alle GMX-Mitglieder: Die maxdome Movie-FLAT!\nJetzt freischalten unter http://portal.gmx.net/de/go/maxdome01\n", "msg_date": "Mon, 17 Aug 2009 18:38:13 +0200", "msg_from": "\"Kai Behncke\" <[email protected]>", "msg_from_op": true, "msg_subject": "Getting time of a postgresql-request" }, { "msg_contents": "Kai Behncke wrote:\n>\n> But I would like to get it in a php-script, like\n>\n> $timerequest_result=pg_result($timerequest,0);\n>\n> (well, that does not work).\n>\n> I wonder: Is there another way to get the time a request needs?\n> How do you handle this?\n> \n$time = microtime()\n$result = pg_result($query);\necho \"Time to run query and return result to PHP: \".(microtime() - $time);\n\nSomething like that.\n\nRegards\n\nRussell\n", "msg_date": "Tue, 18 Aug 2009 14:25:57 +1000", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting time of a postgresql-request" }, { "msg_contents": "On Tue, 18 Aug 2009 06:25:57 +0200, Russell Smith <[email protected]> \nwrote:\n\n> Kai Behncke wrote:\n>>\n>> But I would like to get it in a php-script, like\n>>\n>> $timerequest_result=pg_result($timerequest,0);\n>>\n>> (well, that does not work).\n>>\n>> I wonder: Is there another way to get the time a request needs?\n>> How do you handle this?\n>>\n> $time = microtime()\n> $result = pg_result($query);\n> echo \"Time to run query and return result to PHP: \".(microtime() - \n> $time);\n>\n> Something like that.\n>\n> Regards\n>\n> Russell\n>\n\nI use the following functions wich protect against SQL injections, make \nusing the db a lot easier, and log query times to display at the bottom of \nthe page.\nIt is much less cumbersome than PEAR::DB or pdo which force you to use \nprepared statements (slower if you throw them away after using them just \nonce)\n\ndb_query( \"SELECT * FROM stuff WHERE a=%s AND b=%s\", array( $a, $b ))\n\ndb_query( \"SELECT * FROM stuff WHERE id IN (%s) AND b=%s\", array( \n$list_of_ints, $b ))\n\n------------\n\nfunction db_quote_query( $sql, $params=false )\n{\n\t// if no params, send query raw\n\tif( $params === false )\treturn $sql;\n\tif( !is_array( $params )) $params = array( $params );\n\n\t// quote params\n\tforeach( $params as $key => $val )\n\t{\n\t\tif( is_array( $val ))\n\t\t\t$params[$key] = implode( ', ', array_map( intval, $val ));\n\t\telse\n\t\t\t$params[$key] = is_null($val)?'NULL':(\"'\".pg_escape_string($val).\"'\");;\n\t}\n\treturn vsprintf( $sql, $params );\n}\n\nfunction db_query( $sql, $params=false )\n{\n\t// it's already a query\n\tif( is_resource( $sql ))\n\t\treturn $sql;\n\n\t$sql = db_quote_query( $sql, $params );\n\n\t$t = getmicrotime( true );\n\tif( DEBUG > 1 )\txdump( $sql );\n\t$r = pg_query( $sql );\n\tif( !$r )\n\t{\n\t\tif( DEBUG > 1 )\n\t\t{\n\t\t\techo \"<div class=bigerror><b>Erreur PostgreSQL :</b><br \n/>\".htmlspecialchars(pg_last_error()).\"<br /><br 
/><b>Requête</b> :<br \n/>\".$sql.\"<br /><br /><b>Traceback </b>:<pre>\";\n\t\t\tforeach( debug_backtrace() as $t ) xdump( $t );\n\t\t\techo \"</pre></div>\";\n\t\t}\n\t\tdie();\n\t}\n\tif( DEBUG > 1)\txdump( $r );\n\tglobal $_global_queries_log, $_mark_query_time;\n\t$_mark_query_time = getmicrotime( true );\n\t$_global_queries_log[] = array( $_mark_query_time-$t, $sql );\n\treturn $r;\n}\n", "msg_date": "Tue, 18 Aug 2009 11:38:35 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Getting time of a postgresql-request" }, { "msg_contents": "Hi to all,\n\ni am developing a web app for thousands users (1.000/2.000).\n\nEach user have a 2 table of work...I finally have 2.000 (users) x 2 \ntables = 4.000 tables!\n\nPostgres support an elevate number of tables??\ni have problem of performance ???\n\n\nThanks\n\nSorry for my english\n", "msg_date": "Thu, 20 Aug 2009 09:01:30 +0200", "msg_from": "Fabio La Farcioli <[email protected]>", "msg_from_op": false, "msg_subject": "Number of tables" }, { "msg_contents": "Thursday, August 20, 2009, 9:01:30 AM you wrote:\n\n> i am developing a web app for thousands users (1.000/2.000).\n\n> Each user have a 2 table of work...I finally have 2.000 (users) x 2 \n> tables = 4.000 tables!\n\nIf all tables are created equal, I would rethink the design. Instead of\nusing 2 tables per user I'd use 2 tables with one column specifying the\nuser(-id).\n\nEspecially changes in table layout would require you to change up to 2000 \ntables, which is prone to errors...\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Thu, 20 Aug 2009 09:49:12 +0200", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Thu, 2009-08-20 at 09:01 +0200, Fabio La Farcioli wrote:\n\n> Each user have a 2 table of work...I finally have 2.000 (users) x 2 \n> tables = 4.000 tables!\n\nHmm, ok. Does each user really need two tables each? Why?\n\nDoes the set of tables for each user have a different structure? Or are\nyou separating them so you can give each user a separate database role\nand ownership of their own tables?\n\n\n> Postgres support an elevate number of tables??\n\nThousands? Sure.\n\n> i have problem of performance ???\n> \nYes, you probably will. There is a cost to having _lots_ of tables in\nPostgreSQL in terms of maintaining table statistics, autovacuum work,\netc. I doubt it'll be too bad at 4000 tables, but if your user numbers\nkeep growing it could become a problem.\n\nOther concerns are that it'll also be hard to maintain your design,\ndifficult to write queries that read data from more than one user, etc.\nIf you need to change the schema of your user tables you're going to\nhave to write custom tools to automate it. It could get very clumsy.\n\nInstead of one or two tables per user, perhaps you should keep the data\nin just a few tables, with a composite primary key that includes the\nuser ID. eg given the user table:\n\nCREATE TABLE user (\n id SERIAL PRIMARY KEY,\n name text\n);\n\ninstead of:\n\nCREATE TABLE user1_tablea(\n id INTEGER PRIMARY KEY,\n blah text,\n blah2 integer\n);\n\nCREATE TABLE user2_tablea(\n id INTEGER PRIMARY KEY,\n blah text,\n blah2 integer\n);\n\n... 
etc ...\n\n\nyou might write:\n\nCREATE TABLE tablea (\n user_id INTEGER REFERENCES user(id),\n id INTEGER,\n PRIMARY KEY(user_id, id),\n blah text,\n blah2 integer\n);\n\n\nYou can, of course, partition this table into blocks of user-IDs behind\nthe scenes, but your partitioning is invisible to your web app and can\nbe done solely for performance reasons. You don't have to try juggling\nall these little tables.\n\n\nNote that whether this is a good idea DOES depend on how much data\nyou're going to have. If each user table will have _lots_ of data, then\nindividual tables might be a better approach after all. It's also a\nbenefit if you do intend to give each user their own database role.\n\n--\nCraig Ringer\n\n\n", "msg_date": "Thu, 20 Aug 2009 16:15:47 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Craig Ringer ha scritto:\n> On Thu, 2009-08-20 at 09:01 +0200, Fabio La Farcioli wrote:\n> \n>> Each user have a 2 table of work...I finally have 2.000 (users) x 2 \n>> tables = 4.000 tables!\n> \n> Hmm, ok. Does each user really need two tables each? Why?\n> \n> Does the set of tables for each user have a different structure? Or are\n> you separating them so you can give each user a separate database role\n> and ownership of their own tables?\n> \nNo no...\n\n>> i have problem of performance ???\n>>\n> Yes, you probably will. There is a cost to having _lots_ of tables in\n> PostgreSQL in terms of maintaining table statistics, autovacuum work,\n> etc. I doubt it'll be too bad at 4000 tables, but if your user numbers\n> keep growing it could become a problem.\n> \nThe number of the user probably will increase with the time...\n\n> Other concerns are that it'll also be hard to maintain your design,\n> difficult to write queries that read data from more than one user, etc.\n> If you need to change the schema of your user tables you're going to\n> have to write custom tools to automate it. It could get very clumsy.\n> \nIt's true...i don't think to this problem..\n\n\n> Note that whether this is a good idea DOES depend on how much data\n> you're going to have. If each user table will have _lots_ of data, then\n> individual tables might be a better approach after all. It's also a\n> benefit if you do intend to give each user their own database role.\n\nEvery table have between 1.000 and 100.000(MAX) records...\n\nDo you think i don't have problem in performance ??\nThe user only view the record whit its user_id....\n\nI am thinking to redesign the DB\n\n", "msg_date": "Thu, 20 Aug 2009 10:35:19 +0200", "msg_from": "Fabio La Farcioli <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Fabio La Farcioli wrote:\n> i am developing a web app for thousands users (1.000/2.000).\n> \n> Each user have a 2 table of work...I finally have 2.000 (users) x 2 \n> tables = 4.000 tables!\n> \n> Postgres support an elevate number of tables??\n> i have problem of performance ???\n\nWe have run databases with over 100,000 tables with no problems.\n\nHowever, we found that it's not a good idea to have a table-per-user design. As you get more users, it is hard to maintain the database. Most of the time there are only a few users active.\n\nSo, we create a single large \"archive\" table, identical to the per-user table except that it also has a user-id column. 
When a user hasn't logged in for a few hours, a cron process copies their tables into the large archive table, and returns their personal tables to a \"pool\" of available tables.\n\nWhen the user logs back in, a hidden part of the login process gets a table from the pool of available tables, assigns it to this user, and copies the user's data from the archive into this personal table. They are now ready to work. This whole process takes just a fraction of a second for most users.\n\nWe keep a pool of about 200 tables, which automatically will expand (create more tables) if needed, but we've never had more than 200 users active at one time.\n\nCraig\n", "msg_date": "Thu, 20 Aug 2009 13:16:06 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Thu, Aug 20, 2009 at 9:16 PM, Craig James<[email protected]> wrote:\n> Fabio La Farcioli wrote:\n>>\n>> i am developing a web app for thousands users (1.000/2.000).\n>>\n>> Each user have a 2 table of work...I finally have 2.000 (users) x 2 tables\n>> = 4.000 tables!\n>>\n>> Postgres support an elevate number of tables??\n>> i have problem of performance ???\n\nWhat you want is a multi-column primary key where userid is part of\nthe key. You don't want to have a separate table for each user unless\neach user has their own unique set of columns.\n\n\n> When the user logs back in, a hidden part of the login process gets a table\n> from the pool of available tables, assigns it to this user, and copies the\n> user's  data from the archive into this personal table.  They are now ready\n> to work. This whole process takes just a fraction of a second for most\n> users.\n\nAnd what does all this accomplish?\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Thu, 20 Aug 2009 22:41:57 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Greg Stark wrote:\n> What you want is a multi-column primary key where userid is part of\n> the key. You don't want to have a separate table for each user unless\n> each user has their own unique set of columns.\n\nNot always true.\n\n>> When the user logs back in, a hidden part of the login process gets a table\n>> from the pool of available tables, assigns it to this user, and copies the\n>> user's data from the archive into this personal table. They are now ready\n>> to work. This whole process takes just a fraction of a second for most\n>> users.\n> \n> And what does all this accomplish?\n\nThe primary difference is between\n\n delete from big_table where userid = xx\n\nvesus\n\n truncate user_table\n\nThere are also significant differences in performance for large inserts, because a single-user table almost never needs indexes at all, whereas a big table for everyone has to have at least one user-id column that's indexed.\n\nIn our application, the per-user tables are \"hitlists\" -- scratch lists that are populated something like this. The hitlist is something like this:\n\n create table hitlist_xxx (\n row_id integer,\n sortorder integer default nextval('hitlist_seq_xxx')\n )\n\n\n truncate table hitlist_xxx;\n select setval(hitlist_seq_xxx, 1, false);\n insert into hitlist_xxx (row_id) (select some_id from ... where ... order by ...);\n\nOnce the hitlist is populated, the user can page through it quickly with no further searching, e.g. 
using a web app.\n\nWe tested the performance using a single large table in Postgres, and it was not nearly what we needed. These hitlists tend to be transitory, and the typical operation is to discard the entire list and create a new one. Sometimes the user will sort the entire list based on some criterion, which also requires a copy/delete/re-insert using a new order-by.\n\nWith both Oracle and Postgres, truncate is MUCH faster than delete, and the added index needed for a single large table only makes it worse. With Postgres, the repeated large delete/insert makes for tables that need a lot of vacuuming and index bloat, further hurting performance.\n\nCraig\n", "msg_date": "Thu, 20 Aug 2009 15:18:59 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Thu, Aug 20, 2009 at 11:18 PM, Craig James<[email protected]> wrote:\n> Greg Stark wrote:\n>>\n>> What you want is a multi-column primary key where userid is part of\n>> the key. You don't want to have a separate table for each user unless\n>> each user has their own unique set of columns.\n> Not always true.\n...\n> The primary difference is between\n>  delete from big_table where userid = xx\n> vesus\n>  truncate user_table\n\n\nThis is a valid point but it's a fairly special case. For most\napplications the overhead of deleting records and having to run vacuum\nwill be manageable and a small contribution to the normal vacuum\ntraffic. Assuming the above is necessary is a premature optimization\nwhich is probably unnecessary.\n\n\n> There are also significant differences in performance for large inserts,\n> because a single-user table almost never needs indexes at all, whereas a big\n> table for everyone has to have at least one user-id column that's indexed.\n\nMaintaining indexes isn't free but one index is hardly going to be a\ndealbreaker.\n\n> Once the hitlist is populated, the user can page through it quickly with no\n> further searching, e.g. using a web app.\n\nThe \"traditional\" approach to this would be a temporary table. However\nin the modern world of web applications where the user session does\nnot map directly to a database session that no longer works (well it\nnever really worked in Postgres where temporary tables are not so\nlightweight :( ).\n\nIt would be nice to have a solution to that where you could create\nlightweight temporary objects which belong to an \"application session\"\nwhich can be picked up by a different database connection each go\naround.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 21 Aug 2009 00:52:50 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Greg Stark wrote:\n\n> It would be nice to have a solution to that where you could create\n> lightweight temporary objects which belong to an \"application session\"\n> which can be picked up by a different database connection each go\n> around.\n\nIt would be useful:\n\nCREATE SCHEMA session1234 UNLOGGED\n CREATE TABLE hitlist ( ... );\n\nEach table in the \"session1234\" schema would not be WAL-logged, and\nwould be automatically dropped on crash recovery (actually the whole\nschema would be). But while the server is live it behaves like a\nregular schema/table and can be seen by all backends (i.e. 
not temp)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 20 Aug 2009 20:38:55 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Fri, Aug 21, 2009 at 1:38 AM, Alvaro\nHerrera<[email protected]> wrote:\n> Greg Stark wrote:\n>\n>> It would be nice to have a solution to that where you could create\n>> lightweight temporary objects which belong to an \"application session\"\n>> which can be picked up by a different database connection each go\n>> around.\n>\n> It would be useful:\n>\n> CREATE SCHEMA session1234 UNLOGGED\n>  CREATE TABLE hitlist ( ... );\n>\n> Each table in the \"session1234\" schema would not be WAL-logged, and\n> would be automatically dropped on crash recovery (actually the whole\n> schema would be).  But while the server is live it behaves like a\n> regular schema/table and can be seen by all backends (i.e. not temp)\n\nI don't think unlogged is the only, and perhaps not even the most\nimportant, desirable property.\n\nI would want these objects not to cause catalog churn. I might have\nthousands of sessions being created all the time and creating new rows\nand index pointers which have to be vacuumed would be a headache.\n\nI would actually want the objects to be invisible to other sessions,\nat least by default. You would have to have the handle for the\napplication session to put them into your scope and then you would get\nthem all en masse. This isn't so much for security -- I would be fine\nif there was a back door if you have the right privileges -- but for\napplication design, so application queries could use prepared plans\nwithout modifying the query to point to hard code the session\ninformation within them and be replanned.\n\nI'm not sure if they should use shared buffers or local buffers. As\nlong as only one backend at a time could access them it would be\npossible to use local buffers and evict them all when the handle is\ngiven up. But that means giving up any caching benefit across\nsessions. On the other hand it means they'll be much lighter weight\nand easier to make safely unlogged than if they lived in shared\nbuffers.\n\nThese are just some brainstorming ideas, I don't have a clear vision\nof how to achieve all this yet. This does sound a lot like the SQL\nstandard temp table discussion and I think Tom and I are still at odds\non that. Creating new catalog entries for them gives up -- what I\nthink is the whole point of their design -- their lack of DDL\noverhead. But my design above means problems for transactional\nTRUNCATE and other DDL.\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Fri, 21 Aug 2009 01:53:54 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "I think this requirement can be lumped into the category of \"right \nhammer, right nail\" instead of the \"one hammer, all nails\" category. \nThere are many memory only or disk backed memory based key value \nstores which meet your requirements like Reddis and memcached.\n\n-Jerry\n\nJerry Champlin|Absolute Performance Inc.\n\nOn Aug 20, 2009, at 5:52 PM, Greg Stark <[email protected]> wrote:\n\n> On Thu, Aug 20, 2009 at 11:18 PM, Craig James<[email protected] \n> > wrote:\n>> Greg Stark wrote:\n>>>\n>>> What you want is a multi-column primary key where userid is part of\n>>> the key. 
You don't want to have a separate table for each user \n>>> unless\n>>> each user has their own unique set of columns.\n>> Not always true.\n> ...\n>> The primary difference is between\n>> delete from big_table where userid = xx\n>> vesus\n>> truncate user_table\n>\n>\n> This is a valid point but it's a fairly special case. For most\n> applications the overhead of deleting records and having to run vacuum\n> will be manageable and a small contribution to the normal vacuum\n> traffic. Assuming the above is necessary is a premature optimization\n> which is probably unnecessary.\n>\n>\n>> There are also significant differences in performance for large \n>> inserts,\n>> because a single-user table almost never needs indexes at all, \n>> whereas a big\n>> table for everyone has to have at least one user-id column that's \n>> indexed.\n>\n> Maintaining indexes isn't free but one index is hardly going to be a\n> dealbreaker.\n>\n>> Once the hitlist is populated, the user can page through it quickly \n>> with no\n>> further searching, e.g. using a web app.\n>\n> The \"traditional\" approach to this would be a temporary table. However\n> in the modern world of web applications where the user session does\n> not map directly to a database session that no longer works (well it\n> never really worked in Postgres where temporary tables are not so\n> lightweight :( ).\n>\n> It would be nice to have a solution to that where you could create\n> lightweight temporary objects which belong to an \"application session\"\n> which can be picked up by a different database connection each go\n> around.\n>\n> -- \n> greg\n> http://mit.edu/~gsstark/resume.pdf\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 20 Aug 2009 22:27:10 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Thu, Aug 20, 2009 at 8:38 PM, Alvaro\nHerrera<[email protected]> wrote:\n> Greg Stark wrote:\n>\n>> It would be nice to have a solution to that where you could create\n>> lightweight temporary objects which belong to an \"application session\"\n>> which can be picked up by a different database connection each go\n>> around.\n>\n> It would be useful:\n>\n> CREATE SCHEMA session1234 UNLOGGED\n>  CREATE TABLE hitlist ( ... );\n>\n> Each table in the \"session1234\" schema would not be WAL-logged, and\n> would be automatically dropped on crash recovery (actually the whole\n> schema would be).  But while the server is live it behaves like a\n> regular schema/table and can be seen by all backends (i.e. not temp)\n\n+1. In fact, I don't even see why the \"unlogged\" property needs to be\na schema property. I think you could just add a table reloption.\n(There are some possible foot-gun scenarios if the option were changed\nsubsequent to table creation, so we'd either need to decide how to\ndeal with those, or decide not to allow it.)\n\n...Robert\n", "msg_date": "Sat, 22 Aug 2009 20:40:32 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Greg Stark <[email protected]> wrote:\n \n> Creating new catalog entries for [temp tables] gives up -- what I\n> think is the whole point of their design -- their lack of DDL\n> overhead.\n \nAs long as we're brainstorming... 
Would it make any sense for temp\ntables to be created as in-memory tuplestores up to the point that we\nhit the temp_buffers threshold? Creating and deleting a whole set of\ndisk files per temp table is part of what makes them so heavy. \n(There's still the issue of dealing with the catalogs, of course....)\n \n-Kevin\n", "msg_date": "Mon, 24 Aug 2009 09:29:58 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Fabio La Farcioli wrote:\n> Hi to all,\n>\n> i am developing a web app for thousands users (1.000/2.000).\n>\n> Each user have a 2 table of work...I finally have 2.000 (users) x 2 \n> tables = 4.000 tables!\n\nAs a someone with a ~50K-table database, I can tell you it's definitely \npossible to survive with such a layout :-)\n\nHowever, expect very slow (hours) pg_dump, \\dt and everything else that \nrequires reading schema information for the whole db.\n\n\nMike\n\n", "msg_date": "Mon, 31 Aug 2009 17:19:01 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "On Tue, Sep 1, 2009 at 1:19 AM, Mike Ivanov<[email protected]> wrote:\n>> i am developing a web app for thousands users (1.000/2.000).\n>>\n>> Each user have a 2 table of work...I finally have 2.000 (users) x 2 tables\n>> = 4.000 tables!\n>\n> As a someone with a ~50K-table database, I can tell you it's definitely\n> possible to survive with such a layout :-)\n\nThe usual recommendation is to have a single table (or two tables in\nthis case) with userid forming part of the primary key in addition to\nwhatever identifies the records within the user's set of data. You may\nnot expect to be need to run queries which combine multiple users'\ndata now but you will eventually.\n\nThis doesn't work so great when each user is going to be specifying\ntheir own custom schema on the fly but that's not really what\nrelational databases were designed for. For that you might want to\nlook into the hstore contrib module or something like CouchDB (which\ncan be combined with Postgres I hear)\n\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Tue, 1 Sep 2009 02:01:26 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" }, { "msg_contents": "Greg Stark wrote:\n> You may\n> not expect to be need to run queries which combine multiple users'\n> data now but you will eventually.\n> \n\nWe store cross-user data in a separate schema, which solves all *our* \nproblems.\n\n> This doesn't work so great when each user is going to be specifying\n> their own custom schema on the fly \n\nThis works fine, at least we didn't encounter any issues with that.\n\n> but that's not really what\n> relational databases were designed for. \n\nSometimes you have to.. you know, unusual things to meet some specific \nrequirements, like independent user schemas. It's not a conventional web \napp we run :-)\n\nI'm not arguing this is a bit extremal approach, but if one is forced to \ngo this path, it's quite walkable ;-)\n\nMike\n\n", "msg_date": "Mon, 31 Aug 2009 18:10:39 -0700", "msg_from": "Mike Ivanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Number of tables" } ]
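The layout suggested repeatedly in this thread -- one shared table keyed by user id, optionally split into a handful of partitions behind the scenes -- can be sketched with 8.x inheritance and constraint exclusion. All names below are made up for illustration, and note that indexes are not inherited, so each child needs its own.

CREATE TABLE user_data (
    user_id integer NOT NULL,
    id      integer NOT NULL,
    payload text
);

-- children covering blocks of user ids
CREATE TABLE user_data_0 (CHECK (user_id >= 0    AND user_id < 1000)) INHERITS (user_data);
CREATE TABLE user_data_1 (CHECK (user_id >= 1000 AND user_id < 2000)) INHERITS (user_data);

CREATE INDEX user_data_0_key ON user_data_0 (user_id, id);
CREATE INDEX user_data_1_key ON user_data_1 (user_id, id);

-- with constraint exclusion the planner skips children whose CHECK
-- constraint rules them out, so a per-user query stays cheap
SET constraint_exclusion = on;
SELECT * FROM user_data WHERE user_id = 1234;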
[ { "msg_contents": "Hi,\nI am using int8 field to pack a number of error flags. This is very common technique for large tables to pack multiple flags in one integer field.\n\nFor most records - the mt_flags field is 0. Here is the statistics (taken from pgAdmin Statistics tab for mt_flags column):\nMost common Values: {0,128,2,4,8)\nMost common Frequencies: {0.96797,0.023,0.0076,0.0005,0.00029)\n\nWhat I notice that when bit-AND function is used - Postgres significantly underestimates the amount of rows:\n\n\nexplain analyze select count(*) from mt__20090801 where mt_flags&8=0;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=83054.43..83054.44 rows=1 width=0) (actual time=2883.154..2883.154 rows=1 loops=1)\n -> Seq Scan on mt__20090801 (cost=0.00..83023.93 rows=12200 width=0) (actual time=0.008..2100.390 rows=2439435 loops=1)\n Filter: ((mt_flags & 8) = 0)\n Total runtime: 2883.191 ms\n(4 rows)\n\nThis is not an issue for the particular query above, but I noticed that due to that miscalculation in many cases Postgres chooses plan with Nested Loops for other queries. I can fix it by setting enable_nest_loops to off, but it's not something I should set for all queries.\nIs there any way to help Postgres make a better estimation for number of rows returned by bit function?\nThanks,\n-Slava Moudry, Senior DW Engineer. 4Info Inc.\n\nP.S. table definition:\n\n\\d mt__20090801\n Table \"dw.mt__20090801\"\n Column | Type | Modifiers\n--------------------------+-----------------------------+-----------\n mt_id | bigint | not null\n mt_ts | timestamp without time zone |\n ad_cost | numeric(10,5) |\n short_code | integer |\n message_id | bigint | not null\n mp_code | character(1) | not null\n al_id | integer | not null\n cust_id | integer |\n device_id | integer | not null\n broker_id | smallint |\n partner_id | integer |\n ad_id | integer |\n keyword_id | integer |\n sc_id | integer |\n cp_id | integer |\n src_alertlog_id | bigint |\n src_query_id | bigint |\n src_response_message_num | smallint |\n src_gateway_message_id | bigint |\n mt_flags | integer |\n message_length | integer | not null\n created_etl | timestamp without time zone |\nIndexes:\n \"mt_device_id__20090801\" btree (device_id) WITH (fillfactor=100), tablespace \"index2\"\n \"mt_ts__20090801\" btree (mt_ts) WITH (fillfactor=100) CLUSTER, tablespace \"index2\"\nCheck constraints:\n \"mt__20090801_mt_ts_check\" CHECK (mt_ts >= '2009-08-01 00:00:00'::timestamp without time zone AND mt_ts < '2009-08-02 00:00:00'::timestamp without time\nzone)\nInherits: mt\nTablespace: \"dw_tables3\"\n\n\n\n\n\n\n\n\n\n\n\nHi,\nI am using int8 field to pack a number of error flags. This\nis very common technique for large tables to pack multiple flags in one integer\nfield.\n \nFor most records – the mt_flags field is 0. 
Here is\nthe statistics (taken from pgAdmin Statistics tab for mt_flags column):\nMost common Values: {0,128,2,4,8)\nMost common Frequencies: {0.96797,0.023,0.0076,0.0005,0.00029)\n \nWhat I notice that when bit-AND function is used –\nPostgres significantly underestimates the amount of rows:\n \n \nexplain analyze select count(*) from mt__20090801\nwhere  mt_flags&8=0;\n                                                        \nQUERY\nPLAN                                                         \n\n-----------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=83054.43..83054.44 rows=1\nwidth=0) (actual time=2883.154..2883.154 rows=1 loops=1)\n   ->  Seq Scan on mt__20090801 \n(cost=0.00..83023.93 rows=12200\nwidth=0) (actual time=0.008..2100.390 rows=2439435\nloops=1)\n         Filter:\n((mt_flags & 8) = 0)\n Total runtime: 2883.191 ms\n(4 rows)\n \nThis is not an issue for the particular query above, but I\nnoticed that due to that miscalculation in many cases Postgres chooses plan\nwith Nested Loops for other queries. I can fix it by setting enable_nest_loops\nto off, but it's not something I should set for all queries.\nIs there any way to help Postgres make a better estimation for\nnumber of rows returned by bit function?\nThanks,\n-Slava Moudry, Senior DW Engineer. 4Info Inc.\n \nP.S. table definition:\n \n\\d\nmt__20090801\n                     \nTable \"dw.mt__20090801\"\n         \nColumn         \n|           \nType             |\nModifiers \n--------------------------+-----------------------------+-----------\n mt_id                   \n|\nbigint                     \n| not null\n mt_ts                   \n| timestamp without time zone | \n ad_cost                 \n|\nnumeric(10,5)              \n| \n short_code      \n        |\ninteger                    \n| \n message_id              \n|\nbigint                     \n| not null\n mp_code                 \n|\ncharacter(1)               \n| not null\n al_id                   \n|\ninteger                    \n| not null\n cust_id     \n            |\ninteger                    \n| \n device_id               \n|\ninteger                    \n| not null\n broker_id               \n|\nsmallint                   \n| \n partner_id              \n|\ninteger                    \n| \n ad_id                   \n|\ninteger                    \n| \n keyword_id              \n|\ninteger                    \n| \n sc_id                   \n|\ninteger                    \n| \n cp_id                   \n|\ninteger                    \n| \n src_alertlog_id   \n      |\nbigint                     \n| \n src_query_id            \n|\nbigint                     \n| \n src_response_message_num\n|\nsmallint                   \n| \n src_gateway_message_id  \n|\nbigint                     \n| \n mt_flags                \n| integer                     |\n\n message_length          \n|\ninteger                    \n| not null\n created_etl             \n| timestamp without time zone | \nIndexes:\n   \n\"mt_device_id__20090801\" btree (device_id) WITH (fillfactor=100),\ntablespace \"index2\"\n   \n\"mt_ts__20090801\" btree (mt_ts) WITH (fillfactor=100) CLUSTER,\ntablespace \"index2\"\nCheck\nconstraints:\n   \n\"mt__20090801_mt_ts_check\" CHECK (mt_ts >= '2009-08-01\n00:00:00'::timestamp without time zone AND mt_ts < '2009-08-02\n00:00:00'::timestamp without time \nzone)\nInherits:\nmt\nTablespace:\n\"dw_tables3\"", "msg_date": "Mon, 17 Aug 2009 13:07:18 -0700", "msg_from": 
"Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "number of rows estimation for bit-AND operation" }, { "msg_contents": "On Mon, Aug 17, 2009 at 2:07 PM, Slava Moudry<[email protected]> wrote:\n> Hi,\n>\n> I am using int8 field to pack a number of error flags. This is very common\n> technique for large tables to pack multiple flags in one integer field.\n>\n> For most records – the mt_flags field is 0. Here is the statistics (taken\n> from pgAdmin Statistics tab for mt_flags column):\n>\n> Most common Values: {0,128,2,4,8)\n>\n> Most common Frequencies: {0.96797,0.023,0.0076,0.0005,0.00029)\n>\n> What I notice that when bit-AND function is used – Postgres significantly\n> underestimates the amount of rows:\n>\n> explain analyze select count(*) from mt__20090801 where  mt_flags&8=0;\n>\n>                               QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n>\n>  Aggregate  (cost=83054.43..83054.44 rows=1 width=0) (actual\n> time=2883.154..2883.154 rows=1 loops=1)\n>\n>    ->  Seq Scan on mt__20090801  (cost=0.00..83023.93 rows=12200 width=0)\n> (actual time=0.008..2100.390 rows=2439435 loops=1)\n>\n>          Filter: ((mt_flags & 8) = 0)\n>\n>  Total runtime: 2883.191 ms\n>\n> (4 rows)\n>\n> This is not an issue for the particular query above, but I noticed that due\n> to that miscalculation in many cases Postgres chooses plan with Nested Loops\n> for other queries. I can fix it by setting enable_nest_loops to off, but\n> it's not something I should set for all queries.\n>\n> Is there any way to help Postgres make a better estimation for number of\n> rows returned by bit function?\n\nYou can index on the function. For instance:\n\ncreate table t (mt_flags int);\ncreate index t_mtflags_bit on t ((mt_flags&8));\ninsert into t select case when random() > 0.95 then case when random()\n>0.5 then 8 else 12 end else 0 end from generate_series(1,10000);\nanalyze t;\nexplain select * from t where mt_flags&8=8;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Scan using t_mtflags_bit on t (cost=0.00..52.17 rows=467 width=4)\n Index Cond: ((mt_flags & 8) = 8)\n(2 rows)\n\nHope that helps a little.\n", "msg_date": "Tue, 18 Aug 2009 01:08:46 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "Hi Scott,\nThank you for reply.\nI am using Postgres 8.4.0 (btw - great release --very happy about it) and I got a different plan after following your advice:\ncreate index t_mtflags_bit on staging.tmp_t ((mt_flags&8));\nanalyze staging.tmp_t;\nexplain analyze select count(*) from staging.tmp_t where mt_flags&8=0;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=89122.78..89122.79 rows=1 width=0) (actual time=2994.970..2994.971 rows=1 loops=1)\n -> Seq Scan on tmp_t (cost=0.00..83023.93 rows=2439541 width=0) (actual time=0.012..2161.886 rows=2439435 loops=1)\n Filter: ((mt_flags & 8) = 0)\n Total runtime: 2995.017 ms\n(4 rows)\n\nThe seq scan is OK, since I don't expect Postgres to use index scan for such low-selective condition.\nIt would be tough for me to support indexes for each bit flag value and their combinations. E.g. 
in the query below it is again 200x off on number of rows.\nexplain analyze select count(*) from staging.tmp_t where mt_flags&134=0;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=83054.43..83054.44 rows=1 width=0) (actual time=2964.960..2964.960 rows=1 loops=1)\n -> Seq Scan on tmp_t (cost=0.00..83023.93 rows=12200 width=0) (actual time=0.014..2152.031 rows=2362257 loops=1)\n Filter: ((mt_flags & 134) = 0)\n Total runtime: 2965.009 ms\n(4 rows)\n\nI still wonder if it's something I could/should report as a bug? I've been struggling with this issue in 8.2, 8.3.x (now using 8.4.0).\nWe can more or less work around this by disabling nestloop in our analytics queries but I have problems enforcing this in reporting applications.\nThanks,\n-Slava Moudry.\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, August 18, 2009 12:09 AM\nTo: Slava Moudry\nCc: [email protected]\nSubject: Re: [PERFORM] number of rows estimation for bit-AND operation\n\nOn Mon, Aug 17, 2009 at 2:07 PM, Slava Moudry<[email protected]> wrote:\n> Hi,\n>\n> I am using int8 field to pack a number of error flags. This is very common\n> technique for large tables to pack multiple flags in one integer field.\n>\n> For most records - the mt_flags field is 0. Here is the statistics (taken\n> from pgAdmin Statistics tab for mt_flags column):\n>\n> Most common Values: {0,128,2,4,8)\n>\n> Most common Frequencies: {0.96797,0.023,0.0076,0.0005,0.00029)\n>\n> What I notice that when bit-AND function is used - Postgres significantly\n> underestimates the amount of rows:\n>\n> explain analyze select count(*) from mt__20090801 where  mt_flags&8=0;\n>\n>                               QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n>\n>  Aggregate  (cost=83054.43..83054.44 rows=1 width=0) (actual\n> time=2883.154..2883.154 rows=1 loops=1)\n>\n>    ->  Seq Scan on mt__20090801  (cost=0.00..83023.93 rows=12200 width=0)\n> (actual time=0.008..2100.390 rows=2439435 loops=1)\n>\n>          Filter: ((mt_flags & 8) = 0)\n>\n>  Total runtime: 2883.191 ms\n>\n> (4 rows)\n>\n> This is not an issue for the particular query above, but I noticed that due\n> to that miscalculation in many cases Postgres chooses plan with Nested Loops\n> for other queries. I can fix it by setting enable_nest_loops to off, but\n> it's not something I should set for all queries.\n>\n> Is there any way to help Postgres make a better estimation for number of\n> rows returned by bit function?\n\nYou can index on the function. 
For instance:\n\ncreate table t (mt_flags int);\ncreate index t_mtflags_bit on t ((mt_flags&8));\ninsert into t select case when random() > 0.95 then case when random()\n>0.5 then 8 else 12 end else 0 end from generate_series(1,10000);\nanalyze t;\nexplain select * from t where mt_flags&8=8;\n QUERY PLAN\n--------------------------------------------------------------------------\n Index Scan using t_mtflags_bit on t (cost=0.00..52.17 rows=467 width=4)\n Index Cond: ((mt_flags & 8) = 8)\n(2 rows)\n\nHope that helps a little.\n", "msg_date": "Tue, 18 Aug 2009 14:52:11 -0700", "msg_from": "Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "2009/8/18 Slava Moudry <[email protected]>:\n> Hi Scott,\n> Thank you for reply.\n> I am using Postgres 8.4.0 (btw - great release --very happy about it) and I got a different plan after following your advice:\n\nYeah, you're returning most of the rows, so a seq scan makes sense.\nTry indexing / matching on something more uncommon and you should get\nan index scan.\n\n\n\n> The seq scan is OK, since I don't expect Postgres to use index scan for such low-selective condition.\n> It would be tough for me to support indexes for each bit flag value and their combinations. E.g. in the query below it is again 200x off on number of rows.\n\nincrease default stats target, analyze, try again.\n\n\n> explain analyze select count(*) from staging.tmp_t where  mt_flags&134=0;\n>                                                      QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=83054.43..83054.44 rows=1 width=0) (actual time=2964.960..2964.960 rows=1 loops=1)\n>   ->  Seq Scan on tmp_t  (cost=0.00..83023.93 rows=12200 width=0) (actual time=0.014..2152.031 rows=2362257 loops=1)\n>         Filter: ((mt_flags & 134) = 0)\n>  Total runtime: 2965.009 ms\n> (4 rows)\n>\n> I still wonder if it's something I could/should report as a bug? I've been struggling with this issue in 8.2, 8.3.x  (now using 8.4.0).\n> We can more or less work around this by disabling nestloop in our analytics queries but I have problems enforcing this in reporting applications.\n\nLooks more like a low stats target. Try increasing that first.\n", "msg_date": "Tue, 18 Aug 2009 15:58:29 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "> increase default stats target, analyze, try again.\nThis field has only 5 values. 
I had put values/frequencies in my first post.\nBased on the values (see below) - there is no reason for planner to think that mt_flags&134=0 should return 12200 rows.\nselect mt_flags, count(*) from staging.tmp_t group by 1;\n mt_flags | count \n----------+---------\n 128 | 57362\n 4 | 1371\n 8 | 627\n 2 | 19072\n 0 | 2361630\n(5 rows)\n\nIn fact, if I rewrite the query using value matching - the estimations are right on:\nexplain analyze select count(*) from staging.tmp_t where mt_flags not in (128,2,4);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=85878.63..85878.64 rows=1 width=0) (actual time=2904.005..2904.005 rows=1 loops=1)\n -> Seq Scan on tmp_t (cost=0.00..79973.85 rows=**2361910** width=0) (actual time=0.008..2263.983 rows=2362257 loops=1)\n Filter: (mt_flags <> ALL ('{128,2,4}'::integer[]))\n Total runtime: 2904.038 ms\n(4 rows)\n\n\nAnyways, I've been using statistics target of 100 in 8.3 and in 8.4 100 is default. I am currently using default_statistics_target=1000.\n\nDo you think that bit-and function might be skewing the statistics for execution plan somehow?\nThanks,\n-Slava.\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Tuesday, August 18, 2009 2:58 PM\nTo: Slava Moudry\nCc: [email protected]\nSubject: Re: [PERFORM] number of rows estimation for bit-AND operation\n\n2009/8/18 Slava Moudry <[email protected]>:\n> Hi Scott,\n> Thank you for reply.\n> I am using Postgres 8.4.0 (btw - great release --very happy about it) and I got a different plan after following your advice:\n\nYeah, you're returning most of the rows, so a seq scan makes sense.\nTry indexing / matching on something more uncommon and you should get\nan index scan.\n\n\n\n> The seq scan is OK, since I don't expect Postgres to use index scan for such low-selective condition.\n> It would be tough for me to support indexes for each bit flag value and their combinations. E.g. in the query below it is again 200x off on number of rows.\n\nincrease default stats target, analyze, try again.\n\n\n> explain analyze select count(*) from staging.tmp_t where  mt_flags&134=0;\n>                                                      QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n>  Aggregate  (cost=83054.43..83054.44 rows=1 width=0) (actual time=2964.960..2964.960 rows=1 loops=1)\n>   ->  Seq Scan on tmp_t  (cost=0.00..83023.93 rows=12200 width=0) (actual time=0.014..2152.031 rows=2362257 loops=1)\n>         Filter: ((mt_flags & 134) = 0)\n>  Total runtime: 2965.009 ms\n> (4 rows)\n>\n> I still wonder if it's something I could/should report as a bug? I've been struggling with this issue in 8.2, 8.3.x  (now using 8.4.0).\n> We can more or less work around this by disabling nestloop in our analytics queries but I have problems enforcing this in reporting applications.\n\nLooks more like a low stats target. Try increasing that first.\n", "msg_date": "Tue, 18 Aug 2009 15:11:20 -0700", "msg_from": "Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "2009/8/18 Slava Moudry <[email protected]>:\n>> increase default stats target, analyze, try again.\n> This field has only 5 values. I had put values/frequencies in my first post.\n\nSorry, kinda missed that. 
Anyway, there's no way for pg to know which\noperation is gonna match. Without an index on it. So my guess is\nthat it just guesses some fixed value. With an index it might be able\nto get it right, but you'll need an index for each type of match\nyou're looking for. I think. Maybe someone else on the list has a\nbetter idea.\n", "msg_date": "Tue, 18 Aug 2009 16:34:20 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "On Tue, Aug 18, 2009 at 6:34 PM, Scott Marlowe<[email protected]> wrote:\n> 2009/8/18 Slava Moudry <[email protected]>:\n>>> increase default stats target, analyze, try again.\n>> This field has only 5 values. I had put values/frequencies in my first post.\n>\n> Sorry, kinda missed that.  Anyway, there's no way for pg to know which\n> operation is gonna match.  Without an index on it.  So my guess is\n> that it just guesses some fixed value.  With an index it might be able\n> to get it right, but you'll need an index for each type of match\n> you're looking for.  I think.  Maybe someone else on the list has a\n> better idea.\n\nThe best way to handle this is probably to not cram multiple vales\ninto a single field. Just use one boolean for each flag. It won't\neven cost you any space, because right now you are using 8 bytes to\nstore 5 booleans, and 5 booleans will (I believe) only require 5\nbytes. Even if you were using enough of the bits for the space usage\nto be higher with individual booleans, the overall performance is\nlikely to be better that way.\n\nThis is sort of stating the obvious, but it doesn't make it any less\ntrue. Unfortunately, PG's selectivity estimator can't handle cases\nlike this. Tom Lane recently made some noises about trying to improve\nit, but it's not clear whether that will go anywhere, and in any event\nit won't happen before 8.5.0 comes out next spring/summer.\n\n...Robert\n", "msg_date": "Thu, 20 Aug 2009 13:55:29 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "Hi,\nYes, I thought about putting the bit-flags in separate fields.\nUnfortunately - I expect to have quite a lot of these and space is an issue when you are dealing with billions of records in fact table, so I prefer to pack them into one int8.\nFor users it's also much easier to write \"where mt_flags&134=0\" instead of \"where f_2=false and f4=false and f_128=false\".\nIn Teradata - that worked just fine, but it costs millions vs. zero cost for Postgres, so I am not really complaining out loud :)\n\nHopefully Tom or other bright folks at PG could take a look at this for the next patch/release.\nBtw, can you send me the link to \" PG's selectivity estimator\" discussion - I'd like to provide feedback if I can.\nThanks,\n-Slava.\n\n\n-----Original Message-----\nFrom: Robert Haas [mailto:[email protected]] \nSent: Thursday, August 20, 2009 10:55 AM\nTo: Scott Marlowe\nCc: Slava Moudry; [email protected]\nSubject: Re: [PERFORM] number of rows estimation for bit-AND operation\n\nOn Tue, Aug 18, 2009 at 6:34 PM, Scott Marlowe<[email protected]> wrote:\n> 2009/8/18 Slava Moudry <[email protected]>:\n>>> increase default stats target, analyze, try again.\n>> This field has only 5 values. I had put values/frequencies in my first post.\n>\n> Sorry, kinda missed that.  Anyway, there's no way for pg to know which\n> operation is gonna match.  
Without an index on it.  So my guess is\n> that it just guesses some fixed value.  With an index it might be able\n> to get it right, but you'll need an index for each type of match\n> you're looking for.  I think.  Maybe someone else on the list has a\n> better idea.\n\nThe best way to handle this is probably to not cram multiple vales\ninto a single field. Just use one boolean for each flag. It won't\neven cost you any space, because right now you are using 8 bytes to\nstore 5 booleans, and 5 booleans will (I believe) only require 5\nbytes. Even if you were using enough of the bits for the space usage\nto be higher with individual booleans, the overall performance is\nlikely to be better that way.\n\nThis is sort of stating the obvious, but it doesn't make it any less\ntrue. Unfortunately, PG's selectivity estimator can't handle cases\nlike this. Tom Lane recently made some noises about trying to improve\nit, but it's not clear whether that will go anywhere, and in any event\nit won't happen before 8.5.0 comes out next spring/summer.\n\n...Robert\n", "msg_date": "Thu, 20 Aug 2009 15:59:43 -0700", "msg_from": "Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "2009/8/20 Slava Moudry <[email protected]>:\n> Hi,\n> Yes, I thought about putting the bit-flags in separate fields.\n> Unfortunately - I expect to have quite a lot of these and space is an issue when you are dealing with billions of records in fact table, so I prefer to pack them into one int8.\n\nFor giggles I created two test tables, one with a single int, one with\n8 bools, and put 100M entries in each. The table with 8 bools took up\naprrox. 3560616 bytes, while the one with a single int took up approx.\n3544212\n\nI.e they're about the same. You should really test to see if having a\nlot of bools costs more than mangling ints around. I'm guessing I\ncould fit a lot more bools in the test table due to alignment issues\nthan just 8.\n", "msg_date": "Thu, 20 Aug 2009 19:32:02 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "On Thu, Aug 20, 2009 at 7:32 PM, Scott Marlowe<[email protected]> wrote:\n> 2009/8/20 Slava Moudry <[email protected]>:\n>> Hi,\n>> Yes, I thought about putting the bit-flags in separate fields.\n>> Unfortunately - I expect to have quite a lot of these and space is an issue when you are dealing with billions of records in fact table, so I prefer to pack them into one int8.\n>\n> For giggles I created two test tables, one with a single int, one with\n> 8 bools, and put 100M entries in each.  The table with 8 bools took up\n> aprrox. 3560616 bytes, while the one with a single int took up approx.\n> 3544212\n>\n> I.e they're about the same.  You should really test to see if having a\n> lot of bools costs more than mangling ints around.  I'm guessing I\n> could fit a lot more bools in the test table due to alignment issues\n> than just 8.\n\nSo, I made a table with 26 bool fields, and added 100M rows to it, and\nthat table took up about 5906028 bytes. So yea, the storage is\ngreater for boolean fields, but only if they aren't null. making them\nnull would save a lot of space, so if null bits fit your model, then\nit might be worth looking into. 
Certainly they're not so much bigger\nas to be unmanageable.\n", "msg_date": "Thu, 20 Aug 2009 19:58:41 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "Hi,\nSorry I don't understand how the numbers came so low.\nIf you assume that 8 boolean fields take 1 byte each, so for 100M table it \nwill be 800M bytes.\nHow did your table fit in 3560616 bytes ?\n\nUsing postgres 8.4.0 on Linux x64:\ncreate table staging.tmp_t1(a1 boolean, a2 boolean, a3 boolean, a4 boolean, \na5 boolean, a6 boolean, a7 boolean, a8 boolean) tablespace stage3;\ninsert into staging.tmp_t1 select \n1:boolean,1:boolean,1:boolean,1:boolean,1:boolean,1:boolean,1:boolean,1:boolean \nfrom generate_series(1,100000000);\nselect pg_total_relation_size('staging.tmp_t1');\n pg_total_relation_size\n------------------------\n 3,625,689,088\n(1 row)\nThe table with 16 booleans took just 766MB more, so the growth appears to be \nnon-linear.\nMost likely Postgres does some compression.. I can't tell for sure without \nlooking into source code.\n\nAnyway, given that int8 can accomodate 64 flags - the space saving can be \nsubstantial.\nThanks,\n-Slava.\n\n\n\n----- Original Message ----- \nFrom: \"Scott Marlowe\" <[email protected]>\nTo: \"Slava Moudry\" <[email protected]>\nCc: \"Robert Haas\" <[email protected]>; \n<[email protected]>\nSent: Thursday, August 20, 2009 6:58 PM\nSubject: Re: [PERFORM] number of rows estimation for bit-AND operation\n\n\nOn Thu, Aug 20, 2009 at 7:32 PM, Scott Marlowe<[email protected]> \nwrote:\n> 2009/8/20 Slava Moudry <[email protected]>:\n>> Hi,\n>> Yes, I thought about putting the bit-flags in separate fields.\n>> Unfortunately - I expect to have quite a lot of these and space is an \n>> issue when you are dealing with billions of records in fact table, so I \n>> prefer to pack them into one int8.\n>\n> For giggles I created two test tables, one with a single int, one with\n> 8 bools, and put 100M entries in each. The table with 8 bools took up\n> aprrox. 3560616 bytes, while the one with a single int took up approx.\n> 3544212\n>\n> I.e they're about the same. You should really test to see if having a\n> lot of bools costs more than mangling ints around. I'm guessing I\n> could fit a lot more bools in the test table due to alignment issues\n> than just 8.\n\nSo, I made a table with 26 bool fields, and added 100M rows to it, and\nthat table took up about 5906028 bytes. So yea, the storage is\ngreater for boolean fields, but only if they aren't null. making them\nnull would save a lot of space, so if null bits fit your model, then\nit might be worth looking into. 
Certainly they're not so much bigger\nas to be unmanageable.\n\n", "msg_date": "Fri, 21 Aug 2009 00:04:47 -0700", "msg_from": "Slava Moudry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "On Thu, Aug 20, 2009 at 9:58 PM, Scott Marlowe<[email protected]> wrote:\n> On Thu, Aug 20, 2009 at 7:32 PM, Scott Marlowe<[email protected]> wrote:\n>> 2009/8/20 Slava Moudry <[email protected]>:\n>>> Hi,\n>>> Yes, I thought about putting the bit-flags in separate fields.\n>>> Unfortunately - I expect to have quite a lot of these and space is an issue when you are dealing with billions of records in fact table, so I prefer to pack them into one int8.\n>>\n>> For giggles I created two test tables, one with a single int, one with\n>> 8 bools, and put 100M entries in each.  The table with 8 bools took up\n>> aprrox. 3560616 bytes, while the one with a single int took up approx.\n>> 3544212\n>>\n>> I.e they're about the same.  You should really test to see if having a\n>> lot of bools costs more than mangling ints around.  I'm guessing I\n>> could fit a lot more bools in the test table due to alignment issues\n>> than just 8.\n>\n> So, I made a table with 26 bool fields, and added 100M rows to it, and\n> that table took up about 5906028 bytes.  So yea, the storage is\n> greater for boolean fields, but only if they aren't null.  making them\n> null would save a lot of space, so if null bits fit your model, then\n> it might be worth looking into.  Certainly they're not so much bigger\n> as to be unmanageable.\n\nThis is a clever idea. Tables with any non-null columns have a null\nbitmap with 1 bit per field, followed by the actual values of the\nnon-null fields. So if the OP arranges to use true and null as the\nvalues instead of true and false, and uses null for the flag value\nthat is most often wanted, it will pack down pretty tight.\n\nScott, did you check whether a toast table got created here and what\nthe size of it was?\n\n...Robert\n", "msg_date": "Fri, 21 Aug 2009 16:04:23 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" }, { "msg_contents": "Robert Haas escribi�:\n\n> Scott, did you check whether a toast table got created here and what\n> the size of it was?\n\nA table with only bool columns (and, say, one int8 column) would not\nhave a toast table. Only varlena columns produce toast tables.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 21 Aug 2009 16:12:25 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: number of rows estimation for bit-AND operation" } ]
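As a footnote to this thread, here is a minimal, self-contained sketch of the true/NULL boolean layout that Scott and Robert describe above. The table and column names are hypothetical (the f_2/f_4/f_128 names just echo Slava's example), and the flag frequencies are taken from the statistics quoted earlier in the thread. The point is that flags stored as NULL when unset cost only a bit in the row's null bitmap, while the planner gets ordinary per-column statistics for each flag instead of having to guess the selectivity of a bit-AND expression.

-- Hypothetical replacement for the packed int8 mt_flags column.
create table mt_flagged (
    id     bigint,
    f_128  boolean,    -- was bit 128, frequency ~0.023
    f_2    boolean,    -- was bit 2,   frequency ~0.0076
    f_4    boolean,    -- was bit 4,   frequency ~0.0005
    f_8    boolean     -- was bit 8,   frequency ~0.00029
);

insert into mt_flagged
select g,
       case when random() < 0.023   then true end,   -- NULL when not set
       case when random() < 0.0076  then true end,
       case when random() < 0.0005  then true end,
       case when random() < 0.00029 then true end
from generate_series(1, 1000000) g;

analyze mt_flagged;

-- Each flag now has its own statistics, so the estimate for a combination
-- (the equivalent of mt_flags&134=0 above) is built from per-column
-- selectivities rather than a fixed default guess:
explain select count(*)
from mt_flagged
where f_2 is not true and f_4 is not true and f_128 is not true;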
[ { "msg_contents": "\nI'm seeing some interesting behaviour. I'm executing a query where I \nperform a merge join between two copies of the same table, completely \nsymmetrically, and the two sides of the merge are sourced differently.\n\nSELECT COUNT(*)\nFROM\n (SELECT DISTINCT\n l1.objectid,\n l1.id AS id1,\n l1.intermine_start AS start1,\n l1.intermine_end AS end1,\n l2.id AS id2,\n l2.intermine_start AS start2,\n l2.intermine_end AS end2\n FROM\n locationbin8000 l1,\n locationbin8000 l2\n WHERE\n l1.subjecttype = 'GeneFlankingRegion'\n AND l2.subjecttype = 'GeneFlankingRegion'\n AND l1.objectid = l2.objectid\n AND l1.bin = l2.bin\n ) AS a\nWHERE\n start1 <= end2\n AND start2 <= end1;\n\n QUERY PLAN\n---------------------------------------------------------\n Aggregate\n (cost=703459.72..703459.73 rows=1 width=0)\n (actual time=43673.526..43673.527 rows=1 loops=1)\n -> HashAggregate\n (cost=657324.23..677828.89 rows=2050466 width=28)\n (actual time=33741.380..42187.885 rows=17564726 loops=1)\n -> Merge Join\n (cost=130771.22..621441.07 rows=2050466 width=28)\n (actual time=456.970..15292.997 rows=21463106 loops=1)\n Merge Cond: ((l1.objectid = l2.objectid) AND (l1.bin = l2.bin))\n Join Filter: ((l1.intermine_start <= l2.intermine_end) AND (l2.intermine_start <= l1.intermine_end))\n -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l1\n (cost=0.00..72096.78 rows=670733 width=20)\n (actual time=0.085..345.834 rows=664588 loops=1)\n Index Cond: (subjecttype = 'GeneFlankingRegion'::text)\n -> Sort\n (cost=130771.22..132448.05 rows=670733 width=20)\n (actual time=456.864..3182.638 rows=38231659 loops=1)\n Sort Key: l2.objectid, l2.bin\n Sort Method: quicksort Memory: 81690kB\n -> Bitmap Heap Scan on locationbin8000 l2\n (cost=12706.60..65859.76 rows=670733 width=20)\n (actual time=107.259..271.026 rows=664588 loops=1)\n Recheck Cond: (subjecttype = 'GeneFlankingRegion'::text)\n -> Bitmap Index Scan on locationbin8000__subjecttypeid\n (cost=0.00..12538.92 rows=670733 width=0)\n (actual time=106.327..106.327 rows=664588 loops=1)\n Index Cond: (subjecttype = 'GeneFlankingRegion'::text)\n Total runtime: 44699.675 ms\n(15 rows)\n\nHere is the definition of the locationbin8000 table:\n\n Table \"public.locationbin8000\"\n Column | Type | Modifiers\n-----------------+---------+-----------\n id | integer |\n objectid | integer |\n intermine_start | integer |\n intermine_end | integer |\n subjecttype | text |\n bin | integer |\nIndexes:\n \"locationbin8000__subjectobjectbin\" btree (subjecttype, objectid, bin)\n \"locationbin8000__subjecttypeid\" btree (subjecttype, id)\n\nThe table is clustered on the locationbin8000__subjectobjectbin index, and \nhas been analysed.\n\nSo you can see, the merge join requires two inputs both ordered by \n(objectid, bin), which is readily supplied by the \nlocationbin8000__subjectobjectbin index, given that I am restricting the \nsubjecttype of both sides (to the same thing, I might add). Therefore, I \nwould expect the merge join to feed off two identical index scans. This is \nwhat happens for one of the sides of the merge join, but not the other, \neven though the sides are symmetrical.\n\nDoes anyone know why it isn't doing two index scans? 
Given that the cost \nof the index scan is half that of the alternative, I'm surprised that it \nuses this plan.\n\nI'm using Postgres 8.4.0\n\nMatthew\n\n-- \n \"Interwoven alignment preambles are not allowed.\"\n If you have been so devious as to get this message, you will understand\n it, and you deserve no sympathy. -- Knuth, in the TeXbook\n", "msg_date": "Tue, 18 Aug 2009 14:20:21 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Weird index or sort behaviour" }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> I'm seeing some interesting behaviour. I'm executing a query where I \n> perform a merge join between two copies of the same table, completely \n> symmetrically, and the two sides of the merge are sourced differently.\n\nThis is not as surprising as you think. A mergejoin is *not*\nsymmetrical between its two inputs: the inner side is subject to being\npartially rewound and rescanned when the outer side is advanced to a new\nrow with the same merge key. This means there is a premium on cheap\nrescan for the inner side that doesn't exist for the outer ... and a\nsort node is cheaper to rescan than a generic indexscan. It's\nimpossible to tell from the data you provided whether the planner was\ncorrect to pick a sort over an indexscan for the inner side, but the\nfact that it did so is not prima facie evidence of a bug. You could\nforce choice of the other plan via enable_sort = off and then compare\nestimated and actual runtimes to see if the planner got it right.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 11:05:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "On Tue, 18 Aug 2009, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> I'm seeing some interesting behaviour. I'm executing a query where I\n>> perform a merge join between two copies of the same table, completely\n>> symmetrically, and the two sides of the merge are sourced differently.\n>\n> This is not as surprising as you think. A mergejoin is *not*\n> symmetrical between its two inputs: the inner side is subject to being\n> partially rewound and rescanned when the outer side is advanced to a new\n> row with the same merge key. This means there is a premium on cheap\n> rescan for the inner side that doesn't exist for the outer ... and a\n> sort node is cheaper to rescan than a generic indexscan.\n\nVery clever. Yes, that is what is happening. I'm surprised that the system \ndoesn't buffer the inner side to avoid having to rescan each time, but \nthen I guess you would have problems if the buffer grew larger than \nmemory.\n\n> It's impossible to tell from the data you provided whether the planner \n> was correct to pick a sort over an indexscan for the inner side, but the \n> fact that it did so is not prima facie evidence of a bug. 
You could \n> force choice of the other plan via enable_sort = off and then compare \n> estimated and actual runtimes to see if the planner got it right.\n\nYes, it does get an almost unmeasureable amount slower if I force sorts \noff and nested loop (its next choice) off.\n\nMatthew\n\n-- \n $ rm core\n Segmentation Fault (core dumped)\n", "msg_date": "Tue, 18 Aug 2009 17:41:10 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> Very clever. Yes, that is what is happening. I'm surprised that the system \n> doesn't buffer the inner side to avoid having to rescan each time, but \n> then I guess you would have problems if the buffer grew larger than \n> memory.\n\nWell, it does consider adding a Materialize node for that purpose,\nbut in this case it evidently thought a sort was cheaper.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 12:49:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "I wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> Very clever. Yes, that is what is happening. I'm surprised that the system \n>> doesn't buffer the inner side to avoid having to rescan each time, but \n>> then I guess you would have problems if the buffer grew larger than \n>> memory.\n\n> Well, it does consider adding a Materialize node for that purpose,\n> but in this case it evidently thought a sort was cheaper.\n\nHmmm ... actually, after looking at the code, I notice that we only\nconsider adding a Materialize node to buffer an inner input that is a\nSort node. The idea was suggested by Greg Stark, if memory serves.\nI wonder now if it'd be worthwhile to generalize that to consider\nadding a Materialize above *any* inner mergejoin input.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 12:57:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "On Tue, Aug 18, 2009 at 5:57 PM, Tom Lane<[email protected]> wrote:\n> Hmmm ... actually, after looking at the code, I notice that we only\n> consider adding a Materialize node to buffer an inner input that is a\n> Sort node.  The idea was suggested by Greg Stark, if memory serves.\n> I wonder now if it'd be worthwhile to generalize that to consider\n> adding a Materialize above *any* inner mergejoin input.\n\nIf my recollection is right the reason we put the materialize above\nthe sort node has to do with Simon's deferred final merge pass\noptimization. The materialize was a way to lazily build the final\nmerge as we do the merge but still have the ability to rewind.\n\nI would be more curious in the poster's situation to turn off\nenable_seqscan, enable_sort, and/or enable_nestloop see how the index\nscan merge join plan runs. 
rewinding an index scan is more expensive\nthan rewinding a materialize node but would it really be so much\nexpensive that it's worth copying the entire table into temporary\nspace?\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Tue, 18 Aug 2009 18:44:21 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> If my recollection is right the reason we put the materialize above\n> the sort node has to do with Simon's deferred final merge pass\n> optimization. The materialize was a way to lazily build the final\n> merge as we do the merge but still have the ability to rewind.\n\n> I would be more curious in the poster's situation to turn off\n> enable_seqscan, enable_sort, and/or enable_nestloop see how the index\n> scan merge join plan runs. rewinding an index scan is more expensive\n> than rewinding a materialize node but would it really be so much\n> expensive that it's worth copying the entire table into temporary\n> space?\n\nAbsolutely not, but remember that what we're expecting the Materialize\nto do is buffer only as far back as the last Mark, so that it's unlikely\never to spill to disk. It might well be a win to do that rather than\nre-fetching from the indexscan. The incremental win compared to not\nhaving the materialize would be small compared to what it is for a sort,\nbut it could still be worthwhile I think. In particular, in Matthew's\nexample the sort is being estimated at significantly higher cost than\nthe indexscan, which presumably means that we are estimating there will\nbe a *lot* of re-fetches, else we wouldn't have rejected the indexscan\non the inside. Inserting a materialize would make the re-fetches\ncheaper. I'm fairly sure that this plan structure would cost out\ncheaper than the sort according to cost_mergejoin's cost model. As\nnoted in the comments therein, that cost model is a bit oversimplified,\nso it might not be cheaper in reality ... 
but we ought to try it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 13:57:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "On Tue, 18 Aug 2009, Tom Lane wrote:\n>> I would be more curious in the poster's situation to turn off\n>> enable_seqscan, enable_sort, and/or enable_nestloop see how the index\n>> scan merge join plan runs.\n\nLike this:\n\n QUERY PLAN\n-----------------------------------------------------------------------\n Aggregate\n (cost=2441719.92..2441719.93 rows=1 width=0)\n (actual time=50087.537..50087.538 rows=1 loops=1)\n -> HashAggregate\n (cost=2397366.95..2417079.38 rows=1971243 width=28)\n (actual time=40462.069..48634.713 rows=17564726 loops=1)\n -> Merge Join\n (cost=0.00..2362870.20 rows=1971243 width=28)\n (actual time=0.095..22041.693 rows=21463106 loops=1)\n Merge Cond: ((l1.objectid = l2.objectid) AND (l1.bin = l2.bin))\n Join Filter: ((l1.intermine_start <= l2.intermine_end) AND (l2.intermine_start <= l1.intermine_end))\n -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l1\n (cost=0.00..71635.23 rows=657430 width=20)\n (actual time=0.056..170.857 rows=664588 loops=1)\n Index Cond: (subjecttype = 'GeneFlankingRegion'::text)\n -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l2\n (cost=0.00..71635.23 rows=657430 width=20)\n (actual time=0.020..9594.466 rows=38231659 loops=1)\n Index Cond: (l2.subjecttype = 'GeneFlankingRegion'::text)\n Total runtime: 50864.569 ms\n(10 rows)\n\n>> rewinding an index scan is more expensive than rewinding a materialize \n>> node but would it really be so much expensive that it's worth copying \n>> the entire table into temporary space?\n>\n> Absolutely not, but remember that what we're expecting the Materialize\n> to do is buffer only as far back as the last Mark, so that it's unlikely\n> ever to spill to disk.\n\nIf that's how it works, then that sounds very promising indeed.\n\n> In particular, in Matthew's example the sort is being estimated at \n> significantly higher cost than the indexscan, which presumably means \n> that we are estimating there will be a *lot* of re-fetches, else we \n> wouldn't have rejected the indexscan on the inside.\n\nselect sum(c * c) / sum(c) from (select objectid, bin, count(*) AS c from \nlocationbin8000 where subjecttype = 'GeneFlankingRegion' GROUP BY \nobjectid, bin) as a;\n ?column?\n---------------------\n 57.5270393085641029\n\nSo on average, we will be rewinding by 57 rows each time. A materialise \nstep really does sound like a win in this situation.\n\nMatthew\n\n-- \n Patron: \"I am looking for a globe of the earth.\"\n Librarian: \"We have a table-top model over here.\"\n Patron: \"No, that's not good enough. 
Don't you have a life-size?\"\n Librarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Tue, 18 Aug 2009 19:40:18 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l1\n> (cost=0.00..71635.23 rows=657430 width=20)\n> (actual time=0.056..170.857 rows=664588 loops=1)\n> Index Cond: (subjecttype = 'GeneFlankingRegion'::text)\n> -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l2\n> (cost=0.00..71635.23 rows=657430 width=20)\n> (actual time=0.020..9594.466 rows=38231659 loops=1)\n> Index Cond: (l2.subjecttype = 'GeneFlankingRegion'::text)\n\n> ... So on average, we will be rewinding by 57 rows each time.\n\nAs indeed is reflected in those actual rowcounts. (The estimated\ncounts and costs don't include re-fetching, but the actuals do.)\n\nEven more interesting, the actual runtime is about 56x different too,\nwhich implies that Matthew's re-fetches are not noticeably cheaper than\nthe original fetches. I'd be surprised if that were true in an\nindexscan pulling from disk (you'd expect recently-touched rows to stay\ncached for awhile). But it could easily be true if the whole table were\ncached already. Matthew, how big is this table compared to your RAM?\nWere you testing a case in which it'd be in cache?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 15:09:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "On Tue, 18 Aug 2009, Tom Lane wrote:\n>> -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l1\n>> (cost=0.00..71635.23 rows=657430 width=20)\n>> (actual time=0.056..170.857 rows=664588 loops=1)\n>> Index Cond: (subjecttype = 'GeneFlankingRegion'::text)\n>> -> Index Scan using locationbin8000__subjectobjectbin on locationbin8000 l2\n>> (cost=0.00..71635.23 rows=657430 width=20)\n>> (actual time=0.020..9594.466 rows=38231659 loops=1)\n>> Index Cond: (l2.subjecttype = 'GeneFlankingRegion'::text)\n>\n>> ... So on average, we will be rewinding by 57 rows each time.\n>\n> As indeed is reflected in those actual rowcounts. (The estimated\n> counts and costs don't include re-fetching, but the actuals do.)\n>\n> Even more interesting, the actual runtime is about 56x different too,\n> which implies that Matthew's re-fetches are not noticeably cheaper than\n> the original fetches. I'd be surprised if that were true in an\n> indexscan pulling from disk (you'd expect recently-touched rows to stay\n> cached for awhile). But it could easily be true if the whole table were\n> cached already. Matthew, how big is this table compared to your RAM?\n> Were you testing a case in which it'd be in cache?\n\nOh, definitely. I have run this test so many times, it's all going to be \nin the cache. Luckily, that's what we are looking at as a normal situation \nin production. Also, since the table is clustered on that index, I would \nexpect the performance when it is out of cache to be fairly snappy anyway.\n\nFor reference, the table is 350 MB, the index is 238 MB, and the RAM in \nthe machine is 4GB (although it's my desktop so it'll have all sorts of \nother rubbish using that up). 
Our servers have 16GB to 32GB of RAM, so no \nproblem there.\n\nMatthew\n\n-- \n I'm always interested when [cold callers] try to flog conservatories.\n Anyone who can actually attach a conservatory to a fourth floor flat\n stands a marginally better than average chance of winning my custom.\n (Seen on Usenet)\n", "msg_date": "Wed, 19 Aug 2009 12:00:32 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "Matthew Wakeling <[email protected]> writes:\n> [ discussion about applying materialize to a mergejoin's inner indexscan ]\n\nI have finally gotten round to doing something about this, and applied\nthe attached patch to CVS HEAD. Could you test it on your problem case\nto see what happens? If it's not convenient to load your data into\n8.5devel, I believe the patch would work all right in 8.4.x. (A quick\ncheck shows that it applies except for one deletion hunk that has a\nconflict due to a comment change; you could easily do that deletion\nmanually.)\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 14 Nov 2009 21:50:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Weird index or sort behaviour " }, { "msg_contents": "On Sat, 14 Nov 2009, Tom Lane wrote:\n> Matthew Wakeling <[email protected]> writes:\n>> [ discussion about applying materialize to a mergejoin's inner indexscan ]\n>\n> I have finally gotten round to doing something about this, and applied\n> the attached patch to CVS HEAD. Could you test it on your problem case\n> to see what happens? If it's not convenient to load your data into\n> 8.5devel, I believe the patch would work all right in 8.4.x. (A quick\n> check shows that it applies except for one deletion hunk that has a\n> conflict due to a comment change; you could easily do that deletion\n> manually.)\n\nUm, cool. I'm going to have to look back at the archives to even work out \nwhat query I was complaining about though. May take a little while to \ntest, but I'll get back to you. Thanks,\n\nMatthew\n\n-- \n The only secure computer is one that's unplugged, locked in a safe,\n and buried 20 feet under the ground in a secret location...and i'm not\n even too sure about that one. --Dennis Huges, FBI\n", "msg_date": "Wed, 18 Nov 2009 11:39:03 +0000 (GMT)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Weird index or sort behaviour " } ]
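A quick sketch of the experiment suggested in this thread: the planner toggles can be scoped to a single transaction with SET LOCAL, so forcing the all-indexscan merge join for comparison does not disturb other sessions. The query below is a simplified variant of the one at the top of the thread (the DISTINCT/aggregate layer is dropped), so the plan shape is comparable but the numbers will not match exactly.

begin;

-- Steer the planner away from the sort-based inner side and from the
-- nested-loop alternative, for this transaction only.
set local enable_sort = off;
set local enable_nestloop = off;

explain analyze
select count(*)
from locationbin8000 l1
join locationbin8000 l2
  on  l1.objectid = l2.objectid
  and l1.bin = l2.bin
where l1.subjecttype = 'GeneFlankingRegion'
  and l2.subjecttype = 'GeneFlankingRegion'
  and l1.intermine_start <= l2.intermine_end
  and l2.intermine_start <= l1.intermine_end;

rollback;   -- SET LOCAL settings revert at end of transaction anyway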
[ { "msg_contents": "Let's take the following EXPLAIN results:\n\nticker=# explain select * from post, forum where forum.name = post.forum\nand invisible <> 1 and to_tsvector('english', message) @@\nto_tsquery('violence') order by modified desc limit\n100; \n\nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Limit (cost=5951.85..5952.10 rows=100 width=706)\n -> Sort (cost=5951.85..5955.37 rows=1408 width=706)\n Sort Key: post.modified\n -> Hash Join (cost=613.80..5898.04 rows=1408 width=706)\n Hash Cond: (post.forum = forum.name)\n -> Bitmap Heap Scan on post (cost=370.93..5635.71\nrows=1435 width=435)\n Recheck Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('violence'::text))\n Filter: (invisible <> 1)\n -> Bitmap Index Scan on idx_message \n(cost=0.00..370.57 rows=1435 width=0)\n Index Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('violence'::text))\n -> Hash (cost=242.07..242.07 rows=64 width=271)\n -> Index Scan using forum_name on forum \n(cost=0.00..242.07 rows=64 width=271)\n(12 rows)\n\nticker=#\n\nAnd\n\n\nticker=# explain select * from post, forum where forum.name = post.forum\nand invisible <> 1 and ((permission & '127') = permission) and (contrib\nis null or contrib = ' ' or contrib like '%b%') and \nto_tsvector('english', message) @@ to_tsquery('violence') order by\nmodified desc limit 100;\n \nQUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1329.81..1329.87 rows=22 width=706)\n -> Sort (cost=1329.81..1329.87 rows=22 width=706)\n Sort Key: post.modified\n -> Nested Loop (cost=978.96..1329.32 rows=22 width=706)\n -> Index Scan using forum_name on forum \n(cost=0.00..242.71 rows=1 width=271)\n Filter: (((contrib IS NULL) OR (contrib = '\n'::text) OR (contrib ~~ '%b%'::text)) AND ((permission & 127) = permission))\n -> Bitmap Heap Scan on post (cost=978.96..1086.28\nrows=27 width=435)\n Recheck Cond: ((to_tsvector('english'::text,\npost.message) @@ to_tsquery('violence'::text)) AND (post.forum =\nforum.name))\n Filter: (post.invisible <> 1)\n -> BitmapAnd (cost=978.96..978.96 rows=27 width=0)\n -> Bitmap Index Scan on idx_message \n(cost=0.00..370.57 rows=1435 width=0)\n Index Cond:\n(to_tsvector('english'::text, post.message) @@ to_tsquery('violence'::text))\n -> Bitmap Index Scan on post_forum \n(cost=0.00..607.78 rows=26575 width=0)\n Index Cond: (post.forum = forum.name)\n(14 rows)\n\nticker=#\n\n\nThe difference in these two queries is that the second qualifies the\nreturned search to check two permission blocks - one related to the\nuser's permission bit mask, and the second a mask of single-character\n\"flags\" (the user's classification must be in the list of permitted\nclassifications)\n\nOk. Notice that the top-line cost of the first query is HIGHER.\n\nThe first query runs almost instantly - average execution latency is\nfrequently in the few-hundred millisecond range.\n\nThe second query can take upward of 30 seconds (!) to run.\n\nNeither hits the disk, the machine in question has scads of free RAM\navailable, and while busy is not particularly constrained. Other\nsimultaneous users on the database are getting queries back immediately\n(no unreasonable delays).\n\nIf I remove parts of the permission tests it does not matter. If ANY of\nthose tests qualifies the returned values the performance goes in the\ntoilet. 
If I re-order when the permission tests appear (e.g. at the end\nof the search command) it makes no difference in the response time\neither (it does, however, change the EXPLAIN output somewhat, and\nthereby appears to change the query plan.\n\nWhat's going on here? I can usually figure out what's causing bad\nperformance and fix it with the judicious addition of an index or other\nsimilar thing - this one has me completely mystified.\n\n-- Karl", "msg_date": "Tue, 18 Aug 2009 14:45:11 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Query Performance - what gives?" }, { "msg_contents": "Karl Denninger <[email protected]> wrote:\n \n> Let's take the following EXPLAIN results:\n \nWe could tell a lot more from EXPLAIN ANALYZE results.\n \nThe table definitions (with index information) would help, too.\n \n-Kevin\n", "msg_date": "Tue, 18 Aug 2009 15:23:08 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Query Performance - what gives?" }, { "msg_contents": "First query:\n\n\nticker=# explain analyze select * from post, forum where forum.name =\npost.forum and invisible <> 1 and to_tsvector('english', message) @@\nto_tsquery('violence') order by modified desc limit 100;\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=5959.78..5960.03 rows=100 width=706) (actual\ntime=49.847..50.264 rows=100 loops=1)\n -> Sort (cost=5959.78..5963.30 rows=1408 width=706) (actual\ntime=49.843..49.982 rows=100 loops=1)\n Sort Key: post.modified\n Sort Method: top-N heapsort Memory: 168kB\n -> Hash Join (cost=621.72..5905.96 rows=1408 width=706)\n(actual time=4.050..41.238 rows=2055 loops=1)\n Hash Cond: (post.forum = forum.name)\n -> Bitmap Heap Scan on post (cost=370.93..5635.71\nrows=1435 width=435) (actual time=3.409..32.648 rows=2055 loops=1)\n Recheck Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('violence'::text))\n Filter: (invisible <> 1)\n -> Bitmap Index Scan on idx_message \n(cost=0.00..370.57 rows=1435 width=0) (actual time=2.984..2.984\nrows=2085 loops=1)\n Index Cond: (to_tsvector('english'::text,\nmessage) @@ to_tsquery('violence'::text))\n -> Hash (cost=249.97..249.97 rows=66 width=271) (actual\ntime=0.596..0.596 rows=64 loops=1)\n -> Index Scan using forum_name on forum \n(cost=0.00..249.97 rows=66 width=271) (actual time=0.093..0.441 rows=64\nloops=1)\n Total runtime: 50.625 ms\n(14 rows)\n\nticker=#\n\nSecond query:\n\n\n\nticker=# explain analyze select * from post, forum where forum.name =\npost.forum and invisible <> 1 and ((permission & '127') = permission)\nand (contrib is null or contrib = ' ' or contrib like '%b%') and \nto_tsvector('english', message) @@ to_tsquery('violence') order by\nmodified desc limit 100;\n \nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1337.71..1337.76 rows=21 width=706) (actual\ntime=31121.317..31121.736 rows=100 loops=1)\n -> Sort (cost=1337.71..1337.76 rows=21 width=706) (actual\ntime=31121.313..31121.452 rows=100 loops=1)\n Sort Key: post.modified\n Sort Method: top-N heapsort Memory: 168kB\n -> Nested Loop (cost=978.97..1337.25 rows=21 width=706)\n(actual time=2.841..31108.926 rows=2055 loops=1)\n -> Index Scan using forum_name on forum \n(cost=0.00..250.63 rows=1 width=271) 
(actual time=0.013..0.408 rows=63\nloops=1)\n Filter: (((contrib IS NULL) OR (contrib = '\n'::text) OR (contrib ~~ '%b%'::text)) AND ((permission & 127) = permission))\n -> Bitmap Heap Scan on post (cost=978.97..1086.28\nrows=27 width=435) (actual time=109.832..493.648 rows=33 loops=63)\n Recheck Cond: ((to_tsvector('english'::text,\npost.message) @@ to_tsquery('violence'::text)) AND (post.forum =\nforum.name))\n Filter: (post.invisible <> 1)\n -> BitmapAnd (cost=978.97..978.97 rows=27\nwidth=0) (actual time=98.832..98.832 rows=0 loops=63)\n -> Bitmap Index Scan on idx_message \n(cost=0.00..370.57 rows=1435 width=0) (actual time=0.682..0.682\nrows=2085 loops=63)\n Index Cond:\n(to_tsvector('english'::text, post.message) @@ to_tsquery('violence'::text))\n -> Bitmap Index Scan on post_forum \n(cost=0.00..607.78 rows=26575 width=0) (actual time=97.625..97.625\nrows=22616 loops=63)\n Index Cond: (post.forum = forum.name)\n Total runtime: 31122.781 ms\n(16 rows)\n\nticker=#\nticker=# \\d post\n Table \"public.post\"\n Column | Type | \nModifiers \n-----------+--------------------------+--------------------------------------------------------\n forum | text |\n number | integer |\n toppost | integer |\n views | integer | default 0\n login | text |\n subject | text |\n message | text |\n inserted | timestamp with time zone |\n modified | timestamp with time zone |\n replied | timestamp with time zone |\n who | text |\n reason | text |\n ordinal | integer | not null default\nnextval('post_ordinal_seq'::regclass)\n replies | integer | default 0\n invisible | integer |\n sticky | integer |\n ip | inet |\n lock | integer | default 0\n pinned | integer | default 0\n marked | boolean |\nIndexes:\n \"post_pkey\" PRIMARY KEY, btree (ordinal)\n \"idx_message\" gin (to_tsvector('english'::text, message))\n \"idx_subject\" gin (to_tsvector('english'::text, subject))\n \"post_forum\" btree (forum)\n \"post_getlastpost\" btree (forum, modified)\n \"post_inserted\" btree (inserted)\n \"post_login\" btree (login)\n \"post_modified\" btree (modified)\n \"post_number\" btree (number)\n \"post_order\" btree (number, inserted)\n \"post_ordinal\" btree (ordinal)\n \"post_top\" btree (toppost)\n \"post_toppost\" btree (forum, toppost, inserted)\nForeign-key constraints:\n \"forum_fk\" FOREIGN KEY (forum) REFERENCES forum(name) ON UPDATE\nCASCADE ON DELETE CASCADE\n \"login_fk\" FOREIGN KEY (login) REFERENCES usertable(login) ON UPDATE\nCASCADE ON DELETE CASCADE\nTriggers:\n _tickerforum_logtrigger AFTER INSERT OR DELETE OR UPDATE ON post FOR\nEACH ROW EXECUTE PROCEDURE _tickerforum.logtrigger('_tickerforum', '20',\n'vvvvvvvvvvvvk')\nDisabled triggers:\n _tickerforum_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON post\nFOR EACH ROW EXECUTE PROCEDURE _tickerforum.denyaccess('_tickerforum')\n\nticker=# \\d forum\n Table \"public.forum\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------\n name | text | not null\n description | text |\n long_desc | text |\n forum_type | integer |\n forum_order | integer |\n lastpost | timestamp with time zone |\n lastperson | text |\n permission | integer | default 0\n modtime | integer |\n numposts | integer | default 0\n type | integer | default 0\n readonly | integer | default 0\n moderated | integer | default 0\n flags | integer |\n rsslength | text |\n contrib | text |\n autolock | text |\n autodest | text |\n open | text |\nIndexes:\n \"forum_pkey\" PRIMARY KEY, btree (name)\n \"forum_name\" UNIQUE, btree (name)\n \"forum_order\" UNIQUE, btree 
(forum_order)\nTriggers:\n _tickerforum_logtrigger AFTER INSERT OR DELETE OR UPDATE ON forum\nFOR EACH ROW EXECUTE PROCEDURE _tickerforum.logtrigger('_tickerforum',\n'7', 'k')\nDisabled triggers:\n _tickerforum_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON forum\nFOR EACH ROW EXECUTE PROCEDURE _tickerforum.denyaccess('_tickerforum')\n\n(The triggers exist due to replication via Slony)\n\n\nKevin Grittner wrote:\n> Karl Denninger <[email protected]> wrote:\n> \n> \n>> Let's take the following EXPLAIN results:\n>> \n> \n> We could tell a lot more from EXPLAIN ANALYZE results.\n> \n> The table definitions (with index information) would help, too.\n> \n> -Kevin\n>\n>", "msg_date": "Tue, 18 Aug 2009 15:58:46 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Query Performance - what gives?" }, { "msg_contents": "Karl Denninger <[email protected]> wrote:\n> -> Index Scan using forum_name on forum \n> (cost=0.00..250.63 rows=1 width=271) (actual time=0.013..0.408\n> rows=63 loops=1)\n> Filter: (((contrib IS NULL) OR (contrib = '\n> '::text) OR (contrib ~~ '%b%'::text)) AND ((permission & 127) =\n> permission))\n \nThe biggest issue, as far as I can see, is that it thinks that the\nselection criteria on forum will limit to one row, while it really\nmatches 63 rows.\n \nYou might be able to coerce it into a faster plan with something like\nthis (untested):\n \nselect *\n from (select * from post\n where invisible <> 1\n and to_tsvector('english', message)\n @@ to_tsquery('violence')\n ) p,\n forum\n where forum.name = p.forum\n and (permission & '127') = permission\n and (contrib is null or contrib = ' ' or contrib like '%b%')\n order by modified desc\n limit 100\n;\n \n-Kevin\n", "msg_date": "Tue, 18 Aug 2009 16:59:22 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Query Performance - what gives?" 
}, { "msg_contents": "Kevin Grittner wrote:\n> Karl Denninger <[email protected]> wrote:\n> \n>> -> Index Scan using forum_name on forum \n>> (cost=0.00..250.63 rows=1 width=271) (actual time=0.013..0.408\n>> rows=63 loops=1)\n>> Filter: (((contrib IS NULL) OR (contrib = '\n>> '::text) OR (contrib ~~ '%b%'::text)) AND ((permission & 127) =\n>> permission))\n>> \n> \n> The biggest issue, as far as I can see, is that it thinks that the\n> selection criteria on forum will limit to one row, while it really\n> matches 63 rows.\n> \n> You might be able to coerce it into a faster plan with something like\n> this (untested):\n> \n> select *\n> from (select * from post\n> where invisible <> 1\n> and to_tsvector('english', message)\n> @@ to_tsquery('violence')\n> ) p,\n> forum\n> where forum.name = p.forum\n> and (permission & '127') = permission\n> and (contrib is null or contrib = ' ' or contrib like '%b%')\n> order by modified desc\n> limit 100\n> ;\n> \n> -Kevin\n> \n\nThat didn't help.\n\nThe FTS alone returns 2,000 records on that table, and does so VERY quickly:\n\nticker=# explain analyze select count(ordinal) from post, forum where\npost.forum=forum.name and invisible <> 1\n and to_tsvector('english', message)\n @@ to_tsquery('violence');\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=5901.57..5901.58 rows=1 width=4) (actual\ntime=17.492..17.494 rows=1 loops=1)\n -> Hash Join (cost=613.80..5898.04 rows=1408 width=4) (actual\ntime=1.436..14.620 rows=2056 loops=1)\n Hash Cond: (post.forum = forum.name)\n -> Bitmap Heap Scan on post (cost=370.93..5635.71 rows=1435\nwidth=14) (actual time=1.123..7.944 rows=2056 loops=1)\n Recheck Cond: (to_tsvector('english'::text, message) @@\nto_tsquery('violence'::text))\n Filter: (invisible <> 1)\n -> Bitmap Index Scan on idx_message (cost=0.00..370.57\nrows=1435 width=0) (actual time=0.738..0.738 rows=2099 loops=1)\n Index Cond: (to_tsvector('english'::text, message)\n@@ to_tsquery('violence'::text))\n -> Hash (cost=242.07..242.07 rows=64 width=9) (actual\ntime=0.300..0.300 rows=64 loops=1)\n -> Index Scan using forum_name on forum \n(cost=0.00..242.07 rows=64 width=9) (actual time=0.011..0.182 rows=64\nloops=1)\n Total runtime: 17.559 ms\n(11 rows)\n\nticker=#\n\nOk, but now when we check the permission mask....\n\n\nticker=# explain analyze select count(ordinal) from post, forum where\npost.forum=forum.name and invisible <> 1\n and to_tsvector('english', message)\n @@ to_tsquery('violence') and (permission & 4 = permission);\n \nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1329.07..1329.08 rows=1 width=4) (actual\ntime=29819.293..29819.295 rows=1 loops=1)\n -> Nested Loop (cost=978.97..1329.01 rows=22 width=4) (actual\ntime=2.575..29815.530 rows=2056 loops=1)\n -> Index Scan using forum_name on forum (cost=0.00..242.39\nrows=1 width=13) (actual time=0.016..0.355 rows=62 loops=1)\n Filter: ((permission & 4) = permission)\n -> Bitmap Heap Scan on post (cost=978.97..1086.28 rows=27\nwidth=14) (actual time=97.997..480.746 rows=33 loops=62)\n Recheck Cond: ((to_tsvector('english'::text,\npost.message) @@ to_tsquery('violence'::text)) AND (post.forum =\nforum.name))\n Filter: (post.invisible <> 1)\n -> BitmapAnd (cost=978.97..978.97 rows=27 width=0)\n(actual time=91.106..91.106 rows=0 loops=62)\n -> Bitmap 
Index Scan on idx_message \n(cost=0.00..370.57 rows=1435 width=0) (actual time=0.680..0.680\nrows=2099 loops=62)\n Index Cond: (to_tsvector('english'::text,\npost.message) @@ to_tsquery('violence'::text))\n -> Bitmap Index Scan on post_forum \n(cost=0.00..607.78 rows=26575 width=0) (actual time=89.927..89.927\nrows=22980 loops=62)\n Index Cond: (post.forum = forum.name)\n Total runtime: 29819.376 ms\n(13 rows)\n\nticker=#\n\nThe problem appearsa to lie in the \"nested loop\", and I don't understand\nwhy that's happening. Isn't a **LINEAR** check on each returned value\n(since we do the aggregate first?) sufficient? Why is the query planner\ncreating a nested loop - the aggregate contains the tested field and it\nis not subject to change once aggregated?!", "msg_date": "Tue, 18 Aug 2009 18:03:44 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Query Performance - what gives?" }, { "msg_contents": "Karl Denninger <[email protected]> writes:\n> The problem appearsa to lie in the \"nested loop\", and I don't understand\n> why that's happening.\n\nIt looks to me like there are several issues here.\n\nOne is the drastic underestimate of the number of rows satisfying the\npermission condition. That leads the planner to think that a nestloop\njoin with the other table will be fast, which is only right if there are\njust one or a few rows coming out of \"forum\". With sixty-some rows you\nget sixty-some repetitions of the scan of the other table, which loses.\n\nProblem number two is the overeager use of a BitmapAnd to add on another\nindex that isn't really very selective. That might be a correct\ndecision but it looks fishy here. We rewrote choose_bitmap_and a couple\nof times to try to fix that problem ... what PG version is this exactly?\n\nThe third thing that looks fishy is that it's using unqualified index\nscans for no apparent reason. Have you got enable_seqscan turned off,\nand if so what happens when you fix that? What other nondefault planner\nsettings are you using?\n\nBut anyway, the big problem seems to be poor selectivity estimates for \nconditions like \"(permission & 127) = permission\". I have bad news for\nyou: there is simply no way in the world that Postgres is not going to\nsuck at estimating that, because the planner has no knowledge whatsoever\nof the behavior of \"&\". You could consider writing and submitting a\npatch that would teach it something about that, but in the near term\nit would be a lot easier to reconsider your representation of\npermissions. You'd be likely to get significantly better results,\nnot to mention have more-readable queries, if you stored them as a group\nof simple boolean columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Aug 2009 22:02:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] SQL Query Performance - what gives? " }, { "msg_contents": "Tom Lane wrote:\n> Karl Denninger <[email protected]> writes:\n> \n>> The problem appearsa to lie in the \"nested loop\", and I don't understand\n>> why that's happening.\n>> \n> It looks to me like there are several issues here.\n>\n> One is the drastic underestimate of the number of rows satisfying the\n> permission condition. That leads the planner to think that a nestloop\n> join with the other table will be fast, which is only right if there are\n> just one or a few rows coming out of \"forum\". 
With sixty-some rows you\n> get sixty-some repetitions of the scan of the other table, which loses.\n> \n\"Loses\" isn't quite the right word... :)\n> Problem number two is the overeager use of a BitmapAnd to add on another\n> index that isn't really very selective. That might be a correct\n> decision but it looks fishy here. We rewrote choose_bitmap_and a couple\n> of times to try to fix that problem ... what PG version is this exactly?\n> \n$ psql ticker\nWelcome to psql 8.3.6, the PostgreSQL interactive terminal.\n\n> The third thing that looks fishy is that it's using unqualified index\n> scans for no apparent reason. Have you got enable_seqscan turned off,\n> and if so what happens when you fix that? What other nondefault planner\n> settings are you using?\n> \nNone; here is the relevant section of the postgresql.conf file:\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\n#random_page_cost = 4.0 # same scale as above\n#cpu_tuple_cost = 0.01 # same scale as above\n#cpu_index_tuple_cost = 0.005 # same scale as above\n#cpu_operator_cost = 0.0025 # same scale as above\n#effective_cache_size = 128MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\ndefault_statistics_target = 100 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOIN clauses\n\nAll commented out - nothing set to non-defaults, other than the default\nstatistics target.\n> But anyway, the big problem seems to be poor selectivity estimates for \n> conditions like \"(permission & 127) = permission\". I have bad news for\n> you: there is simply no way in the world that Postgres is not going to\n> suck at estimating that, because the planner has no knowledge whatsoever\n> of the behavior of \"&\". You could consider writing and submitting a\n> patch that would teach it something about that, but in the near term\n> it would be a lot easier to reconsider your representation of\n> permissions. You'd be likely to get significantly better results,\n> not to mention have more-readable queries, if you stored them as a group\n> of simple boolean columns.\n>\n> \t\t\tregards, tom lane\n> \nUgh.\n\nThe issue here is that the permission structure is quite extensible by\nthe users of the code; there are defined bits (Bit 4, for example, means\nthat the user is an \"ordinary user\" and has a login account) but the\nupper bits are entirely administrator-defined and may vary from one\ninstallation to another (and do)\n\nThe bitmask allows the setting of multiple permissions but the table\ndefinition doesn't have to change (well, so long as the bits fit into a\nword!) 
Finally, this is a message forum - the actual code itself is\ntemplate-driven and the bitmask permission structure is ALL OVER the\ntemplates; getting that out of there would be a really nasty rewrite,\nnot to mention breaking the user (non-developer, but owner)\nextensibility of the current structure.\n\nIs there a way to TELL the planner how to deal with this, even if it\nmakes the SQL non-portable or is a hack on the source mandatory?\n\nFor the particular instance where this came up it won't be murderous to\nomit the bitmask check from the query, as there are no \"owner/moderator\nonly\" sub-forums (the one place where not checking that would bite HARD\nas it would allow searches of \"hidden\" content by ordinary users.) \nHowever, there are other installations where this will be a bigger deal;\nI can in the immediate term put that query into the config file (instead\nof hard-coding it) so for people who can't live with the performance\nthey can make the tradeoff decision.\n\n\n-- Karl", "msg_date": "Tue, 18 Aug 2009 21:47:57 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] SQL Query Performance - what gives?" }, { "msg_contents": "\n> The bitmask allows the setting of multiple permissions but the table\n> definition doesn't have to change (well, so long as the bits fit into a\n> word!) Finally, this is a message forum - the actual code itself is\n> template-driven and the bitmask permission structure is ALL OVER the\n> templates; getting that out of there would be a really nasty rewrite,\n> not to mention breaking the user (non-developer, but owner)\n> extensibility of the current structure.\n>\n> Is there a way to TELL the planner how to deal with this, even if it\n> makes the SQL non-portable or is a hack on the source mandatory?\n\n\tYou could use an integer array instead of a bit mask, make a gist index \non it, and instead of doing \"mask & xxx\" do \"array contains xxx\", which is \nindexable with gist. The idea is that it can get much better row \nestimation. Instead of 1,2,3, you can use 1,2,4,8, etc if you like. you'd \nprobably need a function to convert a bitmask into ints and another to do \nthe conversion back, so the rest of your app gets the expected bitmasks. \nOr add a bitmask type to postgres with ptoper statistics...\n", "msg_date": "Wed, 19 Aug 2009 08:27:52 +0200", "msg_from": "=?utf-8?Q?Pierre_Fr=C3=A9d=C3=A9ric_Caillau?= =?utf-8?Q?d?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] SQL Query Performance - what gives?" }, { "msg_contents": "Karl Denninger wrote:\n\n> The bitmask allows the setting of multiple permissions but the table\n> definition doesn't have to change (well, so long as the bits fit into a\n> word!) Finally, this is a message forum - the actual code itself is\n> template-driven and the bitmask permission structure is ALL OVER the\n> templates; getting that out of there would be a really nasty rewrite,\n> not to mention breaking the user (non-developer, but owner)\n> extensibility of the current structure.\n> \n> Is there a way to TELL the planner how to deal with this, even if it\n> makes the SQL non-portable or is a hack on the source mandatory?\n\nYou could maybe create function indexes for common bitmap operations; \nfor example if it's common to check a single bit you could create 32 \nindexes, on (field & 1), (field & 2), (field & 4), etc. 
You could also \nmaybe extend this so if you need to query multiple bits you decompose \nthem into individual single-bit queries, e.g. instead of (field & 3) you \ndo ((field & 1) and (field & 2)).\n\nI suppose there will be a break-even point in complexity before which \nthe above approach will be very slow, but after it, it should scale better \nthan the alternative.\n\n", "msg_date": "Wed, 19 Aug 2009 12:19:17 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Query Performance - what gives?" } ]
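Editor's note (an illustrative sketch, not part of the archived thread): the integer-array alternative suggested above might look roughly like this on 8.3. The forum table and permission column come from the thread; the perm_bits column and index name are hypothetical. An array of set bit positions gives the planner real statistics and an indexable containment test, which "(permission & mask) = permission" cannot.

-- hypothetical names; decompose the existing bitmask into the array of set bit positions
ALTER TABLE forum ADD COLUMN perm_bits int[];
UPDATE forum
   SET perm_bits = ARRAY(SELECT b FROM generate_series(0,31) b
                          WHERE ((permission >> b) & 1) = 1);
CREATE INDEX forum_perm_bits_gin ON forum USING gin (perm_bits);
-- "the forum's required bits are a subset of the user's mask" then becomes
--   WHERE perm_bits <@ ARRAY[ ...bit positions the user's mask allows... ]
-- which GIN (or contrib/intarray GiST) can both index and estimate far better than the & test.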
[ { "msg_contents": "I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.\nPG version is 8.3.7, compiled as 64bit.\nThe memory is 8GB.\nIt's a 2 x Dual Core Intel 5310.\nHard disks are Raid 1, SCSI 15 rpm.\n\nThe server is running just one website. So there's Apache 2.2.11,\nMySQL (for some small tasks, almost negligible).\n\nAnd then there's PG, which in the \"top\" command shows up as the main beast.\n\nMy server load is going to 64, 63, 65, and so on.\n\nWhere should I start debugging? What should I see? TOP command does\nnot yield anything meaningful. I mean, even if it shows that postgres\nuser for \"postmaster\" and nobody user for \"httpd\" (apache) are the\nmain resource hogs, what should I start with in terms of debugging?\n", "msg_date": "Wed, 19 Aug 2009 21:33:20 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "PG 8.3 and server load" }, { "msg_contents": "Phoenix Kiula wrote:\n> I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.\n> PG version is 8.3.7, compiled as 64bit.\n> The memory is 8GB.\n> It's a 2 x Dual Core Intel 5310.\n> Hard disks are Raid 1, SCSI 15 rpm.\n> \n> The server is running just one website. So there's Apache 2.2.11,\n> MySQL (for some small tasks, almost negligible).\n> \n> And then there's PG, which in the \"top\" command shows up as the main beast.\n> \n> My server load is going to 64, 63, 65, and so on.\n> \n> Where should I start debugging? What should I see? TOP command does\n> not yield anything meaningful. I mean, even if it shows that postgres\n> user for \"postmaster\" and nobody user for \"httpd\" (apache) are the\n> main resource hogs, what should I start with in terms of debugging?\n\nIf postgres or apache are the reason for the high load, it means you \nhave lots of simultaneous users hitting either server.\n\nThe only thing you can do (except of course denying service to the \nusers) is investigate which requests / queries take the most time and \noptimize them.\n\npgtop (http://pgfoundry.org/projects/pgtop/) might help you see what is \nyour database doing. You will also probably need to use something like \npqa (http://pqa.projects.postgresql.org/) to find top running queries.\n\nUnfortunately, if you cannot significantly optimize your queries, there \nis not much else you can do with the hardware you have.\n\n", "msg_date": "Wed, 19 Aug 2009 15:53:44 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Ivan Voras <ivoras 'at' freebsd.org> writes:\n\n> pgtop (http://pgfoundry.org/projects/pgtop/) might help you see what\n> is your database doing.\n\nA simpler (but most probably less powerful) method would be to\nactivate \"stats_command_string = on\" in the server configuration,\nthen issue that query to view the currently running queries:\n\nSELECT procpid, datname, current_query, query_start FROM pg_stat_activity WHERE current_query <> '<IDLE>'\n\nThat may also be interesting.\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Wed, 19 Aug 2009 16:00:34 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Phoenix Kiula wrote:\n> I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.\n> PG version is 8.3.7, compiled as 64bit.\n> The memory is 8GB.\n> It's a 2 x Dual Core Intel 5310.\n> Hard disks are Raid 1, SCSI 15 rpm.\n> \n> The server is running just one website. 
So there's Apache 2.2.11,\n> MySQL (for some small tasks, almost negligible).\n> \n> And then there's PG, which in the \"top\" command shows up as the main beast.\n> \n> My server load is going to 64, 63, 65, and so on.\n> \n> Where should I start debugging? What should I see? TOP command does\n> not yield anything meaningful. I mean, even if it shows that postgres\n> user for \"postmaster\" and nobody user for \"httpd\" (apache) are the\n> main resource hogs, what should I start with in terms of debugging?\n> \n\n1) check if you are using swap space. Use free and make sure swap/used \nis a small number. Check vmstat and see if swpd is moving up and down. \n (Posting a handful of lines from vmstat might help us).\n\n2) check 'ps ax|grep postgres' and make sure nothing says \"idle in \ntransaction\"\n\n3) I had a web box where the number of apache clients was set very high, \nand the box was brought to its knees by the sheer number of connections. \n check \"ps ax|grep http|wc --lines\" and make sure its not too big. \n(perhaps less than 100)\n\n-Andy\n", "msg_date": "Wed, 19 Aug 2009 09:01:25 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Andy Colson wrote:\n> Phoenix Kiula wrote:\n>> I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.\n>> PG version is 8.3.7, compiled as 64bit.\n>> The memory is 8GB.\n>> It's a 2 x Dual Core Intel 5310.\n>> Hard disks are Raid 1, SCSI 15 rpm.\n>>\n>> The server is running just one website. So there's Apache 2.2.11,\n>> MySQL (for some small tasks, almost negligible).\n>>\n>> And then there's PG, which in the \"top\" command shows up as the main\n>> beast.\n>>\n>> My server load is going to 64, 63, 65, and so on.\n>>\n>> Where should I start debugging? What should I see? TOP command does\n>> not yield anything meaningful. I mean, even if it shows that postgres\n>> user for \"postmaster\" and nobody user for \"httpd\" (apache) are the\n>> main resource hogs, what should I start with in terms of debugging?\n>>\n>\n> 1) check if you are using swap space. Use free and make sure\n> swap/used is a small number. Check vmstat and see if swpd is moving\n> up and down. (Posting a handful of lines from vmstat might help us).\n>\n> 2) check 'ps ax|grep postgres' and make sure nothing says \"idle in\n> transaction\"\n>\n> 3) I had a web box where the number of apache clients was set very\n> high, and the box was brought to its knees by the sheer number of\n> connections. check \"ps ax|grep http|wc --lines\" and make sure its not\n> too big. (perhaps less than 100)\n>\n> -Andy\n>\nI will observe that in some benchmark tests I've done on my application\n(a VERY heavy Postgres user) CentOS was RADICALLY inferior in terms of\ncarrying capacity and performance to FreeBSD on the same hardware.\n\nI have no idea why - you wouldn't expect this sort of result, but it is\nwhat it is. The test platform in my case was a Core i7 box (8 cores\nSMP) with 6GB of memory running 64-bit code across the board. 
Disks\nwere on a 3Ware coprocessor board.\n\nI was quite surprised by this given that in general CentOS seems to be\ncomparable for base Apache (web service) use to FreeBSD, but due to this\nrecommend strongly in favor of FreeBSD for applications where web\nservice + PostgreSQL are the intended application mix.\n\n-- Karl", "msg_date": "Wed, 19 Aug 2009 10:17:51 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Phoenix Kiula wrote:\n> Thanks, but swap is not changing, there is no idle transaction, and\n> number of connections are 28/29.\n> \n> Here are some command line stamps...any other ideas?\n> \n> \n> \n> [MYSITE] ~ > date && vmstat\n> Wed Aug 19 10:00:37 CDT 2009\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 3 1 20920 25736 60172 7594988 0 0 74 153 0 3 10 5 74 12\n> \n> [MYSITE] ~ > date && vmstat\n> Wed Aug 19 10:00:40 CDT 2009\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 20920 34696 60124 7593996 0 0 74 153 0 3 10 5 74 12\n> \n> [MYSITE] ~ > ps ax|grep postgres\n> 25302 ? Ss 0:00 postgres: logger process\n> 25352 ? Ss 0:07 postgres: writer process\n> 25353 ? Ss 4:21 postgres: stats collector process\n> 23483 ? Ds 0:00 postgres: snipurl_snipurl snipurl\n> 127.0.0.1(51622) UPDATE\n> 23485 pts/12 S+ 0:00 grep postgres\n> \n> [MYSITE] ~ > date && vmstat\n> Wed Aug 19 10:00:55 CDT 2009\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 0 20920 49464 60272 7597748 0 0 74 153 0 3 10 5 74 12\n> \n> [MYSITE] ~ > ps ax|grep http|wc --lines\n> 28\n> \n> [MYSITE] ~ > ps ax|grep http|wc --lines\n> 29\n> \n> [MYSITE] ~ > ps ax|grep postgres\n> 25302 ? Ss 0:00 postgres: logger process\n> 25352 ? Ss 0:07 postgres: writer process\n> 25353 ? Ss 4:21 postgres: stats collector process\n> 24718 pts/12 S+ 0:00 grep postgres\n> \n> [MYSITE] ~ > date && vmstat\n> Wed Aug 19 10:01:23 CDT 2009\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 0 20920 106376 59220 7531016 0 0 74 153 0 3 10 5 74 12\n> \n> \n> \n> \n> On Wed, Aug 19, 2009 at 10:01 PM, Andy Colson<[email protected]> wrote:\n>> Phoenix Kiula wrote:\n>>> I'm on a CentOS 5 OS 64 bit, latest kernel and all of that.\n>>> PG version is 8.3.7, compiled as 64bit.\n>>> The memory is 8GB.\n>>> It's a 2 x Dual Core Intel 5310.\n>>> Hard disks are Raid 1, SCSI 15 rpm.\n>>>\n>>> The server is running just one website. So there's Apache 2.2.11,\n>>> MySQL (for some small tasks, almost negligible).\n>>>\n>>> And then there's PG, which in the \"top\" command shows up as the main\n>>> beast.\n>>>\n>>> My server load is going to 64, 63, 65, and so on.\n>>>\n>>> Where should I start debugging? What should I see? TOP command does\n>>> not yield anything meaningful. I mean, even if it shows that postgres\n>>> user for \"postmaster\" and nobody user for \"httpd\" (apache) are the\n>>> main resource hogs, what should I start with in terms of debugging?\n>>>\n>> 1) check if you are using swap space. Use free and make sure swap/used is a\n>> small number. Check vmstat and see if swpd is moving up and down. 
(Posting\n>> a handful of lines from vmstat might help us).\n>>\n>> 2) check 'ps ax|grep postgres' and make sure nothing says \"idle in\n>> transaction\"\n>>\n>> 3) I had a web box where the number of apache clients was set very high, and\n>> the box was brought to its knees by the sheer number of connections. check\n>> \"ps ax|grep http|wc --lines\" and make sure its not too big. (perhaps less\n>> than 100)\n>>\n>> -Andy\n>>\n>>\n\nthe first line of vmstat is an average since bootup. Kinda useless. \nrun it as: 'vmstat 4'\n\nit will print a line every 4 seconds, which will be a summary of \neverything that happened in the last 4 seconds.\n\nsince boot, you've written out an average of 153 blocks (the bo column). \n Thats very small, so your not io bound.\n\nbut... you have average 74% idle cpu. So your not cpu bound either?\n\nAhh? I'm not sure what that means. Maybe I'm reading something wrong?\n\n-Andy\n", "msg_date": "Wed, 19 Aug 2009 10:25:27 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "On Wed, Aug 19, 2009 at 11:25 PM, Andy Colson<[email protected]> wrote:\n\n....<snip>.....\n\n\n>\n> the first line of vmstat is an average since bootup.  Kinda useless. run it\n> as:  'vmstat 4'\n>\n> it will print a line every 4 seconds, which will be a summary of everything\n> that happened in the last 4 seconds.\n>\n> since boot, you've written out an average of 153 blocks (the bo column).\n>  Thats very small, so your not io bound.\n>\n> but... you have average 74% idle cpu.  So your not cpu bound either?\n>\n> Ahh?  I'm not sure what that means.  Maybe I'm reading something wrong?\n>\n> -Andy\n>\n\n\n\n\n~ > vmstat 4\nprocs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 2 16128 35056 62800 7697428 0 0 74 153 0 3 10 5 74 12\n 0 0 16128 38256 62836 7698172 0 0 166 219 1386 1440 7 4 85 4\n 0 1 16128 34704 62872 7698916 0 0 119 314 1441 1589 7 4 85 5\n 0 0 16128 29544 62912 7699396 0 0 142 144 1443 1418 6 3 88 2\n 7 1 16128 26784 62832 7692196 0 0 343 241 1492 1671 8 5 83 4\n 0 0 16128 32840 62880 7693188 0 0 253 215 1459 1511 7 4 85 4\n 0 0 16128 30112 62940 7693908 0 0 187 216 1395 1282 6 3 87 4\n", "msg_date": "Wed, 19 Aug 2009 23:29:45 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Andy Colson <[email protected]> wrote:\n> Phoenix Kiula wrote:\n \n>>>> It's a 2 x Dual Core Intel 5310.\n \n> you have average 74% idle cpu. So your not cpu bound either?\n \nOr one CPU is pegged and the other three are idle....\n \n-Kevin\n", "msg_date": "Wed, 19 Aug 2009 10:33:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Kevin Grittner wrote:\n> Andy Colson <[email protected]> wrote:\n>> Phoenix Kiula wrote:\n> \n>>>>> It's a 2 x Dual Core Intel 5310.\n> \n>> you have average 74% idle cpu. So your not cpu bound either?\n> \n> Or one CPU is pegged and the other three are idle....\n> \n> -Kevin\n\nAhh, yeah...\n\nPhoenix: run top again, and hit the '1' key. It'll show you stats for \neach cpu. 
Is one pegged and the others idle?\n\n\ndo a 'cat /proc/cpuinfo' and make sure your os is seeing all your cpus.\n\n-Andy\n", "msg_date": "Wed, 19 Aug 2009 10:37:11 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "On Wed, 19 Aug 2009, Phoenix Kiula wrote:\n> ~ > vmstat 4\n> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 2 16128 35056 62800 7697428 0 0 74 153 0 3 10 5 74 12\n> 0 0 16128 38256 62836 7698172 0 0 166 219 1386 1440 7 4 85 4\n> 0 1 16128 34704 62872 7698916 0 0 119 314 1441 1589 7 4 85 5\n> 0 0 16128 29544 62912 7699396 0 0 142 144 1443 1418 6 3 88 2\n> 7 1 16128 26784 62832 7692196 0 0 343 241 1492 1671 8 5 83 4\n> 0 0 16128 32840 62880 7693188 0 0 253 215 1459 1511 7 4 85 4\n> 0 0 16128 30112 62940 7693908 0 0 187 216 1395 1282 6 3 87 4\n\nAs far as I can see from this, your machine isn't very busy at all.\n\n> [MYSITE] ~ > ps ax|grep postgres\n> 25302 ? Ss 0:00 postgres: logger process\n> 25352 ? Ss 0:07 postgres: writer process\n> 25353 ? Ss 4:21 postgres: stats collector process\n> 24718 pts/12 S+ 0:00 grep postgres\n\nMoreover, Postgres isn't doing anything either.\n\nSo, what is the problem that you are seeing? What do you want to change?\n\nMatthew\n\n-- \nSurely the value of C++ is zero, but C's value is now 1?\n -- map36, commenting on the \"No, C++ isn't equal to D. 'C' is undeclared\n [...] C++ should really be called 1\" response to \"C++ -- shouldn't it\n be called D?\"\n", "msg_date": "Wed, 19 Aug 2009 16:37:13 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "On Wed, Aug 19, 2009 at 11:37 PM, Andy Colson<[email protected]> wrote:\n\n>\n> Phoenix: run top again, and hit the '1' key. It'll show you stats for\neach\n> cpu. 
Is one pegged and the others idle?\n>\n\n\ntop - 10:38:53 up 29 days, 5 min, 1 user, load average: 64.99, 65.17,\n65.06\nTasks: 568 total, 1 running, 537 sleeping, 6 stopped, 24 zombie\nCpu0 : 17.7% us, 7.7% sy, 0.0% ni, 74.0% id, 0.7% wa, 0.0% hi, 0.0% si\nCpu1 : 6.3% us, 5.6% sy, 0.0% ni, 84.4% id, 3.6% wa, 0.0% hi, 0.0% si\nCpu2 : 5.6% us, 5.9% sy, 0.0% ni, 86.8% id, 1.7% wa, 0.0% hi, 0.0% si\nCpu3 : 5.6% us, 4.0% sy, 0.0% ni, 74.2% id, 16.2% wa, 0.0% hi, 0.0% si\nMem: 8310256k total, 8277416k used, 32840k free, 61944k buffers\nSwap: 2096440k total, 16128k used, 2080312k free, 7664224k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n\n9922 nobody 15 0 49024 16m 7408 S 3.0 0.2 0:00.52 httpd\n\n9630 nobody 15 0 49020 16m 7420 S 2.3 0.2 0:00.60 httpd\n\n9848 nobody 16 0 48992 16m 7372 S 2.3 0.2 0:00.51 httpd\n\n10995 nobody 15 0 49024 16m 7304 S 2.3 0.2 0:00.35 httpd\n\n11031 nobody 15 0 48860 16m 7104 S 2.3 0.2 0:00.34 httpd\n\n6701 nobody 15 0 49028 17m 7576 S 2.0 0.2 0:01.50 httpd\n\n10996 nobody 15 0 48992 16m 7328 S 2.0 0.2 0:00.31 httpd\n\n12232 nobody 15 0 48860 16m 7004 S 1.7 0.2 0:00.05 httpd\n\n9876 nobody 15 0 48992 16m 7400 S 1.3 0.2 0:00.73 httpd\n\n12231 nobody 15 0 48860 16m 6932 S 1.3 0.2 0:00.04 httpd\n\n12233 nobody 16 0 48860 16m 6960 S 1.3 0.2 0:00.04 httpd\n\n20315 postgres 19 0 325m 9732 9380 S 1.0 0.1 0:10.39 postmaster\n\n31573 nobody 15 0 49024 17m 7664 S 1.0 0.2 0:03.14 httpd\n\n7954 nobody 15 0 49032 16m 7400 S 1.0 0.2 0:01.14 httpd\n\n9918 nobody 15 0 48956 16m 7344 S 1.0 0.2 0:00.44 httpd\n\n12298 nobody 16 0 48860 16m 6780 S 1.0 0.2 0:00.03 httpd\n\n6479 nobody 16 0 49040 16m 7412 S 0.7 0.2 0:01.20 httpd\n\n7950 nobody 15 0 49020 16m 7388 S 0.7 0.2 0:00.83 httpd\n\n7951 nobody 15 0 49032 16m 7384 S 0.7 0.2 0:01.03 httpd\n\n9875 nobody 15 0 48948 16m 7096 S 0.7 0.2 0:00.51 httpd\n\n9916 nobody 16 0 48860 16m 7124 S 0.7 0.2 0:00.59 httpd\n\n10969 nobody 15 0 49036 16m 7380 S 0.7 0.2 0:00.29 httpd\n\n11752 root 16 0 3620 1288 772 R 0.7 0.0 0:00.14 top\n\n12309 nobody 16 0 48860 16m 6844 S 0.7 0.2 0:00.02 httpd\n\n20676 mysql 15 0 182m 20m 2916 S 0.3 0.3 0:00.95 mysqld\n\n20811 root 21 0 47920 14m 5872 S 0.3 0.2 0:00.71 httpd\n\n7952 nobody 15 0 49024 16m 7524 S 0.3 0.2 0:00.96 httpd\n\n11036 nobody 15 0 48992 16m 7320 S 0.3 0.2 0:00.36 httpd\n\n12230 nobody 15 0 48860 16m 6956 S 0.3 0.2 0:00.01 httpd\n\n12297 nobody 16 0 48860 16m 6932 S 0.3 0.2 0:00.01 httpd\n\n12299 nobody 16 0 48992 16m 7120 S 0.3 0.2 0:00.01 httpd\n\n12301 nobody 20 0 48860 16m 6816 S 0.3 0.2 0:00.01 httpd\n\n12307 nobody 15 0 48860 16m 6880 S 0.3 0.2 0:00.01 httpd\n\n\n\n\n> do a 'cat /proc/cpuinfo' and make sure your os is seeing all your cpus.\n>\n\n\n\nI guess it's using all 4?\n\nOn Wed, Aug 19, 2009 at 11:37 PM, Andy Colson<[email protected]> wrote:>> Phoenix:  run top again, and hit the '1' key.  It'll show you stats for each\n> cpu.  
Is one pegged and the others idle?>top - 10:38:53 up 29 days, 5 min,  1 user,  load average: 64.99, 65.17, 65.06Tasks: 568 total,   1 running, 537 sleeping,   6 stopped,  24 zombie\nCpu0  : 17.7% us,  7.7% sy,  0.0% ni, 74.0% id,  0.7% wa,  0.0% hi,  0.0% siCpu1  :  6.3% us,  5.6% sy,  0.0% ni, 84.4% id,  3.6% wa,  0.0% hi,  0.0% siCpu2  :  5.6% us,  5.9% sy,  0.0% ni, 86.8% id,  1.7% wa,  0.0% hi,  0.0% si\nCpu3  :  5.6% us,  4.0% sy,  0.0% ni, 74.2% id, 16.2% wa,  0.0% hi,  0.0% siMem:   8310256k total,  8277416k used,    32840k free,    61944k buffersSwap:  2096440k total,    16128k used,  2080312k free,  7664224k cached\n  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                             9922 nobody    15   0 49024  16m 7408 S  3.0  0.2   0:00.52 httpd                                                                               \n 9630 nobody    15   0 49020  16m 7420 S  2.3  0.2   0:00.60 httpd                                                                               9848 nobody    16   0 48992  16m 7372 S  2.3  0.2   0:00.51 httpd                                                                               \n10995 nobody    15   0 49024  16m 7304 S  2.3  0.2   0:00.35 httpd                                                                               11031 nobody    15   0 48860  16m 7104 S  2.3  0.2   0:00.34 httpd                                                                               \n 6701 nobody    15   0 49028  17m 7576 S  2.0  0.2   0:01.50 httpd                                                                               10996 nobody    15   0 48992  16m 7328 S  2.0  0.2   0:00.31 httpd                                                                               \n12232 nobody    15   0 48860  16m 7004 S  1.7  0.2   0:00.05 httpd                                                                               9876 nobody    15   0 48992  16m 7400 S  1.3  0.2   0:00.73 httpd                                                                               \n12231 nobody    15   0 48860  16m 6932 S  1.3  0.2   0:00.04 httpd                                                                               12233 nobody    16   0 48860  16m 6960 S  1.3  0.2   0:00.04 httpd                                                                               \n20315 postgres  19   0  325m 9732 9380 S  1.0  0.1   0:10.39 postmaster                                                                          31573 nobody    15   0 49024  17m 7664 S  1.0  0.2   0:03.14 httpd                                                                               \n 7954 nobody    15   0 49032  16m 7400 S  1.0  0.2   0:01.14 httpd                                                                               9918 nobody    15   0 48956  16m 7344 S  1.0  0.2   0:00.44 httpd                                                                               \n12298 nobody    16   0 48860  16m 6780 S  1.0  0.2   0:00.03 httpd                                                                               6479 nobody    16   0 49040  16m 7412 S  0.7  0.2   0:01.20 httpd                                                                               \n 7950 nobody    15   0 49020  16m 7388 S  0.7  0.2   0:00.83 httpd                                                                               7951 nobody    15   0 49032  16m 7384 S  0.7  0.2   0:01.03 httpd                                                                               \n 9875 
nobody    15   0 48948  16m 7096 S  0.7  0.2   0:00.51 httpd                                                                               9916 nobody    16   0 48860  16m 7124 S  0.7  0.2   0:00.59 httpd                                                                               \n10969 nobody    15   0 49036  16m 7380 S  0.7  0.2   0:00.29 httpd                                                                               11752 root      16   0  3620 1288  772 R  0.7  0.0   0:00.14 top                                                                                 \n12309 nobody    16   0 48860  16m 6844 S  0.7  0.2   0:00.02 httpd                                                                               20676 mysql     15   0  182m  20m 2916 S  0.3  0.3   0:00.95 mysqld                                                                              \n20811 root      21   0 47920  14m 5872 S  0.3  0.2   0:00.71 httpd                                                                               7952 nobody    15   0 49024  16m 7524 S  0.3  0.2   0:00.96 httpd                                                                               \n11036 nobody    15   0 48992  16m 7320 S  0.3  0.2   0:00.36 httpd                                                                               12230 nobody    15   0 48860  16m 6956 S  0.3  0.2   0:00.01 httpd                                                                               \n12297 nobody    16   0 48860  16m 6932 S  0.3  0.2   0:00.01 httpd                                                                               12299 nobody    16   0 48992  16m 7120 S  0.3  0.2   0:00.01 httpd                                                                               \n12301 nobody    20   0 48860  16m 6816 S  0.3  0.2   0:00.01 httpd                                                                               12307 nobody    15   0 48860  16m 6880 S  0.3  0.2   0:00.01 httpd                 \n> do a 'cat /proc/cpuinfo' and make sure your os is seeing all your cpus.>I guess it's using all 4?", "msg_date": "Wed, 19 Aug 2009 23:40:40 +0800", "msg_from": "Phoenix Kiula <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Phoenix Kiula wrote:\n> On Wed, Aug 19, 2009 at 11:37 PM, Andy Colson<[email protected] \n> <mailto:[email protected]>> wrote:\n> \n> >\n> > Phoenix: run top again, and hit the '1' key. It'll show you stats \n> for each\n> > cpu. 
Is one pegged and the others idle?\n> >\n> \n> \n> top - 10:38:53 up 29 days, 5 min, 1 user, load average: 64.99, 65.17, \n> 65.06\n> Tasks: 568 total, 1 running, 537 sleeping, 6 stopped, 24 zombie\n> Cpu0 : 17.7% us, 7.7% sy, 0.0% ni, 74.0% id, 0.7% wa, 0.0% hi, 0.0% si\n> Cpu1 : 6.3% us, 5.6% sy, 0.0% ni, 84.4% id, 3.6% wa, 0.0% hi, 0.0% si\n> Cpu2 : 5.6% us, 5.9% sy, 0.0% ni, 86.8% id, 1.7% wa, 0.0% hi, 0.0% si\n> Cpu3 : 5.6% us, 4.0% sy, 0.0% ni, 74.2% id, 16.2% wa, 0.0% hi, 0.0% si\n> Mem: 8310256k total, 8277416k used, 32840k free, 61944k buffers\n> Swap: 2096440k total, 16128k used, 2080312k free, 7664224k cached\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n> \n> 9922 nobody 15 0 49024 16m 7408 S 3.0 0.2 0:00.52 httpd \n> \n> 9630 nobody 15 0 49020 16m 7420 S 2.3 0.2 0:00.60 httpd \n> \n> 9848 nobody 16 0 48992 16m 7372 S 2.3 0.2 0:00.51 httpd \n> \n> 10995 nobody 15 0 49024 16m 7304 S 2.3 0.2 0:00.35 httpd \n> \n> 11031 nobody 15 0 48860 16m 7104 S 2.3 0.2 0:00.34 httpd \n> \n> 6701 nobody 15 0 49028 17m 7576 S 2.0 0.2 0:01.50 httpd \n> \n> 10996 nobody 15 0 48992 16m 7328 S 2.0 0.2 0:00.31 httpd \n> \n> 12232 nobody 15 0 48860 16m 7004 S 1.7 0.2 0:00.05 httpd \n> \n> 9876 nobody 15 0 48992 16m 7400 S 1.3 0.2 0:00.73 httpd \n> \n> 12231 nobody 15 0 48860 16m 6932 S 1.3 0.2 0:00.04 httpd \n> \n> 12233 nobody 16 0 48860 16m 6960 S 1.3 0.2 0:00.04 httpd \n> \n> 20315 postgres 19 0 325m 9732 9380 S 1.0 0.1 0:10.39 postmaster \n> \n> 31573 nobody 15 0 49024 17m 7664 S 1.0 0.2 0:03.14 httpd \n> \n> 7954 nobody 15 0 49032 16m 7400 S 1.0 0.2 0:01.14 httpd \n> \n> 9918 nobody 15 0 48956 16m 7344 S 1.0 0.2 0:00.44 httpd \n> \n> 12298 nobody 16 0 48860 16m 6780 S 1.0 0.2 0:00.03 httpd \n> \n> 6479 nobody 16 0 49040 16m 7412 S 0.7 0.2 0:01.20 httpd \n> \n> 7950 nobody 15 0 49020 16m 7388 S 0.7 0.2 0:00.83 httpd \n> \n> 7951 nobody 15 0 49032 16m 7384 S 0.7 0.2 0:01.03 httpd \n> \n> 9875 nobody 15 0 48948 16m 7096 S 0.7 0.2 0:00.51 httpd \n> \n> 9916 nobody 16 0 48860 16m 7124 S 0.7 0.2 0:00.59 httpd \n> \n> 10969 nobody 15 0 49036 16m 7380 S 0.7 0.2 0:00.29 httpd \n> \n> 11752 root 16 0 3620 1288 772 R 0.7 0.0 0:00.14 top \n> \n> 12309 nobody 16 0 48860 16m 6844 S 0.7 0.2 0:00.02 httpd \n> \n> 20676 mysql 15 0 182m 20m 2916 S 0.3 0.3 0:00.95 mysqld \n> \n> 20811 root 21 0 47920 14m 5872 S 0.3 0.2 0:00.71 httpd \n> \n> 7952 nobody 15 0 49024 16m 7524 S 0.3 0.2 0:00.96 httpd \n> \n> 11036 nobody 15 0 48992 16m 7320 S 0.3 0.2 0:00.36 httpd \n> \n> 12230 nobody 15 0 48860 16m 6956 S 0.3 0.2 0:00.01 httpd \n> \n> 12297 nobody 16 0 48860 16m 6932 S 0.3 0.2 0:00.01 httpd \n> \n> 12299 nobody 16 0 48992 16m 7120 S 0.3 0.2 0:00.01 httpd \n> \n> 12301 nobody 20 0 48860 16m 6816 S 0.3 0.2 0:00.01 httpd \n> \n> 12307 nobody 15 0 48860 16m 6880 S 0.3 0.2 0:00.01 httpd \n> \n> \n> \n> \n> > do a 'cat /proc/cpuinfo' and make sure your os is seeing all your cpus.\n> >\n> \n> \n> \n> I guess it's using all 4? \n\nYeah.\n\nYou aren't serving data from a shared drive (smb or nsf) are you? You \nhave a bunch of httpd just sitting around doing very little.\n\nOr do you have any php/perl/python/whatever turning around and doing \nnetwork stuff?\n\nCheck your nic's for errors (run ifconfig), check these stats:\n\nRX packets:15606269 errors:0 dropped:0 overruns:0 frame:0\nTX packets:13173940 errors:5 dropped:0 overruns:0 carrier:10\n collisions:0 txqueuelen:1000\n\n\nthe load average is a summary of a bunch of things, including whats \nwaiting on something else. 
I'll bet your httpd's are sitting around \nwaiting on something, (its not cpu or disk, it must be something else), \nwhich is causing the load average to spike up.\n\n-Andy\n", "msg_date": "Wed, 19 Aug 2009 10:52:40 -0500", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Phoenix Kiula <[email protected]> writes:\n> top - 10:38:53 up 29 days, 5 min, 1 user, load average: 64.99, 65.17,\n> 65.06\n> Tasks: 568 total, 1 running, 537 sleeping, 6 stopped, 24 zombie\n> Cpu0 : 17.7% us, 7.7% sy, 0.0% ni, 74.0% id, 0.7% wa, 0.0% hi, 0.0% si\n> Cpu1 : 6.3% us, 5.6% sy, 0.0% ni, 84.4% id, 3.6% wa, 0.0% hi, 0.0% si\n> Cpu2 : 5.6% us, 5.9% sy, 0.0% ni, 86.8% id, 1.7% wa, 0.0% hi, 0.0% si\n> Cpu3 : 5.6% us, 4.0% sy, 0.0% ni, 74.2% id, 16.2% wa, 0.0% hi, 0.0% si\n> Mem: 8310256k total, 8277416k used, 32840k free, 61944k buffers\n> Swap: 2096440k total, 16128k used, 2080312k free, 7664224k cached\n\nIt sure looks from here like your box is not under any particular\nstress. The only thing that suggests a problem is the high load\naverage, but since that doesn't agree with any other measurements,\nI'm inclined to think that the load average is simply wrong.\nDo you have any actual evidence of a problem (like slow response)?\n\n(I've seen load averages that had nothing to do with observable\nreality on other Unixes, though not before on RHEL.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Aug 2009 12:02:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load " }, { "msg_contents": "Phoenix Kiula <phoenix.kiula 'at' gmail.com> writes:\n\n> Tasks: 568 total, � 1 running, 537 sleeping, � 6 stopped, �24 zombie\n\nThe stopped and zombie processes look odd. Any reason for these?\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Wed, 19 Aug 2009 18:08:20 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "On Wed, Aug 19, 2009 at 9:40 AM, Phoenix Kiula<[email protected]> wrote:\n> On Wed, Aug 19, 2009 at 11:37 PM, Andy Colson<[email protected]> wrote:\n>\n>>\n>> Phoenix:  run top again, and hit the '1' key.  It'll show you stats for\n>> each\n>> cpu.  Is one pegged and the others idle?\n>\n> top - 10:38:53 up 29 days, 5 min,  1 user,  load average: 64.99, 65.17,\n> 65.06\n> Tasks: 568 total,   1 running, 537 sleeping,   6 stopped,  24 zombie\n> Cpu0  : 17.7% us,  7.7% sy,  0.0% ni, 74.0% id,  0.7% wa,  0.0% hi,  0.0% si\n> Cpu1  :  6.3% us,  5.6% sy,  0.0% ni, 84.4% id,  3.6% wa,  0.0% hi,  0.0% si\n> Cpu2  :  5.6% us,  5.9% sy,  0.0% ni, 86.8% id,  1.7% wa,  0.0% hi,  0.0% si\n> Cpu3  :  5.6% us,  4.0% sy,  0.0% ni, 74.2% id, 16.2% wa,  0.0% hi,  0.0% si\n> Mem:   8310256k total,  8277416k used,    32840k free,    61944k buffers\n> Swap:  2096440k total,    16128k used,  2080312k free,  7664224k cached\n>\n\nOK, nothing looks odd except, as pointed out, the stopped, zombie and\nhigh load. The actual amount of stuff running is minimal.\n\nI'm wondering if you've got something causing apache children to crash\nand go zombie. What parts of this setup are compiled by hand? Are\nyou sure that you don't have something like apache compiled against\none version of zlib and php-mysql against another? 
Not that exact\nproblem, but it's one of many ways to make a crash prone apache.\n", "msg_date": "Wed, 19 Aug 2009 10:19:14 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, Aug 19, 2009 at 9:40 AM, Phoenix Kiula<[email protected]> wrote:\n>> On Wed, Aug 19, 2009 at 11:37 PM, Andy Colson<[email protected]> wrote:\n>>\n>>> Phoenix: run top again, and hit the '1' key. It'll show you stats for\n>>> each\n>>> cpu. Is one pegged and the others idle?\n>> top - 10:38:53 up 29 days, 5 min, 1 user, load average: 64.99, 65.17,\n>> 65.06\n>> Tasks: 568 total, 1 running, 537 sleeping, 6 stopped, 24 zombie\n>> Cpu0 : 17.7% us, 7.7% sy, 0.0% ni, 74.0% id, 0.7% wa, 0.0% hi, 0.0% si\n>> Cpu1 : 6.3% us, 5.6% sy, 0.0% ni, 84.4% id, 3.6% wa, 0.0% hi, 0.0% si\n>> Cpu2 : 5.6% us, 5.9% sy, 0.0% ni, 86.8% id, 1.7% wa, 0.0% hi, 0.0% si\n>> Cpu3 : 5.6% us, 4.0% sy, 0.0% ni, 74.2% id, 16.2% wa, 0.0% hi, 0.0% si\n>> Mem: 8310256k total, 8277416k used, 32840k free, 61944k buffers\n>> Swap: 2096440k total, 16128k used, 2080312k free, 7664224k cached\n>>\n> \n> OK, nothing looks odd except, as pointed out, the stopped, zombie and\n> high load. The actual amount of stuff running is minimal.\n> \n> I'm wondering if you've got something causing apache children to crash\n> and go zombie. What parts of this setup are compiled by hand? Are\n\nGood point. Does Linux have \"last PID\" field in top? If so, you could \nmonitor it to find if it it's rapidly changing.\n\n", "msg_date": "Thu, 20 Aug 2009 14:50:23 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG 8.3 and server load" } ]
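Editor's note (an illustrative sketch, not part of the archived thread): the "idle in transaction" and connection checks suggested above with ps can also be run from inside the database, using the 8.3 pg_stat_activity columns (current_query, waiting):

SELECT datname,
       count(*) AS backends,
       sum(CASE WHEN current_query = '<IDLE> in transaction' THEN 1 ELSE 0 END) AS idle_in_xact,
       sum(CASE WHEN waiting THEN 1 ELSE 0 END) AS waiting_on_locks
  FROM pg_stat_activity
 GROUP BY datname;
-- a nonzero waiting_on_locks count can then be chased through pg_locks (granted = false).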
[ { "msg_contents": "Hi all;\n\nwe've been fighting this query for a few days now. we bumped up the statistict \ntarget for the a.id , c.url_hits_id and the b.id columns below to 250 and ran \nan analyze on the relevant tables. we killed it after 8hrs. \n\nNote the url_hits table has > 1.4billion rows\n\nAny suggestions?\n\n\n\n$ psql -ef expl.sql pwreport \nexplain \nselect \na.id, \nident_id, \ntime, \ncustomer_name, \nextract('day' from timezone(e.name, to_timestamp(a.time))) as day, \ncategory_id \nfrom \npwreport.url_hits a left outer join \npwreport.url_hits_category_jt c on (a.id = c.url_hits_id), \npwreport.ident b, \npwreport.timezone e \nwhere \na.ident_id = b.id \nand b.timezone_id = e.id \nand time >= extract ('epoch' from timestamp '2009-08-12') \nand time < extract ('epoch' from timestamp '2009-08-13' ) \nand direction = 'REQUEST' \n;\n QUERY \nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Right Join (cost=47528508.61..180424544.59 rows=10409251 width=53)\n Merge Cond: (c.url_hits_id = a.id)\n -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on \nurl_hits_category_jt c (cost=0.00..122162596.63 rows=4189283233 width=8)\n -> Sort (cost=47528508.61..47536931.63 rows=3369210 width=49)\n Sort Key: a.id\n -> Hash Join (cost=2565.00..47163219.21 rows=3369210 width=49)\n Hash Cond: (b.timezone_id = e.id)\n -> Hash Join (cost=2553.49..47116881.07 rows=3369210 \nwidth=37)\n Hash Cond: (a.ident_id = b.id)\n -> Seq Scan on url_hits a (cost=0.00..47051154.89 \nrows=3369210 width=12)\n Filter: ((direction = \n'REQUEST'::proxy_direction_enum) AND ((\"time\")::double precision >= \n1250035200::double precision) AND ((\"time\")::double precision < \n1250121600::double precision))\n -> Hash (cost=2020.44..2020.44 rows=42644 width=29)\n -> Seq Scan on ident b (cost=0.00..2020.44 \nrows=42644 width=29)\n -> Hash (cost=6.78..6.78 rows=378 width=20)\n -> Seq Scan on timezone e (cost=0.00..6.78 rows=378 \nwidth=20)\n(15 rows)\n\n", "msg_date": "Wed, 19 Aug 2009 10:28:41 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Query tuning" }, { "msg_contents": "that seems to be the killer:\n\nand time >= extract ('epoch' from timestamp '2009-08-12')\nand time < extract ('epoch' from timestamp '2009-08-13' )\n\nYou probably need an index on time/epoch:\n\nCREATE INDEX foo ON table(extract ('epoch' from timestamp time );\n\nor something like that, vacuum analyze and retry.\n", "msg_date": "Wed, 19 Aug 2009 17:38:08 +0100", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query tuning" }, { "msg_contents": "\n\n\nOn 8/19/09 9:28 AM, \"Kevin Kempter\" <[email protected]> wrote:\n\n> Hi all;\n> \n> we've been fighting this query for a few days now. we bumped up the statistict\n> target for the a.id , c.url_hits_id and the b.id columns below to 250 and ran\n> an analyze on the relevant tables. we killed it after 8hrs.\n> \n> Note the url_hits table has > 1.4billion rows\n> \n> Any suggestions?\n> \n\nHave you tried setting work_mem higher for just this query?\n\nThe big estimated cost is the sequential scan on url_hits. But in reality,\nif the estimates are off the sort and index scan at the end might be your\nbottleneck. 
Larger work_mem might make it choose another plan there.\n\nBut if the true cost is the sequential scan on url_hits, then only an index\nthere will help.\n\n> \n> \n> $ psql -ef expl.sql pwreport\n> explain \n> select \n> a.id, \n> ident_id, \n> time, \n> customer_name, \n> extract('day' from timezone(e.name, to_timestamp(a.time))) as day,\n> category_id \n> from \n> pwreport.url_hits a left outer join\n> pwreport.url_hits_category_jt c on (a.id = c.url_hits_id),\n> pwreport.ident b,\n> pwreport.timezone e\n> where \n> a.ident_id = b.id\n> and b.timezone_id = e.id\n> and time >= extract ('epoch' from timestamp '2009-08-12')\n> and time < extract ('epoch' from timestamp '2009-08-13' )\n> and direction = 'REQUEST'\n> ;\n> \n> QUERY\n> PLAN \n> ------------------------------------------------------------------------------\n> ------------------------------------------------------------------------------\n> --------------------------------------------------------\n> Merge Right Join (cost=47528508.61..180424544.59 rows=10409251 width=53)\n> Merge Cond: (c.url_hits_id = a.id)\n> -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on\n> url_hits_category_jt c (cost=0.00..122162596.63 rows=4189283233 width=8)\n> -> Sort (cost=47528508.61..47536931.63 rows=3369210 width=49)\n> Sort Key: a.id\n> -> Hash Join (cost=2565.00..47163219.21 rows=3369210 width=49)\n> Hash Cond: (b.timezone_id = e.id)\n> -> Hash Join (cost=2553.49..47116881.07 rows=3369210\n> width=37)\n> Hash Cond: (a.ident_id = b.id)\n> -> Seq Scan on url_hits a (cost=0.00..47051154.89\n> rows=3369210 width=12)\n> Filter: ((direction =\n> 'REQUEST'::proxy_direction_enum) AND ((\"time\")::double precision >=\n> 1250035200::double precision) AND ((\"time\")::double precision <\n> 1250121600::double precision))\n> -> Hash (cost=2020.44..2020.44 rows=42644 width=29)\n> -> Seq Scan on ident b (cost=0.00..2020.44\n> rows=42644 width=29)\n> -> Hash (cost=6.78..6.78 rows=378 width=20)\n> -> Seq Scan on timezone e (cost=0.00..6.78 rows=378\n> width=20)\n> (15 rows)\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n", "msg_date": "Wed, 19 Aug 2009 10:17:26 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query tuning" }, { "msg_contents": "2009/8/19 Grzegorz Jaśkiewicz <[email protected]>\n\n> that seems to be the killer:\n>\n> and time >= extract ('epoch' from timestamp '2009-08-12')\n> and time < extract ('epoch' from timestamp '2009-08-13' )\n>\n> You probably need an index on time/epoch:\n>\n> CREATE INDEX foo ON table(extract ('epoch' from timestamp time );\n\n\nIt looks like those extracts just make constant integer times. You probably\njust create an index on the time column.\n\nAlso, why not store times as timestamps?\n\n\n>\n>\n> or something like that, vacuum analyze and retry.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2009/8/19 Grzegorz Jaśkiewicz <[email protected]>\n\nthat seems to be the killer:\n\nand time >= extract ('epoch' from timestamp '2009-08-12')\nand time < extract ('epoch' from timestamp '2009-08-13' )\n\nYou probably need an index on time/epoch:\n\nCREATE INDEX foo ON table(extract ('epoch' from timestamp time );It looks like those extracts just make constant integer times. 
You probably just create an index on the time column.Also, why not store times as timestamps?\n\n \n\nor something like that, vacuum analyze and retry.\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 19 Aug 2009 13:31:30 -0400", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query tuning" }, { "msg_contents": "On Wednesday 19 August 2009 11:17:26 Scott Carey wrote:\n> On 8/19/09 9:28 AM, \"Kevin Kempter\" <[email protected]> wrote:\n> > Hi all;\n> >\n> > we've been fighting this query for a few days now. we bumped up the\n> > statistict target for the a.id , c.url_hits_id and the b.id columns below\n> > to 250 and ran an analyze on the relevant tables. we killed it after\n> > 8hrs.\n> >\n> > Note the url_hits table has > 1.4billion rows\n> >\n> > Any suggestions?\n>\n> Have you tried setting work_mem higher for just this query?\n\nYes, we upped it to 500Meg\n\n\n>\n> The big estimated cost is the sequential scan on url_hits. But in reality,\n> if the estimates are off the sort and index scan at the end might be your\n> bottleneck. Larger work_mem might make it choose another plan there.\n>\n> But if the true cost is the sequential scan on url_hits, then only an index\n> there will help.\n>\n> > $ psql -ef expl.sql pwreport\n> > explain\n> > select\n> > a.id,\n> > ident_id,\n> > time,\n> > customer_name,\n> > extract('day' from timezone(e.name, to_timestamp(a.time))) as day,\n> > category_id\n> > from\n> > pwreport.url_hits a left outer join\n> > pwreport.url_hits_category_jt c on (a.id = c.url_hits_id),\n> > pwreport.ident b,\n> > pwreport.timezone e\n> > where\n> > a.ident_id = b.id\n> > and b.timezone_id = e.id\n> > and time >= extract ('epoch' from timestamp '2009-08-12')\n> > and time < extract ('epoch' from timestamp '2009-08-13' )\n> > and direction = 'REQUEST'\n> > ;\n> >\n> > QUERY\n> > PLAN\n> > -------------------------------------------------------------------------\n> >-----\n> > -------------------------------------------------------------------------\n> >----- --------------------------------------------------------\n> > Merge Right Join (cost=47528508.61..180424544.59 rows=10409251\n> > width=53) Merge Cond: (c.url_hits_id = a.id)\n> > -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on\n> > url_hits_category_jt c (cost=0.00..122162596.63 rows=4189283233 width=8)\n> > -> Sort (cost=47528508.61..47536931.63 rows=3369210 width=49)\n> > Sort Key: a.id\n> > -> Hash Join (cost=2565.00..47163219.21 rows=3369210 width=49)\n> > Hash Cond: (b.timezone_id = e.id)\n> > -> Hash Join (cost=2553.49..47116881.07 rows=3369210\n> > width=37)\n> > Hash Cond: (a.ident_id = b.id)\n> > -> Seq Scan on url_hits a (cost=0.00..47051154.89\n> > rows=3369210 width=12)\n> > Filter: ((direction =\n> > 'REQUEST'::proxy_direction_enum) AND ((\"time\")::double precision >=\n> > 1250035200::double precision) AND ((\"time\")::double precision <\n> > 1250121600::double precision))\n> > -> Hash (cost=2020.44..2020.44 rows=42644\n> > width=29) -> Seq Scan on ident b (cost=0.00..2020.44 rows=42644\n> > width=29)\n> > -> Hash (cost=6.78..6.78 rows=378 width=20)\n> > -> Seq Scan on timezone e (cost=0.00..6.78\n> > rows=378 width=20)\n> > (15 rows)\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list\n> > ([email protected]) To make changes to your subscription:\n> > 
http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 19 Aug 2009 11:36:55 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query tuning" }, { "msg_contents": "On Wednesday 19 August 2009 11:31:30 Nikolas Everett wrote:\n> 2009/8/19 Grzegorz Jaśkiewicz <[email protected]>\n>\n> > that seems to be the killer:\n> >\n> > and time >= extract ('epoch' from timestamp '2009-08-12')\n> > and time < extract ('epoch' from timestamp '2009-08-13' )\n> >\n> > You probably need an index on time/epoch:\n> >\n> > CREATE INDEX foo ON table(extract ('epoch' from timestamp time );\n>\n> It looks like those extracts just make constant integer times. You probably\n> just create an index on the time column.\n>\n> Also, why not store times as timestamps?\n>\n> > or something like that, vacuum analyze and retry.\n> >\n> > --\n> > Sent via pgsql-performance mailing list\n> > ([email protected]) To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n\n\nWe do have an index on url_hits.time\n\nnot sure why timestamps were not used, I was not here for the design phase.\n\n\nThx\n\n\n\n", "msg_date": "Wed, 19 Aug 2009 11:37:58 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query tuning" }, { "msg_contents": "2009/8/19 Kevin Kempter <[email protected]>\n\n>\n> We do have an index on url_hits.time\n>\n> not sure why timestamps were not used, I was not here for the design phase.\n>\n\nWhat's type of time column? I don't like it casts it to double in explain.\nIf it is integer, may be you need to change\n\nand time >= extract ('epoch' from timestamp '2009-08-12')\nand time < extract ('epoch' from timestamp '2009-08-13' )\n\nto\n\nand time >= extract ('epoch' from timestamp '2009-08-12')::int4\nand time < extract ('epoch' from timestamp '2009-08-13' )::int4\n\nfor the index to be used?\n\n2009/8/19 Kevin Kempter <[email protected]>\n\n\nWe do have an index on url_hits.time\n\nnot sure why timestamps were not used, I was not here for the design phase.\nWhat's type of time column? I don't like it casts it to double in explain. If it is integer, may be you need to changeand time >= extract ('epoch' from timestamp '2009-08-12')\n\nand time < extract ('epoch' from timestamp '2009-08-13' )toand time >= extract ('epoch' from timestamp '2009-08-12')::int4\nand time < extract ('epoch' from timestamp '2009-08-13' )::int4for the index to be used?", "msg_date": "Thu, 20 Aug 2009 15:03:07 +0300", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query tuning" } ]
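Editor's note (an illustrative sketch, not part of the archived thread): the ::int4 casts proposed in the last message target the plan's filter, which compared ("time")::double precision against double precision constants and therefore could not use the existing index on time. Assuming url_hits.time really is an integer epoch, the predicate becomes integer-vs-integer:

SELECT a.id, a.ident_id, a.time
  FROM pwreport.url_hits a
 WHERE a.time >= extract('epoch' from timestamp '2009-08-12')::int4
   AND a.time <  extract('epoch' from timestamp '2009-08-13')::int4
   AND a.direction = 'REQUEST';
-- alternatively, an expression index on ((time)::double precision) would match the
-- original query unchanged; either way the goal is an index-usable comparison.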
[ { "msg_contents": "Hi,\n\nOur fine manual says:\n\"\"\"\nThe amount of memory used in shared memory for WAL data. The default\nis 64 kilobytes (64kB). The setting need only be large enough to hold\nthe amount of WAL data generated by one typical transaction, since the\ndata is written out to disk at every transaction commit. This\nparameter can only be set at server start.\n\"\"\"\n\nbut how can one measure \"one typical transaction\"? i read in the archives\nthat the useful upper bound for this parameter is 1MB; is that an \"official\"\nopinion?\nwhile we are at it, is there any way to know how many transactions we are\nprocessing per period of time?\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Wed, 19 Aug 2009 19:25:11 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] how to set wal_buffers" } ]
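Editor's note (an illustrative sketch, not part of the archived message): the closing question about transactions per period can be answered by sampling the statistics collector's counters and dividing the delta by the elapsed time:

SELECT datname, xact_commit, xact_rollback, now() AS sampled_at
  FROM pg_stat_database;
-- run it again N seconds later; (delta of xact_commit + xact_rollback) / N
-- is an approximate transactions-per-second figure for each database.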
[ { "msg_contents": "Hi,\n\nAFAIUI, work_mem is used for some operations (sort, hash, etc) for\navoiding the use of temp files on disk...\n\nIn a client server i'm monitoring (pg 8.3.7, 32GB of ram) work_mem is\nset to 8MB, however i'm seeing a lot of temp files (>30000 in 4 hours)\nwith small sizes (ie: 2021520 obviously lower than 8MB). so, why?\nmaybe we use work_mem until we find isn't enough and we send just the\ndifference to a temp file?\n\ni'm not thinking in raising work_mem until i understand this well,\nwhat's the point if we still create temp files that could fit in\nwork_mem...\n\nPS: i have max_connections to 1024, i know i need a pool but the app\nis still opening persistent conecctions to the db, so is not like i\ncould raise work_mem just easy until the app gets fixed\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Wed, 19 Aug 2009 19:45:28 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Aug 19, 2009, at 7:45 PM, Jaime Casanova wrote:\n> AFAIUI, work_mem is used for some operations (sort, hash, etc) for\n> avoiding the use of temp files on disk...\n>\n> In a client server i'm monitoring (pg 8.3.7, 32GB of ram) work_mem is\n> set to 8MB, however i'm seeing a lot of temp files (>30000 in 4 hours)\n> with small sizes (ie: 2021520 obviously lower than 8MB). so, why?\n> maybe we use work_mem until we find isn't enough and we send just the\n> difference to a temp file?\n>\n> i'm not thinking in raising work_mem until i understand this well,\n> what's the point if we still create temp files that could fit in\n> work_mem...\n\n\nAre you using temp tables? Those end up in pgsql_tmp as well.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Sun, 13 Sep 2009 17:12:19 -0500", "msg_from": "decibel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "decibel <[email protected]> writes:\n> On Aug 19, 2009, at 7:45 PM, Jaime Casanova wrote:\n>> AFAIUI, work_mem is used for some operations (sort, hash, etc) for\n>> avoiding the use of temp files on disk...\n>> \n>> In a client server i'm monitoring (pg 8.3.7, 32GB of ram) work_mem is\n>> set to 8MB, however i'm seeing a lot of temp files (>30000 in 4 hours)\n>> with small sizes (ie: 2021520 obviously lower than 8MB). so, why?\n>> maybe we use work_mem until we find isn't enough and we send just the\n>> difference to a temp file?\n>> \n>> i'm not thinking in raising work_mem until i understand this well,\n>> what's the point if we still create temp files that could fit in\n>> work_mem...\n\n> Are you using temp tables? Those end up in pgsql_tmp as well.\n\nUh, no, they don't.\n\nIt might be useful to turn on trace_sort to see if the small files\nare coming from sorts. 
If they're from hashes I'm afraid there's\nno handy instrumentation ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Sep 2009 18:37:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane <[email protected]> wrote:\n>\n> It might be useful to turn on trace_sort to see if the small files\n> are coming from sorts.  If they're from hashes I'm afraid there's\n> no handy instrumentation ...\n>\n\nyes they are, this is the log (i deleted the STATEMENT lines because\nthey were redundant), seems like all the temp files are used to\nexecute the same sentence...\n\nBTW, this is my laptop no the server.\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Mon, 11 Jan 2010 13:15:20 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova\n<[email protected]> wrote:\n> On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane <[email protected]> wrote:\n>>\n>> It might be useful to turn on trace_sort to see if the small files\n>> are coming from sorts.  If they're from hashes I'm afraid there's\n>> no handy instrumentation ...\n>>\n>\n> yes they are, this is the log (i deleted the STATEMENT lines because\n> they were redundant), seems like all the temp files are used to\n> execute the same sentence...\n>\n> BTW, this is my laptop no the server.\n\nI think maybe there was supposed to be an attachment here?\n\n...Robert\n", "msg_date": "Mon, 11 Jan 2010 14:07:37 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Mon, Jan 11, 2010 at 2:07 PM, Robert Haas <[email protected]> wrote:\n> On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova\n> <[email protected]> wrote:\n>> On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane <[email protected]> wrote:\n>>>\n>>> It might be useful to turn on trace_sort to see if the small files\n>>> are coming from sorts.  If they're from hashes I'm afraid there's\n>>> no handy instrumentation ...\n>>>\n>>\n>> yes they are, this is the log (i deleted the STATEMENT lines because\n>> they were redundant), seems like all the temp files are used to\n>> execute the same sentence...\n>>\n>> BTW, this is my laptop no the server.\n>\n> I think maybe there was supposed to be an attachment here?\n>\n\ni knew i was forgotting something ;)\nah! and this is in 8.5dev but it's the same in 8.3\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Mon, 11 Jan 2010 14:14:52 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Mon, Jan 11, 2010 at 2:14 PM, Jaime Casanova\n<[email protected]> wrote:\n> On Mon, Jan 11, 2010 at 2:07 PM, Robert Haas <[email protected]> wrote:\n>> On Mon, Jan 11, 2010 at 1:15 PM, Jaime Casanova\n>> <[email protected]> wrote:\n>>> On Sun, Sep 13, 2009 at 5:37 PM, Tom Lane <[email protected]> wrote:\n>>>>\n>>>> It might be useful to turn on trace_sort to see if the small files\n>>>> are coming from sorts.  
If they're from hashes I'm afraid there's\n>>>> no handy instrumentation ...\n>>>>\n>>>\n>>> yes they are, this is the log (i deleted the STATEMENT lines because\n>>> they were redundant), seems like all the temp files are used to\n>>> execute the same sentence...\n>>>\n>>> BTW, this is my laptop no the server.\n>>\n>> I think maybe there was supposed to be an attachment here?\n>>\n>\n> i knew i was forgotting something ;)\n> ah! and this is in 8.5dev but it's the same in 8.3\n>\n\noh! boy this can't be happen!\nattaching again\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157", "msg_date": "Mon, 11 Jan 2010 14:16:38 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "Jaime Casanova <[email protected]> writes:\n> LOG: begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f\n> LOG: switching to bounded heapsort at 641 tuples: CPU 0.08s/0.13u sec elapsed 0.25 sec\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.5\", size 471010\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.10\", size 81096\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.0\", size 467373\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.9\", size 110200\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.3\", size 470011\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.8\", size 157192\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.4\", size 468681\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.12\", size 101624\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.1\", size 472285\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.11\", size 100744\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.6\", size 467173\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.7\", size 141888\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.2\", size 476227\n> LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.13\", size 89072\n> LOG: performsort starting: CPU 0.10s/0.19u sec elapsed 0.33 sec\n> LOG: performsort done: CPU 0.10s/0.19u sec elapsed 0.33 sec\n> LOG: internal sort ended, 118 KB used: CPU 0.10s/0.19u sec elapsed 0.33 sec\n\nHmm. Not clear where the temp files are coming from, but it's *not* the\nsort --- the \"internal sort ended\" line shows that that sort never went\nto disk. What kind of plan is feeding the sort node?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Jan 2010 15:18:40 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane <[email protected]> wrote:\n> Jaime Casanova <[email protected]> writes:\n>> LOG:  begin tuple sort: nkeys = 1, workMem = 1024, randomAccess = f\n>> LOG:  switching to bounded heapsort at 641 tuples: CPU 0.08s/0.13u sec elapsed 0.25 sec\n>> LOG:  temporary file: path \"base/pgsql_tmp/pgsql_tmp8507.5\", size 471010\n[... some more temp files logged ...]\n>> LOG:  internal sort ended, 118 KB used: CPU 0.10s/0.19u sec elapsed 0.33 sec\n>\n> Hmm.  Not clear where the temp files are coming from, but it's *not* the\n> sort --- the \"internal sort ended\" line shows that that sort never went\n> to disk.  
What kind of plan is feeding the sort node?\n>\n\ni'm sure i have seen on disk sorts even when the files are small, but\nstill i see a problem here...\n\nthe temp files shoul be coming from hash operations but AFAICS the\nfiles are small and every hash operation should be using until\nwork_mem memory, right?\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157", "msg_date": "Mon, 11 Jan 2010 16:11:50 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "Jaime Casanova <[email protected]> writes:\n> the temp files shoul be coming from hash operations but AFAICS the\n> files are small and every hash operation should be using until\n> work_mem memory, right?\n\nNo, when a hash spills to disk the code has to guess the partition sizes\n(number of buckets per partition) in advance. So it wouldn't be at all\nsurprising if the actual sizes come out substantially different from\nwork_mem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Jan 2010 16:36:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>\n> Hmm.  Not clear where the temp files are coming from, but it's *not* the\n> sort --- the \"internal sort ended\" line shows that that sort never went\n> to disk.  What kind of plan is feeding the sort node?\n>\n\nsome time ago, you said:\n\"\"\"\nIt might be useful to turn on trace_sort to see if the small files\nare coming from sorts. If they're from hashes I'm afraid there's\nno handy instrumentation ...\n\"\"\"\n\nand is clearly what was bother me... because most of all temp files\nare coming from hash...\n\nwhy we don't show some of that info in explain? for example: we can\nshow memory used, no? or if the hash goes to disk... if i remove\n#ifdef HJDEBUG seems like we even know how many batchs the hash\nused...\n\nthe reason i say \"most of the temp files\" is that when i removed\n#ifdef HJDEBUG it says that in total i was using 10 batchs but there\nwere 14 temp files created (i guess we use 1 file per batch, no?)\n\n\"\"\"\nnbatch = 1, nbuckets = 1024\nnbatch = 1, nbuckets = 1024\nnbatch = 8, nbuckets = 2048\n\"\"\"\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. 
+59387171157\n", "msg_date": "Wed, 13 Jan 2010 01:31:43 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "Jaime Casanova <[email protected]> writes:\n> why we don't show some of that info in explain?\n\nLack of round tuits; plus concern about breaking programs that read\nEXPLAIN output, which I guess will be alleviated in 8.5.\n\n> the reason i say \"most of the temp files\" is that when i removed\n> #ifdef HJDEBUG it says that in total i was using 10 batchs but there\n> were 14 temp files created (i guess we use 1 file per batch, no?)\n\nTwo files per batch, in general --- I suppose some of the buckets\nwere empty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jan 2010 09:45:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Wed, Jan 13, 2010 at 1:31 AM, Jaime Casanova\n<[email protected]> wrote:\n> On Mon, Jan 11, 2010 at 3:18 PM, Tom Lane <[email protected]> wrote:\n>>\n>> Hmm.  Not clear where the temp files are coming from, but it's *not* the\n>> sort --- the \"internal sort ended\" line shows that that sort never went\n>> to disk.  What kind of plan is feeding the sort node?\n>>\n>\n> some time ago, you said:\n> \"\"\"\n> It might be useful to turn on trace_sort to see if the small files\n> are coming from sorts.  If they're from hashes I'm afraid there's\n> no handy instrumentation ...\n> \"\"\"\n>\n> and is clearly what was bother me... because most of all temp files\n> are coming from hash...\n>\n> why we don't show some of that info in explain? for example: we can\n> show memory used, no? or if the hash goes to disk... if i remove\n> #ifdef HJDEBUG seems like we even know how many batchs the hash\n> used...\n\nI had an idea at one point of making explain show the planned and\nactual # of batches for each hash join. I believe that \"actual # of\nbatches > 1\" is isomorphic to \"hash join went to disk\". The code is\nactually pretty easy; the hard part is figuring out what to do about\nthe UI. The choices seem to be:\n\n1. Create a new EXPLAIN option just for this - what would we call it?\n2. Think of some more, similar things and come up with a new EXPLAIN\noption covering all of them - what else would go along with?\n3. Sandwhich it into an existing EXPLAIN option, most likely VERBOSE.\n4. Display it by default.\n\n...Robert\n", "msg_date": "Wed, 13 Jan 2010 10:23:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> I had an idea at one point of making explain show the planned and\n> actual # of batches for each hash join. I believe that \"actual # of\n> batches > 1\" is isomorphic to \"hash join went to disk\". The code is\n> actually pretty easy; the hard part is figuring out what to do about\n> the UI. The choices seem to be:\n\n> 1. Create a new EXPLAIN option just for this - what would we call it?\n> 2. Think of some more, similar things and come up with a new EXPLAIN\n> option covering all of them - what else would go along with?\n> 3. Sandwhich it into an existing EXPLAIN option, most likely VERBOSE.\n> 4. Display it by default.\n\nTreat it the same as the Sort-node actual usage information. 
We did not\nadd a special option when we added that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jan 2010 10:42:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Wed, Jan 13, 2010 at 10:42 AM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> I had an idea at one point of making explain show the planned and\n>> actual # of batches for each hash join.  I believe that \"actual # of\n>> batches > 1\" is isomorphic to \"hash join went to disk\".  The code is\n>> actually pretty easy; the hard part is figuring out what to do about\n>> the UI.  The choices seem to be:\n>\n>> 1. Create a new EXPLAIN option just for this - what would we call it?\n>> 2. Think of some more, similar things and come up with a new EXPLAIN\n>> option covering all of them - what else would go along with?\n>> 3. Sandwhich it into an existing EXPLAIN option, most likely VERBOSE.\n>> 4. Display it by default.\n>\n> Treat it the same as the Sort-node actual usage information.  We did not\n> add a special option when we added that.\n\nWell, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?\nIt'll add another line to the output for the expected number of\nbatches.\n\n...Robert\n", "msg_date": "Wed, 13 Jan 2010 11:11:21 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas <[email protected]> wrote:\n>\n> Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?\n> It'll add another line to the output for the expected number of\n> batches.\n>\n\nand when we are in EXPLAIN ANALYZE the real number as well?\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Wed, 13 Jan 2010 11:14:55 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "On Wed, Jan 13, 2010 at 11:14 AM, Jaime Casanova\n<[email protected]> wrote:\n> On Wed, Jan 13, 2010 at 11:11 AM, Robert Haas <[email protected]> wrote:\n>> Well, what about when we're just doing EXPLAIN, not EXPLAIN ANALYZE?\n>> It'll add another line to the output for the expected number of\n>> batches.\n>\n> and when we are in EXPLAIN ANALYZE the real number as well?\n\nYeah. My question is whether it's acceptable to add an extra line to\nthe EXPLAIN output for every hash join, even w/o ANALYZE.\n\n...Robert\n", "msg_date": "Wed, 13 Jan 2010 11:24:09 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah. My question is whether it's acceptable to add an extra line to\n> the EXPLAIN output for every hash join, even w/o ANALYZE.\n\nWe could add it if either VERBOSE or ANALYZE appears. 
Not sure if\nthat's just too much concern for backwards compatibility, though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jan 2010 11:53:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue " }, { "msg_contents": "On Wed, Jan 13, 2010 at 11:53 AM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> Yeah.  My question is whether it's acceptable to add an extra line to\n>> the EXPLAIN output for every hash join, even w/o ANALYZE.\n>\n> We could add it if either VERBOSE or ANALYZE appears.  Not sure if\n> that's just too much concern for backwards compatibility, though.\n\nI think having it controlled by either of two options is to weird.\nI'm not worried so much about backward compatibility as I am about\ncluttering the output. Maybe making it controlled by VERBOSE is the\nright thing to do, although I'm sort of tempted to figure out if there\nis more useful instrumentation that could be done and put it all under\na new option called, say, HASH_DETAILS. Not sure what else we could\nshow though.\n\n...Robert\n", "msg_date": "Wed, 13 Jan 2010 12:02:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] work_mem vs temp files issue" } ]
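A minimal sketch of how to reproduce this kind of investigation; it is not the poster's actual setup. some_table and other_table are hypothetical names, trace_sort assumes the server was built with TRACE_SORT (the default), and log_temp_files (available from 8.3) may need superuser rights to change per session.

SET trace_sort = on;       -- report whether each sort stayed in memory or spilled
SET log_temp_files = 0;    -- log every temporary file created, with its size in bytes
EXPLAIN ANALYZE
SELECT a.id, b.id
  FROM some_table a
  JOIN other_table b ON b.a_id = a.id;   -- hypothetical join big enough to spill
-- Temp files much smaller than work_mem are typical of hash joins: a spilling hash
-- splits the inner relation into several batches (two files per batch, as noted
-- above) whose sizes are guessed in advance, not cut into work_mem-sized pieces.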
[ { "msg_contents": "Hi all;\n\nI have a simple query against two very large tables ( > 800million rows in \ntheurl_hits_category_jt table and 9.2 million in the url_hits_klk1 table )\n\nI have indexes on the join columns and I've run an explain.\nalso I've set the default statistics to 250 for both join columns. I get a \nvery high overall query cost:\n\n\nexplain \n select \n category_id, \n url_hits_id \n from \n url_hits_klk1 a , \n pwreport.url_hits_category_jt b \nwhere \n a.id = b.url_hits_id \n ; \n QUERY PLAN \n-------------------------------------------------------------------------------------------- \n Hash Join (cost=296959.90..126526916.55 rows=441764338 width=8) \n Hash Cond: (b.url_hits_id = a.id) \n -> Seq Scan on url_hits_category_jt b (cost=0.00..62365120.22 \nrows=4323432222 width=8) \n -> Hash (cost=179805.51..179805.51 rows=9372351 width=4)\n -> Seq Scan on url_hits_klk1 a (cost=0.00..179805.51 rows=9372351 \nwidth=4)\n(5 rows)\n\n\n\nIf I turn off sequential scans I still get an even higher query cost:\n\nset enable_seqscan = off;\nSET\nexplain\n select\n category_id,\n url_hits_id\n from\n url_hits_klk1 a ,\n pwreport.url_hits_category_jt b\nwhere\n a.id = b.url_hits_id\n ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=127548504.83..133214707.19 rows=441791932 width=8)\n Merge Cond: (a.id = b.url_hits_id)\n -> Index Scan using klk1 on url_hits_klk1 a (cost=0.00..303773.29 \nrows=9372351 width=4)\n -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on \nurl_hits_category_jt b (cost=0.00..125058243.39 rows=4323702284 width=8)\n(4 rows)\n\n\nThoughts?\n\n\nThanks in advance\n\n\n\nHi all;\nI have a simple query against two very large tables ( > 800million rows in theurl_hits_category_jt table and 9.2 million in the url_hits_klk1 table )\nI have indexes on the join columns and I've run an explain.\nalso I've set the default statistics to 250 for both join columns. 
I get a very high overall query cost:\nexplain \n select \n category_id, \n url_hits_id \n from \n url_hits_klk1 a , \n pwreport.url_hits_category_jt b \nwhere \n a.id = b.url_hits_id \n ; \n QUERY PLAN \n-------------------------------------------------------------------------------------------- \n Hash Join (cost=296959.90..126526916.55 rows=441764338 width=8) \n Hash Cond: (b.url_hits_id = a.id) \n -> Seq Scan on url_hits_category_jt b (cost=0.00..62365120.22 rows=4323432222 width=8) \n -> Hash (cost=179805.51..179805.51 rows=9372351 width=4)\n -> Seq Scan on url_hits_klk1 a (cost=0.00..179805.51 rows=9372351 width=4)\n(5 rows)\nIf I turn off sequential scans I still get an even higher query cost:\nset enable_seqscan = off;\nSET\nexplain\n select\n category_id,\n url_hits_id\n from\n url_hits_klk1 a ,\n pwreport.url_hits_category_jt b\nwhere\n a.id = b.url_hits_id\n ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=127548504.83..133214707.19 rows=441791932 width=8)\n Merge Cond: (a.id = b.url_hits_id)\n -> Index Scan using klk1 on url_hits_klk1 a (cost=0.00..303773.29 rows=9372351 width=4)\n -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on url_hits_category_jt b (cost=0.00..125058243.39 rows=4323702284 width=8)\n(4 rows)\nThoughts?\nThanks in advance", "msg_date": "Thu, 20 Aug 2009 17:09:25 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "improving my query plan" }, { "msg_contents": "Kevin Kempter wrote:\n> Hi all;\n> \n> \n> I have a simple query against two very large tables ( > 800million rows \n> in theurl_hits_category_jt table and 9.2 million in the url_hits_klk1 \n> table )\n> \n> \n> I have indexes on the join columns and I've run an explain.\n> also I've set the default statistics to 250 for both join columns. I get \n> a very high overall query cost:\n\nIf you had an extra where condition it might be different, but you're \njust returning results from both tables that match up so doing a \nsequential scan is going to be the fastest way anyway.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n\n", "msg_date": "Fri, 21 Aug 2009 11:21:45 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving my query plan" }, { "msg_contents": "\n\nOn 8/20/09 4:09 PM, \"Kevin Kempter\" <[email protected]> wrote:\n\n> Hi all;\n> \n> \n> I have a simple query against two very large tables ( > 800million rows in\n> theurl_hits_category_jt table and 9.2 million in the url_hits_klk1 table )\n> \n> \n> I have indexes on the join columns and I've run an explain.\n> also I've set the default statistics to 250 for both join columns. I get a\n> very high overall query cost:\n> \n> \n\nWhat about the actual times? The latter plan has higher cost, but perhaps\nit is actually faster? 
If so, you can change the estimated cost by changing\nthe db cost parameters.\n\nHowever, the second plan will surely be slower if the table is not in memory\nand causes random disk access.\n\nNote that EXPLAIN ANALYZE for the hash plan will take noticeably longer than\na plain query due to the cost of analysis on hashes.\n\n\n> \n> \n> explain \n> select \n> category_id,\n> url_hits_id\n> from \n> url_hits_klk1 a ,\n> pwreport.url_hits_category_jt b\n> where \n> a.id = b.url_hits_id\n> ; \n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> -------------- \n> Hash Join (cost=296959.90..126526916.55 rows=441764338 width=8)\n> Hash Cond: (b.url_hits_id = a.id)\n> -> Seq Scan on url_hits_category_jt b (cost=0.00..62365120.22\n> rows=4323432222 width=8)\n> -> Hash (cost=179805.51..179805.51 rows=9372351 width=4)\n> -> Seq Scan on url_hits_klk1 a (cost=0.00..179805.51 rows=9372351\n> width=4)\n> (5 rows)\n> \n> \n> \n> \n> \n> \n> If I turn off sequential scans I still get an even higher query cost:\n> \n> \n> set enable_seqscan = off;\n> SET\n> explain\n> select\n> category_id,\n> url_hits_id\n> from\n> url_hits_klk1 a ,\n> pwreport.url_hits_category_jt b\n> where\n> a.id = b.url_hits_id\n> ;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> -----------------------------------------------------------------\n> Merge Join (cost=127548504.83..133214707.19 rows=441791932 width=8)\n> Merge Cond: (a.id = b.url_hits_id)\n> -> Index Scan using klk1 on url_hits_klk1 a (cost=0.00..303773.29\n> rows=9372351 width=4)\n> -> Index Scan using mt_url_hits_category_jt_url_hits_id_index on\n> url_hits_category_jt b (cost=0.00..125058243.39 rows=4323702284 width=8)\n> (4 rows)\n> \n> \n> \n> \n> Thoughts?\n> \n> \n> \n> \n> Thanks in advance\n> \n> \n> \n> \n> \n\n", "msg_date": "Thu, 20 Aug 2009 18:33:56 -0700", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving my query plan" }, { "msg_contents": "Kevin Kempter <[email protected]> wrote: \n \n> I have a simple query against two very large tables ( > 800million\n> rows in theurl_hits_category_jt table and 9.2 million in the\n> url_hits_klk1 table )\n \n> I get a very high overall query cost:\n \n> Hash Join (cost=296959.90..126526916.55 rows=441764338 width=8)\n \nWell, the cost is an abstraction which, if you haven't configured it\notherwise, equals the estimated time to return a tuple in a sequential\nscan. This plan is taking advantage of memory to join these two large\ntables and return 441 million result rows in the time it would take to\nread 126 million rows. That doesn't sound like an unreasonable\nestimate to me.\n \nDid you think there should be a faster plan for this query, or is the\nlarge number for the estimated cost worrying you?\n \n-Kevin\n", "msg_date": "Mon, 24 Aug 2009 09:45:42 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: improving my query plan" } ]
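As the replies above say, the cost figures are abstract units (multiples of seq_page_cost), so a larger number is not by itself a slower query; timing both plans is the reliable comparison. A sketch of that follow-up, with the stock defaults shown only for orientation, not as recommendations for this hardware:

SHOW seq_page_cost;         -- 1.0 by default; the unit the estimates are expressed in
SHOW random_page_cost;      -- 4.0 by default; lower values favour index scans
SHOW effective_cache_size;  -- raise toward available RAM if most data stays cached
EXPLAIN ANALYZE             -- actual times, not just estimates
SELECT category_id, url_hits_id
  FROM url_hits_klk1 a
  JOIN pwreport.url_hits_category_jt b ON a.id = b.url_hits_id;
-- If the index-scan plan really is faster on this hardware, adjusting these cost
-- parameters is the supported way to steer the planner, rather than leaving
-- enable_seqscan = off globally.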
[ { "msg_contents": "Hi,\n\nin a web app we have a query that we want to show in limited results\nat a time, this one executes in 10 seconds if i use limit but executes\nin 300ms if i remove it.\nwhy is that happening? the query is using and index for avoiding the\nsort so the nestloop should go only for the first 20 records on\ntgen_persona, no?\nbelow some more info\n\npostgresql 8.3.7\nram 32GB\nshared_buffers 8GB\nwork_mem 8MB\n\ntgen_persona has 185732 records and tcom_invitacion is a partitioned\n(by dates: 1 month every partition) table and has more than 29million\nrecords in the partitions\n\nexplain analyze here: http://explain.depesz.com/s/B4\n\nthe situation improves if i disable nestloops, explain analyze with\nnestloop off here: http://explain.depesz.com/s/Jv\n\nselect Per.razon_social as MAIL,inv.cata_esta_calificacion,\n inv.observa_calificacion,\n to_char(inv.fech_crea,'YYYY:MM:DD') as fech_crea,\n case when (( select cod_estado FROM TPRO_PROVEEDOR\n WHERE id_proveedor = (select max(a.id_proveedor)\n from tpro_proveedor a\n where persona_id = Inv.persona_id )\n )='Habilitado')\n then 'Habilitado'\n else 'Deshabilitado'\n end as empresa_id\n from tgen_persona Per, tcom_invitacion Inv\n where Per.persona_id = Inv.persona_id\n and inv.id_soli_compra = '60505'\n ORDER BY Per.razon_social asc limit 20 offset 0\n\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Thu, 20 Aug 2009 20:50:04 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": true, "msg_subject": "limiting results makes the query slower" }, { "msg_contents": "On Thu, Aug 20, 2009 at 9:50 PM, Jaime\nCasanova<[email protected]> wrote:\n> in a web app we have a query that we want to show in limited results\n> at a time, this one executes in 10 seconds if i use limit but executes\n> in 300ms if i remove it.\n> why is that happening? the query is using and index for avoiding the\n> sort so the nestloop should go only for the first 20 records on\n> tgen_persona, no?\n> below some more info\n>\n> postgresql 8.3.7\n> ram 32GB\n> shared_buffers 8GB\n> work_mem 8MB\n>\n> tgen_persona has 185732 records and tcom_invitacion is a partitioned\n> (by dates: 1 month every partition) table and has more than 29million\n> records in the partitions\n>\n> explain analyze here: http://explain.depesz.com/s/B4\n>\n> the situation improves if i disable nestloops, explain analyze with\n> nestloop off here: http://explain.depesz.com/s/Jv\n>\n> select Per.razon_social as MAIL,inv.cata_esta_calificacion,\n>       inv.observa_calificacion,\n>       to_char(inv.fech_crea,'YYYY:MM:DD') as fech_crea,\n>       case when (( select cod_estado FROM TPRO_PROVEEDOR\n>                     WHERE id_proveedor = (select max(a.id_proveedor)\n>                                             from tpro_proveedor a\n>                                            where persona_id = Inv.persona_id )\n>                  )='Habilitado')\n>            then 'Habilitado'\n>            else 'Deshabilitado'\n>       end as empresa_id\n>  from tgen_persona Per, tcom_invitacion Inv\n>  where Per.persona_id = Inv.persona_id\n>   and inv.id_soli_compra = '60505'\n>  ORDER BY Per.razon_social asc limit 20 offset 0\n\nThis is pretty common. Tom Lane pointed out in a message I don't\nfeel like searching for right now that LIMIT tends to magnify the\neffect of bad selectivity estimates. 
In this case, the join\nselectivity is off by more than 3 orders of magnitude right here:\n\nNested Loop (cost=0.00..4280260.77 rows=8675 width=588) (actual\ntime=4835.934..11335.731 rows=2 loops=1)\n\nI'm not familiar with how we estimate join selectivity in this case,\nbut it's obviously giving really, really wrong answers. The problem\nmay be here:\n\nAppend (cost=0.00..22.00 rows=23 width=560) (actual time=0.055..0.055\nrows=0 loops=185732)\n\nIt appears that we're estimating 23 rows because we have 23\npartitions, and we're estimating one row for each. There are a lot of\nplaces in the planner where we round off to an integer with a floor of\n1, which may be part of the problem here... but I don't know without\nlooking at the code.\n\n...Robert\n", "msg_date": "Sun, 23 Aug 2009 17:41:39 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limiting results makes the query slower" } ]
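The reply above traces the bad plan to a join selectivity estimate that is off by three orders of magnitude, which LIMIT then magnifies. A possible follow-up, sketched here rather than taken from the thread, is to raise the statistics on the filtered column and re-check the estimate; 500 is only an example target, and on 8.3 the child partitions are not analyzed through the parent, so results may vary:

ALTER TABLE tcom_invitacion ALTER COLUMN id_soli_compra SET STATISTICS 500;
ANALYZE tcom_invitacion;   -- repeat for the individual partitions as needed
EXPLAIN ANALYZE
SELECT count(*) FROM tcom_invitacion WHERE id_soli_compra = '60505';
-- If the estimated row count for this condition moves toward the real one, the
-- LIMIT 20 query stands a better chance of getting a sensibly ordered plan.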
[ { "msg_contents": "> ---------- Forwarded message ----------\n> From: Jaime Casanova <[email protected]>\n> To: psql performance list <[email protected]>\n> Date: Wed, 19 Aug 2009 19:25:11 -0500\n> Subject: [PERFORMANCE] how to set wal_buffers\n> Hi,\n>\n> Our fine manual says:\n> \"\"\"\n> The amount of memory used in shared memory for WAL data. The default\n> is 64 kilobytes (64kB). The setting need only be large enough to hold\n> the amount of WAL data generated by one typical transaction, since the\n> data is written out to disk at every transaction commit. This\n> parameter can only be set at server start.\n> \"\"\"\n\nI don't care for that description for several reasons, but haven't\nbeen able to come up with a good alternative.\n\nOne problem is as you note. How is the average user supposed to know\nwhat is the size of the redo that is generated by a typical\ntransaction?\n\nBut other that, I still think it is not good advice. If your typical\ntransaction runs for 5 minutes and generates 100's of MB of WAL and\nthere is only one of them at a time, there is certainly no reason to\nhave several hundred MB of wal_buffers. It will merrily run around\nthe buffer ring of just a few MB.\n\nOn the other extreme, if you have many connections rapidly firing\nsmall transactions, and you hope for the WAL of many of them to all\nget flushed down to disk with a single fsync,\nthen your wal_buffers should be big enough to hold the WAL data of all\nthose transactions. Running out of WAL space has a nasty effect on\ngroup commits.\n\nThe default value of wal_buffers is low because many older systems\nhave a low default value for the kernel setting shmmax. On any\ndecent-sized server, I'd just automatically increase wal_buffers to 1\nor 2 MB. It might help and lot, and it is unlikely to hurt.\n\nJeff\n", "msg_date": "Thu, 20 Aug 2009 21:38:43 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] how to set wal_buffers" }, { "msg_contents": "On Thu, Aug 20, 2009 at 11:38 PM, Jeff Janes<[email protected]> wrote:\n>> ---------- Forwarded message ----------\n>> From: Jaime Casanova <[email protected]>\n>> To: psql performance list <[email protected]>\n>> Date: Wed, 19 Aug 2009 19:25:11 -0500\n>> Subject: [PERFORMANCE] how to set wal_buffers\n>> Hi,\n>>\n>> Our fine manual says:\n>> \"\"\"\n>> The amount of memory used in shared memory for WAL data. The default\n>> is 64 kilobytes (64kB). The setting need only be large enough to hold\n>> the amount of WAL data generated by one typical transaction, since the\n>> data is written out to disk at every transaction commit. This\n>> parameter can only be set at server start.\n>> \"\"\"\n>\n> I don't care for that description for several reasons, but haven't\n> been able to come up with a good alternative.\n>\n> One problem is as you note.  How is the average user supposed to know\n> what is the size of the redo that is generated by a typical\n> transaction?\n>\n\none way is if there is a way to know how many blocks have been written\nby postgres (even a total is usefull because we can divide that per\npg_stat_database.xact_commits), maybe\npg_stat_bgwriter.buffers_checkpoint can give us an idea of that?\n\n>\n> On the other extreme, if you have many connections rapidly firing\n> small transactions, and you hope for the WAL of many of them to all\n> get flushed down to disk with a single fsync,\n> then your wal_buffers should be big enough to hold the WAL data of all\n> those transactions.  
Running out of WAL space has a nasty effect on\n> group commits.\n>\n\nthat's exactly my situation... and i was thinking on raising\nwal_buffers at least to hold variuos transactions... i can use\npg_stat_database.xact_commits to calculate an avg of transactions per\nsecond...\n\nplus i think we need a bit more space for transactions marked as\n\"synchrounous_commit to off\"\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Sun, 23 Aug 2009 15:25:59 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] how to set wal_buffers" }, { "msg_contents": "On Sun, Aug 23, 2009 at 1:25 PM, Jaime\nCasanova<[email protected]> wrote:\n> On Thu, Aug 20, 2009 at 11:38 PM, Jeff Janes<[email protected]> wrote:\n>>> ---------- Forwarded message ----------\n>>> From: Jaime Casanova <[email protected]>\n>>> To: psql performance list <[email protected]>\n>>> Date: Wed, 19 Aug 2009 19:25:11 -0500\n>>> Subject: [PERFORMANCE] how to set wal_buffers\n>>> Hi,\n>>>\n>>> Our fine manual says:\n>>> \"\"\"\n>>> The amount of memory used in shared memory for WAL data. The default\n>>> is 64 kilobytes (64kB). The setting need only be large enough to hold\n>>> the amount of WAL data generated by one typical transaction, since the\n>>> data is written out to disk at every transaction commit. This\n>>> parameter can only be set at server start.\n>>> \"\"\"\n>>\n>> I don't care for that description for several reasons, but haven't\n>> been able to come up with a good alternative.\n>>\n>> One problem is as you note. How is the average user supposed to know\n>> what is the size of the redo that is generated by a typical\n>> transaction?\n>>\n>\n> one way is if there is a way to know how many blocks have been written\n> by postgres (even a total is usefull because we can divide that per\n> pg_stat_database.xact_commits), maybe\n> pg_stat_bgwriter.buffers_checkpoint can give us an idea of that?\n\nNo, you want the amount of WAL data written, not the tablespace data written,\nwhich is what pg_stat_bgwriter gives you. Just look at how fast your pg_xlogs\nare being archived and turned over to determine that WAL volume (unless you\nhave archive_timeout set).\n\nHowever, I don't think this will help. The amount of WAL logs increases\ndramatically right after a checkpoint, so looking at the bulk average doesn't\nsay anything about how much is being generated at the peak. Does your\nperformance drop right after a checkpoint is started?\n\nmaybe the code bracketed by the probes\nTRACE_POSTGRESQL_WAL_BUFFER_WRITE_DIRTY* should be counted\nand reported under one of the stat tables.\n\nJeff\n", "msg_date": "Sun, 23 Aug 2009 15:26:16 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORMANCE] how to set wal_buffers" }, { "msg_contents": "On Sun, Aug 23, 2009 at 5:26 PM, Jeff Janes<[email protected]> wrote:\n>>>\n>>> One problem is as you note.  
How is the average user supposed to know\n>>> what is the size of the redo that is generated by a typical\n>>> transaction?\n>>>\n>>\n>> one way is if there is a way to know how many blocks have been written\n>> by postgres (even a total is usefull because we can divide that per\n>> pg_stat_database.xact_commits), maybe\n>> pg_stat_bgwriter.buffers_checkpoint can give us an idea of that?\n>\n> No, you want the amount of WAL data written, not the tablespace data written,\n> which is what pg_stat_bgwriter gives you. Just look at how fast your pg_xlogs\n> are being archived and turned over to determine that WAL volume (unless you\n> have archive_timeout set).\n\nmmm... what about turning log_checkpoint on and look at the recycled\nsegments number...\n(recycled_segments * wal_segment_size) / number of xact commited in that period\n\ndo that for some days at the same (hopefully peak) hours...\n\n>\n> maybe the code bracketed by the probes\n> TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DIRTY* should be counted\n> and reported under one of the stat tables.\n>\n\n+1, at least could be useful for some of us that do not have dtrace\n\n-- \nAtentamente,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nAsesoría y desarrollo de sistemas\nGuayaquil - Ecuador\nCel. +59387171157\n", "msg_date": "Tue, 25 Aug 2009 15:16:40 -0500", "msg_from": "Jaime Casanova <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] how to set wal_buffers" } ]
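A rough way to put numbers on this without dtrace, sketched from the suggestions above: with log_checkpoints = on (a postgresql.conf setting), each "checkpoint complete" line reports how many transaction log files were added, removed and recycled, and the default WAL segment is 16MB.

-- WAL written over an interval  ~=  (segments added + segments recycled) * 16MB
-- Commits over the same interval come from the statistics collector:
SELECT datname, xact_commit, now() AS sampled_at
  FROM pg_stat_database;
-- WAL per commit ~= WAL written / (xact_commit delta). wal_buffers only has to
-- absorb the WAL produced between fsyncs by transactions committing together, so
-- the 1-2MB suggested above is usually enough even at fairly high commit rates.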
[ { "msg_contents": "\nHello,\n\nWe are using PostgreSQL to index a huge collection (570 000) of articles for a french daily newspaper (Lib�ration). We use massively the full text search feature. I attach to this mail the schema of the database we use.\n\nOverall, we have very interesting performances, except in a few cases, when combining a full text match with a lot of matches with a date order and a limit (which is a very common use case, asking for the 50 more recent articles speaking about a famous person, for example).\n\nThe reason of this mail is what we noticed a performance drop from PostgreSQL 8.3 to PostgreSQL 8.4.\n\nIn order to try to locate the performance cost, I changed a few settings in 8.4 to have the same values than in 8.3 (and rerun analyze after) ::\n\n cursor_tuple_fraction = 1.0\n default_statistics_target = 10\n\nWe the modified settings, the peformance drop is much lower, but still\npresent. Here are the statistics on replaying sequentially a bunch of\nreal-life queries to the two versions of the database :\n\nWith 8.3 ::\n\n 7334 queries, average time is 0.20 s\n 6 queries above 20.00 s (0.08 %)\n 20 queries above 10.00 s (0.27 %)\n 116 queries above 2.00 s (1.58 %)\n top ten: 15.09 15.15 15.19 16.60 20.40 63.05 67.89 78.21 90.30 97.56\n\nWith 8.4 ::\n\n 7334 queries, average time is 0.23 s\n 12 queries above 20.00 s (0.16 %)\n 24 queries above 10.00 s (0.33 %)\n 112 queries above 2.00 s (1.53 %)\n top ten: 31.76 31.94 32.63 47.21 48.80 63.50 79.57 83.36 96.44 113.61\n\n\nHere is an example query that is significantly slower in 8.4 (31.76 seconds) than in 8.3 (10.52 seconds) ::\n\n SELECT classname, id FROM libeindex WHERE (classname = 'article' AND (source IN ('methode','nica') AND (keywords_tsv @@ plainto_tsquery('french', 'assassinat') AND fulltext_tsv @@ to_tsquery('french', 'claude & duviau')))) ORDER BY publicationDate DESC,pageNumber ASC LIMIT 50\n\nAnd the explain on it :\n\nWith 8.3 ::\n\n Limit (cost=752.67..752.67 rows=1 width=24)\n -> Sort (cost=752.67..752.67 rows=1 width=24)\n Sort Key: publicationdate, pagenumber\n -> Bitmap Heap Scan on libeindex (cost=748.64..752.66 rows=1 width=24)\n Recheck Cond: ((keywords_tsv @@ '''assassinat'''::tsquery) AND (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery))\n Filter: (((source)::text = ANY ('{methode,nica}'::text[])) AND ((classname)::text = 'article'::text))\n -> BitmapAnd (cost=748.64..748.64 rows=1 width=0)\n -> Bitmap Index Scan on keywords_index (cost=0.00..48.97 rows=574 width=0)\n Index Cond: (keywords_tsv @@ '''assassinat'''::tsquery)\n -> Bitmap Index Scan on fulltext_index (cost=0.00..699.42 rows=574 width=0)\n Index Cond: (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery)\n (11 rows)\n\nWith 8.4 ::\n\n Limit (cost=758.51..758.51 rows=1 width=24)\n -> Sort (cost=758.51..758.51 rows=1 width=24)\n Sort Key: publicationdate, pagenumber\n -> Bitmap Heap Scan on libeindex (cost=14.03..758.50 rows=1 width=24)\n Recheck Cond: (keywords_tsv @@ '''assassinat'''::tsquery)\n Filter: (((source)::text = ANY ('{methode,nica}'::text[])) AND (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery) AND ((classname)::text = 'article'::text))\n -> Bitmap Index Scan on keywords_index (cost=0.00..14.03 rows=192 width=0)\n Index Cond: (keywords_tsv @@ '''assassinat'''::tsquery)\n (8 rows)\n\nMore informations on the setup :\n\n- postgresql 8.3.7 from Debian Lenny ;\n\n- postgresql 8.4.0 from Debian Lenny backports ;\n\n- rurnning in a Xen virtual machine, using 64-bits kernel ;\n\n- 2 cores of a 2GHz Core2Quad and 2Gb 
of RAM dedicated to the VM.\n\nIf you need additional informations, we'll gladly provide them. If you have any tips or advises so we could make the 8.4 behave as least as good as the 8.3 it would be very nice.\n\nHoping this can help you to improve this great software.\n\nRegards,\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Fri, 21 Aug 2009 15:37:35 +0200", "msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)", "msg_from_op": true, "msg_subject": "Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "Hi Gaël,\n\nOn Fri, Aug 21, 2009 at 3:37 PM, Gaël Le Mignot<[email protected]> wrote:\n> With 8.3 ::\n>\n>  Limit  (cost=752.67..752.67 rows=1 width=24)\n>  (11 rows)\n>\n> With 8.4 ::\n>  (8 rows)\n\nCould you provide us the EXPLAIN *ANALYZE* output of both plans?\n\n From what I can see, one of the difference is that the estimates of\nthe number of rows are / 3 for this part of the query:\n8.3 -> Bitmap Index Scan on keywords_index (cost=0.00..48.97 rows=574 width=0)\n8.4 -> Bitmap Index Scan on keywords_index (cost=0.00..14.03 rows=192 width=0)\n\nIt might be interesting to see if 8.4 is right or not.\n\nBefore 8.4, the selectivity for full text search was a constant (as\nyou can see it in your 8.3 plan: the number of rows are equal in both\nbitmap index scan). 8.4 is smarter which might lead to other plans.\n\n-- \nGuillaume\n", "msg_date": "Sun, 23 Aug 2009 14:49:05 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "Hello Guillaume!\n\nSun, 23 Aug 2009 14:49:05 +0200, you wrote: \n\n > Hi Ga�l,\n > On Fri, Aug 21, 2009 at 3:37 PM, Ga�l Le Mignot<[email protected]> wrote:\n >> With 8.3 ::\n >> \n >> �Limit �(cost=752.67..752.67 rows=1 width=24)\n >> �(11 rows)\n >> \n >> With 8.4 ::\n >> �(8 rows)\n\n > Could you provide us the EXPLAIN *ANALYZE* output of both plans?\n\nSure, here it is :\n\nWith 8.3 ::\n\nlibearticles=> explain analyze SELECT classname, id FROM libeindex WHERE (classname = 'article' AND (source IN ('methode','nica') AND (keywords_tsv @@ plainto_tsquery('french', 'assassinat') AND fulltext_tsv @@ to_tsquery('french', 'claude & duviau')))) ORDER BY publicationDate DESC,pageNumber ASC LIMIT 50;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=760.74..760.74 rows=1 width=24) (actual time=449.057..449.080 rows=9 loops=1)\n -> Sort (cost=760.74..760.74 rows=1 width=24) (actual time=449.053..449.061 rows=9 loops=1)\n Sort Key: publicationdate, pagenumber\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on libeindex (cost=756.71..760.73 rows=1 width=24) (actual time=420.704..448.571 rows=9 loops=1)\n Recheck Cond: ((keywords_tsv @@ '''assassinat'''::tsquery) AND (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery))\n Filter: (((source)::text = ANY ('{methode,nica}'::text[])) AND ((classname)::text = 'article'::text))\n -> BitmapAnd (cost=756.71..756.71 rows=1 width=0) (actual time=420.612..420.612 rows=0 loops=1)\n -> Bitmap Index Scan on keywords_index (cost=0.00..48.96 rows=573 width=0) (actual time=129.338..129.338 rows=10225 loops=1)\n Index Cond: (keywords_tsv @@ 
'''assassinat'''::tsquery)\n -> Bitmap Index Scan on fulltext_index (cost=0.00..707.50 rows=573 width=0) (actual time=289.775..289.775 rows=14 loops=1)\n Index Cond: (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery)\n Total runtime: 471.905 ms\n(13 rows)\n\nWith 8.4 ::\n\nlibebench=> explain analyze SELECT classname, id FROM libeindex WHERE (classname = 'article' AND (source IN ('methode','nica') AND (keywords_tsv @@ plainto_tsquery('french', 'assassinat') AND fulltext_tsv @@ to_tsquery('french', 'claude & duviau')))) ORDER BY publicationDate DESC,pageNumber ASC LIMIT 50;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=758.51..758.51 rows=1 width=24) (actual time=50816.635..50816.660 rows=9 loops=1)\n -> Sort (cost=758.51..758.51 rows=1 width=24) (actual time=50816.628..50816.637 rows=9 loops=1)\n Sort Key: publicationdate, pagenumber\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on libeindex (cost=14.03..758.50 rows=1 width=24) (actual time=8810.133..50816.484 rows=9 loops=1)\n Recheck Cond: (keywords_tsv @@ '''assassinat'''::tsquery)\n Filter: (((source)::text = ANY ('{methode,nica}'::text[])) AND (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery) AND ((classname)::text = 'article'::text))\n -> Bitmap Index Scan on keywords_index (cost=0.00..14.03 rows=192 width=0) (actual time=158.563..158.563 rows=10222 loops=1)\n Index Cond: (keywords_tsv @@ '''assassinat'''::tsquery)\n Total runtime: 50817.040 ms\n(10 rows)\n\nSo it seems it was quite wrong about estimated matching rows (192 predicted, 10222 reals).\n\n >> From what I can see, one of the difference is that the estimates of\n > the number of rows are / 3 for this part of the query:\n > 8.3 -> Bitmap Index Scan on keywords_index (cost=0.00..48.97 rows=574 width=0)\n > 8.4 -> Bitmap Index Scan on keywords_index (cost=0.00..14.03 rows=192 width=0)\n\n > It might be interesting to see if 8.4 is right or not.\n\n > Before 8.4, the selectivity for full text search was a constant (as\n > you can see it in your 8.3 plan: the number of rows are equal in both\n > bitmap index scan). 8.4 is smarter which might lead to other plans.\n\nI see, thanks for your answer. What's weird is that this \"smartness\"\nleads to overall worse results in our case, is there some tweaking we\ncan do? I didn't see anything in the documentation to change\nweighting inside the text-match heuristic.\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Wed, 26 Aug 2009 18:03:34 +0200", "msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)", "msg_from_op": true, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot) writes:\n> So it seems it was quite wrong about estimated matching rows (192 predicted, 10222 reals).\n\nYup. What's even more interesting is that it seems the real win would\nhave been to use just the 'claude & duviau' condition (which apparently\nmatched only 14 rows). 8.3 had no hope whatever of understanding that,\nit just got lucky. 
8.4 should have figured it out, I'm thinking.\nDoes it help if you increase the statistics target for fulltext_tsv?\n(Don't forget to re-ANALYZE after doing so.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Aug 2009 12:29:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "On Wed, Aug 26, 2009 at 6:29 PM, Tom Lane<[email protected]> wrote:\n> [email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot) writes:\n>> So it seems it was quite wrong about estimated matching rows (192 predicted, 10222 reals).\n>\n> Yup.  What's even more interesting is that it seems the real win would\n> have been to use just the 'claude & duviau' condition (which apparently\n> matched only 14 rows).  8.3 had no hope whatever of understanding that,\n> it just got lucky.  8.4 should have figured it out, I'm thinking.\n> Does it help if you increase the statistics target for fulltext_tsv?\n> (Don't forget to re-ANALYZE after doing so.)\n\nIt could be interesting to run the query without the condition\n(keywords_tsv @@ '''assassinat'''::tsquery) to see the estimate of\n(fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery) in 8.4.\n\nBtw, what Tom means by increasing the statistics is executing the\nfollowing queries:\nALTER TABLE libeindex ALTER COLUMN fulltext_tsv SET STATISTICS 500;\nANALYZE;\nrun your query with EXPLAIN ANALYZE;\nALTER TABLE libeindex ALTER COLUMN fulltext_tsv SET STATISTICS 1000;\nANALYZE;\nrun your query with EXPLAIN ANALYZE;\nALTER TABLE libeindex ALTER COLUMN fulltext_tsv SET STATISTICS 5000;\nANALYZE;\nrun your query with EXPLAIN ANALYZE;\n\nto see if it improves the estimates.\n\n-- \nGuillaume\n", "msg_date": "Wed, 26 Aug 2009 23:59:25 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "Hello Guillaume!\n\nWed, 26 Aug 2009 23:59:25 +0200, you wrote: \n\n > On Wed, Aug 26, 2009 at 6:29 PM, Tom Lane<[email protected]> wrote:\n >> [email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot) writes:\n >>> So it seems it was quite wrong about estimated matching rows (192 predicted, 10222 reals).\n >> \n >> Yup. �What's even more interesting is that it seems the real win would\n >> have been to use just the 'claude & duviau' condition (which apparently\n >> matched only 14 rows). �8.3 had no hope whatever of understanding that,\n >> it just got lucky. 
�8.4 should have figured it out, I'm thinking.\n >> Does it help if you increase the statistics target for fulltext_tsv?\n >> (Don't forget to re-ANALYZE after doing so.)\n\n > It could be interesting to run the query without the condition\n > (keywords_tsv @@ '''assassinat'''::tsquery) to see the estimate of\n > (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery) in 8.4.\n\nHere it is ::\n\nlibebench=> explain analyze SELECT classname, id FROM libeindex WHERE (classname = 'article' AND (source IN ('methode','nica') AND fulltext_tsv @@ to_tsquery('french', 'claude & duviau'))) ORDER BY publicationDate DESC,pageNumber ASC LIMIT 50;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=12264.98..12265.11 rows=50 width=24) (actual time=3.799..3.825 rows=10 loops=1)\n -> Sort (cost=12264.98..12271.03 rows=2421 width=24) (actual time=3.794..3.802 rows=10 loops=1)\n Sort Key: publicationdate, pagenumber\n Sort Method: quicksort Memory: 25kB\n -> Bitmap Heap Scan on libeindex (cost=2363.10..12184.56 rows=2421 width=24) (actual time=3.579..3.693 rows=10 loops=1)\n Recheck Cond: (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery)\n Filter: (((source)::text = ANY ('{methode,nica}'::text[])) AND ((classname)::text = 'article'::text))\n -> Bitmap Index Scan on fulltext_index (cost=0.00..2362.49 rows=2877 width=0) (actual time=3.499..3.499 rows=14 loops=1)\n Index Cond: (fulltext_tsv @@ '''claud'' & ''duviau'''::tsquery)\n Total runtime: 166.772 ms\n(10 rows)\n\nSo it estimates 2877 rows for that, while in reality it's 14.\n\n > Btw, what Tom means by increasing the statistics is executing the\n > following queries:\n > ALTER TABLE libeindex ALTER COLUMN fulltext_tsv SET STATISTICS 500;\n\nOk, I did it for 500 also on the keywords_tsv column, which was the\nother contestor. Here we have a clear improvement: the search in\nkeyword_tsv is now estimated at 10398 (real being 10222) and the one\non fulltext_tsv at 1 (real being 14).\n\nI did it at 1000 too, it's almost the same result.\n\nBy re-running our sampling of 7334 queries on the database with the\nstatistics at 1000 on both fulltext_tsv and keywords_tsv, we do have\noverall better results than with 8.3 ! So a greeat thanks to everyone.\n\nThe weird thing was that with the default of 100 for statistics\ntarget, it was worse than when we moved back to 10. So I didn't try\nwith 1000, but I should have.\n\nI'll do more tests and keep the list informed if it can be of any\nhelp.\n\n-- \nGa�l Le Mignot - [email protected]\nPilot Systems - 9, rue Desargues - 75011 Paris\nTel : +33 1 44 53 05 55 - www.pilotsystems.net\nG�rez vos contacts et vos newsletters : www.cockpit-mailing.com\n", "msg_date": "Thu, 27 Aug 2009 22:03:10 +0200", "msg_from": "[email protected] (=?iso-8859-1?Q?Ga=EBl?= Le Mignot)", "msg_from_op": true, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" }, { "msg_contents": "2009/8/27 Gaël Le Mignot <[email protected]>:\n> The  weird thing  was  that with  the  default of  100 for  statistics\n> target, it was  worse than when we  moved back to 10. So  I didn't try\n> with 1000, but I should have.\n\nWhen you have so much data and a statistics target so low, you can't\nexpect the sample taken to be representative :between different runs\nof ANALYZE, you can have totally different estimates and so totally\ndifferent plans. 
You just were lucky at 10 and unlucky at 100.\n\nThanks for your feedback and it's nice to see your problem solved.\n\n-- \nGuillaume\n", "msg_date": "Fri, 28 Aug 2009 00:20:34 +0200", "msg_from": "Guillaume Smet <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance regression between 8.3 and 8.4 on heavy text indexing" } ]
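The fix this thread converged on, gathered in one place for readers; this restates the poster's final settings rather than adding new advice:

ALTER TABLE libeindex ALTER COLUMN fulltext_tsv SET STATISTICS 1000;
ALTER TABLE libeindex ALTER COLUMN keywords_tsv SET STATISTICS 1000;
ANALYZE libeindex;
-- Check that the selective term is now estimated realistically:
EXPLAIN ANALYZE
SELECT id FROM libeindex
 WHERE fulltext_tsv @@ to_tsquery('french', 'claude & duviau');
-- With 570,000 rows, a small statistics sample gives unstable tsvector estimates
-- from one ANALYZE run to the next, which is why a target of 10 happened to beat 100.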
[ { "msg_contents": "hi,\n\ni have a query that uses a Hash-Join, but it would be faster with Nested-Loop,\nand i don't know how to persuade postgresql to do it.\n\ndetails:\n\npostgresql-8.2 + tsearch2\n\ni have 2 tables, one for people, and one that does a many-to-many\nlink between people:\n\nCREATE TABLE personlink (\n id integer NOT NULL,\n relid integer NOT NULL,\n created timestamp with time zone DEFAULT now() NOT NULL,\n changed timestamp with time zone,\n editorid integer NOT NULL\n);\n\nbtree indexes on \"id\" and \"relid\",\nPRIMARY KEY btree index on (id,relid).\n\nCREATE TABLE person (\n id integer NOT NULL,\n firstname character varying(255),\n .\n .\n .\n);\nPRIMARY KEY btree index on \"id\".\ngin index on \"firstname\" (for tsearch2)\n\n(the \"person\" table contains more columns (around 30))\n\npersonlink contains 1.500.000 rows, person contains 900.000 rows.\ni did a vacuum-with-analyze.\n\nmy query is:\n\nSELECT personlink.id\nFROM personlink\nINNER JOIN person ON personlink.relid=person.id\nWHERE to_tsquery('default','duck') @@ to_tsvector('default',person.firstname);\n\nexplain analyze says this:\n\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3108.62..35687.67 rows=1535 width=4) (actual\ntime=901.110..6113.683 rows=2 loops=1)\n Hash Cond: (personlink.relid = person.id)\n -> Seq Scan on personlink (cost=0.00..26805.14 rows=1535614\nwidth=8) (actual time=0.029..3000.503 rows=1535614 loops=1)\n -> Hash (cost=3097.80..3097.80 rows=866 width=4) (actual\ntime=0.185..0.185 rows=8 loops=1)\n -> Bitmap Heap Scan on person (cost=23.09..3097.80 rows=866\nwidth=4) (actual time=0.078..0.160 rows=8 loops=1)\n Recheck Cond: ('''duck'''::tsquery @@\nto_tsvector('default'::text, (firstname)::text))\n -> Bitmap Index Scan on person_firstname_exact\n(cost=0.00..22.87 rows=866 width=0) (actual time=0.056..0.056 rows=8\nloops=1)\n Index Cond: ('''duck'''::tsquery @@\nto_tsvector('default'::text, (firstname)::text))\n Total runtime: 6113.748 ms\n(9 rows)\n\nif i disable hash-joins with \"SET enable_hashjoin =false;\"\n\ni get:\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..45698.23 rows=1535 width=4) (actual\ntime=4.960..15.098 rows=2 loops=1)\n -> Index Scan using person_firstname_exact on person\n(cost=0.00..3463.53 rows=866 width=4) (actual time=0.117..0.234 rows=8\nloops=1)\n Index Cond: ('''duck'''::tsquery @@\nto_tsvector('default'::text, (firstname)::text))\n -> Index Scan using personlink_relid_idx on personlink\n(cost=0.00..48.54 rows=18 width=8) (actual time=1.848..1.849 rows=0\nloops=8)\n Index Cond: (personlink.relid = person.id)\n Total runtime: 15.253 ms\n(6 rows)\n\nwhat could i do to persuade postgresql to choose the faster Nested-Loop?\n\nthanks,\ngabor\n", "msg_date": "Mon, 24 Aug 2009 08:54:46 +0200", "msg_from": "=?ISO-8859-1?Q?G=E1bor_Farkas?= <[email protected]>", "msg_from_op": true, "msg_subject": "postgresql uses Hash-join, i need Nested-loop" }, { "msg_contents": "=?ISO-8859-1?Q?G=E1bor_Farkas?= <[email protected]> writes:\n> i have a query that uses a Hash-Join, but it would be faster with Nested-Loop,\n> and i don't know how to persuade postgresql to do it.\n\nFix the way-off-base rowcount estimates ...\n\n> postgresql-8.2 + tsearch2\n\n... 
which is just about impossible in 8.2, because there's no\nstatistical support for @@ selectivity estimation. Consider\nupdating to 8.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Aug 2009 10:37:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgresql uses Hash-join, i need Nested-loop " } ]
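One stopgap for readers stuck on 8.2, where the planner has no selectivity support for @@, is to disable hash joins only for the affected statement rather than for the whole session. A sketch using the query from the thread (SET LOCAL limits the change to the enclosing transaction):

    BEGIN;
    SET LOCAL enable_hashjoin = false;
    SELECT personlink.id
      FROM personlink
      INNER JOIN person ON personlink.relid = person.id
     WHERE to_tsquery('default', 'duck') @@ to_tsvector('default', person.firstname);
    COMMIT;

This only papers over the misestimate; the durable fix is the upgrade Tom suggests, after which the planner should be able to pick the nested loop on its own.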
[ { "msg_contents": "Hi,\n\nI am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE\ncolumns like 'startdate' and 'enddate' (just date, not interested in time in\nthese columns). I have some queries (some using OVERLAPS) involving both\n'startdate' and 'enddate' columns. I tried to create a multi column index\nusing pgAdmin and it comes back with this error:\n\nERROR: data type date has no default operator class for access method \"gist\"\nHINT: You must specify an operator class for the index or define a default\noperator class for the data type.\n\nI search the pdf docs and online without finding what an \"operator class\"\nfor DATE would be. Would a multi-column index help in that case (OVERLAPS\nand dates comparison) anyway? Or should I just define an index for each of\nthe dates?\n\nBelow are the table and index defintions.\n\nThanks\n\nFred\n\n---------------------------------------------\nCREATE INDEX startenddate\n ON times USING gist (startdate, enddate);\n\n---------------------------------------------\n-- Table: times\n\n-- DROP TABLE times;\n\nCREATE TABLE times\n(\n id serial NOT NULL,\n startdate date NOT NULL,\n enddate date NOT NULL,\n starttime time without time zone,\n endtime time without time zone,\n CONSTRAINT pk_id PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\nALTER TABLE times OWNER TO postgres;\nGRANT ALL ON TABLE times TO postgres;\nGRANT ALL ON TABLE times TO public;\n\nHi,I am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE columns like 'startdate' and 'enddate' (just date, not interested in time in these columns). I have some queries (some using OVERLAPS) involving both 'startdate' and 'enddate' columns. I tried to create a multi column index using pgAdmin and it comes back with this error:\nERROR: data type date has no default operator class for access method \"gist\"HINT: You must specify an operator class for the index or define a default operator class for the data type.I search the pdf docs and online without finding what an \"operator class\" for DATE would be. Would a multi-column index help in that case (OVERLAPS and dates comparison) anyway? Or should I just define an index for each of the dates?\nBelow are the table and index defintions.ThanksFred---------------------------------------------CREATE INDEX startenddate   ON times USING gist (startdate, enddate);---------------------------------------------\n-- Table: times-- DROP TABLE times;CREATE TABLE times(  id serial NOT NULL,  startdate date NOT NULL,  enddate date NOT NULL,  starttime time without time zone,  endtime time without time zone,\n  CONSTRAINT pk_id PRIMARY KEY (id))WITH (OIDS=FALSE);ALTER TABLE times OWNER TO postgres;GRANT ALL ON TABLE times TO postgres;GRANT ALL ON TABLE times TO public;", "msg_date": "Mon, 24 Aug 2009 17:24:59 +0800", "msg_from": "Fred Janon <[email protected]>", "msg_from_op": true, "msg_subject": "How to create a multi-column index with 2 dates using 'gist'?" }, { "msg_contents": "Asking the Performance people as well, since I didn't get any answer from\nGeneral...\n\nI have been unable to create a multi column index with 2 integers as well,\nsame error as the one I get with 2 dates.\n\nThanks\n\nFred\n\n---------- Forwarded message ----------\nFrom: Fred Janon <[email protected]>\nDate: Mon, Aug 24, 2009 at 17:24\nSubject: How to create a multi-column index with 2 dates using 'gist'?\nTo: [email protected]\n\n\nHi,\n\nI am using 8.3 and pgAdmin III. 
I have a couple of tables using 2 DATE\ncolumns like 'startdate' and 'enddate' (just date, not interested in time in\nthese columns). I have some queries (some using OVERLAPS) involving both\n'startdate' and 'enddate' columns. I tried to create a multi column index\nusing pgAdmin and it comes back with this error:\n\nERROR: data type date has no default operator class for access method \"gist\"\nHINT: You must specify an operator class for the index or define a default\noperator class for the data type.\n\nI search the pdf docs and online without finding what an \"operator class\"\nfor DATE would be. Would a multi-column index help in that case (OVERLAPS\nand dates comparison) anyway? Or should I just define an index for each of\nthe dates?\n\nBelow are the table and index defintions.\n\nThanks\n\nFred\n\n---------------------------------------------\nCREATE INDEX startenddate\n ON times USING gist (startdate, enddate);\n\n---------------------------------------------\n-- Table: times\n\n-- DROP TABLE times;\n\nCREATE TABLE times\n(\n id serial NOT NULL,\n startdate date NOT NULL,\n enddate date NOT NULL,\n starttime time without time zone,\n endtime time without time zone,\n CONSTRAINT pk_id PRIMARY KEY (id)\n)\nWITH (OIDS=FALSE);\nALTER TABLE times OWNER TO postgres;\nGRANT ALL ON TABLE times TO postgres;\nGRANT ALL ON TABLE times TO public;\n\nAsking the Performance people as well, since I didn't get any answer from General...I have been unable to create a multi column index with 2 integers as well, same error as the one I get with 2 dates.Thanks\nFred---------- Forwarded message ----------From: Fred Janon <[email protected]>\nDate: Mon, Aug 24, 2009 at 17:24Subject: How to create a multi-column index with 2 dates using 'gist'?To: [email protected],\nI am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE columns like 'startdate' and 'enddate' (just date, not interested in time in these columns). I have some queries (some using OVERLAPS) involving both 'startdate' and 'enddate' columns. I tried to create a multi column index using pgAdmin and it comes back with this error:\nERROR: data type date has no default operator class for access method \"gist\"HINT: You must specify an operator class for the index or define a default operator class for the data type.I search the pdf docs and online without finding what an \"operator class\" for DATE would be. Would a multi-column index help in that case (OVERLAPS and dates comparison) anyway? Or should I just define an index for each of the dates?\nBelow are the table and index defintions.ThanksFred---------------------------------------------CREATE INDEX startenddate   ON times USING gist (startdate, enddate);---------------------------------------------\n\n-- Table: times-- DROP TABLE times;CREATE TABLE times(  id serial NOT NULL,  startdate date NOT NULL,  enddate date NOT NULL,  starttime time without time zone,  endtime time without time zone,\n\n  CONSTRAINT pk_id PRIMARY KEY (id))WITH (OIDS=FALSE);ALTER TABLE times OWNER TO postgres;GRANT ALL ON TABLE times TO postgres;GRANT ALL ON TABLE times TO public;", "msg_date": "Tue, 25 Aug 2009 17:29:47 +0800", "msg_from": "Fred Janon <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: How to create a multi-column index with 2 dates using 'gist'?" }, { "msg_contents": "On Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:\n> I am using 8.3 and pgAdmin III. 
I have a couple of tables using 2 DATE\n> columns like 'startdate' and 'enddate' (just date, not interested in time in\n> these columns). I have some queries (some using OVERLAPS) involving both\n> 'startdate' and 'enddate' columns. I tried to create a multi column index\n> using pgAdmin and it comes back with this error:\n> \n> ERROR: data type date has no default operator class for access method \"gist\"\n> HINT: You must specify an operator class for the index or define a default\n> operator class for the data type.\n\nI've not had the opportunity to try doing this, but it would seem to\nrequire hacking some C code to get this working. Have a look here:\n\n http://www.postgresql.org/docs/current/static/gist.html\n\n> I search the pdf docs and online without finding what an \"operator class\"\n> for DATE would be. Would a multi-column index help in that case (OVERLAPS\n> and dates comparison) anyway? Or should I just define an index for each of\n> the dates?\n\nAn operator class bundles together various bits of code so that the\nindex knows which functions to call when it needs to compare things.\n\nIf you were creating an GiST index over a pair of dates to support\nan \"overlaps\" operator you'd have to define a set of functions that\nimplement the various checks needed.\n\n\nDepending on your data you may be easier with just a multi-column index\nand using normal comparisons, I can't see how OVERLAPS could use indexes\nas it does some strange things with NULL values. The cases a B-Tree\nindex would win over GiST (this is an educated guess) is when few of the\nranges overlap within a table. If that's the case then I'd do:\n\n CREATE INDEX tbl_start_end_idx ON tbl (startdate,enddate);\n\nto create the btree index (they're the default, so nothing else is\nneeded) and then write queries as:\n\n SELECT r.range, t.*\n FROM tbl t, ranges r\n WHERE t.startdate <= r.rangeend\n AND t.enddate >= r.rangestart;\n\nif there are lots of overlapping ranges in the table then this is going\nto do badly and you may need to start thinking about writing some C code\nto get a GiST index going.\n\n-- \n Sam http://samason.me.uk/\n", "msg_date": "Tue, 25 Aug 2009 11:52:11 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to create a multi-column index with 2 dates using 'gist'?" }, { "msg_contents": "Thanks Sam. I looked at the gist documentation and although it would be fun,\nI don't have the time at the moment to explore that avenue (and scratching\nmy head!). I also think it would require a lot of work testing to validate\nthe code and that the gist index is better than the B-tree one. So I am\nfollowing your advice using a B-tree index for now.\n\nBasically I have an events table representing events with a duration\n(startdate, enddate). I was wondering if it would improve the performance if\nI was creating a separate table (indexed as you suggested) with the date\nranges (startdate, enddate) and point to that from my events table. That\nwould eliminate the duplicate ranges, costing a join to find the events\nwithin a date range, but maybe improving the search performance for events\nthat overlap a certain date range. Any feedback on that?\n\nThanks\n\nFred\n\nOn Tue, Aug 25, 2009 at 18:52, Sam Mason <[email protected]> wrote:\n\n> On Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:\n> > I am using 8.3 and pgAdmin III. 
I have a couple of tables using 2 DATE\n> > columns like 'startdate' and 'enddate' (just date, not interested in time\n> in\n> > these columns). I have some queries (some using OVERLAPS) involving both\n> > 'startdate' and 'enddate' columns. I tried to create a multi column index\n> > using pgAdmin and it comes back with this error:\n> >\n> > ERROR: data type date has no default operator class for access method\n> \"gist\"\n> > HINT: You must specify an operator class for the index or define a\n> default\n> > operator class for the data type.\n>\n> I've not had the opportunity to try doing this, but it would seem to\n> require hacking some C code to get this working. Have a look here:\n>\n> http://www.postgresql.org/docs/current/static/gist.html\n>\n> > I search the pdf docs and online without finding what an \"operator class\"\n> > for DATE would be. Would a multi-column index help in that case (OVERLAPS\n> > and dates comparison) anyway? Or should I just define an index for each\n> of\n> > the dates?\n>\n> An operator class bundles together various bits of code so that the\n> index knows which functions to call when it needs to compare things.\n>\n> If you were creating an GiST index over a pair of dates to support\n> an \"overlaps\" operator you'd have to define a set of functions that\n> implement the various checks needed.\n>\n>\n> Depending on your data you may be easier with just a multi-column index\n> and using normal comparisons, I can't see how OVERLAPS could use indexes\n> as it does some strange things with NULL values. The cases a B-Tree\n> index would win over GiST (this is an educated guess) is when few of the\n> ranges overlap within a table. If that's the case then I'd do:\n>\n> CREATE INDEX tbl_start_end_idx ON tbl (startdate,enddate);\n>\n> to create the btree index (they're the default, so nothing else is\n> needed) and then write queries as:\n>\n> SELECT r.range, t.*\n> FROM tbl t, ranges r\n> WHERE t.startdate <= r.rangeend\n> AND t.enddate >= r.rangestart;\n>\n> if there are lots of overlapping ranges in the table then this is going\n> to do badly and you may need to start thinking about writing some C code\n> to get a GiST index going.\n>\n> --\n> Sam http://samason.me.uk/\n>\n> --\n> Sent via pgsql-general mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-general\n>\n\nThanks Sam. I looked at the gist documentation and although it would be fun, I don't have the time at the moment to explore that avenue (and scratching my head!). I also think it would require a lot of work testing to validate the code and that the gist index is better than the B-tree one. So I am following your advice using a B-tree index for now.\nBasically I have an events table representing events with a duration (startdate, enddate). I was wondering if it would improve the performance if I was creating a separate table (indexed as you suggested) with the date ranges (startdate, enddate) and point to that from my events table. That would eliminate the duplicate ranges, costing a join to find the events within a date range, but maybe improving the search performance for events that overlap a certain date range. Any feedback on that?\nThanksFredOn Tue, Aug 25, 2009 at 18:52, Sam Mason <[email protected]> wrote:\nOn Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:\n> I am using 8.3 and pgAdmin III. 
I have a couple of tables using 2 DATE\n> columns like 'startdate' and 'enddate' (just date, not interested in time in\n> these columns). I have some queries (some using OVERLAPS) involving both\n> 'startdate' and 'enddate' columns. I tried to create a multi column index\n> using pgAdmin and it comes back with this error:\n>\n> ERROR: data type date has no default operator class for access method \"gist\"\n> HINT: You must specify an operator class for the index or define a default\n> operator class for the data type.\n\nI've not had the opportunity to try doing this, but it would seem to\nrequire hacking some C code to get this working.  Have a look here:\n\n  http://www.postgresql.org/docs/current/static/gist.html\n\n> I search the pdf docs and online without finding what an \"operator class\"\n> for DATE would be. Would a multi-column index help in that case (OVERLAPS\n> and dates comparison) anyway? Or should I just define an index for each of\n> the dates?\n\nAn operator class bundles together various bits of code so that the\nindex knows which functions to call when it needs to compare things.\n\nIf you were creating an GiST index over a pair of dates to support\nan \"overlaps\" operator you'd have to define a set of functions that\nimplement the various checks needed.\n\n\nDepending on your data you may be easier with just a multi-column index\nand using normal comparisons, I can't see how OVERLAPS could use indexes\nas it does some strange things with NULL values.  The cases a B-Tree\nindex would win over GiST (this is an educated guess) is when few of the\nranges overlap within a table.  If that's the case then I'd do:\n\n  CREATE INDEX tbl_start_end_idx ON tbl (startdate,enddate);\n\nto create the btree index (they're the default, so nothing else is\nneeded) and then write queries as:\n\n  SELECT r.range, t.*\n  FROM tbl t, ranges r\n  WHERE t.startdate <= r.rangeend\n    AND t.enddate   >= r.rangestart;\n\nif there are lots of overlapping ranges in the table then this is going\nto do badly and you may need to start thinking about writing some C code\nto get a GiST index going.\n\n--\n  Sam  http://samason.me.uk/\n\n--\nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general", "msg_date": "Tue, 25 Aug 2009 19:39:26 +0800", "msg_from": "Fred Janon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to create a multi-column index with 2 dates using\n\t'gist'?" }, { "msg_contents": "On Tue, Aug 25, 2009 at 07:39:26PM +0800, Fred Janon wrote:\n> Basically I have an events table representing events with a duration\n> (startdate, enddate). I was wondering if it would improve the performance if\n> I was creating a separate table (indexed as you suggested) with the date\n> ranges (startdate, enddate) and point to that from my events table. That\n> would eliminate the duplicate ranges, costing a join to find the events\n> within a date range, but maybe improving the search performance for events\n> that overlap a certain date range. Any feedback on that?\n\nIt depends on the sorts of queries you're going to be doing most often.\n\nNot sure how is best to explain when GiST is going to win, but if you\nthink of a rectangle with the start dates going along the top edge and\nthe end dates going down the side. If you sort the values by the start\ndate will you end up with most of them on a diagonal or will they be\nscattered randomly around. 
I.e the less correlation between the start\nand end date the better GiST will do, relative to a btree index. I\nthink that's right anyway!\n\n-- \n Sam http://samason.me.uk/\n", "msg_date": "Tue, 25 Aug 2009 12:57:50 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to create a multi-column index with 2 dates using 'gist'?" }, { "msg_contents": "On Tue, 25 Aug 2009, Fred Janon wrote:\n> Asking the Performance people as well, since I didn't get any answer from General...\n> \n> I have been unable to create a multi column index with 2 integers as well, same error as\n> the one I get with 2 dates.\n\n> ERROR: data type date has no default operator class for access method \"gist\"\n> HINT: You must specify an operator class for the index or define a default operator class\n> for the data type.\n\nYou need to install the contrib package btree_gist, which contains default \noperators for various data types, including (at least) integer, and \nprobably date as well. However, there seems to be very little point in \ndoing so, as the standard Postgres btree will handle these many times \nbetter than GiST.\n\n> I search the pdf docs and online without finding what an \"operator class\" for DATE would\n> be. Would a multi-column index help in that case (OVERLAPS and dates comparison) anyway?\n> Or should I just define an index for each of the dates?\n\nHere we have a clue as to why you are wanting GiST. You want to say \"Find \nme the rows that overlap in date with this range\". That requires more than \njust a standard index, and creating a two-column GiST date index will not \nsolve your problem.\n\nYour query will look something like:\n\nSELECT blah FROM blah\nWHERE start_date <= range_end AND end_date >= range_start\n\nAnd for that, you need an R-Tree index. Now, I am not aware of one in \nPostgres which indexes dates, however the \"seg\" package in contrib will \nindex floating point values, and \"bioseg\" (available from \nhttp://www.bioinformatics.org/bioseg/wiki/ which I am maintaining at the \nmoment) will index integers.\n\nMatthew\n\n-- \n The early bird gets the worm. If you want something else for breakfast, get\n up later.\n", "msg_date": "Wed, 26 Aug 2009 15:07:45 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: How to create a multi-column index with 2 dates using\n 'gist'?" }, { "msg_contents": "On Mon, Aug 24, 2009 at 05:24:59PM +0800, Fred Janon wrote:\n> Hi,\n> \n> I am using 8.3 and pgAdmin III. I have a couple of tables using 2 DATE columns\n> like 'startdate' and 'enddate' (just date, not interested in time in these\n> columns). I have some queries (some using OVERLAPS) involving both 'startdate'\n> and 'enddate' columns. I tried to create a multi column index using pgAdmin and\n> it comes back with this error:\n> \n> ERROR: data type date has no default operator class for access method \"gist\"\n> HINT: You must specify an operator class for the index or define a default\n> operator class for the data type.\n> \n> I search the pdf docs and online without finding what an \"operator class\" for\n> DATE would be. Would a multi-column index help in that case (OVERLAPS and dates\n> comparison) anyway? 
Or should I just define an index for each of the dates?\n> \n> Below are the table and index defintions.\n\nHave a look at http://pgfoundry.org/projects/temporal\n\nBut currently there is no way to avoid overlapping of such periods :(\n\n> Thanks\n> \n> Fred\n\nRegards,\n Gerhard", "msg_date": "Wed, 26 Aug 2009 16:26:13 +0200", "msg_from": "Gerhard Heift <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to create a multi-column index with 2 dates using\n 'gist'?" }, { "msg_contents": "Thanks Gerhard, interesting but I wonder if it is a maintained project, the\nfiles date from May 2008 and there is not much forum activity. I'll out it\non my list of \"To be investigated\".\n\nFred\n\nOn Wed, Aug 26, 2009 at 22:26, Gerhard Heift <\[email protected]> wrote:\n\n> But currently there is no way to avoid overlapping of such periods\n>\n\nThanks Gerhard, interesting but I wonder if it is a maintained project, the files date from May 2008 and there is not much forum activity. I'll out it on my list of \"To be investigated\".Fred\nOn Wed, Aug 26, 2009 at 22:26, Gerhard Heift <[email protected]> wrote:\nBut currently there is no way to avoid overlapping of such periods", "msg_date": "Wed, 26 Aug 2009 23:08:52 +0800", "msg_from": "Fred Janon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to create a multi-column index with 2 dates using\n\t'gist'?" }, { "msg_contents": "On Wed, 2009-08-26 at 23:08 +0800, Fred Janon wrote:\n> Thanks Gerhard, interesting but I wonder if it is a maintained\n> project, the files date from May 2008 and there is not much forum\n> activity. I'll out it on my list of \"To be investigated\".\n\nWell, it's maintained in the sense of \"I don't know of any problems with\nit.\" Right now all it does is implement the PERIOD data type, which is\nindexable so that you can do searches on predicates like\n\"&&\" (overlaps).\n\nIt may get a little more exciting when more features like temporal keys\n(which I'm planning to make possible in the next commitfest) or temporal\njoins (no serious plans yet, but seems doable) are implemented. \n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Wed, 26 Aug 2009 13:16:29 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to create a multi-column index with 2 dates\n using 'gist'?" } ]
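A concrete version of the btree approach Sam describes, written against the times table from the first message (the index name is made up and the date literals are only placeholders):

    CREATE INDEX times_startdate_enddate_idx ON times (startdate, enddate);

    -- rows whose startdate..enddate period overlaps 2009-08-01 .. 2009-08-31
    SELECT id
      FROM times
     WHERE startdate <= DATE '2009-08-31'
       AND enddate   >= DATE '2009-08-01';

As noted in the replies above, this tends to work well when relatively few rows overlap any given range; heavily overlapping data is where the GiST / R-tree route starts to pay off.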
[ { "msg_contents": "Hey,\n\nI seem to be unable to get postgres to use a gist index we have on a \ncircle data type.\n\nTable \"public.tradesmen_profiles\"\n Column | Type | Modifiers \n-----------------------+-----------------------------+----------------------- \n\nid | integer | not null\nwork_area | circle |\nIndexes:\n \"tradesmen_profiles_pkey\" PRIMARY KEY, btree (id)\n \"tradesmen_profiles_test\" gist (work_area)\n\nWe are then trying to do the following query\n\nSELECT id FROM tradesmen_profiles WHERE tradesmen_profiles.work_area \n@> point(0.0548691728419,51.5404384172);\n\nWhich produces the following:\n\nQUERY PLAN \n----------------------------------------------------------------------------------------------------------------------- \n\nSeq Scan on tradesmen_profiles (cost=0.00..3403.55 rows=14942 width=4) \n(actual time=0.042..31.427 rows=5898 loops=1)\n Filter: (work_area @> '(0.0548691728419,51.5404384172)'::point)\nTotal runtime: 39.556 ms\n\nI have also vacuum'd and reindexed the table after building the index\n\nVACUUM ANALYZE VERBOSE tradesmen_profiles;\nREINDEX TABLE tradesmen_profiles;\n\nSo am I just trying to do something that is not possible or have I just \nmade a mistake with what I am trying to do?\nThis is not a big problem just now but as our data set grows I am \nworried that having to do a sequence scan on this table every time will \nbe a serious performance overhead.\n\nThanks for your help,\n\nGavin\n", "msg_date": "Mon, 24 Aug 2009 17:27:49 +0100", "msg_from": "Gavin Love <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing on a circle datatype" }, { "msg_contents": "On Mon, 24 Aug 2009, Gavin Love wrote:\n> I seem to be unable to get postgres to use a gist index we have on a circle \n> data type.\n\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------------- \n> Seq Scan on tradesmen_profiles (cost=0.00..3403.55 rows=14942 width=4) \n> (actual time=0.042..31.427 rows=5898 loops=1)\n> Filter: (work_area @> '(0.0548691728419,51.5404384172)'::point)\n> Total runtime: 39.556 ms\n\nIf a sequential scan takes 39 ms, and returns 5898 rows, I'd say it's much \nquicker than an index scan could ever be. Postgres assumes that a \nsequential scan can access disc at a reasonable rate, but an index scan \ninvolves lots of seeking, which can be a lot slower. You would be looking \nat 6000 seeks here if the data wasn't in the cache, which could take tens \nof seconds.\n\n> This is not a big problem just now but as our data set grows I am worried \n> that having to do a sequence scan on this table every time will be a serious \n> performance overhead.\n\nTry with a lot more data, like a thousand times as much. You will probably \nfind that Postgres will automatically switch over to an index scan when it \nbecomes beneficial.\n\nAlternatively, if you really want to force its hand (just for testing \npurposes), then try running:\n\nSET enable_seqscan TO off;\n\nand see what happens.\n\nMatthew\n\n-- \n When I first started working with sendmail, I was convinced that the cf\n file had been created by someone bashing their head on the keyboard. 
After\n a week, I realised this was, indeed, almost certainly the case.\n -- Unknown\n", "msg_date": "Mon, 24 Aug 2009 18:03:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing on a circle datatype" }, { "msg_contents": "Gavin Love <[email protected]> writes:\n> I seem to be unable to get postgres to use a gist index we have on a \n> circle data type.\n> SELECT id FROM tradesmen_profiles WHERE tradesmen_profiles.work_area \n> @> point(0.0548691728419,51.5404384172);\n\nSo far as I can see, the member operators of gist circle_ops are\n\n gist | circle_ops | <<(circle,circle)\n gist | circle_ops | &<(circle,circle)\n gist | circle_ops | &>(circle,circle)\n gist | circle_ops | >>(circle,circle)\n gist | circle_ops | <@(circle,circle)\n gist | circle_ops | @>(circle,circle)\n gist | circle_ops | ~=(circle,circle)\n gist | circle_ops | &&(circle,circle)\n gist | circle_ops | |>>(circle,circle)\n gist | circle_ops | <<|(circle,circle)\n gist | circle_ops | &<|(circle,circle)\n gist | circle_ops | |&>(circle,circle)\n gist | circle_ops | @(circle,circle)\n gist | circle_ops | ~(circle,circle)\n\n(this is extracted from the output of the query shown in 8.4 docs\nsection 11.9). So, circle @> point is out of luck. Try using a\nzero- or small-radius circle on the right.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Aug 2009 13:06:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexing on a circle datatype " }, { "msg_contents": "Tom Lane wrote:\n> Gavin Love <[email protected]> writes:\n>> I seem to be unable to get postgres to use a gist index we have on a \n>> circle data type.\n>> SELECT id FROM tradesmen_profiles WHERE tradesmen_profiles.work_area \n>> @> point(0.0548691728419,51.5404384172);\n> \n> So far as I can see, the member operators of gist circle_ops are\n> \n> gist | circle_ops | <<(circle,circle)\n> gist | circle_ops | &<(circle,circle)\n> gist | circle_ops | &>(circle,circle)\n> gist | circle_ops | >>(circle,circle)\n> gist | circle_ops | <@(circle,circle)\n> gist | circle_ops | @>(circle,circle)\n> gist | circle_ops | ~=(circle,circle)\n> gist | circle_ops | &&(circle,circle)\n> gist | circle_ops | |>>(circle,circle)\n> gist | circle_ops | <<|(circle,circle)\n> gist | circle_ops | &<|(circle,circle)\n> gist | circle_ops | |&>(circle,circle)\n> gist | circle_ops | @(circle,circle)\n> gist | circle_ops | ~(circle,circle)\n> \n> (this is extracted from the output of the query shown in 8.4 docs\n> section 11.9). So, circle @> point is out of luck. Try using a\n> zero- or small-radius circle on the right.\n> \n\nI thought that might be the case but was unsure from the documentation I \ncould find. 
With a small circle it does indeed use the index.\n\nThanks for your help.\n\nEXPLAIN ANALYZE\nSELECT tradesmen_profiles.id FROM tradesmen_profiles WHERE \ntradesmen_profiles.work_area @> circle \n'((0.0548691728419,51.5404384172),0)';\n\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on tradesmen_profiles (cost=4.50..115.92 rows=30 \nwidth=4) (actual time=2.339..18.495 rows=5898 loops=1)\n Filter: (work_area @> '<(0.0548691728419,51.5404384172),0>'::circle)\n -> Bitmap Index Scan on tradesmen_profiles_test (cost=0.00..4.49 \nrows=30 width=0) (actual time=1.927..1.927 rows=6404 loops=1)\n Index Cond: (work_area @> \n'<(0.0548691728419,51.5404384172),0>'::circle)\n Total runtime: 26.554 ms\n(5 rows)\n", "msg_date": "Mon, 24 Aug 2009 18:46:59 +0100", "msg_from": "Gavin Love <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexing on a circle datatype" } ]
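For anyone wanting to check which operators a given operator class can actually index before writing queries, a catalog query along the lines of the one Tom refers to (written for the 8.3/8.4 catalogs) is:

    SELECT am.amname, opc.opcname,
           amop.amoplefttype::regtype, amop.amoprighttype::regtype,
           opr.oprname
      FROM pg_opclass opc
      JOIN pg_am am         ON am.oid = opc.opcmethod
      JOIN pg_amop amop     ON amop.amopfamily = opc.opcfamily
      JOIN pg_operator opr  ON opr.oid = amop.amopopr
     WHERE opc.opcname = 'circle_ops'
     ORDER BY 1, 2, 5;

Its output is essentially the circle_ops list shown above, which is how one can tell that circle @> circle is indexable while circle @> point is not.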
[ { "msg_contents": "Dear friends,\n\nI contact on Postgresql hackers request.\n\nI am running into a systemic\nproblem using Drupal under PostgreSQL 8.4\n\nDrupal relies heavily on a domain derived from int:\n\nCREATE DOMAIN int_unsigned\n AS integer\n CONSTRAINT int_unsigned_check CHECK ((VALUE >= 0));\n\nAnalysing slow queries, I noticed that PostgreSQL 8.4 would cast data\nfrom int4 to int_unsigned. Some queries range between 400ms and 700ms.\n\nThis provides some large sequential scans.\nCould you help understand why a cast happens?\n\nDetails, query plan and database: \nhttp://drupal.org/node/559986\n\nPostgresql.conf has no special settings for optimizing queries other\nthan PostgreSQL 8.4 syntax. Only shared memory is much larger.\n\nKind regards,\nJean-Michel", "msg_date": "Wed, 26 Aug 2009 16:40:31 +0200", "msg_from": "Jean-Michel =?ISO-8859-1?Q?Pour=E9?= <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL does CAST implicitely between int and a domain derived\n\tfrom int" }, { "msg_contents": "Jean-Michel Pourᅵ<[email protected]> wrote:\n> Details, query plan and database:\n> http://drupal.org/node/559986\n \nThat still has EXPLAIN output rather than EXPLAIN ANALYZE output. \nWithout the \"actual\" information, it's much harder to tell where\nthings might be improved.\n \n-Kevin\n", "msg_date": "Wed, 26 Aug 2009 09:59:45 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL does CAST implicitely between int and\n\ta domain derivedfrom int" } ]
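Kevin's point is that plain EXPLAIN only shows estimates; EXPLAIN ANALYZE runs the statement and reports the actual row counts and timings needed to see where the time goes. As for the domain itself, a quick isolated test (the table and data below are made up) shows how to check whether int_unsigned really defeats index use; for a domain over integer the cast the planner displays is normally just a relabel:

    CREATE DOMAIN int_unsigned AS integer
      CONSTRAINT int_unsigned_check CHECK (VALUE >= 0);

    CREATE TABLE domain_demo (id int_unsigned, payload text);
    INSERT INTO domain_demo
      SELECT g, 'row ' || g FROM generate_series(1, 100000) g;
    CREATE INDEX domain_demo_id_idx ON domain_demo (id);
    ANALYZE domain_demo;

    EXPLAIN ANALYZE SELECT payload FROM domain_demo WHERE id = 4242;

If a test like this shows an index scan, the slow Drupal queries are more likely a matter of statistics, configuration or query shape than of the domain cast itself.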
[ { "msg_contents": "Hi All,\n\nWe are improving our network appliance monitoring system, and are evaluating\nusing PostgreSQL as the back-end traffic statistics database (we're\ncurrently running a home-grown Berkeley-DB based statistics database).\n\nWe log data from various network elements (it's mainly in/out bytes and\npacket counters, recorded for every port that we see traffic on). As such,\nthe system can expect to get data from 2000 devices (eventually, at the\nmoment it's only about 250), and has a monitoring target of 100 ports\n(although this is not enforced at 100, in practice we've seen only about\n20-30 ports in a given timeframe, and only about 50 distinct ports over a\nwhole year of monitoring) -- this is akin to RRD (e.g. MRTG or Cacti) but\nwith a lot more flexibility.\n\nOur current monitoring system reports the data per device as\n\nkey = {device_id (uint64), identifier (uint32), sub_identifier (uint32),\nunix_time} (these four taken together are unique)\ndata = 4 x uint64 (BIGINT in PG tables)\n\n\nMy table structure in PG mirrors this format with a UNIQUE constraint across\nthe four columns, and an index on each column separately. The data is\nreceived every 5 minutes, and stored at 5 minute, 1 hour and 1-day\ngranularities into partitioned tables named like stats_300 ->\n(stats_300_begintime_endtime, stats_300_begintime_endtime) and so on. I have\ncurrently split the 5min tables at every 2 hours, 1 hour tables at 2 days,\nand 1-day tables at every month).\n\nFor this schema, the typical queries would be:\n\nFor timeseries graphs (graphed as bar/line graphs):\n SELECT TIMESTAMP, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3)\n FROM <appropriate parent table> WHERE TIMESTAMP >= X AND TIMESTAMP < Y\n AND DEVICE IN (id1, id2, id3, ..... up to 2000 IDs can be here)\n GROUP BY TIMESTAMP;\n\nFor aggregate graphs (graphed as a pie chart):\n SELECT SUB_ID, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3)\n FROM <appropriate top table> WHERE TIMESTAMP >= X AND TIMESTAMP < Y\n AND DEVICE IN (id1, id2, id3, ..... up to 2000 IDs can be here)\n GROUP BY SUB_ID;\n\nIn my timing tests, the performance of PG is quite a lot worse than the\nequivalent BerkeleyDB implementation. 
Specifically, I get the following\ntiming results:\n\nFor the longest-running queries:\nBDB - 10-15 sec (cold transfer), <2 sec (warm - if I rerun the query\nimmediately)\nPG (command line) - 25 - 30 sec (cold), 25-30 sec (warm).\nPG (via libpqxx) - ~40 sec (cold), 25-30 sec (warm)\n\nThe data is immutable once it goes in (unless I DROP TABLE), and I've VACUUM\nFULL ANALYZED the whole database *before* my timing queries.\n\nAn explain analyze looks like (the tables are prepopulated with data for\n2000 devices and 100 sub_ids):\n\nmydb=> explain analyze SELECT TIMESTAMP, SUM(DATA_0), SUM(DATA_1),\nSUM(DATA_2), SUM(DATA_3) FROM stats_3600 WHERE MAIN_ID = 1 AND SUB_ID = 0\nAND TIMESTAMP >= 1251676859 AND TIMESTAMP <= 1251849659 GROUP BY TIMESTAMP;\n\nQUERY\nPLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=226659.20..226663.20 rows=200 width=36) (actual\ntime=1709.651..1709.745 rows=48 loops=1)\n -> Append (cost=0.00..225288.47 rows=109659 width=36) (actual\ntime=33.840..1264.328 rows=96000 loops=1)\n -> Index Scan using uniq_3600 on stats_3600 (cost=0.00..8.28\nrows=1 width=36) (actual time=0.019..0.019 rows=0 loops=1)\n Index Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\"\n>= 1251676859) AND (\"timestamp\" <= 1251849659))\n -> Bitmap Heap Scan on stats_3600_1251590400_1251763199\nstats_3600 (cost=2131.71..112946.75 rows=60642 width=36) (actual\ntime=33.816..495.239 rows=46000 loops=1)\n Recheck Cond: ((main_id = 1) AND (sub_id = 0) AND\n(\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n -> Bitmap Index Scan on\nstats_3600_1251590400_1251763199_unique_check (cost=0.00..2116.55\nrows=60642 width=0) (actual time=21.415..21.415 rows=46000 loops=1)\n Index Cond: ((main_id = 1) AND (sub_id = 0) AND\n(\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n -> Bitmap Heap Scan on stats_3600_1251763200_1251935999\nstats_3600 (cost=1727.24..112333.44 rows=49016 width=36) (actual\ntime=38.169..526.578 rows=50000 loops=1)\n Recheck Cond: ((main_id = 1) AND (sub_id = 0) AND\n(\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n -> Bitmap Index Scan on\nstats_3600_1251763200_1251935999_unique_check (cost=0.00..1714.99\nrows=49016 width=0) (actual time=24.059..24.059 rows=50000 loops=1)\n Index Cond: ((main_id = 1) AND (sub_id = 0) AND\n(\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n Total runtime: 1710.844 ms\n(13 rows)\n\n\nmydb=> explain analyze SELECT SUB_ID, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2),\nSUM(DATA_3) FROM stats_3600 WHERE MAIN_ID = 1 AND TIMESTAMP >= 1251676859\nAND TIMESTAMP <= 1251849659 GROUP BY SUB_ID;\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=881887.53..881891.53 rows=200 width=36) (actual\ntime=82007.298..82007.493 rows=99 loops=1)\n -> Append (cost=0.00..771583.84 rows=8824295 width=36) (actual\ntime=37.206..42504.106 rows=8819844 loops=1)\n -> Index Scan using uniq_3600 on stats_3600 (cost=0.00..8.32\nrows=1 width=36) (actual time=0.024..0.024 rows=0 loops=1)\n Index Cond: ((main_id = 1) AND (\"timestamp\" >= 1251676859)\nAND (\"timestamp\" <= 1251849659))\n -> Index Scan using idx_ts_stats_3600_1251590400_1251763199 on\nstats_3600_1251590400_1251763199 
stats_3600 (cost=0.00..369424.65\nrows=4234747 width=36) (actual time=37.178..9776.530 rows=4226249 loops=1)\n Index Cond: ((\"timestamp\" >= 1251676859) AND (\"timestamp\" <=\n1251849659))\n Filter: (main_id = 1)\n -> Index Scan using idx_ts_stats_3600_1251763200_1251935999 on\nstats_3600_1251763200_1251935999 stats_3600 (cost=0.00..402150.87\nrows=4589547 width=36) (actual time=0.119..11339.277 rows=4593595 loops=1)\n Index Cond: ((\"timestamp\" >= 1251676859) AND (\"timestamp\" <=\n1251849659))\n Filter: (main_id = 1)\n Total runtime: 82007.762 ms\n\nThe corresponding table definition looks like:\nmydb=> \\d stats_3600_1251590400_1251763199\nTable \"public.stats_3600_1251590400_1251763199\"\n Column | Type | Modifiers\n-------------+---------+-----------\n main_id | integer |\n sub_id | integer |\n timestamp | integer |\n device | bigint |\n data_0 | bigint |\n data_1 | bigint |\n data_2 | bigint |\n data_3 | bigint |\nIndexes:\n \"stats_3600_1251590400_1251763199_unique_check\" UNIQUE, btree (main_id,\nsub_id, \"timestamp\", device)\n \"idx_cid_stats_3600_1251590400_1251763199\" btree (main_id)\n \"idx_scid_stats_3600_1251590400_1251763199\" btree (sub_id)\n \"idx_dev_stats_3600_1251590400_1251763199\" btree (device)\n \"idx_ts_stats_3600_1251590400_1251763199\" btree (\"timestamp\")\nCheck constraints:\n \"stats_3600_1251590400_1251763199_timestamp_check\" CHECK (\"timestamp\" >=\n1251590400 AND \"timestamp\" <= 1251763199)\nInherits: stats_3600\n\nThe table contains the following data (other tables are similar):\nmydb=> select relname, relpages, reltuples from pg_class where relname like\n'stats_%';\n relname | relpages | reltuples\n------------------------------------------------+----------+-------------\n stats_300_1251705600_1251712799 | 49532 | 4.8046e+06\n stats_3600_1251763200_1251935999 | 181861 | 1.76404e+07\n stats_86400_1244160000_1246751999 | 61845 | 5.99888e+06\n[the rest truncated for brevity]\n\n\nSo my questions are:\n1. Is there anything I can do to speed up performance for the queries? Even\na warm performance comparable to the BDB version would be a big improvement\nfrom the current numbers.\n2. Does the order in which data was received vs. data being queried matter?\n(If so, I can either cache the data before writing to DB, or rewrite the\ntable when I rollover to the next one)\n\n\nSystem Configuration:\n - 64-bit quad-core Xeon with 6 GB RAM\n - 4x250 GB SATA disks configured as RAID stripe+mirror\n - Linux 2.6.9-34 with some custom patches (CentOS 4.2 based)\n - postgres 8.3.7 (from sources, no special config options, installed to\n/var/opt/pgsql-8.3)\n - C++ interface using libpqxx-3.0 (also built from sources)Relevant\nparameters from postgresql.conf:\n - Relevant postgresql.conf parameters:\n data_directory = /data/pg (400 GB partition)\n max_connections = 8\n shared_buffers = 128MB\n work_mem = 256MB\n maintenance_work_mem=64MB\n effective_cache_size = 2048MB\n max_fsm_pages=204800\n default_statistics_target = 100\n constraint_exclusion = on\n\n\nThanks Much!\nHrishi\n\nHi All,We are improving our network appliance monitoring system, and are evaluating using PostgreSQL as the back-end traffic statistics database (we're currently running a home-grown Berkeley-DB based statistics database).\nWe log data from various network elements (it's mainly in/out bytes and packet counters, recorded for every port that we see traffic on). 
As such, the system  can expect to get data from 2000 devices (eventually, at the moment it's only about 250), and has a monitoring target of 100 ports (although this is not enforced at 100, in practice we've seen only about 20-30 ports in a given timeframe, and only about 50 distinct ports over a whole year of monitoring) -- this is akin to RRD (e.g. MRTG or Cacti) but with a lot more flexibility.\nOur current monitoring system reports the data per device askey = {device_id (uint64), identifier (uint32), sub_identifier (uint32), unix_time} (these four taken together are unique)\ndata = 4 x uint64 (BIGINT in PG tables)My table structure in PG mirrors this format with a UNIQUE constraint across the four columns, and an index on each column separately. The data is received every 5 minutes, and stored at 5 minute, 1 hour and 1-day granularities into partitioned tables named like stats_300 -> (stats_300_begintime_endtime, stats_300_begintime_endtime) and so on. I have currently split the 5min tables at every 2 hours, 1 hour tables at  2 days, and 1-day tables at every month).\nFor this schema, the typical queries would be:For timeseries graphs (graphed as bar/line graphs):\n  SELECT TIMESTAMP, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3) \n    FROM <appropriate parent table> WHERE TIMESTAMP >= X AND TIMESTAMP < Y\n    AND DEVICE IN (id1, id2, id3, ..... up to 2000 IDs can be here)\n    GROUP BY TIMESTAMP;\n\nFor aggregate graphs (graphed as a pie chart):\n  SELECT SUB_ID,  SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3) \n    FROM <appropriate top table> WHERE TIMESTAMP >= X AND TIMESTAMP < Y\n    AND DEVICE IN (id1, id2, id3, ..... up to 2000 IDs can be here)\n    GROUP BY SUB_ID;\nIn my timing tests, the performance of PG is quite a lot worse than the equivalent BerkeleyDB implementation. 
Specifically, I get the following timing results:For the longest-running queries:BDB - 10-15 sec (cold transfer), <2 sec (warm - if I rerun the query immediately) \nPG (command line) - 25 - 30 sec (cold), 25-30 sec (warm).PG (via libpqxx) - ~40 sec (cold), 25-30 sec (warm)The data is immutable once it goes in (unless I DROP TABLE), and I've VACUUM FULL ANALYZED the whole database *before* my timing queries.\nAn explain analyze looks like (the tables are prepopulated with data for 2000 devices and 100 sub_ids):mydb=> explain analyze SELECT TIMESTAMP, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3) FROM stats_3600 WHERE MAIN_ID = 1 AND SUB_ID = 0 AND TIMESTAMP >= 1251676859 AND TIMESTAMP <= 1251849659 GROUP BY TIMESTAMP; \n                                                                                  QUERY PLAN                                                                                   \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=226659.20..226663.20 rows=200 width=36) (actual time=1709.651..1709.745 rows=48 loops=1)   ->  Append  (cost=0.00..225288.47 rows=109659 width=36) (actual time=33.840..1264.328 rows=96000 loops=1)\n         ->  Index Scan using uniq_3600 on stats_3600  (cost=0.00..8.28 rows=1 width=36) (actual time=0.019..0.019 rows=0 loops=1)\n               Index Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n         ->  Bitmap Heap Scan on stats_3600_1251590400_1251763199 stats_3600  (cost=2131.71..112946.75 rows=60642 width=36) (actual time=33.816..495.239 rows=46000 loops=1)\n               Recheck Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n               ->  Bitmap Index Scan on stats_3600_1251590400_1251763199_unique_check  (cost=0.00..2116.55 rows=60642 width=0) (actual time=21.415..21.415 rows=46000 loops=1)\n                     Index Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n         ->  Bitmap Heap Scan on stats_3600_1251763200_1251935999 stats_3600  (cost=1727.24..112333.44 rows=49016 width=36) (actual time=38.169..526.578 rows=50000 loops=1)\n               Recheck Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n               ->  Bitmap Index Scan on stats_3600_1251763200_1251935999_unique_check  (cost=0.00..1714.99 rows=49016 width=0) (actual time=24.059..24.059 rows=50000 loops=1)\n                     Index Cond: ((main_id = 1) AND (sub_id = 0) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n Total runtime: 1710.844 ms(13 rows)\nmydb=> explain analyze SELECT SUB_ID, SUM(DATA_0), SUM(DATA_1), SUM(DATA_2), SUM(DATA_3) FROM stats_3600 WHERE MAIN_ID = 1 AND TIMESTAMP >= 1251676859 AND TIMESTAMP <= 1251849659  GROUP BY SUB_ID;\n                                                                                                      QUERY PLAN-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=881887.53..881891.53 rows=200 width=36) (actual time=82007.298..82007.493 rows=99 loops=1)   ->  Append  (cost=0.00..771583.84 rows=8824295 width=36) (actual 
time=37.206..42504.106 rows=8819844 loops=1)\n         ->  Index Scan using uniq_3600 on stats_3600  (cost=0.00..8.32 rows=1 width=36) (actual time=0.024..0.024 rows=0 loops=1)\n               Index Cond: ((main_id = 1) AND (\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))\n         ->  Index Scan using idx_ts_stats_3600_1251590400_1251763199 on stats_3600_1251590400_1251763199 stats_3600  (cost=0.00..369424.65 rows=4234747 width=36) (actual time=37.178..9776.530 rows=4226249 loops=1)\n               Index Cond: ((\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))               Filter: (main_id = 1)\n         ->  Index Scan using idx_ts_stats_3600_1251763200_1251935999 on stats_3600_1251763200_1251935999 stats_3600  (cost=0.00..402150.87 rows=4589547 width=36) (actual time=0.119..11339.277 rows=4593595 loops=1)\n               Index Cond: ((\"timestamp\" >= 1251676859) AND (\"timestamp\" <= 1251849659))               Filter: (main_id = 1)\n Total runtime: 82007.762 msThe corresponding table definition looks like:\nmydb=> \\d stats_3600_1251590400_1251763199Table \"public.stats_3600_1251590400_1251763199\"\n   Column    |  Type   | Modifiers -------------+---------+-----------\n main_id    | integer |  sub_id | integer | \n timestamp   | integer |  device      | bigint  | \n data_0      | bigint  |  data_1      | bigint  | \n data_2      | bigint  |  data_3      | bigint  | \nIndexes:    \"stats_3600_1251590400_1251763199_unique_check\" UNIQUE, btree (main_id, sub_id, \"timestamp\", device)\n    \"idx_cid_stats_3600_1251590400_1251763199\" btree (main_id)    \"idx_scid_stats_3600_1251590400_1251763199\" btree (sub_id)\n    \"idx_dev_stats_3600_1251590400_1251763199\" btree (device)    \"idx_ts_stats_3600_1251590400_1251763199\" btree (\"timestamp\")\nCheck constraints:    \"stats_3600_1251590400_1251763199_timestamp_check\" CHECK (\"timestamp\" >= 1251590400 AND \"timestamp\" <= 1251763199)\nInherits: stats_3600The table contains the following data (other tables are similar):mydb=> select relname, relpages, reltuples from pg_class where relname like 'stats_%';\n                    relname                     | relpages |  reltuples  ------------------------------------------------+----------+-------------\n stats_300_1251705600_1251712799                |    49532 |  4.8046e+06 \nstats_3600_1251763200_1251935999               |   181861 | 1.76404e+07\n  stats_86400_1244160000_1246751999              |    61845 | 5.99888e+06[the rest truncated for brevity]\nSo my questions are:1. Is there anything I can do to speed up performance for the queries? Even a warm performance comparable to the BDB version would be a big improvement from the current numbers.2. Does the order in which data was received vs. data being queried matter? 
(If so, I can either cache the data before writing to DB, or rewrite the table when I rollover to the next one)\nSystem Configuration: - 64-bit quad-core Xeon with 6 GB RAM - 4x250 GB SATA disks configured as RAID stripe+mirror - Linux 2.6.9-34 with some custom patches (CentOS 4.2 based) - postgres 8.3.7 (from sources, no special config options, installed to /var/opt/pgsql-8.3)\n - C++ interface using libpqxx-3.0 (also built from sources)Relevant parameters from postgresql.conf: - Relevant postgresql.conf parameters:     data_directory = /data/pg (400 GB partition)     max_connections = 8\n     shared_buffers = 128MB     work_mem = 256MB     maintenance_work_mem=64MB     effective_cache_size = 2048MB     max_fsm_pages=204800     default_statistics_target = 100      constraint_exclusion = on\nThanks Much!Hrishi", "msg_date": "Wed, 26 Aug 2009 10:31:13 -0700", "msg_from": "\n =?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?=\n\t=?UTF-8?B?4KWHKQ==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issues with large amounts of time-series data" }, { "msg_contents": "=?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?= =?UTF-8?B?4KWHKQ==?= <[email protected]> writes:\n> In my timing tests, the performance of PG is quite a lot worse than the\n> equivalent BerkeleyDB implementation.\n\nAre you actually comparing apples to apples? I don't recall that BDB\nhas any built-in aggregation functionality. It looks to me like you've\nmoved some work out of the client into the database.\n\n> 1. Is there anything I can do to speed up performance for the queries?\n\nDo the data columns have to be bigint, or would int be enough to hold\nthe expected range? SUM(bigint) is a *lot* slower than SUM(int),\nbecause the former has to use \"numeric\" arithmetic whereas the latter\ncan sum in bigint. If you want to keep the data on-disk as bigint,\nbut you know the particular values being summed here are not that\nbig, you could cast in the query (SUM(data_1::int) etc).\n\nI'm also wondering if you've done something to force indexscans to be\nused. If I'm interpreting things correctly, some of these scans are\ntraversing all/most of a partition and would be better off as seqscans.\n\n> shared_buffers = 128MB\n\nThis is really quite lame for the size of machine and database you've\ngot. Consider knocking it up to 1GB or so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Aug 2009 14:01:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with large amounts of time-series data " }, { "msg_contents": "Hi Tom,\n\nThanks for your quick response.\n\n2009/8/26 Tom Lane <[email protected]>\n> <[email protected]> writes:\n> > In my timing tests, the performance of PG is quite a lot worse than the\n> > equivalent BerkeleyDB implementation.\n>\n> Are you actually comparing apples to apples?  I don't recall that BDB\n> has any built-in aggregation functionality.  It looks to me like you've\n> moved some work out of the client into the database.\n\nI'm measuring end-to-end time, which includes the in-code aggregation\nwith BDB (post DB fetch) and the in-query aggregation in PG.\n\n> > 1. Is there anything I can do to speed up performance for the queries?\n>\n> Do the data columns have to be bigint, or would int be enough to hold\n> the expected range?  
SUM(bigint) is a *lot* slower than SUM(int),\n> because the former has to use \"numeric\" arithmetic whereas the latter\n> can sum in bigint.  If you want to keep the data on-disk as bigint,\n> but you know the particular values being summed here are not that\n> big, you could cast in the query (SUM(data_1::int) etc).\n\nFor the 300-sec tables I probably can drop it to an integer, but for\n3600 and 86400 tables (1 hr, 1 day) will probably need to be BIGINTs.\nHowever, given that I'm on a 64-bit platform (sorry if I didn't\nmention it earlier), does it make that much of a difference? How does\na float (\"REAL\") compare in terms of SUM()s ?\n\n> I'm also wondering if you've done something to force indexscans to be\n> used.  If I'm interpreting things correctly, some of these scans are\n> traversing all/most of a partition and would be better off as seqscans.\nOne thing I noticed is that if I specify what devices I want the data\nfor (specifically, all of them, listed out as DEVICE IN (1,2,3,4,5...)\nin the WHERE clause, PG uses a Bitmap heap scan, while if I don't\nspecify the list (which still gives me data for all the devices), PG\nuses a sequential scan. (I might have missed the DEVICE IN (...) in my\nearlier query). However, more often than not, the query _will_ be of\nthe form DEVICE IN (...). If I actually execute the queries (on the\npsql command line), their runtimes are about the same (15s vs 16s)\n\n> >      shared_buffers = 128MB\n>\n> This is really quite lame for the size of machine and database you've\n> got.  Consider knocking it up to 1GB or so.\n\nOK, I've bumped it up to 1 GB. However, that doesn't seem to make a\nhuge difference (unless I need to do the same on libpqxx's connection\nobject too).\n\nCheers,\nHrishi\n", "msg_date": "Wed, 26 Aug 2009 11:39:40 -0700", "msg_from": "\n =?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?=\n\t=?UTF-8?B?4KWHKQ==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues with large amounts of time-series\n\tdata" }, { "msg_contents": "=?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?= =?UTF-8?B?4KWHKQ==?= <[email protected]> writes:\n> 2009/8/26 Tom Lane <[email protected]>\n>> Do the data columns have to be bigint, or would int be enough to hold\n>> the expected range?\n\n> For the 300-sec tables I probably can drop it to an integer, but for\n> 3600 and 86400 tables (1 hr, 1 day) will probably need to be BIGINTs.\n> However, given that I'm on a 64-bit platform (sorry if I didn't\n> mention it earlier), does it make that much of a difference?\n\nEven more so.\n\n> How does a float (\"REAL\") compare in terms of SUM()s ?\n\nCasting to float or float8 is certainly a useful alternative if you\ndon't mind the potential for roundoff error. On any non-ancient\nplatform those will be considerably faster than numeric. BTW,\nI think that 8.4 might be noticeably faster than 8.3 for summing\nfloats, because of the switch to pass-by-value for them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Aug 2009 14:52:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with large amounts of time-series data " }, { "msg_contents": "2009/8/26 Tom Lane <[email protected]>:\n>> How does a float (\"REAL\") compare in terms of SUM()s ?\n>\n> Casting to float or float8 is certainly a useful alternative if you\n> don't mind the potential for roundoff error.  
On any non-ancient\n> platform those will be considerably faster than numeric.  BTW,\n> I think that 8.4 might be noticeably faster than 8.3 for summing\n> floats, because of the switch to pass-by-value for them.\n\nIt occurs to me we could build a special case state variable which\ncontains a bigint or a numeric only if it actually overflows. This\nwould be like my other suggestion with dates only it would never be\nexposed. The final function would always convert to a numeric.\n\nAlternatively we could change the numeric data type as was proposed\naeons ago but make it more general so it stores integers that fit in a\nbigint as a 64-bit integer internally. That would be more work but be\nmore generally useful. I'm not sure it would be possible to avoid\ngenerating palloc garbage for sum() that way though.\n\n-- \ngreg\nhttp://mit.edu/~gsstark/resume.pdf\n", "msg_date": "Wed, 26 Aug 2009 21:42:35 +0100", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with large amounts of time-series data" }, { "msg_contents": "On Wed, 26 Aug 2009, Hrishikesh (??????? ????????) wrote:\n\n> key = {device_id (uint64), identifier (uint32), sub_identifier (uint32), unix_time} (these four taken together are unique)\n\nYou should probably tag these fields as NOT NULL to eliminate needing to \nconsider that possibility during query planning. As of V8.3 this isn't as \ncritical anymore, but it's still good practice.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 27 Aug 2009 18:57:47 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issues with large amounts of time-series\n data" }, { "msg_contents": "Hi Tom, Greg,\n\nThanks for your helpful suggestions - switching the BIGINT to FLOAT\nand fixing the postgresql.conf to better match my server configuration\ngave me about 30% speedup on the queries.\n\nBecause of the fact that my data insert order was almost never the\ndata retrieval order, I also got a significant (about 3x - 10x)\nspeedup by CLUSTERing the tables on an index that represented the most\nfrequent query orders (main_id, timestamp, sub_id, device_id) - the\nqueries that were taking a few seconds earlier now complete in a few\nhundred milliseconds (5s vs. 600ms in some instances).\n\nThanks Again,\nHrishikesh\n", "msg_date": "Tue, 1 Sep 2009 15:11:50 -0700", "msg_from": "\n =?UTF-8?B?SHJpc2hpa2VzaCAo4KS54KWD4KS34KWA4KSV4KWH4KS2IOCkruClh+CkueClh+CkguCkpuCksw==?=\n\t=?UTF-8?B?4KWHKQ==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issues with large amounts of time-series\n\tdata" } ]
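For reference, the fixes Hrishikesh reports translate into statements roughly like the following, shown for one partition (the index name is invented, and the thread's table definition names the device column "device" even though the summary says "device_id"):

    -- store the counters as floats, as Tom suggested (FLOAT with no precision is double precision)
    ALTER TABLE stats_3600_1251590400_1251763199
      ALTER COLUMN data_0 TYPE double precision,
      ALTER COLUMN data_1 TYPE double precision,
      ALTER COLUMN data_2 TYPE double precision,
      ALTER COLUMN data_3 TYPE double precision;

    -- physically order the partition the way it is usually read
    CREATE INDEX stats_3600_1251590400_1251763199_read_order
        ON stats_3600_1251590400_1251763199 (main_id, "timestamp", sub_id, device);

    CLUSTER stats_3600_1251590400_1251763199
      USING stats_3600_1251590400_1251763199_read_order;

Each partition has to be altered and clustered individually, and CLUSTER rewrites the table, so on an append-only archive like this it is easiest to fold into the rollover to a new partition.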